Using a for loop on a nested list

Asked: 2017-07-13 16:31:02

Tags: list python-3.x for-loop

I am using a nested list to hold data in a Cartesian-coordinate-style system.

The data is a list of categories, which may be 0, 1, 2, 3, 4, 5, or 255 (only 7 categories).

The data is held in a list formatted like this:

stack = [[0,1,0,0],
         [2,1,0,0],
         [1,1,1,3]]

Each inner list represents a row, and each element of a row represents a data point.
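To make the layout concrete, here is a small (hypothetical) illustration of indexing into that structure, where `stack[j]` is row `j` and `stack[j][i]` is the data point in column `i` of that row:

```python
# The nested-list layout described above:
# stack[j] is row j, stack[j][i] is the point in column i of row j.
stack = [[0, 1, 0, 0],
         [2, 1, 0, 0],
         [1, 1, 1, 3]]

print(stack[2][3])                                    # point in row 2, column 3
print(len(stack), "rows of", len(stack[0]), "points")
```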

I would very much like to keep using this format, because I use it to generate an image, and so far it has been very easy to work with.

However, I have run into a problem when running the following code:

for j in range(len(stack)):
    stack[j].append(255)
    stack[j].insert(0, 255)

This is meant to iterate through each row, adding a single element, 255, at the start and end of every row. Unfortunately, it adds twelve instances of 255 at both the start and the end!
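For reference, this symptom (one append per row somehow producing many copies) is exactly what happens when all of the "rows" in the nested list are references to the same list object rather than independent lists. A minimal sketch that reproduces it:

```python
# Minimal sketch: if every "row" in stack is the SAME list object,
# each append in the loop lands in that one shared object, so every
# row appears to grow by len(stack) elements instead of one.
shared_row = [0, 1, 0, 0]
stack = [shared_row for _ in range(3)]   # three references, one list

for j in range(len(stack)):
    stack[j].append(255)
    stack[j].insert(0, 255)

print(stack[0])
# -> [255, 255, 255, 0, 1, 0, 0, 255, 255, 255]
# three 255s at each end, not one
```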

That makes no sense to me. Presumably I am missing something very trivial, but I can't see what it could be. As far as I can tell, it is something to do with the loop: if I write stack[0].append(255) outside the loop, it behaves normally.

The code is obviously part of a larger script. The script runs several for loops, some of which are range(12), but those should all be closed by the time this loop is called.

So - am I missing something trivial, or is it something more sinister than that?

EDIT: full code

step_size = 12, and the code above is the part that inserts the "left and right borders".

def classify(target_file, output_file):

    import numpy
    import cifar10_eval      # want to hijack functions from the evaluation script

    target_folder = "Binaries/"              # finds target file in "Binaries"
    destination_folder = "Binaries/Maps/"    # destination for output file

    # open the meta file to retrieve x,y dimensions
    file = open(target_folder + target_file + "_meta" + ".txt", "r")
    new_x = int(file.readline())
    new_y = int(file.readline())
    orig_x = int(file.readline())
    orig_y = int(file.readline())
    segment_dimension = int(file.readline())
    step_size = int(file.readline())
    file.close()

    # run cifar10_eval and create predictions vector (formatted as a list)
    predictions = cifar10_eval.map_interface(new_x * new_y)

    del predictions[(new_x * new_y):]     # get rid of excess predictions (an artefact of the fixed batch size)

    print("# of predictions: " + str(len(predictions)))

    # check that we are mapping the whole picture! (evaluation functions don't necessarily use the full data set)
    if len(predictions) != new_x * new_y:
        print("Error: number of predictions from cifar10_eval does not match metadata for this file")
        return

    # copy predictions to a nested list to make extraction of x/y data easy
    # also eliminates the need to keep metadata - x/y dimensions are stored via the shape of the output vector
    stack = []
    for j in range(new_y):
        stack.append([])
        for i in range(new_x):
            stack[j].append(predictions[j * new_x + i])

    predictions = None  # clear the variable to free up memory

    # iterate through the map list and explode each category to cover more pixels
    # assigns a step_size x step_size area to each classification input to achieve correspondence with the original image
    new_stack = []
    for j in range(len(stack)):
        row = stack[j]
        new_row = []
        for i in range(len(row)):
            for a in range(step_size):
                new_row.append(row[i])
        for b in range(step_size):
            new_stack.append(new_row)
    stack = new_stack
    new_stack = None
    new_row = None      # clear the variables to free up memory

    # add a border to the image to indicate that some information has been lost
    # the border also ensures the map has 1-1 correspondence with the original image, which makes processing easier

    # calculate border dimensions
    top_and_left_thickness = int((segment_dimension - step_size) / 2)
    right_thickness = int(top_and_left_thickness + (orig_x - (top_and_left_thickness * 2 + step_size * new_x)))
    bottom_thickness = int(top_and_left_thickness + (orig_y - (top_and_left_thickness * 2 + step_size * new_y)))

    print(top_and_left_thickness)
    print(right_thickness)
    print(bottom_thickness)

    print(len(stack[0]))
    # add the right then left borders
    for j in range(len(stack)):
        for b in range(right_thickness):
            stack[j].append(255)
        for b in range(top_and_left_thickness):
            stack[j].insert(0, 255)
    print(stack[0])
    print(len(stack[0]))
    # add the top and bottom borders
    row = []
    for i in range(len(stack[0])):
        row.append(255)             # create a blank row
    for b in range(top_and_left_thickness):
        stack.insert(0, row)        # insert the blank row at the top x many times
    for b in range(bottom_thickness):
        stack.append(row)           # append the blank row to the bottom of the map

    # we have our final output
    # repackage this as a numpy array and save for later use

    output = numpy.asarray(stack, numpy.uint8)
    numpy.save(destination_folder + output_file + ".npy", output)

    print("Category mapping complete, map saved as numpy pickle: " + output_file + ".npy")
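Note for readers: in the "explode" step above, `new_stack.append(new_row)` is executed step_size (12) times with the same `new_row` object, so each group of 12 rows in `stack` is 12 references to one list. That is consistent with the doubled appends described in the question. One possible way to get independent rows (a hedged sketch, not from the original post) is to copy the row on every repetition:

```python
# Hedged sketch: appending a fresh copy per repetition gives
# independent row objects, so a later append touches only one row.
step_size = 3
row = [0, 1]
new_stack = []
for _ in range(step_size):
    new_stack.append(list(row))   # list(...) makes a new, separate copy

new_stack[0].append(255)
print(new_stack[0])   # [0, 1, 255]
print(new_stack[1])   # [0, 1] -- other rows are unaffected
```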

0 Answers:

There are no answers yet.