Layer dense_66 was called with an input that isn't a symbolic tensor

Time: 2019-03-05 11:21:10

Tags: python keras deep-learning concatenation conv-neural-network

I have a dataset of N folders (N IDs); inside each ID folder there are M folders, and inside each M folder there are 8 images. I want to train a 2D-CNN on this dataset. My model consists of 8 CNNs, one for each image of an M folder. After finishing the first folder of an ID, the model takes the next folder of 8 images, each image going into one of the 8 branches, and so on. At the end I concatenate the outputs of the 8 models, but I ran into a problem when trying to do this over the whole dataset: how can I concatenate the output of the first 8 models with the output of the next 8 models, and so on until the end of the dataset? My model design is shown in the diagram below (not reproduced here).
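To make the folder layout concrete, here is a rough, illustrative sketch of reading each M folder's eight images into eight parallel arrays (one per branch); dataset_root and load_segment_samples are made-up names, and the 60x60 resize mirrors the code below:

import os
import cv2
import numpy as np

def load_segment_samples(dataset_root):
    """Walk ID folders -> M folders -> 8 images; one sample per M folder."""
    samples, labels = [], []
    for id_folder in sorted(os.listdir(dataset_root)):
        id_path = os.path.join(dataset_root, id_folder)
        for m_folder in sorted(os.listdir(id_path)):
            m_path = os.path.join(id_path, m_folder)
            images = [cv2.resize(cv2.imread(os.path.join(m_path, name)), (60, 60))
                      for name in sorted(os.listdir(m_path))]
            if len(images) == 8:            # keep only complete 8-image folders
                samples.append(images)
                labels.append(id_folder)    # the ID folder name is the class label
    # For a multi-input model: 8 arrays, each of shape (num_samples, 60, 60, 3)
    branch_inputs = [np.stack(col) for col in zip(*samples)]
    return branch_inputs, labels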

My Python code is as follows:

model_out = []
input_list = []
model_list = []
for fold_Path in listing:
    image_fold = os.listdir(ID_Paths + "\\" + fold_Path)
    for file in image_fold:
        segments = os.listdir(ID_Paths + "\\" + fold_Path + "\\" + file)
        segments_list = []
        input_list = []
        output_list = []
        model_out = []
        for seg in segments:
            im = (ID_Paths + "\\" + fold_Path + "\\" + file + "\\" + seg)
            image = cv2.imread(im)
            image = cv2.resize(image, (60, 60))
            segments_list.append(image)

            if len(segments_list) == 8:
                seg1 = Input(shape=segments_list[0].shape, name="seg1")
                input_list.append(seg1)
                conv0_1 = Conv2D(32, (3, 3), padding="same")(seg1)
                act0_1 = Activation("relu")(conv0_1)
                batch0_1 = BatchNormalization(axis=-1)(act0_1)
                pool0_1 = MaxPooling2D(pool_size=(2, 2))(batch0_1)
                drop0_1 = Dropout(0.25)(pool0_1)

                conv0_2 = Conv2D(64, (3, 3), padding="same")(drop0_1)
                act0_2 = Activation("relu")(conv0_2)
                batch0_2 = BatchNormalization(axis=-1)(act0_2)
                pool0_2 = MaxPooling2D(pool_size=(2, 2))(batch0_2)
                drop0_2 = Dropout(0.25)(pool0_2)
                out1 = Flatten()(drop0_2)
                output_list.append(out1)

                # ... the same design repeats for models 2 through 7 ...

                seg8 = Input(shape=segments_list[7].shape, name="seg8")
                input_list.append(seg8)
                conv7_1 = Conv2D(32, (3, 3), padding="same")(seg8)
                act7_1 = Activation("relu")(conv7_1)
                batch7_1 = BatchNormalization(axis=-1)(act7_1)
                pool7_1 = MaxPooling2D(pool_size=(2, 2))(batch7_1)
                drop7_1 = Dropout(0.25)(pool7_1)

                conv7_2 = Conv2D(64, (3, 3), padding="same")(drop7_1)
                act7_2 = Activation("relu")(conv7_2)
                batch7_2 = BatchNormalization(axis=-1)(act7_2)
                pool7_2 = MaxPooling2D(pool_size=(2, 2))(batch7_2)
                drop7_2 = Dropout(0.25)(pool7_2)
                out8 = Flatten()(drop7_2)
                output_list.append(out8)

                # ----------- Concatenation of the 8 models starts here -----------
                merge = Concatenate()(output_list)
                print("Concatenation Ended...Dense will be done...")
                den1 = Dense(128)(merge)
                act = Activation("relu")(den1)
                bat = BatchNormalization()(act)
                drop = Dropout(0.5)(bat)

                model_out.append(drop)

            else:
                continue

            small_model = Model(inputs=input_list, outputs=model_out)
            model_list.append(small_model)
            print("Concatenation done")
            segments_list = []
            input_list = []
            output_list = []
            model_out = []

# It works up to here; after this step I don't know how to concatenate the output of each concatenated result.

den2 = Dense(128)(model_list)  # the error occurs on this line
act2 = Activation("relu")(den2)
bat2 = BatchNormalization()(act2)
drop2 = Dropout(0.5)(bat2)

# softmax classifier
print("Classification will be start")
final_out1 = Dense(classes)(drop2)
final_out = Activation('softmax')(final_out1)
#inp = Input(shape=den2.shape)
#big_model = Model(inputs=inp, outputs=final_out)
final_out.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
final_out.fit_generator(aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS, epochs=EPOCHS, verbose=1)

When I run the program, it gives me the following error:

ValueError: Layer dense_66 was called with an input that isn't a symbolic tensor.

Can anyone help me? How can I concatenate, compile, and train on the whole dataset? Any hint would be helpful. Thanks.

1 answer:

Answer 0 (score: 0)

This happens because you are passing model_list, a list of Model objects, which are not tensors: a Model wraps a computation graph that produces tensors when given inputs. Instead, you should collect the output tensors themselves, along the lines of:

# ... inside the code that builds each branch ...
model_ins.append(seg1)
# ...
model_outs.append(drop)
# ...
all_model_outs = Concatenate()(model_outs)
flat_model_outs = Flatten()(all_model_outs)
den2 = Dense(128)(flat_model_outs)  # den2 now receives a symbolic tensor
# ...
big_model = Model(model_ins, final_out)
big_model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
big_model.fit_generator(aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS, epochs=EPOCHS, verbose=1)

The idea is that you can express any computation from inputs to outputs as one larger graph and then wrap it in a Model for training. Here the big model takes all of the inputs and the final output you computed, so training it trains all of the smaller models together. You can still use the smaller models later for separate predictions.
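To tie the pieces together, here is a minimal, self-contained sketch of that pattern, assuming the Keras 2 functional API; build_branch, NUM_SEGMENTS, num_classes, and the "adam" optimizer are illustrative choices, while the layer hyperparameters mirror the question's code:

from keras.layers import (Input, Conv2D, Activation, BatchNormalization,
                          MaxPooling2D, Dropout, Flatten, Dense, Concatenate)
from keras.models import Model

NUM_SEGMENTS = 8           # one branch per image in an M folder
INPUT_SHAPE = (60, 60, 3)  # resized images, as in the question
num_classes = 10           # placeholder: set to the real number of classes

def build_branch(branch_input):
    """One small CNN branch: two Conv/BN/Pool/Dropout stages, then Flatten."""
    x = Conv2D(32, (3, 3), padding="same")(branch_input)
    x = Activation("relu")(x)
    x = BatchNormalization(axis=-1)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.25)(x)

    x = Conv2D(64, (3, 3), padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(axis=-1)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Dropout(0.25)(x)
    return Flatten()(x)

# Build the graph once; only the data changes from folder to folder.
model_ins = [Input(shape=INPUT_SHAPE, name="seg%d" % (i + 1)) for i in range(NUM_SEGMENTS)]
model_outs = [build_branch(inp) for inp in model_ins]

# Merge the symbolic branch outputs and classify.
merged = Concatenate()(model_outs)
x = Dense(128)(merged)
x = Activation("relu")(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
final_out = Activation("softmax")(Dense(num_classes)(x))

big_model = Model(inputs=model_ins, outputs=final_out)
big_model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])

# Training then takes a list of 8 arrays (one per branch) plus one label array:
# big_model.fit([X_seg1, ..., X_seg8], Y, batch_size=32, epochs=10)

The main change relative to the question's loop is that the graph is defined once, outside the data-loading loop; each M folder then becomes one training sample fed as a list of eight arrays, so every Dense and Concatenate call only ever sees symbolic tensors.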