Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2

Date: 2019-02-07 23:11:01

Tags: python machine-learning keras deep-learning

I have the Keras model shown below, where I am trying to merge an image input with a vector of numerical features, but I get the following error:

ValueError: Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2

The error is raised on the following statement:

value_model.add(Flatten(input_shape=(12,)))

Any ideas on how to fix this?

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Sequential()
image_model.add(Convolution2D(32,8,8, subsample=(4,4), input_shape=(512,512,1)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,4,4, subsample=(2,2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,3,3, subsample=(1,1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))

value_model = Sequential()
value_model.add(Flatten(input_shape=(12,)))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))

merged = Concatenate([image_model, value_model])

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=[image_input, vector_input], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['acc'])
model.fit([images, features], y, epochs=5)

EDIT-1

Here is the full script:

from keras.layers import Input, Concatenate, Conv2D, Flatten, Dense, Convolution2D, Activation
from keras.models import Model, Sequential
import pandas as pd
import numpy as np
import cv2
import os

def label_img(img):
    word_label = img.split('.')[-3]
    if word_label == 'r':
        return 1
    elif word_label == 'i':
        return 0

train_directory = '/train'
images = []
y = []

dataset = pd.read_csv('results.csv')

dataset = dataset[[ 'first_value',
                    'second_value']]

features = dataset.iloc[:,0:12].values

for root, dirs, files in os.walk(train_directory):
    for file in files:
        image = cv2.imread(root + '/' + file)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        image = cv2.resize(image,(512,512),interpolation=cv2.INTER_AREA)
        image = image/255
        images.append(image)
        label = label_img(file)
        y.append(label)

images = np.asarray(images)
images = images.reshape((-1,512,512,1))

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Sequential()
image_model.add(Convolution2D(32,8,8, subsample=(4,4), input_shape=(512,512,1)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,4,4, subsample=(2,2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,3,3, subsample=(1,1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))

value_model = Sequential()
#value_model.add(Flatten(input_shape=(12,)))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))

merged = Concatenate([image_model, value_model])

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=[image_input, vector_input], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['acc'])
model.fit([images, features], y, epochs=5)

EDIT-2

When I do the following:

output = final_model.add(Dense(1, activation='sigmoid'))

I still get the same error.

1 Answer:

Answer 0 (Score: 2)

You can change your code to reflect the new Keras 2 API, as shown below. In your current code you are mixing the legacy Keras API with the Keras 2 API.

I also recommend using the new Conv2D layer instead of Convolution2D together with the Keras 2 API. The subsample argument of Convolution2D is called strides in Conv2D.
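As a side note on the error itself: Input((12,)) already produces a rank-2 tensor of shape (batch, 12), while Flatten requires at least 3 dimensions (min_ndim=3), so the Flatten layer in the vector branch can simply be dropped and the vector fed straight into Dense. A minimal sketch of just that branch, assuming the standalone keras package (names are illustrative):

from keras.layers import Input, Flatten, Dense

vector_input = Input((12,))        # symbolic shape (None, 12) -> ndim = 2
# Flatten()(vector_input)          # would raise: expected min_ndim=3, found ndim=2
x = Dense(16, activation='relu')(vector_input)  # Dense accepts the 2-D tensor directly

The full model, rewritten with the functional API: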

from keras.layers import Input, Conv2D, Activation, Flatten, Dense, concatenate
from keras.models import Model

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Conv2D(32,(8,8), strides=(4,4))(image_input)
image_model = Activation('relu')(image_model)
image_model = Conv2D(64,(4,4), strides=(2,2))(image_model)
image_model = Activation('relu')(image_model)
image_model = Conv2D(64,(3,3), strides=(1,1))(image_model)
image_model = Activation('relu')(image_model)
image_model = Flatten()(image_model)
image_model = Dense(512)(image_model)
image_model = Activation('relu')(image_model)

value_model = Dense(16)(vector_input)
value_model = Activation('relu')(value_model)
value_model = Dense(16)(value_model)
value_model = Activation('relu')(value_model)
value_model = Dense(16)(value_model)
value_model = Activation('relu')(value_model)

merged = concatenate([image_model, value_model])  # functional concatenate (lowercase), not the Concatenate layer class

output = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[image_input, vector_input], outputs=output)

model.compile(loss='binary_crossentropy', optimizer='adam')
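It can also be worth checking the merged architecture before training; with the layer sizes above, the concatenated feature vector should be 512 + 16 = 528 wide:

model.summary()  # the concatenate output should be (None, 528): 512 from the image branch + 16 from the vector branch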

Consider a toy dataset,

I = np.random.rand(100, 512, 512, 1)
V = np.random.rand(100, 12, )

y = np.random.rand(100, 1, )

Training:

model.fit([I, V], y, epochs=10, verbose=1)


Epoch 1/10
100/100 [==============================] - 9s 85ms/step - loss: 3.4615
Epoch 2/10
 32/100 [========>.....................] - ETA: 4s - loss: 0.9696
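Once trained, a minimal sketch of inference on the same toy arrays (purely illustrative):

preds = model.predict([I, V])  # shape (100, 1), sigmoid probabilities in [0, 1]
print(preds[:5])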