ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays.
import numpy as np
import scipy
import matplotlib
from sklearn.model_selection import train_test_split
import tensorflow as tf
import keras, keras.layers as L
from keras.layers.merge import concatenate
from keras.models import Model
from keras.layers import Input
from skimage import io
import glob

x=glob.glob('/home/tschaub/img_input/img_sets/118_18333/*.png')
y=[io.imread(i) for i in x]
len(y)
z=np.stack(y, axis=0)
z=np.reshape(z,[len(y),384,384,1])
X=z
img_shape=(384,384,1)
code_size=32

t=glob.glob('/home/tschaub/img_input/img_sets/118_18333/*.png')
j=[io.imread(i) for i in t[20:]]
len(j)
s=np.stack(j, axis=0)
s=np.reshape(s,[len(j),384,384,1])
Z=s
#------------------------------------------------------------------------------------------------------
#encoder_1
input_tensor_1 = L.Input(shape=(384,384,1),name='Input_1')
conv1_1=L.Conv2D(32, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv1_1')(input_tensor_1)
mp1_1=L.MaxPool2D(pool_size=(2, 2), name='mp1_1')(conv1_1)
conv1_2=L.Conv2D(64, kernel_size=(3, 3),strides=1, padding='same', activation='elu',name='conv1_2')(mp1_1)
mp1_2=L.MaxPool2D(pool_size=(2, 2), name='mp1_2')(conv1_2)
conv1_3=L.Conv2D(128, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv1_3')(mp1_2)
mp1_3=L.MaxPool2D(pool_size=(2, 2), name='mp1_3')(conv1_3)
conv1_4=L.Conv2D(256, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv1_4')(mp1_3)
mp1_4=L.MaxPool2D(pool_size=(2, 2), name='mp1_4')(conv1_4)
vector1=L.Flatten(name='vector1')(mp1_4)
target_vector1=L.Dense(code_size, name='target_1')(vector1)
#encoder_2
input_tensor_2 = L.Input(shape=(384,384,1),name='Input_2')
conv2_1=L.Conv2D(32, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv2_1')(input_tensor_2)
mp2_1=L.MaxPool2D(pool_size=(2, 2), name='mp2_1')(conv2_1)
conv2_2=L.Conv2D(64, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv2_2')(mp2_1)
mp2_2=L.MaxPool2D(pool_size=(2, 2), name='mp2_2')(conv2_2)
conv2_3=L.Conv2D(128, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv2_3')(mp2_2)
mp2_3=L.MaxPool2D(pool_size=(2, 2), name='mp2_3')(conv2_3)
conv2_4=L.Conv2D(256, kernel_size=(3, 3),strides=1, padding='same', activation='elu', name='conv2_4')(mp2_3)
mp2_4=L.MaxPool2D(pool_size=(2, 2), name='mp2_4')(conv2_4)
vector2=L.Flatten(name='vector2')(mp2_4)
target_vector2=L.Dense(code_size, name='target_2')(vector2)
#merge code from encoders
merge=concatenate([target_vector1, target_vector2])
# decoder
d_input_1=L.Dense(64, name='de_in')(merge)
dense_1=L.Dense(147456, name='de_dense1')(d_input_1)
reshape_1=L.Reshape((24,24,256), name='reshape')(dense_1)
conTR1_1=L.Conv2DTranspose(128, kernel_size=(3, 3),strides=2, padding='same', activation='elu', name='up_1')(reshape_1)
conTR1_2=L.Conv2DTranspose(filters=64, kernel_size=(3, 3), strides=2, activation='elu', padding='same', name='up_2')(conTR1_1)
conTR1_3=L.Conv2DTranspose(filters=32, kernel_size=(3, 3), strides=2, activation='elu', padding='same', name='up_3')(conTR1_2)
conTR1_4=L.Conv2DTranspose(filters=1, kernel_size=(3, 3), strides=2, activation=None, padding='same', name='up_4')(conTR1_3)
X_train, X_test = train_test_split(X, test_size=0.1, random_state=42)
Z_train, Z_test = train_test_split(Z, test_size=0.1, random_state=42)
from keras.utils import multi_gpu_model
autoencoder = Model(inputs=[input_tensor_1, input_tensor_2],outputs=[conTR1_4])
autoencoder.summary()
parallel_model=multi_gpu_model(autoencoder,gpus=4)
parallel_model.compile('adamax','mse')
from keras.utils import plot_model
plot_model(autoencoder, to_file='time.png')
parallel_model.fit([X_train,Z_train], [X_train,Z_train], epochs=120, batch_size=10)
I think I have specified two inputs here:
autoencoder = Model(inputs=[input_tensor_1, input_tensor_2],outputs=[conTR1_4])
and I am fitting with two inputs here:
parallel_model.fit([X_train,Z_train], [X_train,Z_train], epochs=120, batch_size=10)
But for some reason I still get an error telling me that my model only expects one array. I have tried all kinds of syntax both when instantiating the model and when fitting, but I can't figure out what is going on! (A small toy sketch of what I see is below.)
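While poking at this I also tried a stripped-down toy model with the same structure (two inputs feeding one output tensor). The toy_* names and shapes below are purely for illustration and are not my real code or data, but as far as I understand it shows how many arrays fit() is counting:

import numpy as np
import keras.layers as L
from keras.models import Model

# two inputs, but only one output tensor, like my autoencoder above
toy_in_1=L.Input(shape=(8,), name='toy_in_1')
toy_in_2=L.Input(shape=(8,), name='toy_in_2')
toy_merged=L.concatenate([toy_in_1, toy_in_2])
toy_out=L.Dense(8, name='toy_out')(toy_merged)

toy_model=Model(inputs=[toy_in_1, toy_in_2], outputs=[toy_out])
toy_model.compile('adamax','mse')

print(len(toy_model.inputs))   # 2 -> fit() takes a list of 2 input arrays
print(len(toy_model.outputs))  # 1 -> fit() takes exactly 1 target array

a=np.zeros((4,8))
b=np.zeros((4,8))
#toy_model.fit([a,b], [a,b])               # reproduces the "Expected to see 1 array(s)" error
toy_model.fit([a,b], a, epochs=1, verbose=0)  # a single target array is accepted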
Please help! (Thanks in advance.)