Large accuracy difference: Conv2D vs. ConvLSTM2D

Date: 2018-03-27 16:49:23

Tags: scikit-learn, keras

I am trying to compare the Conv2D and ConvLSTM2D architectures for estimating a high-resolution image from a low-resolution one. But the predictions show a huge difference between the two architectures. What causes this poor prediction? Is it because one of the architectures is implemented incorrectly?

With ConvLSTM2D:

import numpy as np, scipy.ndimage, matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, ConvLSTM2D, MaxPooling2D, UpSampling2D
from scipy.stats import linregress  # needed for the r_value computation below
from sklearn.metrics import accuracy_score, confusion_matrix, cohen_kappa_score
from sklearn.preprocessing import MinMaxScaler, StandardScaler
np.random.seed(123)

raw = np.arange(96).reshape(8,3,4)
data1 = scipy.ndimage.zoom(raw, zoom=(1,100,100), order=1, mode='nearest') #low res
print (data1.shape)
#(8, 300, 400)

data2 = scipy.ndimage.zoom(raw, zoom=(1,100,100), order=3, mode='nearest') #high res
print (data2.shape)
#(8, 300, 400)

X_train = data1.reshape(1, data1.shape[0], data1.shape[1], data1.shape[2], 1)
Y_train = data2.reshape(1, data2.shape[0], data2.shape[1], data2.shape[2], 1)

model = Sequential()
input_shape = (data1.shape[0], data1.shape[1], data1.shape[2], 1)
model.add(ConvLSTM2D(16, kernel_size=(3, 3), activation='sigmoid', padding='same', input_shape=input_shape, return_sequences=True))
model.add(ConvLSTM2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same', return_sequences=True))
model.compile(loss='mse', optimizer='adam')

model.fit(X_train, Y_train,
          batch_size=1, epochs=10, verbose=1)

y_predict = model.predict(X_train)
y_predict = y_predict.reshape(data1.shape[0], data1.shape[1], data1.shape[2])
slope, intercept, r_value, p_value, std_err = linregress(data2[0,:,:].reshape(-1), y_predict[0,:,:].reshape(-1))
print (r_value**2)

0.26

With Conv2D:

X_train = data1.reshape(data1.shape[0], data1.shape[1], data1.shape[2], 1)
Y_train = data2.reshape(data2.shape[0], data2.shape[1], data2.shape[2], 1)

model = Sequential()
input_shape = (data1.shape[1], data1.shape[2], 1)
model.add(Convolution2D(64, kernel_size=(3, 3), activation='sigmoid', padding='same', input_shape=input_shape))
model.add(Convolution2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same'))

model.compile(loss='mse', optimizer='adam')

model.fit(X_train, Y_train, 
          batch_size=1, epochs=10, verbose=1)
y_predict = model.predict(X_train)
y_predict = y_predict.reshape(data1.shape[0], data1.shape[1], data1.shape[2])

slope, intercept, r_value, p_value, std_err = linregress(data2[0,:,:].reshape(-1), y_predict[0,:,:].reshape(-1))
print (r_value**2)

0.93

1 Answer:

Answer 0 (score: 1):

Two important things may be seriously affecting the results:

  • You have 64 Conv2D filters against only 16 ConvLSTM2D filters.
  • The LSTM layer is trying to make sense of a "movie" of all the images in sequence, which is certainly more complex than processing individual images.

Regarding the second point, you could try a shape of (8, 1, 300, 400, 1) instead. That eliminates the time steps (and, if I understand them correctly, should make the ConvLSTM2D behave exactly like a Conv2D). But then it is useless as a recurrent layer.
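The suggested reshape can be sketched with plain NumPy (a hypothetical illustration, not the answerer's code): treating the 8 images as 8 independent samples with a single timestep each removes any temporal context.

```python
import numpy as np

# Stand-in for the 8 images of shape (300, 400) used in the question
data = np.zeros((8, 300, 400))

# Shape used in the question: one sample that is an 8-frame "movie",
# so the ConvLSTM2D recurs across the 8 frames
movie = data.reshape(1, 8, 300, 400, 1)

# Suggested shape: 8 samples with 1 timestep each -- no temporal context,
# so the ConvLSTM2D has nothing to recur over and acts like a Conv2D
frames = data.reshape(8, 1, 300, 400, 1)

print(movie.shape)   # (1, 8, 300, 400, 1)
print(frames.shape)  # (8, 1, 300, 400, 1)
```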

Unfortunately, that is the only way to "compare" them. LSTM layers are meant for "movies" (images as frames in a sequence), but that does not seem to be your case.