TimeDistributed layer: Incompatible shapes: [32,3,3] vs [32,3]

Asked: 2019-07-29 13:22:08

Tags: python tensorflow machine-learning neural-network tf.keras

I have a model that used to work, but it no longer does. It gives the following error:

  

InvalidArgumentError: Incompatible shapes: [32,3,3] vs. [32,3]  [[{{node Nadam/gradients/loss/time_distributed_loss/SquaredDifference_grad/BroadcastGradientArgs}}]]

Old model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, TimeDistributed
from tensorflow.keras.callbacks import EarlyStopping

class Model:
    def set_the_model(self, look_back):
        self.model = Sequential()
        self.model.add(LSTM(16, activation="relu", input_shape=(look_back, 3), return_sequences=True))  # , stateful=True
        self.model.add(Dropout(0.2))
        self.model.add(LSTM(32, activation="relu", return_sequences=True))
        self.model.add(Dropout(0.2))
        self.model.add(TimeDistributed(Dense(look_back)))
        # self.model.add(Dense(3))
        self.model.compile(loss="mse", optimizer="nadam", metrics=['acc'])  # mse
        self.model.summary()

    def start_train(self, trainD1, trainD2):
        es = EarlyStopping(monitor='loss', patience=2, mode='min')
        # for i, j in trainD1, trainD2:
        self.model.fit(trainD1, trainD2, epochs=200, batch_size=32, verbose=1, callbacks=[es])
        #     self.model.reset_states()

    def predict_result(self, test_case):
        value = self.model.predict(test_case, verbose=0)
        return value
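
For reference, here is a minimal, self-contained sketch of the shapes this old model declares; look_back, the sample count, and the dummy arrays are assumptions for illustration, not my real data:

import numpy as np

look_back = 3
num_samples = 64

m = Model()
m.set_the_model(look_back)

# input_shape=(look_back, 3): inputs are batches of shape (batch, look_back, 3)
dummy_x = np.random.rand(num_samples, look_back, 3).astype("float32")

# TimeDistributed(Dense(look_back)) emits (batch, look_back, look_back)
print(m.model.output_shape)              # (None, 3, 3)
print(m.predict_result(dummy_x).shape)   # (64, 3, 3)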

The error started when I uncommented those two lines in the start_train method and changed the code into the following (current model):

class Model:
    def set_the_model(self, look_back):
        self.model = Sequential()
        self.model.add(LSTM(16, activation="relu", batch_input_shape=(1, look_back, 3), return_sequences=True, stateful=True))
        self.model.add(Dropout(0.2))
        self.model.add(LSTM(32, activation="relu", return_sequences=True))
        self.model.add(Dropout(0.2))
        self.model.add(TimeDistributed(Dense(look_back)))
        # self.model.add(Dense(3))
        self.model.compile(loss="mse", optimizer="nadam", metrics=['acc'])  # mse
        self.model.summary()

    def start_train(self, trainD1, trainD2):
        es = EarlyStopping(monitor='loss', patience=2, mode='min')
        for i, j in zip(trainD1, trainD2):
            self.model.fit(i, j, epochs=200, batch_size=1, verbose=1, callbacks=[es])
            self.model.reset_states()

    def predict_result(self, test_case):
        value = self.model.predict(test_case, verbose=0)
        return value
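
As an aside on this stateful version: iterating with zip over 3-D training arrays hands fit 2-D slices, so a common pattern with batch_input_shape=(1, look_back, 3) is to restore the leading batch axis of 1 before each call. A minimal sketch, where the shapes of trainD1 and trainD2 are assumptions for illustration:

import numpy as np

num_samples, look_back = 10, 3
trainD1 = np.random.rand(num_samples, look_back, 3).astype("float32")          # assumed input shape
trainD2 = np.random.rand(num_samples, look_back, look_back).astype("float32")  # assumed target shape

m = Model()
m.set_the_model(look_back)   # the stateful variant above

for i, j in zip(trainD1, trainD2):
    x = np.expand_dims(i, axis=0)   # (look_back, 3)         -> (1, look_back, 3)
    y = np.expand_dims(j, axis=0)   # (look_back, look_back) -> (1, look_back, look_back)
    m.model.fit(x, y, epochs=1, batch_size=1, verbose=0)      # one epoch per sample, just for the sketch
    m.model.reset_states()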

Then I wanted to go back to the old model. When I start training it, it gives the error above.

This is what the model looks like:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 3, 16)             1280      
_________________________________________________________________ 
dropout (Dropout)            (None, 3, 16)             0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 3, 32)             6272      
_________________________________________________________________
dropout_1 (Dropout)          (None, 3, 32)             0         
_________________________________________________________________
time_distributed (TimeDistri (None, 3, 3)              99        
=================================================================
Total params: 7,651
Trainable params: 7,651
Non-trainable params: 0
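
For reference, a small check that lines this summary up with the two shapes in the error; the (32, 3) target shape is an assumption read off the error message, not taken from my actual arrays:

import numpy as np

m = Model()          # using the old (non-stateful) Model class from the top of the post
m.set_the_model(3)   # look_back = 3, as in the summary above

batch_y = np.zeros((32, 3), dtype="float32")   # assumed target batch: the [32,3] side of the error

print(m.model.output_shape)   # (None, 3, 3) -> [32, 3, 3] for a batch of 32
print(batch_y.shape)          # (32, 3)
# "mse" compares these element-wise; [32,3,3] cannot be broadcast against [32,3],
# which is the pair of shapes reported by BroadcastGradientArgs in the error.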

Edit: I changed batch_size to 1 on the old model and it works now, but it is very slow.

0 Answers:

No answers.