Stacked GRU model in Keras

Asked: 2019-06-19 23:21:16

Tags: keras stacked gated-recurrent-unit

I want to create a 3-layer GRU model in which the layers have 32, 16, and 8 units respectively. The model takes analog values as input and produces analog values as output.

I wrote the following code:

def getAModelGRU(neuron=(10), look_back=1, numInputs = 1, numOutputs = 1):
    model = Sequential()
    if len(neuron) > 1:
        model.add(GRU(units=neuron[0], input_shape=(look_back,numInputs)))
        for i in range(1,len(neuron)-1):
            model.add(GRU(units=neuron[i]))
        model.add(GRU(units=neuron[-1], input_shape=(look_back,numInputs)))
    else:
    model.add(GRU(units=neuron, input_shape=(look_back,numInputs)))
    model.add(Dense(numOutputs))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

And I call this function as:

chkEKF = getAModelGRU(neuron=(32,16,8), look_back=1, numInputs=10, numOutputs=6)

And I get the following:

Traceback (most recent call last):
  File "/home/momtaz/Dropbox/QuadCopter/quad_simHierErrorCorrectionEstimator.py", line 695, in <module>
    Single_Point2Point()
  File "/home/momtaz/Dropbox/QuadCopter/quad_simHierErrorCorrectionEstimator.py", line 74, in Single_Point2Point
    chkEKF = getAModelGRU(neuron=(32,16,8), look_back=1, numInputs=10, numOutputs=6)
  File "/home/momtaz/Dropbox/QuadCopter/rnnUtilQuad.py", line 72, in getAModelGRU
    model.add(GRU(units=neuron[i]))
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/sequential.py", line 181, in add
    output_tensor = layer(self.outputs[0])
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/layers/recurrent.py", line 532, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/momtaz/PycharmProjects/venv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer gru_2: expected ndim=3, found ndim=2

I searched online but did not find any solution to this "ndim"-related issue.

Please let me know what I am doing wrong here.

1 Answer:

Answer 0 (score: 0)

You need to make sure that the input_shape argument is defined only in the first layer, and that every layer except possibly the last (depending on your model) has return_sequences=True.

The code below covers the usual case where you want to stack several layers and only the number of units changes from layer to layer.

model = tf.keras.Sequential()

gru_options = [dict(units=units,
                    time_major=False,
                    kernel_regularizer=tf.keras.regularizers.l2(0.01),
                    # ... potentially more options
                    return_sequences=True) for units in [32, 16, 8]]
# n_timesteps and n_inputs must be defined beforehand
gru_options[0]['input_shape'] = (n_timesteps, n_inputs)
gru_options[-1]['return_sequences'] = False  # optionally disable sequences in the last
                                             # layer. If you want to return sequences in
                                             # your last layer, delete this line; however,
                                             # it is necessary if you want to connect this
                                             # to a Dense layer, for example.
for opts in gru_options:
    model.add(tf.keras.layers.GRU(**opts))

model.add(tf.keras.layers.Dense(6))

Your code also has a syntax error, since the line after the else clause is not indented. And if you use a list instead of a tuple for the layer units, you do not need C-style index iteration.
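Putting this together, here is a minimal sketch of how the original helper function could be corrected (names such as get_gru_model are illustrative, not from the original post): every GRU except the last returns sequences, and only the first layer receives input_shape.

```python
from tensorflow import keras
from tensorflow.keras.layers import GRU, Dense

def get_gru_model(units=(32, 16, 8), look_back=1, num_inputs=1, num_outputs=1):
    """Stacked GRU model; all GRU layers except the last return full sequences."""
    model = keras.Sequential()
    # First layer: the only one that needs input_shape.
    model.add(GRU(units[0], return_sequences=len(units) > 1,
                  input_shape=(look_back, num_inputs)))
    for i, n in enumerate(units[1:], start=1):
        # return_sequences=True for every hidden GRU except the last one,
        # so that each layer feeds a 3-D tensor to the next.
        model.add(GRU(n, return_sequences=(i < len(units) - 1)))
    model.add(Dense(num_outputs))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

model = get_gru_model(units=(32, 16, 8), look_back=1, num_inputs=10, num_outputs=6)
```

This builds without the ndim error because the intermediate GRU layers now receive 3-D sequence tensors rather than the 2-D final state of the previous layer.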