Adding two Dense layers in Keras

Date: 2019-08-14 12:14:17

Tags: python tensorflow keras

There are two inputs, x and u, that generate an output y. The relationship between x, u, and y is linear, i.e. y = x·wx + u·wu, and I am trying to estimate wx and wu from the data.
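For concreteness, here is a minimal NumPy sketch of the relationship I have in mind (the sizes and names are illustrative, not my real data):

    import numpy as np

    n, n_features, ny = 500, 7, 2         # illustrative sizes only
    x = np.random.randn(n, n_features)
    u = np.random.randn(n, n_features)
    wx = np.random.randn(n_features, ny)  # the weights I am trying to recover
    wu = np.random.randn(n_features, ny)
    y = x @ wx + u @ wu                   # y = x*wx + u*wu

Here is the model building/fitting code: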

    n_train = 400
    n_val = 100
    train_u = u[:(n_train+n_val)]
    train_x = x[:(n_train+n_val)]
    train_y = y[:(n_train+n_val)]
    test_u = u[(n_train+n_val):]
    test_x = x[(n_train+n_val):]
    test_y = y[(n_train+n_val):]
    val_u = train_u[-n_val:]
    val_x = train_x[-n_val:]
    val_y = train_y[-n_val:]
    train_u = train_u[:-n_val]
    train_x = train_x[:-n_val]
    train_y = train_y[:-n_val]

    # RNN derived classes want a shape of (batch_size, timesteps, input_dim)
    # batch_size. One sequence is one sample. A batch consists of one or more samples.
    # timesteps. One time step is one point of observation in the sample.
    # input_dim. number of observations at a time step.
    # I believe n_train = one_epoch = batch_size * time_steps, features = nx_lags or nu_lags
    # I also think an epoch is one pass through the training data

    n_batches_per_epoch = 8
    n_iterations_per_batch = round(n_train / n_batches_per_epoch)
    batch_size = n_batches_per_epoch
    time_steps = n_iterations_per_batch
    features_x = train_x.shape[1]
    features_u = train_u.shape[1]
    features_y = train_y.shape[1]

    keras_train_u = train_u.values.reshape((batch_size, time_steps, features_u))
    keras_train_x = train_x.values.reshape((batch_size, time_steps, features_x))
    keras_train_y = train_y.reshape((batch_size, time_steps, features_y))
    keras_val_u = val_u.values.reshape((2, time_steps, features_u))
    keras_val_x = val_x.values.reshape((2, time_steps, features_x))
    keras_val_y = val_y.reshape((2, time_steps, features_y))
    keras_test_u = test_u.values.reshape((1, test_u.shape[0], features_u))
    keras_test_x = test_x.values.reshape((1, test_u.shape[0], features_x))
    keras_test_y = test_y.reshape((1, test_u.shape[0], features_y))

    print('u.values.shape: ', u.values.shape)
    # Now try a tensorflow model
    # x_input = keras.Input(shape=(batch_size, time_steps, features_x), name='x_input')
    # u_input = keras.Input(shape=(batch_size, time_steps, features_u), name='u_input')
    x_input = keras.Input(shape=(time_steps, features_x), name='x_input')
    u_input = keras.Input(shape=(time_steps, features_u), name='u_input')
    da = layers.Dense(ny, name='dense_a', use_bias=False)(x_input)
    db = layers.Dense(ny, name='dense_b', use_bias=False)(u_input)
    output = layers.Add()([da, db])

    model = keras.Model(inputs=[x_input, u_input], outputs=output)

    model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer
                  # Loss function to minimize
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  # List of metrics to monitor
                  metrics=[keras.metrics.SparseCategoricalAccuracy()])
    print(model.summary())
    print('keras_train_x.shape: ', keras_train_x.shape)
    print('keras_train_u.shape: ', keras_train_u.shape)
    print('keras_train_y.shape: ', keras_train_y.shape)
    print('keras_val_x.shape: ', keras_val_x.shape)
    print('keras_val_u.shape: ', keras_val_u.shape)
    print('keras_val_y.shape: ', keras_val_y.shape)
    history = model.fit([keras_train_x, keras_train_u], keras_train_y,
                        batch_size=64,
                        epochs=3,
                        # We pass some validation for
                        # monitoring validation loss and metrics
                        # at the end of each epoch
                        validation_data=([keras_val_x, keras_val_u], keras_val_y))

Here is the output, including the error.

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
x_input (InputLayer)            [(None, 50, 7)]      0                                            
__________________________________________________________________________________________________
u_input (InputLayer)            [(None, 50, 7)]      0                                            
__________________________________________________________________________________________________
dense_a (Dense)                 (None, 50, 2)        14          x_input[0][0]                    
__________________________________________________________________________________________________
dense_b (Dense)                 (None, 50, 2)        14          u_input[0][0]                    
__________________________________________________________________________________________________
add (Add)                       (None, 50, 2)        0           dense_a[0][0]                    
                                                                 dense_b[0][0]                    
==================================================================================================
Total params: 28
Trainable params: 28
Non-trainable params: 0
__________________________________________________________________________________________________
None
keras_train_x.shape:  (8, 50, 7)
keras_train_u.shape:  (8, 50, 7)
keras_train_y.shape:  (8, 50, 2)
keras_val_x.shape:  (2, 50, 7)
keras_val_u.shape:  (2, 50, 7)
keras_val_y.shape:  (2, 50, 2)
Train on 8 samples, validate on 2 samples

Epoch 1/3
Traceback (most recent call last):
  File "arx_rnn.py", line 487, in <module>
    main()
  File "/arx_rnn.py", line 481, in main
    rnn_prediction = x.rnn_n_steps(y_measured, u_control, n_to_predict)
  File "arx_rnn.py", line 387, in rnn_n_steps
    validation_data=([keras_val_x, keras_val_u], keras_val_y))
  File "venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 780, in fit
    steps_name='steps_per_epoch')
  File "venv\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 363, in model_iteration
    batch_outs = f(ins_batch)
  File "venv\lib\site-packages\tensorflow\python\keras\backend.py", line 3292, in __call__
    run_metadata=self.run_metadata)
  File "venv\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[2], expected a dimension of 1, got 2
     [[{{node metrics/sparse_categorical_accuracy/Squeeze}}]]

Process finished with exit code 1

What is the error message telling me, and how do I fix it?

1 Answer:

Answer 0 (score: 1):

Keras's classification accuracy metrics expect outputs and labels of shape (batch_size, num_classes). The dim[2] in the error message indicates that your output is 3-D: (None, 50, 2).

The quick fix is to make sure, by whatever means, that the output layer produces one prediction per class per batch entry, i.e. has shape (batch_size, num_classes); a Flatten or Reshape layer can get you there.
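For example, a minimal sketch of that quick fix (layer names and sizes are taken from the model summary above; this is one possible way, not the only one):

    from tensorflow import keras
    from tensorflow.keras import layers

    time_steps, features_x, features_u, num_classes = 50, 7, 7, 2   # from the summary above
    x_input = keras.Input(shape=(time_steps, features_x), name='x_input')
    u_input = keras.Input(shape=(time_steps, features_u), name='u_input')
    da = layers.Dense(num_classes, use_bias=False, name='dense_a')(x_input)
    db = layers.Dense(num_classes, use_bias=False, name='dense_b')(u_input)
    summed = layers.Add()([da, db])             # (None, 50, 2) -- still 3-D
    flat = layers.Flatten()(summed)             # (None, 100)
    output = layers.Dense(num_classes)(flat)    # (None, 2): one prediction per class per sample
    model = keras.Model(inputs=[x_input, u_input], outputs=output)

With a 2-D (batch_size, num_classes) output, the sparse metrics in your compile call then expect integer class labels of shape (batch_size,).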

The better fix is to change the input/output topology to match the design intent - i.e., what exactly are you trying to classify? Your data dimensions suggest you are trying to classify individual timesteps - in which case, feed the data one timestep at a time, with shape (batch_size, features). Alternatively, feed the timesteps along the batch axis, one at a time, so that 1000 timesteps would correspond to (1000, features) - but don't do this if the model has any stateful layers, which treat each batch-axis entry as an independent sequence.
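In terms of the code in the question, the per-timestep layout would look roughly like this (a sketch only; it reuses the train_* variables and feature sizes defined there):

    # Each timestep becomes one sample: inputs are (n_samples, features), targets (n_samples, num_classes)
    flat_train_x = train_x.values.reshape((-1, features_x))   # (400, 7)
    flat_train_u = train_u.values.reshape((-1, features_u))   # (400, 7)
    flat_train_y = train_y.reshape((-1, features_y))           # (400, 2)
    # The Input layers then become Input(shape=(features_x,)) and Input(shape=(features_u,)),
    # so the Dense + Add stack already yields the 2-D (None, 2) output the metric expects.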

To classify whole sequences, again make sure the layer data flow ultimately yields a 2-D output.