"You must feed a value for placeholder tensor" in an encoder-decoder model

Time: 2019-05-02 16:36:56

Tags: tensorflow keras lstm

I have a layer with shape (?, 12, 256) and I want to reduce it to (?, 256). I used tf.reshape, but I get the following error in model.fit():

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_53' with dtype float and shape [?,12,256]
     [[{{node Placeholder_53}}]]

latent_dim = 256
x = encoder_outputs
x = tf.placeholder(dtype=tf.float32, shape=(None, 12, 256))
input2 = tf.reshape(x, shape=[tf.shape(x)[0], 256])
# output layer for mean and log variance
z_mu = Dense(latent_dim)(input2)       # replaces h
z_log_var = Dense(latent_dim)(input2)
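
The InvalidArgumentError above most likely comes from the raw tf.placeholder assigned to x: model.fit() only feeds the tensors declared as Keras Input layers, so an extra placeholder wired into the graph is never given a value. A minimal sketch of the same step kept entirely inside Keras layers (the layer name and the choice of keeping only the last timestep are assumptions, not taken from the original code):

from keras.layers import Lambda

# Keras-only replacement for the tf.placeholder / tf.reshape pair above.
# encoder_outputs has shape (None, 12, 256); slicing out the last timestep
# gives shape (None, 256), which the two Dense layers above could consume
# in place of input2.
input2 = Lambda(lambda t: t[:, -1, :], name='last_timestep')(encoder_outputs)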

def sampling(args):
  batch_size=1
  z_mean, z_log_sigma = args
  epsilon = K.random_normal(shape=(batch_size, latent_dim),
                          mean=0., stddev=1.)
  return z_mean + K.exp(z_log_sigma/2) * epsilon
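
For reference, the sampling helper is often written with a dynamic batch size read from z_mean instead of a hard-coded constant, so the same function works for whatever batch size model.fit() ends up using. A minimal sketch with the same Keras backend K and latent_dim as above:

def sampling(args):
    # Reparameterisation trick: z = mu + sigma * epsilon, with epsilon ~ N(0, 1)
    z_mean, z_log_sigma = args
    batch_size = K.shape(z_mean)[0]   # dynamic batch size
    epsilon = K.random_normal(shape=(batch_size, latent_dim),
                              mean=0., stddev=1.)
    return z_mean + K.exp(z_log_sigma / 2) * epsilon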

z = Lambda(sampling, output_shape=(None,))([z_mu, z_log_var])
state_h = z
state_c = z
encoder_states = [state_h, state_c]

def vae_loss(y_true, y_pred):
    recon = K.sum(K.binary_crossentropy(y_pred, y_true), axis=-1)
    kl = 0.5 * K.sum(K.exp(z_log_var) + K.square(z_mu) - 1. - z_log_var, axis=-1)
    return recon + kl[:, None]
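
For reference, the kl term above is the standard closed-form KL divergence between the approximate posterior N(z_mu, exp(z_log_var)) and a unit Gaussian prior, summed over the latent dimensions:

KL = 0.5 * sum_i ( exp(z_log_var_i) + z_mu_i^2 - 1 - z_log_var_i )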


decoder_inputs = Input(shape=(None,))
decoder_emb = Embedding(input_dim=vocab_out_size, output_dim=embedding_dim)
decoder_lstm = LSTM(units=units, return_sequences=True, return_state=True)

decoder_outputs, _, _ = decoder_lstm(decoder_emb(decoder_inputs), initial_state=encoder_states)
# Attention layer
attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])
decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_outputs, attn_out])
decoder_d2 = Dense(vocab_out_size, activation="softmax")
dense_time = TimeDistributed(decoder_d2, name='time_distributed_layer')
decoder_out = dense_time(decoder_concat_input)
model = Model([encoder_inputs, decoder_inputs], decoder_out)
model.compile(optimizer='adam', loss=vae_loss, metrics=['acc'])
model.summary()
history = model.fit([input_data, teacher_data], target_data,
                    batch_size=BATCH_SIZE,
                    epochs=3,
                    validation_split=0.2)

The shape of encoder_outputs is (None, 12, 256). So, is there another way to do this?
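
Since each sample in a (None, 12, 256) tensor holds 12 * 256 values, it cannot be reshaped to (None, 256) without dropping or combining the time axis. Two common Keras-only alternatives, sketched here under the assumption that encoder_outputs is the 3-D encoder output shown above:

from keras.layers import GlobalAveragePooling1D, Flatten, Dense

# Option 1: average over the 12 timesteps -> (None, 256)
pooled = GlobalAveragePooling1D()(encoder_outputs)

# Option 2: flatten and project back down to 256 units -> (None, 256)
flat = Flatten()(encoder_outputs)   # (None, 12 * 256)
projected = Dense(256)(flat)        # (None, 256)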

Thanks in advance.

0 Answers:

There are no answers yet.