Keras shows a shape error at the end of the first epoch

Time: 2018-12-13 06:44:37

Tags: python keras lstm autoencoder

I tried to build an LSTM autoencoder with Keras.

However, it raises a value error at the end of the first epoch; the full message is shown below, after the model.

 Session["additionalInfo"]="w1";
 int timeout = remember ? 525600 : 30;
 var ticket = new FormsAuthenticationTicket(1, userName, DateTime.Now, 
 DateTime.Now.AddMinutes(timeout), remember, userId);
 string encrypted = FormsAuthentication.Encrypt(ticket);
 var cookie = new HttpCookie("qwe", encrypted);
 cookie.Expires = System.DateTime.Now.AddMinutes(timeout);
 Response.Cookies.Add(authCookie);

The model's input has shape (sample_size, 20, 31); the model follows.

Sampling function:

from keras import backend as K

def sampling(args):
    """Reparameterization trick: sample z ~ N(z_mean, exp(z_log_var))."""
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    # by default, random_normal has mean=0 and std=1.0
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
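
For reference, a quick shape check of this sampling function (a minimal sketch, assuming the definition above and a TensorFlow backend; the 32 and 60 mirror the batch size and latent dimension used in this question):

import numpy as np

# Zero mean and zero log-variance, shaped (batch, latent_dim) = (32, 60).
z_mean = K.constant(np.zeros((32, 60), dtype='float32'))
z_log_var = K.constant(np.zeros((32, 60), dtype='float32'))

z = sampling([z_mean, z_log_var])
print(K.eval(z).shape)  # (32, 60): one latent vector per sample in the batch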

Encoder part:

from keras.layers import Input, LSTM, Dense, Lambda, RepeatVector
from keras.models import Model

inputs = Input(shape=(lag, data.shape[1]), name='encoder_input')
x = LSTM(30, activation='relu', return_sequences=True)(inputs)
x = LSTM(60, activation='relu')(x)
z_mean = Dense(60, name='z_mean')(x)
z_log_var = Dense(60, name='z_log_var')(x)
# sample a latent vector and repeat it so the decoder receives a sequence
z_temp = Lambda(sampling, output_shape=(60,), name='z')([z_mean, z_log_var])
z = RepeatVector(lag)(z_temp)
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

Decoder part:

latent_inputs = Input(shape=(lag, 60), name='z_sampling')
x_2 = LSTM(60, activation='relu', return_sequences=True)(latent_inputs)
x_2 = LSTM(data.shape[1], activation='relu', return_sequences=True)(x_2)
decoder = Model(latent_inputs, x_2, name='decoder')

# end-to-end VAE: the encoder's sampled z (index 2) feeds the decoder
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs)

This produces the following error:

ValueError: operands could not be broadcast together with shapes (32,20) (20,20) (32,20)

If there is a shape error, how does the model work in the earlier steps? That is my main question. Thank you for your answers.

1 answer:

Answer 0 (score: 0)

You are training with batches of size 32, but at the end of the epoch the final operand gets a tensor with only 20 elements, because that is what is left of your 632276 samples after the first 632256:

632276 - 632256 = 20

That is essentially what this error message is about, and why the earlier steps worked.
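
To make the arithmetic concrete (a quick check in Python; 632276 is the sample count quoted above):

# 632276 samples split into batches of 32:
full_batches, remainder = divmod(632276, 32)
print(full_batches)  # 19758 complete batches of 32 samples
print(remainder)     # 20 samples left over for the short final batch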

Simplest solution:

Use the steps_per_epoch option of the fit() method:

steps_per_epoch: Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined.

steps_per_epoch = np.floor(total_samples / batch_size)

This way you essentially drop the last 20 samples.
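
As a sketch of the fix (assuming the vae model built above; x_train and epochs=100 are illustrative placeholders for the question's actual training array and settings):

import numpy as np

batch_size = 32
total_samples = 632276  # sample count quoted in this answer

# Number of complete batches; the trailing 20 samples are dropped.
steps_per_epoch = int(np.floor(total_samples / batch_size))  # 19758

# Equivalent alternative: trim the training array to a whole number of
# batches, so that every batch fit() sees has exactly 32 samples.
x_train = x_train[:steps_per_epoch * batch_size]
vae.fit(x_train, x_train,  # autoencoder: the target mirrors the input
        epochs=100, batch_size=batch_size)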