How do I extract the decoder part from an autoencoder, using syntax similar to my example?

Time: 2019-05-17 12:15:01

Tags: python tensorflow keras pca eigenvector

I built an autoencoder and am trying to extract the decoder part so that I can visualize "eigenfaces": I want to feed the hidden layer (i.e. the decoder's input layer) vectors that assign 1 to a single node and 0 to all the others. Something in my code is wrong and I don't know how to fix it.

I have already tried a few other things, such as getting the output of layer 0, but apparently I'm not doing that correctly either.

def build_linear_autoencoder(model_name: str, x_tr: np.ndarray, x_va: 
   np.ndarray, encoding_dim=32, epochs=100) -> \
        (Sequential, Sequential, float):
    """
    Build a linear autoencoder.

    :param model_name: name used in file creation
    :param x_tr: training images
    :param x_va: validation images
    :param encoding_dim: number of nodes in encoder layer (i.e. the bottleneck)
    :param epochs: number of training epochs
    :return: autoencoder model, encoder model, compression factor
    """

    # Get image dimension
    image_dim = x_tr.shape[1]

    # Full model name for file output
    full_model_name = model_name + '_im_dim' + str(image_dim) + '-en_dim' + str(encoding_dim)

    # Build model path
    model_path = os.path.join(os.pardir, "models", full_model_name + ".h5")
    plot_path = os.path.join(os.pardir, "models", full_model_name + ".png")

    # Try loading the model, ...
    try:

        autoencoder = load_model(model_path)
        log("Found model \'", model_name, "\' locally.", lvl=3)

    # ... otherwise create it
    except:

        log("Training linear autoencoder.", lvl=2)

        # Flatten
        x_tr = x_tr.reshape((len(x_tr), image_size))
        x_va = x_va.reshape((len(x_va), image_size))
        input_shape = (image_size,)

        # Build model
        autoencoder = Sequential()
        autoencoder.add(Dense(encoding_dim, input_shape=input_shape, activation='linear',
                              kernel_initializer='random_uniform', bias_initializer='zeros'))
        autoencoder.add(Dense(image_size, activation='linear',
                              kernel_initializer='random_uniform', bias_initializer='zeros'))

        # Build encoder part
        '''input_img = Input(shape=input_shape)
        encoder_layer = autoencoder.layers[0]
        encoder = Model(input_img, encoder_layer(input_img))'''

        # Train model
        autoencoder.compile(optimizer='adam', loss='mean_squared_error')
        autoencoder.fit(x_tr, x_tr,
                        epochs=epochs,
                        batch_size=32,
                        verbose=1,
                        validation_data=(x_va, x_va),
                        callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])

        # Save model
        autoencoder.save(model_path)

        # Visual aid
        plot_model(autoencoder, to_file=plot_path, show_layer_names=True, show_shapes=True)

    # Get intermediate output at encoded layer
    encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer(index=0).output)

    # See effect of encoded representations on output (eigenfaces?)
    decoder = Model(inputs=Input(shape=(encoding_dim,)), outputs=autoencoder.get_layer(index=1).output)
    plot_model(autoencoder, to_file=os.path.join(os.pardir, "models", "decoder.png"), show_layer_names=True, show_shapes=True)

    # Size of encoded representation
    compression_factor = np.round(float(image_size / encoding_dim), decimals=2)
    log("Compression factor is {}".format(compression_factor), lvl=3)

    return autoencoder, encoder, decoder, compression_factor

This is the full autoencoder function. The problem is in the decoder line. The error for this particular code is:

  

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("dense_1_input:0", shape=(?, 768), dtype=float32) at layer "dense_1_input". The following previous layers were accessed without issue: []
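For reference, the error arises because `autoencoder.get_layer(index=1).output` is still wired to the autoencoder's own input, while the new `Input(shape=(encoding_dim,))` tensor is never connected to anything. One common way to wire a standalone decoder is to create a fresh `Input` and call the trained decoder layer on it. This is a minimal sketch, not the full function above; `image_size = 768` is assumed from the shape in the error message:

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model, Sequential

image_size = 768    # flattened image dimension (assumed from the error message)
encoding_dim = 32

# Stand-in for the trained autoencoder from the question
autoencoder = Sequential([
    Dense(encoding_dim, input_shape=(image_size,), activation='linear'),
    Dense(image_size, activation='linear'),
])

# Create a fresh Input for the decoder, then CALL the trained layer on it.
# This reconnects the graph; reusing the layer's .output does not.
decoder_input = Input(shape=(encoding_dim,))
decoder = Model(inputs=decoder_input,
                outputs=autoencoder.layers[-1](decoder_input))

# Visualize "eigenfaces": feed one-hot vectors through the decoder
one_hot = np.eye(encoding_dim)
eigenfaces = decoder.predict(one_hot)   # shape: (encoding_dim, image_size)
```

Each row of `eigenfaces` is the decoder's reconstruction of a single one-hot code, which for a linear decoder is just that node's weight row plus the bias.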

0 Answers:

No answers