How to convert a tensor to a numpy array

Asked: 2016-12-08 09:11:08

Tags: python numpy tensorflow autoencoder

I am a beginner with TensorFlow. I have built a simple autoencoder, and I want to convert the final decoded tensor into a numpy array. I tried using .eval(), but I could not get it to work. How do I convert the tensor to numpy?

My input image is 512 * 512 * 1, and the data is in raw image format.

import tensorflow as tf
import numpy as np

# Input
image_size = 512
hidden = 256
input_image = np.fromfile('PATH', np.float32)

# Variables
x_placeholder = tf.placeholder("float", (image_size*image_size))

x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([hidden, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, hidden], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([hidden, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))

#model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec, encoded) + b_dec)

# Cost Function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)

# Train
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in range(10):
        loss_val, _ = sess.run([loss, train_step], feed_dict={x_placeholder: input_image})
        print(loss_val)
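
For reference, this is roughly the .eval() call I tried (a sketch, not my exact code); as far as I can tell it fails because no default session is active at that point:

# Attempted conversion (sketch) -- outside a `with tf.Session()` block
# this raises an error, apparently because Tensor.eval() needs a
# default session to run in.
decoded_np = decoded.eval(feed_dict={x_placeholder: input_image})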

1 Answer:

Answer 0 (score: 0)

You can add decoded to the list of tensors returned by sess.run(), as shown below. decoded_val will then be a numpy array, and you can reshape it to recover the original image shape.

Alternatively, you can call sess.run() outside the training loop to fetch the decoded image once training is done; a sketch of that approach follows the code below.

import tensorflow as tf
import numpy as np

tf.reset_default_graph()

# Load image (a zero array stands in for real image data here)
image_size = 16
k = 64  # hidden layer size
temp = np.zeros((image_size, image_size))


# Variables
x_placeholder = tf.placeholder("float", (image_size, image_size))

x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([k, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, k], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([k, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))

#model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec, encoded) + b_dec)


# Cost Function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)

# Train
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in range(10):
        loss_val, decoded_val, _ = sess.run([loss, decoded, train_step],
                                            feed_dict={x_placeholder: temp})
        print(loss_val)
    print('Done!')
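
To illustrate the second option, here is a minimal sketch (reusing the graph and names defined above) that fetches decoded once after the training loop and reshapes the flat output back into an image:

with tf.Session() as sess:
    sess.run(init)
    for _ in range(10):
        sess.run(train_step, feed_dict={x_placeholder: temp})
    # Fetch the decoded output once, after training finishes.
    decoded_val = sess.run(decoded, feed_dict={x_placeholder: temp})
    # decoded_val has shape (image_size * image_size, 1);
    # reshape it to recover the original image shape.
    decoded_image = decoded_val.reshape(image_size, image_size)
    print(decoded_image.shape)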