TensorFlow initialization problem when encapsulating

Asked: 2019-05-24 08:30:20

标签: tensorflow initialization encapsulation

I am encapsulating an autoencoder cost computation so that it can be used with a swarm algorithm. The goal is to get a cost summary of the autoencoder by passing in a few parameters, so the method creates the model, trains it, and returns its cost tensor:

import numpy as np
import tensorflow as tf

def getAECost(dfnormalized, adamParam, iterations):
    N_VISIBLE = 31
    N_HIDDEN = 20
    DEVICE = '/gpu:0'  # or '/cpu:0'

    ITERATIONS = 1 + iterations

    with tf.device(DEVICE):
        # create node for the input data: None rows, N_VISIBLE columns
        X = tf.placeholder("float", [None, N_VISIBLE], name='X')

        # create nodes for hidden variables
        W_init_max = 4 * np.sqrt(6. / (N_VISIBLE + N_HIDDEN))
        W_init = tf.random_uniform(shape=[N_VISIBLE, N_HIDDEN])#,
        #                            minval=-W_init_max,
        #                            maxval=W_init_max)
        # initialize our weights and biases
        # W: [N_VISIBLE, N_HIDDEN]
        W = tf.Variable(W_init, name='W')
        # initialize only the bias of the hidden layer
        b = tf.Variable(tf.zeros([N_HIDDEN]), name='b')
        # W_prime: [N_HIDDEN, N_VISIBLE]
        W_prime = tf.transpose(W)  # tied weights between encoder and decoder
        b_prime = tf.Variable(tf.zeros([N_VISIBLE]), name='b_prime')

        # model that takes our variables as parameters
        # (the behavior of the neural network)
        def model(X, W, b, W_prime, b_prime):
            tilde_X = X
            # encode the input
            Y = tf.nn.sigmoid(tf.matmul(tilde_X, W) + b)  # hidden state
            # reconstruct the input
            Z = tf.nn.sigmoid(tf.matmul(Y, W_prime) + b_prime)  # reconstructed input
            return Z

        # build model graph
        pred = model(X, W, b, W_prime, b_prime)

        # create cost function: sum of squared errors
        cost = tf.reduce_sum(tf.pow(X - pred, 2))  # minimize squared error
        # placeholder for the learning rate
        learning = tf.placeholder("float", name='learning')
        train_op = tf.train.AdamOptimizer(learning).minimize(cost)  # construct an optimizer

    with tf.Session() as sess:
        # you need to initialize all variables
        tf.global_variables_initializer()
        RATIO = adamParam

        for i in range(ITERATIONS):
            # prepare the input (minibatch) to feed the autoencoder
            input_ = dfnormalized
            # train autoencoder
            sess.run(train_op, feed_dict={X: input_, learning: RATIO})
            # on the last iteration, evaluate the final cost
            if i == ITERATIONS - 1:
                costAE = sess.run(cost, feed_dict={X: input_})
        return costAE

This worked a few days ago (maybe I had another session running in the background) and the method returned a float, but now it does not work and I get an initialization error:

FailedPreconditionError: Attempting to use uninitialized value W
     [[{{node W/read}}]]

at the training step:

sess.run(train_op, feed_dict={X: input_, learning: RATIO})

Any suggestions on how to fix this initialization problem, or on how to encapsulate a TensorFlow model and session?

Thanks

2 answers:

Answer 0 (score: 0)

You have to actually run the variable initializer: tf.global_variables_initializer() only returns the op to be executed, it does not run the initialization for you. So the fix for your problem is to replace the line

tf.global_variables_initializer()

with

sess.run(tf.global_variables_initializer())
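
In TensorFlow 1.x, building an op and executing it are separate steps; nothing happens until a session runs the op. A minimal sketch of the corrected session block, assuming the same graph as in the question:

with tf.Session() as sess:
    # actually execute the init op before any training step reads the variables
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={X: input_, learning: RATIO})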

Answer 1 (score: 0)

I tried what @Addy suggested and restructured the code to make it clearer and easier to follow, and now it works correctly:

class Model:
    N_VISIBLE = 31
    N_HIDDEN = 20
    DEVICE = '/gpu:0'  # or '/cpu:0'

    with tf.device(DEVICE):
        # create node for the input data: None rows, N_VISIBLE columns
        X = tf.placeholder("float", [None, N_VISIBLE], name='X')

        # create nodes for hidden variables
        W_init_max = 4 * np.sqrt(6. / (N_VISIBLE + N_HIDDEN))
        W_init = tf.random_uniform(shape=[N_VISIBLE, N_HIDDEN])#,
        #                            minval=-W_init_max,
        #                            maxval=W_init_max)
        # initialize our weights and biases
        # W: [N_VISIBLE, N_HIDDEN]
        W = tf.Variable(W_init, name='W')
        # initialize only the bias of the hidden layer
        b = tf.Variable(tf.zeros([N_HIDDEN]), name='b')
        # W_prime: [N_HIDDEN, N_VISIBLE]
        W_prime = tf.transpose(W)  # tied weights between encoder and decoder
        b_prime = tf.Variable(tf.zeros([N_VISIBLE]), name='b_prime')

        # model that takes our variables as parameters
        # (the behavior of the neural network)
        def model(X, W, b, W_prime, b_prime):
            tilde_X = X
            # encode the input
            Y = tf.nn.sigmoid(tf.matmul(tilde_X, W) + b)  # hidden state
            # reconstruct the input
            Z = tf.nn.sigmoid(tf.matmul(Y, W_prime) + b_prime)  # reconstructed input
            return Z

        # build model graph
        pred = model(X, W, b, W_prime, b_prime)

        # create cost function: sum of squared errors
        cost = tf.reduce_sum(tf.pow(X - pred, 2))  # minimize squared error
        # placeholder for the learning rate
        learning = tf.placeholder("float", name='learning')
        train_op = tf.train.AdamOptimizer(learning).minimize(cost)  # construct an optimizer
        sess = tf.InteractiveSession()
        sess.run(tf.global_variables_initializer())


    def train(self, data, adamParam, iterations):
        input_ = data
        RATIO = adamParam
        for i in range(iterations):
            # train autoencoder
            _ = self.sess.run(self.train_op, feed_dict={self.X: input_, self.learning: RATIO})
        # print("Model trained")


    def getAECost(self, data):
        return self.sess.run(self.cost, {self.X: data})

    def trainAndGetCost(self, dataTrain, dataCost, adamParam, iterations):
        self.train(dataTrain, adamParam, iterations)
        return self.getAECost(dataCost)
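
For completeness, a minimal usage sketch (the hyperparameter values are placeholders; dfnormalized stands in for the normalized data from the question). Since the graph and session live in the class body, they are built once when the class is defined and shared by all instances:

# hypothetical usage: build the graph once, then train and evaluate
model = Model()
finalCost = model.trainAndGetCost(dfnormalized, dfnormalized, adamParam=0.01, iterations=100)
print(finalCost)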