Train / change the size of the weights within the current session

Time: 2018-06-15 11:50:05

Tags: python tensorflow size

I am trying to use TensorFlow 1.6 and Python 2.7 to improve my final degree project.

I am testing with this code:

import time
import numpy as np
import tensorflow as tf
from tqdm import tqdm

batch_size = 50
var_pred = 1
x_pred = 1

hidden_1 = 100
hidden_2 = var_pred

with graph.as_default():
   Train_data_ph = tf.placeholder(tf.float32, shape=(var_pred, batch_size ))
   Original_data_ph = tf.placeholder(tf.float32, shape=(var_pred, var_pred ))

   '''
   # Read Tensor and save to int
   W_1_hidden =  tf.Variable(initial_value = hidden_1, dtype = tf.int32, trainable=True)
   with tf.Session() as sessi:
      tf.global_variables_initializer().run()
      W_1_hidden_i = sessi.run(W_1_hidden)
      sessi.close()
   '''

   W_1 = tf.Variable(tf.random_normal([ batch_size, hidden_1 ], stddev = W_desv), dtype = tf.float32, validate_shape = False)
   b_1 = tf.Variable(tf.random_normal(shape = [ hidden_1 ], stddev = b_desv), dtype = tf.float32, validate_shape = False)

   W_2 = tf.Variable(tf.random_normal([ hidden_1, hidden_2 ], stddev = W_desv), dtype = tf.float32, validate_shape = False)
   b_2 = tf.Variable(tf.random_normal(shape = [ hidden_2 ], stddev = b_desv), dtype = tf.float32, validate_shape = False)

   Capa_1 = tf.matmul(Train_data_ph, W_1) + b_1
   Capa_2 = tf.matmul(Capa_1, W_2) + b_2

   prediction = Capa_2
   loss = (0.5)*(tf.norm(prediction-Original_data_ph))**2

   global_step_lr = tf.Variable(0, dtype = tf.float32)
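   # Staircase exponential decay of the learning rate, driven by global_step_lr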
   learning_rate = tf.train.exponential_decay(start_learning_rate, global_step_lr, stair_steps_learning_rate, multiply_learning_rate, staircase=True)
   optimizer = tf.train.GradientDescentOptimizer(learning_rate)
   gradients = optimizer.compute_gradients(loss)
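   # Clip each gradient to [min_gradient, max_gradient]; None gradients are passed through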
   def ClipIfNotNone(grad):
      if grad is None:
         return grad
      return tf.clip_by_value(grad, min_gradient, max_gradient)

   clipped_gradients = [(ClipIfNotNone(grad), var) for grad, var in gradients]
   train = optimizer.apply_gradients(clipped_gradients, global_step=global_step_lr)

with tf.Session(graph=graph) as sess:
   tf.global_variables_initializer().run()
   actual_loss = 0  
   for epoch in range(num_epochs):

      batch_data = np.zeros((var_pred, batch_size))
      original_data = np.zeros((var_pred, var_pred))

      for step in tqdm(range(num_steps)):
      #for step in tqdm(np.random.permutation(num_steps_b)):
          try:
              offset = (step*batch_size) % (total_len - batch_size)

              batch_data[0, :] = sin[offset:offset+batch_size]
              original_data[0,0] = sin_o[offset+batch_size+x_pred]

              feed_dict = {Train_data_ph : batch_data, Original_data_ph : original_data}
              _,loss_train, W_1_ = sess.run([train, loss, W_1], feed_dict = feed_dict)

              if(loss_train>0):
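                  # This only changes the Python int; W_1 and b_1 were already built with the old hidden_1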
                  hidden_1 += 10
              actual_loss = loss_train

              time.sleep(0.0001)

I want to change the size of the weight matrices on every loop iteration. A few things:

  1. I tried to do it by hand, but every variable is used as if the model were fixed in advance, so it cannot be updated or resized (at least I have not found any way to do it).

  2. After that, I tried to build a model with a trainable size, but the variable that holds that shape does not get trained, and I run into the same problem as in (1).

  3. Is it possible to train the size of a variable, or at least to change its size on every loop iteration? (A minimal sketch of what I mean is right after this list.)
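
To make (3) concrete, here is a minimal, self-contained sketch of the kind of resizing I mean (assuming TensorFlow 1.x reference variables; the names W, new_W_value and resize_W are only illustrative). A variable created with validate_shape=False can be reassigned to a value of a different size inside the session:

import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
   # Variable whose static shape is left unchecked
   W = tf.Variable(tf.random_normal([50, 100]), validate_shape=False)
   W_shape = tf.shape(W)
   # Placeholder for a replacement value whose size is only known at run time
   new_W_value = tf.placeholder(tf.float32, shape=None)
   # validate_shape=False on the assign as well, so the variable can change size
   resize_W = tf.assign(W, new_W_value, validate_shape=False)

with tf.Session(graph=graph) as sess:
   tf.global_variables_initializer().run()
   print(sess.run(W_shape))   # [ 50 100]
   sess.run(resize_W, feed_dict={new_W_value: np.zeros((50, 110), np.float32)})
   print(sess.run(W_shape))   # [ 50 110]

Even if something like this runs, the matmuls in my graph still need the layer sizes to agree with each other at run time, and the size itself is an integer, so I do not see how a gradient could ever flow into it.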

0 Answers:

No answers yet