Adaptive learning rate with a fixed schedule

Time: 2017-04-20 18:24:27

Tags: python-3.x machine-learning tensorflow conv-neural-network

I am trying to implement a convolutional neural network with an adaptive learning rate and Adam gradient-based optimization. I have the following code:

import numpy as np
import tensorflow as tf

# learning rate schedule (one entry per epoch)
schedule = np.array([0.0005, 0.0005,
                     0.0002, 0.0002, 0.0002,
                     0.0001, 0.0001, 0.0001,
                     0.00005, 0.00005, 0.00005, 0.00005,
                     0.00001, 0.00001, 0.00001, 0.00001,
                     0.00001, 0.00001, 0.00001, 0.00001])

# define a placeholder for the variable learning rate
learning_rates = tf.placeholder(tf.float32, shape=None, name='learning_rate')

# training operation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                        labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rates)
training_operation = optimizer.minimize(loss_operation)

The code that runs the session:

.
.
.
_, loss = sess.run([training_operation, loss_operation],
                   feed_dict={x: batch_x, y: batch_y, learning_rate: schedule[i]})
.
.
.

Here `i` is the epoch counter, initialized to 0, so technically the first value of the schedule should be used.

Whenever I try to run this, I get the following error:

InvalidArgumentError: You must feed a value for placeholder tensor 'learning_rate_2' with dtype float
     [[Node: learning_rate_2 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Has anyone run into the same problem? I tried re-initializing the session and renaming the variable, but to no avail.
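A likely cause, judging from the snippets above: the placeholder is held in the Python variable `learning_rates`, while the feed_dict uses `learning_rate`, and the `_2` suffix in the error suggests the placeholder op was created more than once in the same default graph (for example by re-running a notebook cell), so the copy actually wired into the optimizer is never fed. A minimal sketch of the session call under that assumption, feeding the placeholder handle itself rather than a look-alike name:

# feed the placeholder object defined above ('learning_rates') directly,
# so the value reaches the exact op the optimizer reads from
_, loss = sess.run([training_operation, loss_operation],
                   feed_dict={x: batch_x, y: batch_y,
                              learning_rates: schedule[i]})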

1 Answer:

Answer 0 (score: 0):

I found an interim solution.

.
.
.
for i in range(EPOCHS):
    XX_train, yy_train = shuffle(X_train, y_train)

    # adaptive rate: rebuild the optimizer with this epoch's learning rate
    # (note that training_operation must also be rebuilt from this optimizer
    # for the new rate to actually take effect)
    optimizer = tf.train.AdamOptimizer(learning_rate=schedule[i])

    for offset in range(0, num_examples, BATCH_SIZE):
        end = offset + BATCH_SIZE
        batch_x, batch_y = XX_train[offset:end], yy_train[offset:end]
        _, loss = sess.run([training_operation, loss_operation],
                           feed_dict={x: batch_x, y: batch_y})
.
.
.

Not very elegant, but at least it works.
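For comparison, here is a minimal sketch of how the placeholder approach from the question can be made to work, assuming the rest of the graph (`x`, `y`, `logits`, `loss_operation`) and the data pipeline are rebuilt exactly as shown there. Resetting the default graph first ensures only one `learning_rate` placeholder exists, so the feed key and the optimizer refer to the same op:

import tensorflow as tf

tf.reset_default_graph()   # drop any stale copies of the placeholder
                           # ('learning_rate_1', 'learning_rate_2', ...)

# ... rebuild x, y, logits, loss_operation here, as in the question ...

# a single scalar placeholder for the per-epoch learning rate
learning_rate = tf.placeholder(tf.float32, shape=(), name='learning_rate')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_operation = optimizer.minimize(loss_operation)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(EPOCHS):
        XX_train, yy_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = XX_train[offset:end], yy_train[offset:end]
            _, loss = sess.run(
                [training_operation, loss_operation],
                feed_dict={x: batch_x, y: batch_y,
                           learning_rate: schedule[i]})

This builds the graph once and changes only the fed value each epoch, so no new optimizer variables are created during training.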