I am trying to adjust the learning rate of my gradient descent algorithm, and I would like to be able to confirm whether my changes to `learning_rate` actually have an effect on my theano training function.
Sample code is below. Does this do what I want, and how can I confirm it? Based on this small test, it does not seem to work:
#set up the updates
for param in params:
    updates.append((param, param - learning_rate*T.grad(cost, param)))
#set up the training function
train = theano.function(inputs=[index], outputs=[cost], updates=updates, givens={x: self.X[index:index + mini_batch_size, :]})
#run through the minibatches
for epoch in range(n_epochs):
    for row in range(0, self.m, mini_batch_size):
        cost = train(row)
    #occasionally adjust the learning rate
    learning_rate = learning_rate/2.0
What is the correct way to solve this?
Answer 0 (score: 2)
The problem is that your code compiles the learning rate into the computation graph as a constant. If you want to change the rate, you need to represent it with a Theano variable in the computation graph and then supply its value when the function is executed. This can be done in two ways:

1. Treat the rate as an input value: pass it in each time the function is executed, and represent it in the computation graph as a scalar tensor.

2. Store the rate in a Theano shared variable and change the variable before executing the function.

The second approach comes in two variants. In the first, you adjust the rate value manually before execution. In the second, you specify a symbolic expression describing how the rate should be updated on each execution.

The sample code below demonstrates all three approaches, based on the edited part of the question.
import theano as th
import theano.tensor

# Original version (changing rate doesn't affect theano function output)
x = th.tensor.dscalar()
rate = 5.0
f = th.function(inputs=[x], outputs=2*x*rate)
print(f(10))
rate = 0.0
print(f(10))

# New version using an input value
x = th.tensor.dscalar()
rate = th.tensor.scalar()
f = th.function(inputs=[x, rate], outputs=2*x*rate)
print(f(10, 5.0))
print(f(10, 0.0))

# New version using a shared variable with manual update
x = th.tensor.dscalar()
rate = th.shared(5.0)
f = th.function(inputs=[x], outputs=2*x*rate)
print(f(10))
rate.set_value(0.0)
print(f(10))

# New version using a shared variable with automatic update
x = th.tensor.dscalar()
rate = th.shared(5.0)
updates = [(rate, rate / 2.0)]
f = th.function(inputs=[x], outputs=2*x*rate, updates=updates)
print(f(10))
print(f(10))
print(f(10))
print(f(10))
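For reference, here is a minimal sketch of how the first approach (passing the rate in as an input) could be wired into the training function from the question. It reuses the question's names (params, cost, index, x, self.X, self.m, mini_batch_size), so treat it as an illustration rather than drop-in code:

import theano
import theano.tensor as T

# the learning rate is now a symbolic scalar supplied on every call to train
learning_rate = T.scalar('learning_rate')
updates = [(param, param - learning_rate * T.grad(cost, param)) for param in params]
train = theano.function(inputs=[index, learning_rate], outputs=[cost], updates=updates,
                        givens={x: self.X[index:index + mini_batch_size, :]},
                        allow_input_downcast=True)  # accept a plain Python float for the rate

rate = 0.01
for epoch in range(n_epochs):
    for row in range(0, self.m, mini_batch_size):
        cost_value = train(row, rate)
    # halving the plain Python value is now enough; the new value is used on the next call
    rate = rate / 2.0

Because the rate is never baked into the compiled graph, whatever value you pass on a given call is the one the gradient step uses.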
Answer 1 (score: 1)
Inspired by @Daniel Renshaw's answer, you could try the following:
learning_rate = theano.shared(0.01)
for param in params:
    updates.append((param, param - learning_rate*T.grad(cost, param)))
#set up the training function
train = theano.function(inputs=[index], outputs=[cost], updates=updates, givens={x: self.X[index:index + mini_batch_size, :]})
#run through the minibatches
for epoch in range(n_epochs):
    for row in range(0, self.m, mini_batch_size):
        cost = train(row)
    #occasionally adjust the learning rate
    learning_rate.set_value(learning_rate.get_value() / 2)
Basically, you use a shared variable and update it manually whenever you want the rate to change.
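If you would rather not call set_value by hand at all, the automatic-update variant from the first answer can be folded into the same training function. Again this is only a sketch using the question's variable names, and note the trade-off: the decay then runs on every call to train, i.e. once per minibatch rather than once per epoch:

learning_rate = theano.shared(0.01)

updates = [(param, param - learning_rate * T.grad(cost, param)) for param in params]
# decay the rate inside the function's own updates, so it changes on every call
updates.append((learning_rate, learning_rate / 2.0))

train = theano.function(inputs=[index], outputs=[cost], updates=updates,
                        givens={x: self.X[index:index + mini_batch_size, :]})

The training loop then just calls train(row); there is no Python-side bookkeeping for the rate, at the cost of losing control over exactly when the decay happens.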