I am trying to implement SGD based on the scaffold given in the first assignment of Stanford's CS224n. The implementation is in Python. The scaffold is as follows:
def load_saved_params():
    '''A helper function that loads previously saved parameters and resets
    iteration start.'''
    return st, params, state  # st = starting iteration

def save_params(iter, params):
    '''saves the parameters'''
Now the main function (I have marked the statements I am asking about with runs of hash symbols):
import random

# SAVE_PARAMS_EVERY is a module-level constant elsewhere in the scaffold;
# it controls how often parameters are checkpointed.

def sgd(f, x0, step, iterations, postprocessing=None, useSaved=False,
        PRINT_EVERY=10):
    """ Stochastic Gradient Descent

    Implement the stochastic gradient descent method in this function.

    Arguments:
    f -- the function to optimize, it should take a single
         argument and yield two outputs, a cost and the gradient
         with respect to the arguments
    x0 -- the initial point to start SGD from
    step -- the step size for SGD
    iterations -- total iterations to run SGD for
    postprocessing -- postprocessing function for the parameters
         if necessary. In the case of word2vec we will need to
         normalize the word vectors to have unit length.
    PRINT_EVERY -- specifies how many iterations to output loss

    Return:
    x -- the parameter value after SGD finishes
    """

    # Anneal learning rate every several iterations
    ANNEAL_EVERY = 20000

    if useSaved:
        start_iter, oldx, state = load_saved_params()
        if start_iter > 0:
            x0 = oldx
            # Replay the annealing schedule so a restarted run resumes
            # with the step size it would have had at start_iter.
            step *= 0.5 ** (start_iter / ANNEAL_EVERY)
        if state:
            random.setstate(state)
    else:
        start_iter = 0

    x = x0

    if not postprocessing:
        postprocessing = lambda x: x

    expcost = None  ######################################################

    for iter in xrange(start_iter + 1, iterations + 1):
        # Don't forget to apply the postprocessing after every iteration!
        # You might want to print the progress every few iterations.
        cost = None
        ### YOUR CODE HERE
        ### END YOUR CODE

        if iter % PRINT_EVERY == 0:
            if not expcost:
                expcost = cost
            else:
                expcost = .95 * expcost + .05 * cost  ########################
            print "iter %d: %f" % (iter, expcost)

        if iter % SAVE_PARAMS_EVERY == 0 and useSaved:
            save_params(iter, x)

        if iter % ANNEAL_EVERY == 0:
            step *= 0.5

    return x
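For context, the elided body is just the standard SGD update. A minimal sketch of what goes between the markers (my own code, assuming f returns a (cost, gradient) pair as the docstring describes):

    ### YOUR CODE HERE (sketch)
    cost, grad = f(x)        # evaluate cost and gradient at the current point
    x -= step * grad         # take a step against the gradient
    x = postprocessing(x)    # e.g. renormalize word vectors to unit length
    ### END YOUR CODE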
For my purposes I am not using expcost. But what is the purpose of expcost in this code? In what situations would it be used? And why is it used to modify the cost computed by the cost function?
Answer (score: 1):
If you look closely, expcost is only ever used for printing the cost. It is simply a way of smoothing the reported cost, because the per-batch cost can jump around significantly between mini-batches even while the model is steadily improving.
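To see the effect concretely, here is a small standalone sketch (not from the assignment; the simulated cost values are made up) showing how the same exponential moving average, expcost = 0.95 * expcost + 0.05 * cost, smooths a noisy per-batch cost:

import random

random.seed(0)

expcost = None
for it in range(1, 101):
    # A fake per-batch cost: trends downward but jumps around.
    cost = 1.0 / it + random.uniform(-0.05, 0.05)

    # Same update as the scaffold: keep 95% of the running average,
    # mix in 5% of the newest sample.
    if expcost is None:
        expcost = cost
    else:
        expcost = 0.95 * expcost + 0.05 * cost

    if it % 20 == 0:
        print("iter %d: raw cost %.4f, smoothed %.4f" % (it, cost, expcost))

The raw cost bounces on every batch while the smoothed value only drifts, which makes the printed progress much easier to read. The optimization itself is unaffected, since expcost never feeds back into the update of x.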