I want to use scipy.optimize.fmin_l_bfgs_b to find the minimum of a cost function. To do so, I first create an instance of one_batch (the code of one_batch is given below) to specify a batch of training examples, together with parameters that are not part of the loss function but are needed to compute the loss.
Since the module loss_calc is designed to return the loss and the loss prime (the gradient) at the same time, I face the problem of separating the loss function and its derivative for scipy.optimize.fmin_l_bfgs_b.
As you can see from the code of one_batch, given a batch of training examples, [loss, dloss/dParameters] is computed in parallel for every example. I don't want to run exactly the same computation twice for get_loss and get_loss_prime.
So how should I design the methods get_loss and get_loss_prime so that the parallel computation only needs to run once?
Here is the code of one_batch:
import multiprocessing

from calculator import loss_calc

class one_batch:

    def __init__(self,
                 auxiliary_model_parameters,
                 batch_examples):
        # auxiliary_model_parameters are parameters needed to specify
        # the loss calculator but are not included in the loss function
        self.auxiliary_model_parameters = auxiliary_model_parameters
        self.batch_examples = batch_examples

    def parallel(self, func, args):
        pool = multiprocessing.Pool(multiprocessing.cpu_count())
        result = pool.map(func, args)
        pool.close()
        pool.join()
        return result

    def one_example(self, example):
        temp_instance = loss_calc(self.auxiliary_model_parameters,
                                  self.model_vector)
        loss, dloss = temp_instance(example).calculate()
        return [loss, dloss]

    def main(self, model_vector):
        self.model_vector = model_vector
        # model_vector and auxiliary_model_parameters are necessary
        # for creating an instance of the loss function calculator
        result_list = self.parallel(self.one_example,
                                    self.batch_examples)
        # result_list is a list of sublists; each sublist is
        # [loss, dloss/dParameters] for one training example

    def get_loss(self):
        ?

    def get_loss_prime(self):
        ?
Answer 0 (score: 1)
You can have the objective function return both values directly and pass it as the error input to fmin_l_bfgs_b:
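A minimal sketch of this pattern (the quadratic objective below is an assumption chosen to be consistent with the converged result shown after it; the answer's original function was not preserved in this copy):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def error(x):
    # Return (value, gradient) as one tuple. With fprime=None and
    # approx_grad=False, fmin_l_bfgs_b expects exactly this signature.
    f = float((x[0] + 0.5) ** 2 + 0.5)
    grad = np.array([2.0 * (x[0] + 0.5)])
    return f, grad

x_opt, f_min, info = fmin_l_bfgs_b(error, x0=np.array([1.0]))
```

Because fprime is omitted and approx_grad is left at its default, fmin_l_bfgs_b treats the first element of the returned tuple as the loss and the second as its gradient, so each evaluation point is processed only once.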
(array([-0.5]), array([0.5]), {'grad': array([[-3.55271368e-15]]),
 'task': b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL',
 'funcalls': 4, 'nit': 2, 'warnflag': 0})
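Applied to the question's one_batch design, this means replacing the separate get_loss / get_loss_prime pair with a single method that runs the batch computation once and returns (total_loss, total_gradient). The sketch below assumes a simple least-squares stand-in for loss_calc (whose code is not given in the question) and uses a plain map; the multiprocessing pool from the question can be swapped in for the map call:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

class OneBatch:
    def __init__(self, batch_examples):
        self.batch_examples = batch_examples

    def one_example(self, example):
        # Stand-in for loss_calc: squared error of a linear model
        # (an assumption, not the original calculator).
        x, y = example
        residual = float(np.dot(self.model_vector, x) - y)
        loss = residual ** 2
        dloss = 2.0 * residual * x  # dloss/dParameters
        return loss, dloss

    def loss_and_grad(self, model_vector):
        # One pass over the batch computes loss and gradient together;
        # replace map() with a multiprocessing pool to parallelize.
        self.model_vector = np.asarray(model_vector)
        results = list(map(self.one_example, self.batch_examples))
        total_loss = sum(r[0] for r in results)
        total_grad = sum(r[1] for r in results)
        return total_loss, total_grad

# Usage: the bound method itself is the objective for fmin_l_bfgs_b.
examples = [(np.array([1.0, 0.0]), 2.0),
            (np.array([0.0, 1.0]), -1.0)]
batch = OneBatch(examples)
w_opt, f_min, info = fmin_l_bfgs_b(batch.loss_and_grad, x0=np.zeros(2))
```

Since the optimizer receives both the loss and its gradient from the same call, the per-example [loss, dloss/dParameters] list is computed exactly once per iterate, which is what the question asks for.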