For optimizations that take a long time (tens of minutes, hours, days, or even months), it must be possible to restart the optimization from its history if the program crashes or the power fails, but the fmin algorithm (and several others) does not accept a history as input or provide one as output. How can you save and reuse the history of an fmin optimization so that none of your computational investment is lost?
I had this problem yesterday morning and could not find an answer online, so I put together my own solution. See below.
Answer 0: (score: 1)
Basically, the answer to live monitoring, keeping a history, and restoring fmin is to use a callable function that stores the objective function's inputs and outputs in a lookup table. Here is how it is done:
import numpy as np
import scipy as sp
import scipy.optimize
To store the history, create one global history vector for the inputs and another for the objective-function values at those inputs. I also initialize the starting input vector here:
x0 = np.array([1.05,0.95])
x_history = np.array([[1e8,1e8]])
fx_history = np.array([[1e8]])
I am optimizing the Rosenbrock function here, since it is a typical test problem for optimization algorithms:
def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
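As a side note, SciPy already ships this test function as scipy.optimize.rosen, so you can sanity-check the hand-written version against it. This is a quick sketch of that check; the sample point is my own choice, not from the original answer:

```python
import numpy as np
import scipy.optimize

def rosen(x):
    """The Rosenbrock function, as defined above."""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

# Compare against SciPy's built-in implementation at an arbitrary point.
x = np.array([1.05, 0.95])
assert abs(rosen(x) - scipy.optimize.rosen(x)) < 1e-12
```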
Create a wrapper for the function to be optimized, and write to the global variables whenever the optimizer requests a new input and the objective-function value is computed. On each call, search the history to see whether the objective value for the requested vector has already been computed, and reuse the previously computed value if it has. To simulate a "power failure", I created a variable named powerFailure that ends the optimization before convergence is reached. I then turn powerFailure off to watch the optimization finish.
def f(x):
    global firstPoint, iteration, x_history, fx_history
    iteration = iteration + 1
    powerFailure = True
    failedIteration = 10
    previousPoint = False
    eps = 1e-12
    if powerFailure == True:
        if iteration == failedIteration:
            raise Exception("Optimization Ended Early Due to Power Failure")
    # Look up the requested point in the history of evaluated points.
    for i in range(len(x_history)):
        if abs(x_history[i,0]-x[0])<eps and abs(x_history[i,1]-x[1])<eps:
            previousPoint = True
            firstPoint = False
            fx = fx_history[i,0]
            print("%d: f(%f,%f)=%f (using history)" % (iteration,x[0],x[1],fx))
    # Not found in the history: evaluate the objective and record the result.
    if previousPoint == False:
        fx = rosen(x)
        print("%d: f(%f,%f)=%f" % (iteration,x[0],x[1],fx))
        if firstPoint == True:
            x_history = np.atleast_2d([x])
            fx_history = np.atleast_2d([fx])
            firstPoint = False
        else:
            x_history = np.concatenate((x_history,np.atleast_2d(x)),axis=0)
            fx_history = np.concatenate((fx_history,np.atleast_2d(fx)),axis=0)
    return fx
Finally, we run the optimization.
firstPoint = True
iteration = 0
xopt, fopt, iter, funcalls, warnflag, allvecs = sp.optimize.fmin(f,x0,full_output=True,xtol=0.9,retall=True)
With the power failure, the optimization ends at iteration 9. After "turning the power back on", the function prints
1: f(1.050000,0.950000)=2.328125 (using history)
2: f(1.102500,0.950000)=7.059863 (using history)
3: f(1.050000,0.997500)=1.105000 (using history)
4: f(0.997500,0.997500)=0.000628 (using history)
5: f(0.945000,1.021250)=1.647190 (using history)
6: f(0.997500,1.045000)=0.249944 (using history)
7: f(0.945000,1.045000)=2.312665 (using history)
8: f(1.023750,1.009375)=0.150248 (using history)
9: f(1.023750,0.961875)=0.743420 (using history)
10: f(1.004063,1.024219)=0.025864
11: f(0.977813,1.012344)=0.316634
12: f(1.012266,1.010117)=0.021363
13: f(1.005703,0.983398)=0.078659
14: f(1.004473,1.014014)=0.002569
15: f(0.989707,1.001396)=0.047964
16: f(1.006626,1.007937)=0.002916
17: f(0.995347,1.003577)=0.016564
18: f(1.003806,1.006847)=0.000075
19: f(0.996833,0.990333)=0.001128
20: f(0.998743,0.996253)=0.000154
21: f(1.005049,1.005600)=0.002072
22: f(0.999387,0.999525)=0.000057
Optimization terminated successfully.
Current function value: 0.000057
Iterations: 11
Function evaluations: 22
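The global history above lives only in memory, so a real power failure would still lose it; to survive an actual crash, the history must be written to disk after each evaluation and reloaded before restarting. Here is a minimal sketch using np.save/np.load; the file names history_x.npy and history_fx.npy and the helper names are my own additions, not part of the original answer:

```python
import os
import numpy as np

# Hypothetical file names for the persisted history arrays.
X_FILE, FX_FILE = "history_x.npy", "history_fx.npy"

def save_history(x_history, fx_history):
    """Write both history arrays to disk; call at the end of each wrapper evaluation."""
    np.save(X_FILE, x_history)
    np.save(FX_FILE, fx_history)

def load_history():
    """Reload the saved history on restart, or start fresh if no files exist."""
    if os.path.exists(X_FILE) and os.path.exists(FX_FILE):
        return np.load(X_FILE), np.load(FX_FILE)
    return np.empty((0, 2)), np.empty((0, 1))
```

With these helpers, the wrapper f would call save_history(x_history, fx_history) just before returning, and the restart script would initialize the globals from load_history() before calling sp.optimize.fmin again.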