This is a simple, maybe even stupid, question, but here we go:
As shown here (https://esa.github.io/pagmo2/docs/python/algorithms/py_algorithms.html),
if you evolve a single population you can get the log of the algo.evolve() call like this:
from pygmo import *
algo = algorithm(de1220(gen = 500))
algo.set_verbosity(100)
prob = problem(rosenbrock(10))
pop = population(prob, 20)
pop = algo.evolve(pop)
uda = algo.extract(de1220)
uda.get_log()
[(1, 20, 285652.7928977573, 0.551350234239449, 0.4415510963067054, 16, 43.97185788345982, 2023791.5123259544), ...
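For the single population that log is easy to keep: for instance it can be turned into a table and written to a file by hand (just a rough sketch, assuming pandas is available; the column labels are only my reading of the de1220 log entries and the file name is made up):
import pandas as pd
# turn the list of log tuples into a dataframe and write it out
log_df = pd.DataFrame(uda.get_log(),
                      columns = ["Gen", "Fevals", "Best", "F", "CR", "Variant", "dx", "df"])
log_df.to_csv("single_pop_log.csv", index = False)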
If you take advantage of pygmo's ability to parallelize the evolution over an archipelago, you would do something like this:
archi = archipelago(n = 8, algo = algo, prob = rosenbrock(5), pop_size = 10, seed = 32)
archi.evolve()
However, an archipelago has neither an extract() method (as algorithm does) nor a get_algorithm() one (as island does), nor anything else obvious enough in the documentation (at least to me) that works...
archi.extract(de1220)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'archipelago' object has no attribute 'extract'
archi.get_algorithm()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'archipelago' object has no attribute 'get_algorithm'
So, how do I get the output of algo.set_verbosity(100) into a file, rather than just on stdout?
And, once there, is there a way to have it organized per island, rather than interleaved the way it is printed on stdout?
(I understand that each island prints its report whenever it hits the verbosity trigger while running, but if everything is stored it should be possible to sort it.)
Thanks!
Answer 0 (score: 0)
Pretty poorly documented; I wasted quite some time on this...
I'm not sure it is the best/correct/fastest way, but it works. Here it is in code:
import numpy as np
import pandas as pd
from pygmo import *
# set up a dummy archipelago
algo = algorithm(de1220(gen = 50))
algo.set_verbosity(25)
prob = problem(rosenbrock(10))
archi = archipelago(n = 5, algo = algo, prob = prob, pop_size = 10)
# evolve the archipelago (asynchronously) and wait for it to finish
archi.evolve()
archi.wait()
# set up the collecting dataframe
tot_df = pd.DataFrame(columns = ["Gen", "F.evals.", "Best fit", "mutation", "crossing over", "Variant", "dx", "df", "island_#"])
# here's the 'magic'
for i, island in enumerate(archi):  # iterate through the islands
    a = island.get_algorithm()      # get the algorithm from the island
    uda = a.extract(de1220)         # extract the user-defined algorithm from the algorithm object
    log = uda.get_log()             # get the log; it comes as a list of tuples
    # reshape the log into a dataframe
    df = pd.DataFrame(np.asarray(log), columns = ["Gen", "F.evals.", "Best fit", "mutation", "crossing over", "Variant", "dx", "df"])
    df["island_#"] = i              # add the island ID
    tot_df = pd.concat([tot_df, df], axis = 'index', ignore_index = True)  # merge with the total df
tot_df.head(10)
    Gen  F.evals.       Best fit  mutation  crossing over  Variant         dx            df  island_#
0   1.0      10.0  345333.467771  0.789858       0.816435     13.0  39.714168  1.912363e+06         0
1  26.0     260.0    1999.841182  0.164231       0.212773     13.0  17.472183  8.641547e+03         0
2   1.0      10.0   78311.447221  0.789858       0.816435     13.0  52.486000  1.148887e+06         1
3  26.0     260.0    5487.221927  0.265201       0.293801     13.0  18.667831  4.478749e+04         1
4   1.0      10.0  232299.337923  0.789858       0.816435     13.0  82.268328  1.952969e+06         2
5  26.0     260.0    1428.355411  0.125830       0.849527     13.0  23.221746  3.955732e+04         2
6   1.0      10.0   52560.966403  0.789858       0.816435     13.0  21.125350  1.345214e+06         3
7  26.0     260.0     368.076713  0.379755       0.896231      3.0  19.487683  4.682571e+04         3
8   1.0      10.0  147318.705997  0.821884       0.527160      2.0  42.190744  1.114900e+06         4
9  26.0     260.0    1869.989020  0.326712       0.924639     16.0  19.501904  5.839716e+04         4
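From here the remaining two points of the question are easy: tot_df can be reordered per island and written to a file instead of relying on stdout (again only a sketch; the file name archi_logs.csv is made up):
# group the entries per island instead of the interleaved stdout order
tot_df = tot_df.sort_values(["island_#", "Gen"]).reset_index(drop = True)
# dump everything to a file
tot_df.to_csv("archi_logs.csv", index = False)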
I hope this saves somebody some time while waiting for the documentation to be updated...