Implementing retraining with NEAT-Python after every prediction

Date: 2018-12-29 15:07:21

Tags: python neural-network neat

I want to understand how to set up NEAT-Python so that the network is retrained after every prediction, with the training set growing by one example after each prediction.

I am trying to set up NEAT-Python (via the config file) so that it retrains after every prediction on the test/unseen set. For example, as I understand it, the "evolve-minimal" XOR example can be adapted so that it trains on part of the data (until a certain fitness level is reached and a best genome is obtained) and then makes predictions on the rest of the data, which acts as the test set. See the code below for what I mean:

from __future__ import print_function
import neat
import visualize

# 3-input training set (inputs and expected outputs)
xor_inputs = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
xor_outputs = [(1.0,), (1.0,), (1.0,), (0.0,), (0.0,)]

# Test set
xor_inputs2 = [(1.0, 0.0, 1.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
xor_outputs2 = [(1.0,), (0.0,), (0.0,)]


def eval_genomes(genomes, config):
    for genome_id, genome in genomes:
        genome.fitness = 5
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo[0]) ** 2


# Load configuration.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                 neat.DefaultSpeciesSet, neat.DefaultStagnation,
                 'config-feedforward')

# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)

# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)

# Run until a solution is found.
winner = p.run(eval_genomes) 

# Display the winning genome.
print('\nBest genome:\n{!s}'.format(winner))

# Show output of the most fit genome against training data.
print('\nOutput:')
winner_net = neat.nn.FeedForwardNetwork.create(winner, config)
count = 0

# To make predictions using the best genome
for xi, xo in zip(xor_inputs2, xor_outputs2):
    prediction = winner_net.activate(xi)
    print("  input {!r}, expected output {!r}, got {!r}".format(
        xi, xo[0], round(prediction[0])))
    # To get prediction accuracy
    if int(xo[0]) == int(round(prediction[0])):
        count = count + 1
accuracy = count / len(xor_outputs2)
print('\nAccuracy: ', accuracy)

node_names = {-1: 'A', -2: 'B', 0: 'A XOR B'}
visualize.draw_net(config, winner, True, node_names=node_names)
visualize.plot_stats(stats, ylog=False, view=True)
visualize.plot_species(stats, view=True)

The config file is:

#--- parameters for the XOR-2 experiment ---#

[NEAT]
fitness_criterion     = max
fitness_threshold     = 4.8
pop_size              = 150
reset_on_extinction   = True

[DefaultGenome]
# node activation options
activation_default      = sigmoid
activation_mutate_rate  = 0.0
activation_options      = sigmoid

# node aggregation options
aggregation_default     = sum
aggregation_mutate_rate = 0.0
aggregation_options     = sum

# node bias options
bias_init_mean          = 0.0
bias_init_stdev         = 1.0
bias_max_value          = 30.0
bias_min_value          = -30.0
bias_mutate_power       = 0.5
bias_mutate_rate        = 0.7
bias_replace_rate       = 0.1

# genome compatibility options
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient   = 0.5

# connection add/remove rates
conn_add_prob           = 0.5
conn_delete_prob        = 0.5

# connection enable options
enabled_default         = True
enabled_mutate_rate     = 0.01

feed_forward            = True
initial_connection      = full_direct

# node add/remove rates
node_add_prob           = 0.2
node_delete_prob        = 0.2

# network parameters
num_hidden              = 0
num_inputs              = 3
num_outputs             = 1

# node response options
response_init_mean      = 1.0
response_init_stdev     = 0.0
response_max_value      = 30.0
response_min_value      = -30.0
response_mutate_power   = 0.0
response_mutate_rate    = 0.0
response_replace_rate   = 0.0

# connection weight options
weight_init_mean        = 0.0
weight_init_stdev       = 1.0
weight_max_value        = 30
weight_min_value        = -30
weight_mutate_power     = 0.5
weight_mutate_rate      = 0.8
weight_replace_rate     = 0.1

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 20
species_elitism      = 2

[DefaultReproduction]
elitism            = 2
survival_threshold = 0.2

However, the problem is that no retraining takes place after each prediction on the test set. As I understand it, the parameters in the config file are static and cannot be changed once the training process has started. That becomes a problem if your fitness is based on the number of correct classifications of the (growing) training set, which is what I am trying to achieve. So I would like to know whether a model that retrains after each prediction can be achieved by adjusting the settings in the config file, or whether there is more to it.
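To make it concrete, the classification-based fitness I have in mind looks roughly like the sketch below (just a sketch; the fitness_threshold in the config would then have to be a correct-count target such as len(xor_inputs) rather than 4.8):

def eval_genomes(genomes, config):
    # Fitness = number of training examples the network classifies correctly.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = 0.0
        for xi, xo in zip(xor_inputs, xor_outputs):
            output = net.activate(xi)
            if int(round(output[0])) == int(xo[0]):
                genome.fitness += 1.0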

1 Answer:

Answer 0 (score: 1)

If I understand your question correctly, this cannot be done simply through the config_file.

The parameters defined in the config_file only control what happens as the model works through the data directly; the resulting network then makes predictions without any retraining.

If you want the model to be retrained after each prediction, you will have to implement that yourself in eval_genomes and/or around the run function. You could add another for loop outside the loop that iterates over each genome, which takes each output and retrains the model. However, this will likely increase computation time significantly, because instead of simply getting an output you are running another set of training generations for every output.
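For illustration, here is a minimal sketch of that idea, assuming the global xor_inputs/xor_outputs lists from your question are the ones eval_genomes reads, and that the fitness_threshold in the config stays reachable as the training set grows (otherwise p.run would not terminate):

# Evolve an initial winner on the starting training set.
winner = p.run(eval_genomes)

for xi, xo in zip(xor_inputs2, xor_outputs2):
    # Predict with the network evolved on the data seen so far.
    winner_net = neat.nn.FeedForwardNetwork.create(winner, config)
    prediction = winner_net.activate(xi)
    print("input {!r}, expected {!r}, got {!r}".format(xi, xo[0], round(prediction[0])))

    # Grow the training set with the example that was just predicted ...
    xor_inputs.append(xi)
    xor_outputs.append(xo)

    # ... and retrain on the enlarged set before the next prediction.
    winner = p.run(eval_genomes)

Reusing the same Population object continues evolution from the genomes already found; creating a fresh neat.Population(config) at each step would instead restart evolution from scratch and cost even more generations.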