Training a Neural Network to Add

Date: 2010-11-17 13:22:27

Tags: neural-network

I need to train a network to multiply or add 2 inputs, but it does not seem to approximate well for all points after 20,000 iterations. More specifically, I train it on the whole dataset, and it approximates the last points well, but it does not seem to get any better on the first points. I normalize the data so that it lies between -0.8 and 0.8. The network itself consists of 2 inputs, 3 hidden neurons, and 1 output neuron. I also set the network's learning rate to 0.25 and use tanh(x) as the activation function.

It approximates the points it was trained on last in the dataset quite well, but it seems unable to approximate the first points. I do not know what keeps it from fitting well, whether it is the topology I am using or something else.

How many neurons in the hidden layer would be appropriate for this network?
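For reference, a minimal Keras-style sketch of the setup described above (2 inputs, 3 tanh hidden neurons, 1 output, learning rate 0.25). The toy data generation and the exact scaling into [-0.8, 0.8] are assumptions, since the original code is not shown:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Toy dataset: pairs of numbers and their sum (assumed; the real data is not shown).
rng = np.random.RandomState(0)
a = rng.uniform(0.0, 10.0, size=(1000, 2))
t = a.sum(axis=1, keepdims=True)

# Scale inputs and targets into [-0.8, 0.8], as described in the question.
x = 1.6 * (a - a.min()) / (a.max() - a.min()) - 0.8
y = 1.6 * (t - t.min()) / (t.max() - t.min()) - 0.8

# 2 inputs -> 3 tanh hidden neurons -> 1 output, trained with SGD at learning rate 0.25.
model = Sequential([
    Dense(3, activation='tanh', input_shape=(2,)),
    Dense(1, activation='tanh'),
])
model.compile(optimizer=SGD(lr=0.25), loss='mse')
model.fit(x, y, epochs=200, batch_size=32, verbose=0)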

5 Answers:

Answer 0 (score: 9)

A network consisting of a single neuron with weights = {1, 1}, bias = 0, and a linear activation function performs addition of the two input numbers.
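A quick numeric check of this claim (plain NumPy, not a trained network):

import numpy as np

w = np.array([1.0, 1.0])   # weights
bias = 0.0                 # bias

def add_neuron(a, b):
    # Linear activation: output = w . inputs + bias
    return np.dot(w, [a, b]) + bias

print(add_neuron(3.0, 4.0))   # 7.0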

Multiplication may be harder. Here are two approaches a network could use:

  1. Convert one of the numbers into digits (for example, binary) and perform the multiplication the way you did in elementary school: a*b = a*(b0*2^0 + b1*2^1 + ... + bk*2^k) = a*b0*2^0 + a*b1*2^1 + ... + a*bk*2^k. This approach is simple, but it requires a variable number of neurons, proportional to the length (logarithm) of the input b.
  2. Take the logarithms of the inputs, add them, and exponentiate the result: a*b = exp(ln(a) + ln(b)). This network can work on numbers of any length, as long as it can approximate the logarithm and exponential well enough. (Both identities are illustrated numerically after this list.)
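A plain-Python illustration of the two identities above (this only checks the arithmetic; it is not a trained network):

import numpy as np

a, b = 13.0, 6

# 1. Shift-and-add: a*b = sum over the binary digits b_k of b of a*b_k*2^k.
digits = [(b >> k) & 1 for k in range(b.bit_length())]
shift_and_add = sum(a * bk * 2**k for k, bk in enumerate(digits))

# 2. Log domain: a*b = exp(ln(a) + ln(b)), valid for positive inputs.
log_domain = np.exp(np.log(a) + np.log(b))

print(shift_and_add, log_domain)   # both are (approximately) 78.0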

Answer 1 (score: 7)

It may be too late, but a simple solution is to use an RNN (Recurrent Neural Network).

[Image: RNN summing two digits]

After converting your numbers into digits, your NN will take a couple of digits at a time from the digit sequence, from left to right.

The RNN has to loop one of its outputs back to itself so that it can automatically understand that there is a digit to carry (if the sum is 2, write 0 and carry 1).

To train it, you need to give it inputs consisting of two digits (one from the first number, one from the second number) and the desired output. The RNN will eventually find out how to do the addition.

Note that this RNN only needs to know the following 8 cases to learn how to add two numbers:

  • 1 + 1, 0 + 0, 1 + 0, 0 + 1 with carry
  • 1 + 1, 0 + 0, 1 + 0, 0 + 1 without carry
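For comparison, here is the digit-by-digit, carry-propagating procedure that the RNN is being asked to learn, written in plain Python with the digits given least-significant first:

def add_binary_digits(a_digits, b_digits):
    """Add two numbers given as equal-length lists of binary digits, least significant digit first."""
    carry = 0
    out = []
    for da, db in zip(a_digits, b_digits):
        s = da + db + carry
        out.append(s % 2)    # digit written at this position
        carry = s // 2       # carried into the next position
    if carry:
        out.append(carry)
    return out

# 3 (binary 011) + 6 (binary 110), least significant digit first:
print(add_binary_digits([1, 1, 0], [0, 1, 1]))   # [1, 0, 0, 1] -> 1001 -> 9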

Answer 2 (score: 2)

If you want to keep things neural (the links have weights, a neuron computes the weighted sum of its inputs and answers 0 or 1 depending on the sigmoid of that sum, and you train it with backpropagation of the gradient), then you should think of each neuron in the hidden layer as a classifier. Each one defines a line that separates the input space into classes: one class corresponds to the part where the neuron responds 1, the other to the part where it responds 0. The second neuron in the hidden layer defines another separation, and so on. The output neuron combines the outputs of the hidden layer by adapting its weights so that its output corresponds to the targets you presented during learning.

So a single neuron classifies the input space into 2 classes (which may correspond to an addition, depending on the learning database). Two neurons can define 4 classes, three neurons 8, and so on. Think of the outputs of the hidden neurons as powers of 2: h1*2^0 + h2*2^1 + ... + hn*2^(n-1), where hi is the output of hidden neuron i (see the small decoding sketch below). NB: you will need n output neurons. This answers the question about how many hidden neurons to use.

But such a NN does not compute the addition. It treats it as a classification problem based on what it has learned, so it can never produce the correct answer for values outside its learning base. During the learning phase it adjusts the weights so as to place the separators (lines in 2D) where they produce correct answers. If your inputs lie in [0, 10], it will learn to answer additions of values in [0, 10]^2 correctly, but it will never give a good answer for 12 + 11.
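A small sketch of the binary read-out described above, assuming n sigmoid outputs that are thresholded to bits and combined as powers of two (the function name and the 0.5 threshold are illustrative, not from the answer):

import numpy as np

def decode_binary_outputs(h):
    """h: list of n sigmoid outputs, one per neuron; threshold each one to a bit."""
    bits = (np.asarray(h) > 0.5).astype(int)
    return sum(int(b) * 2**i for i, b in enumerate(bits))

# Outputs [0.9, 0.1, 0.8] -> bits [1, 0, 1] -> 1*2^0 + 0*2^1 + 1*2^2 = 5
print(decode_binary_outputs([0.9, 0.1, 0.8]))   # 5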
If your last examples are learned well while the first ones are being forgotten, try lowering the learning rate: the weight modifications for the last examples (which depend on the gradient) can override those made for the first ones (if you are using stochastic backpropagation). Also make sure your learning base is balanced. You can also present the examples that are learned less well more often. And try many values of the learning rate until you find a good one.

Answer 3 (score: 1)

I tried to do this. I trained it on 2-, 3-, and 4-digit numbers and was able to reach around 97% accuracy. You can achieve it with one of these neural network architectures:

Sequence to Sequence Learning with Neural Networks

The following link provides the example program from the Keras examples:

https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py

Hope it helps.

Attaching the code here for reference.

from __future__ import print_function
from keras.models import Sequential
from keras import layers
import numpy as np
from six.moves import range


class CharacterTable(object):
    """Given a set of characters:
    + Encode them to a one hot integer representation
    + Decode the one hot integer representation to their character output
    + Decode a vector of probabilities to their character output
    """
    def __init__(self, chars):
        """Initialize character table.
        # Arguments
            chars: Characters that can appear in the input.
        """
        self.chars = sorted(set(chars))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))

    def encode(self, C, num_rows):
        """One hot encode given string C.
        # Arguments
            num_rows: Number of rows in the returned one hot encoding. This is
                used to keep the # of rows for each data the same.
        """
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x

    def decode(self, x, calc_argmax=True):
        if calc_argmax:
            x = x.argmax(axis=-1)
        return ''.join(self.indices_char[x] for x in x)


class colors:
    ok = '\033[92m'
    fail = '\033[91m'
    close = '\033[0m'

# Parameters for the model and dataset.
TRAINING_SIZE = 50000
DIGITS = 3
INVERT = True

# Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
# int is DIGITS.
MAXLEN = DIGITS + 1 + DIGITS

# All the numbers, plus sign and space for padding.
chars = '0123456789+ '
ctable = CharacterTable(chars)

questions = []
expected = []
seen = set()
print('Generating data...')
while len(questions) < TRAINING_SIZE:
    f = lambda: int(''.join(np.random.choice(list('0123456789'))
                    for i in range(np.random.randint(1, DIGITS + 1))))
    a, b = f(), f()
    # Skip any addition questions we've already seen
    # Also skip any such that x+Y == Y+x (hence the sorting).
    key = tuple(sorted((a, b)))
    if key in seen:
        continue
    seen.add(key)
    # Pad the data with spaces such that it is always MAXLEN.
    q = '{}+{}'.format(a, b)
    query = q + ' ' * (MAXLEN - len(q))
    ans = str(a + b)
    # Answers can be of maximum size DIGITS + 1.
    ans += ' ' * (DIGITS + 1 - len(ans))
    if INVERT:
        # Reverse the query, e.g., '12+345  ' becomes '  543+21'. (Note the
        # space used for padding.)
        query = query[::-1]
    questions.append(query)
    expected.append(ans)
print('Total addition questions:', len(questions))

print('Vectorization...')
x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool)
y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool)
for i, sentence in enumerate(questions):
    x[i] = ctable.encode(sentence, MAXLEN)
for i, sentence in enumerate(expected):
    y[i] = ctable.encode(sentence, DIGITS + 1)

# Shuffle (x, y) in unison as the later parts of x will almost all be larger
# digits.
indices = np.arange(len(y))
np.random.shuffle(indices)
x = x[indices]
y = y[indices]

# Explicitly set apart 10% for validation data that we never train over.
split_at = len(x) - len(x) // 10
(x_train, x_val) = x[:split_at], x[split_at:]
(y_train, y_val) = y[:split_at], y[split_at:]

print('Training Data:')
print(x_train.shape)
print(y_train.shape)

print('Validation Data:')
print(x_val.shape)
print(y_val.shape)

# Try replacing LSTM with GRU, or SimpleRNN.
RNN = layers.LSTM
HIDDEN_SIZE = 128
BATCH_SIZE = 128
LAYERS = 1

print('Build model...')
model = Sequential()
# "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE.
# Note: In a situation where your input sequences have a variable length,
# use input_shape=(None, num_feature).
model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
# As the decoder RNN's input, repeatedly provide with the last hidden state of
# RNN for each time step. Repeat 'DIGITS + 1' times as that's the maximum
# length of output, e.g., when DIGITS=3, max output is 999+999=1998.
model.add(layers.RepeatVector(DIGITS + 1))
# The decoder RNN could be multiple layers stacked or a single layer.
for _ in range(LAYERS):
    # By setting return_sequences to True, return not only the last output but
    # all the outputs so far in the form of (num_samples, timesteps,
    # output_dim). This is necessary as TimeDistributed in the below expects
    # the first dimension to be the timesteps.
    model.add(RNN(HIDDEN_SIZE, return_sequences=True))

# Apply a dense layer to every temporal slice of an input. For each step
# of the output sequence, decide which character should be chosen.
model.add(layers.TimeDistributed(layers.Dense(len(chars))))
model.add(layers.Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()

# Train the model each generation and show predictions against the validation
# dataset.
for iteration in range(1, 200):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(x_train, y_train,
              batch_size=BATCH_SIZE,
              epochs=1,
              validation_data=(x_val, y_val))
    # Select 10 samples from the validation set at random so we can visualize
    # errors.
    for i in range(10):
        ind = np.random.randint(0, len(x_val))
        rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
        preds = model.predict_classes(rowx, verbose=0)
        q = ctable.decode(rowx[0])
        correct = ctable.decode(rowy[0])
        guess = ctable.decode(preds[0], calc_argmax=False)
        print('Q', q[::-1] if INVERT else q, end=' ')
        print('T', correct, end=' ')
        if correct == guess:
            print(colors.ok + '☑' + colors.close, end=' ')
        else:
            print(colors.fail + '☒' + colors.close, end=' ')
        print(guess)

Answer 4 (score: -3)

Think about what happens if you replace the tanh(x) threshold function with a linear function of x (call it a*x) and treat a as the sole learning parameter in each neuron. That is effectively what your network will be optimizing towards; it is an approximation of the zero crossing of the tanh function.

Now, what happens when you stack layers of this linear kind of neuron? As the signal travels from input to output, you multiply by each neuron's output. You are trying to approximate addition with a set of multiplications. As they say, that does not compute.