Neural network fails to learn

Asked: 2016-02-14 20:50:03

Tags: python python-2.7 numpy neural-network softmax

I am trying to implement a neural network using Python and numpy. The problem is that when I try to train my network, the error gets stuck around 0.5 and it cannot learn any further. I have tried learning rates of 0.001 and 1. I think I am doing something wrong during backpropagation, but I cannot find anything wrong.

P.S. I ran into a lot of overflow problems, which is why I started using the np.clip() method.

Here is my backpropagation code:

# z2 is softmax output
def calculateBackpropagation(self, z1, z2, y):
    # gradient of cross-entropy w.r.t. the softmax input: probs - one_hot(y)
    # (note: delta3 is an alias for z2, so z2 is modified in place)
    delta3 = z2
    delta3[range(self.numOfSamples), y] -= 1
    dW2 = (np.transpose(z1)).dot(delta3)
    db2 = np.sum(delta3, axis=0, keepdims=True)
    # propagate back through the hidden layer, scaled by the activation derivative
    delta2 = delta3.dot(np.transpose(self.W2)) * ActivationFunction.DRELU(z1)
    dW1 = np.dot(np.transpose(self.train_data), delta2)
    db1 = np.sum(delta2, axis=0)

    # gradient-descent parameter update
    self.W1 += -self.alpha * dW1
    self.b1 += -self.alpha * db1
    self.W2 += -self.alpha * dW2
    self.b2 += -self.alpha * db2
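
One way to test whether the backpropagation itself is at fault is a finite-difference gradient check. A minimal sketch (loss_fn is an assumed scalar-valued function of the weight matrix W, not part of the class above; numpy is assumed imported as np, as in the rest of the code):

def gradient_check(loss_fn, W, analytic_grad, eps=1e-5):
    # compares the analytic gradient against a centered finite difference,
    # entry by entry, and returns the largest relative error seen
    max_rel_err = 0.0
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = W[idx]
        W[idx] = orig + eps
        loss_plus = loss_fn(W)
        W[idx] = orig - eps
        loss_minus = loss_fn(W)
        W[idx] = orig  # restore the entry
        numeric = (loss_plus - loss_minus) / (2.0 * eps)
        denom = max(abs(numeric), abs(analytic_grad[idx]), 1e-12)
        max_rel_err = max(max_rel_err, abs(numeric - analytic_grad[idx]) / denom)
        it.iternext()
    # values around 1e-7 or smaller suggest the analytic gradient is correct
    return max_rel_err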

# RELU can be approximated with the softplus function,
# so the derivative is taken here as g(x) = log(1+exp(x))
# Source: https://imiloainf.wordpress.com/2013/11/06/rectifier-nonlinearities/
@staticmethod
def DRELU(x):
    x = np.clip( x, -500, 500 )
    return np.log(1 + np.exp(x))
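
For reference, the derivative of the softplus g(x) = log(1 + exp(x)) from the linked post is the sigmoid, and the exact ReLU derivative is a 0/1 step. Sketches of both, for comparison (the method names are mine):

@staticmethod
def DSOFTPLUS(x):
    # derivative of log(1 + exp(x)) is the sigmoid 1 / (1 + exp(-x))
    x = np.clip( x, -500, 500 )
    return 1.0 / (1.0 + np.exp(-x))

@staticmethod
def DRELU_EXACT(x):
    # (sub)derivative of max(0, x): 1 where x > 0, else 0
    return (x > 0).astype(np.float64)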

def softmax(self, x):
    """Compute softmax values for each set of scores in x."""
    x = np.clip( x, -500, 500 )
    e = np.exp(x)
    return e / np.sum(e, axis=1, keepdims=True)
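
A common alternative to clipping inside softmax is to subtract each row's max before exponentiating; the result is identical because softmax is shift-invariant, but exp() can no longer overflow. A sketch:

def softmax_stable(self, x):
    # shift each row so its maximum is 0; exp() then stays in range
    shifted = x - np.max(x, axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=1, keepdims=True)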

def train(self):
    X = self.train_data
    Y = self.train_labels
    (row, col) = np.shape(self.train_data)
    for i in xrange(self.ephocs):
        [p1, z1, p2, z2] = self.feedForward(X)
        probs = z2
        self.backPropagate(X, Y, z1, probs)

        # algebraically this reduces to dividing the rate by (1 + rate_decay)
        # each epoch, independent of the current learning_rate value
        self.learning_rate = self.learning_rate * (self.learning_rate / (self.learning_rate + (self.learning_rate * self.rate_decay)))

def feedForward(self, X):
    p1 = X.dot(self.W1) + self.b1
    z1 = self.neuron(p1)
    p2 = z1.dot(self.W2) + self.b2
    # z2 = self.neuron(p2)
    z2 = self.softmax(p2)
    return [p1, z1, p2, z2]

def predict(self, X):
    [p1, z1, p2, z2] = self.feedForward(X)
    return np.argmax(z2, axis=1)

# Calculates the cross-entropy loss
# P.S. In some cases the true distribution is unknown, so the cross-entropy
# cannot be calculated directly; hence the cross-entropy estimation formula
# from Wikipedia is used: https://en.wikipedia.org/wiki/Cross_entropy
def calculateLoss(self, x):
    [p1, z1, p2, z2] = self.feedForward(x)
    softmax_probs = self.softmax(p2)  # z2 already holds this value
    # sums -log(probability assigned to the true class) over all samples
    return np.sum(-np.log(softmax_probs[range(self.numOfSamples), self.train_labels]))
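
The Wikipedia estimate referenced above is H(T, q) ≈ -(1/N) * Σ log q(x_i), i.e. an average, whereas the code returns the unnormalized sum. A sketch of the averaged version (assuming integer class labels, and reusing z2 since feedForward() already applies softmax):

def calculateAverageLoss(self, x):
    [p1, z1, p2, z2] = self.feedForward(x)
    # mean of -log(probability assigned to the true class)
    log_losses = -np.log(z2[range(self.numOfSamples), self.train_labels])
    return np.sum(log_losses) / float(self.numOfSamples)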

def neuron(self, p):
    return ActivationFunction.RELU(p)

def CreateRandomW(self, row, col):
    # uniform initialization in [-1, 1)
    return np.random.uniform(low=-1.0, high=1.0, size=(row, col))

def normalizeData(self, rawpoints, high=255.0, low=0.0):
    # maps 0..255 pixel values into roughly [-1, 1); high/low are unused
    return (rawpoints/128.0) - 1

@staticmethod
def RELU(x):
    # x = np.clip( x, -1, 1 )
    x = np.clip( x, -500, 500 )
    # floors the output at 0.001 rather than the usual 0
    return np.maximum(0.001, x)


1 Answer:

Answer 0 (score: 0)

Here are some of the problems I found:

  1. The array slicing softmax_probs[r, y] in calculateLoss() is not correct. This slicing produces a 10000x10000 matrix and slows the code down.
  2. Likewise, the slicing delta3[r, y] in backPropagate() is not correct.
  3. So far I am not sure whether the backprop is done correctly (I did not check it), but after clipping the gradients to (-5, 5) I was able to get just under 70% training and test accuracy within 100 iterations (see the clipping snippet after this list).
  4. I fixed problems 1 and 2 using the output of the following utility function (a one-of-K encoding of the labels):

    def one_to_two_encoding(y):
        v = np.array([[1, 0] if y[i] == 0 else [0, 1] for i in range(len(y))])
        return v
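
    The same idea generalizes past two classes; for K classes a one-of-K (one-hot) encoding can be built from an identity matrix. A sketch (num_classes is an assumed argument):

    def one_of_k_encoding(y, num_classes):
        # row i is all zeros except a 1 in column y[i]
        return np.eye(num_classes)[np.asarray(y, dtype=int)]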
    

    I applied gradient clipping in backPropagate() immediately after computing each gradient. For example:

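    A sketch of what that looks like, assuming the gradient names from calculateBackpropagation() in the question:

        dW2 = np.clip(dW2, -5, 5)  # clip immediately after computing dW2
        db2 = np.clip(db2, -5, 5)
        dW1 = np.clip(dW1, -5, 5)
        db1 = np.clip(db1, -5, 5)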