How to convert deep learning gradient descent equations into Python

Time: 2017-08-23 06:41:38

Tags: neural-network deep-learning backpropagation gradient-descent propagation

I have been following an online deep learning tutorial. It has a practical exercise on gradient descent and cost calculation, and once I convert it to Python code I keep failing to reproduce the given answers. I hope you can help me get the correct result.

See the following link for the formulas being used: Click here to see the equations used for the calculations

Below is the function that computes the gradients, the cost, and so on. The values have to be computed without using for loops, using only matrix operations.

import numpy as np

def propagate(w, b, X, Y):
    """
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size
         (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A =                                      # compute activation
    cost =                                   # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw =
    db =
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost

Here is the data used to test the function above:

w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))

Here is the expected output for the above:
Expected Output:
dw  [[ 0.99993216] [ 1.99980262]]
db  0.499935230625
cost    6.000064773192205

For the propagate function above I used the following substitutions, but the output is not what is expected. Please help me understand how to get the expected output.

A = sigmoid(X)
cost = -1*((np.sum(np.dot(Y,np.log(A))+np.dot((1-Y),(np.log(1-A))),axis=0))/m)
dw = (np.dot(X,((A-Y).T)))/m
db = np.sum((A-Y),axis=0)/m

Here is the sigmoid function used to compute the activation:

def sigmoid(z):
  """
  Compute the sigmoid of z

  Arguments:
  z -- A scalar or numpy array of any size.

  Return:
  s -- sigmoid(z)
  """

  ### START CODE HERE ### (≈ 1 line of code)
  s = 1 / (1+np.exp(-z))
  ### END CODE HERE ###

  return s

I hope someone can help me understand how to solve this, because without understanding it I cannot continue with the rest of the tutorials. Many thanks.

3 Answers:

Answer 0 (score: 1)

After going through the code and the notes a few more times, I was finally able to find the mistakes.

First, Z needs to be computed and then passed to the sigmoid function, rather than X.

The formula is Z = wᵀX + b, so in Python it is computed as follows:

Z=np.dot(w.T,X)+b
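
As a quick sanity check on the shapes (this just reuses the test data from the question; it is not part of the assignment code), w.T is (1, 2) and X is (2, 2), so Z comes out as a (1, 2) row vector, one entry per example:

import numpy as np

w = np.array([[1], [2]])        # shape (2, 1)
b = 2
X = np.array([[1, 2], [3, 4]])  # shape (2, 2): one column per example

Z = np.dot(w.T, X) + b          # shape (1, 2)
print(Z)                        # [[ 9 12]]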

Then A is computed by passing Z to the sigmoid function:

A = sigmoid(Z)

Then dw can be computed as follows:

dw=np.dot(X,(A-Y).T)/m

The remaining values, the cost and the derivative with respect to b, are computed as follows:

cost = -1*((np.sum((Y*np.log(A))+((1-Y)*(np.log(1-A))),axis=1))/m) 
db = np.sum((A-Y),axis=1)/m
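
Putting these pieces back into the assignment skeleton, the filled-in forward and backward steps would look roughly like this. This is only a minimal sketch based on the formulas above, and it assumes the sigmoid helper from the question is already defined; the plain np.sum used here for cost and db gives the same numbers as the axis=1 sums, just without the extra length-1 dimension:

import numpy as np

def propagate(w, b, X, Y):
    m = X.shape[1]

    # FORWARD PROPAGATION: linear step, activation, then the cost
    Z = np.dot(w.T, X) + b
    A = sigmoid(Z)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m

    # BACKWARD PROPAGATION: gradients of the cost w.r.t. w and b
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m

    cost = np.squeeze(cost)
    grads = {"dw": dw, "db": db}
    return grads, cost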

Answer 1 (score: 1)

You can compute A, cost, dw, and db as follows:

A = sigmoid(np.dot(w.T,X) + b)     
cost = -1 / m * np.sum(Y*np.log(A)+(1-Y)*np.log(1-A)) 

dw = 1/m * np.dot(X,(A-Y).T)
db = 1/m * np.sum(A-Y)

where sigmoid is:

def sigmoid(z):
    s = 1 / (1 + np.exp(-z))    
    return s
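
As a quick sanity check (a standalone trace, separate from the assignment skeleton), plugging the question's test data into these expressions reproduces the expected output listed in the question:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = np.array([[1], [2]]), 2
X, Y = np.array([[1, 2], [3, 4]]), np.array([[1, 0]])
m = X.shape[1]

A = sigmoid(np.dot(w.T, X) + b)   # approximately [[0.99987661 0.99999386]]
cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
dw = 1 / m * np.dot(X, (A - Y).T)
db = 1 / m * np.sum(A - Y)

print(dw)    # [[ 0.99993216] [ 1.99980262]]
print(db)    # 0.499935230625
print(cost)  # 6.000064773192205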

Answer 2 (score: 0)

def sigmoid(x):
      #You have it right
      return 1/(1 + np.exp(-x))

def derivSigmoid(x):
      return sigmoid(x) * (1 - sigmoid(x))

error = targetSample - output

#Make sure to keep the sigmoided value around.  For instance, an output that has already been sigmoided can be used to get the sigmoid derivative faster (output = sigmoid(x)):
dOutput = output * (1 - output)

It looks like you are already working on backprop. I thought I would help simplify some of the forward prop for you.
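
A small illustration of that last point (with an arbitrary value of x, purely for demonstration): once you have the sigmoided output from the forward pass, output * (1 - output) gives the same number as derivSigmoid(x) without calling np.exp again.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def derivSigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))

x = 0.5                          # arbitrary pre-activation value
output = sigmoid(x)              # kept around from the forward pass
dOutput = output * (1 - output)  # derivative reusing the cached output

print(np.isclose(dOutput, derivSigmoid(x)))  # True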