Gradient Descent numpy Python - discrepancy between Excel and the computed values

Asked: 2018-04-06 09:57:24

Tags: python numpy machine-learning gradient-descent

I am writing this algorithm for my final-year project. I use gradient descent to find the best-fit line. I also tried solving it with multiple regression, and the values differ. The csv file is attached at https://drive.google.com/file/d/1-UaU34w3c5-VunYrVz9fD7vRb0c-XDqk/view?usp=sharing. The first 3 columns are the independent variables (x1, x2, x3) and the last column is the dependent variable (y). As a separate question, can you explain why the answer differs from the Excel values?

import numpy as np
import random
import pandas as pd

def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)        # predictions X @ theta
        loss = hypothesis - y                # residuals
        cost = np.sum(loss ** 2) / (2 * m)   # half mean squared error
        print("Iteration %d | Cost: %f" % (i, cost))
        gradient = np.dot(xTrans, loss) / m  # gradient of the cost w.r.t. theta
        theta = theta - alpha * gradient     # descent step
    return theta

df = pd.read_csv(r'C:\Users\WELCOME\Desktop\FinalYearPaper\ConferencePaper\NewTrain.csv', delimiter=",", header=None)

df.columns = ['x0','Speed','Feed','DOC','Roughness']

print(df)

y = np.array(df['Roughness'])
#x = np.array(d) 
x = np.array(df.drop(columns=['Roughness']))
#x[:,2:3] = 1.0
print (x)
print(y)

m, n = np.shape(x)
print(m,n)
numIterations= 50000
alpha = 0.000001
theta = np.ones(n)
theta = gradientDescent(x, y, theta, alpha, m, numIterations)
print(theta)
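One likely source of the discrepancy (offered as a sketch, not a verified diagnosis of this dataset): Excel's regression tool solves the least-squares problem in closed form, while gradient descent with a very small learning rate (alpha = 0.000001) and unscaled features can still be far from the minimum after 50000 iterations. The comparison can be reproduced on synthetic data, standing in for the attached CSV, by checking gradient descent against `np.linalg.lstsq`; the data and parameter values here are illustrative, not taken from the question:

```python
import numpy as np

def gradient_descent(x, y, theta, alpha, num_iterations):
    # Compact version of the poster's function, without per-iteration printing.
    m = len(y)
    for _ in range(num_iterations):
        loss = x.dot(theta) - y               # residuals
        theta = theta - alpha * x.T.dot(loss) / m
    return theta

# Synthetic stand-in for the CSV: an intercept column plus three features in [0, 1].
rng = np.random.default_rng(0)
x = np.column_stack([np.ones(50), rng.uniform(0.0, 1.0, (50, 3))])
true_theta = np.array([2.0, -1.0, 0.5, 3.0])
y = x.dot(true_theta) + rng.normal(0.0, 0.01, 50)

# With features on a comparable scale, a much larger alpha is stable and converges.
theta_gd = gradient_descent(x, y, np.ones(4), alpha=0.1, num_iterations=50000)

# Closed-form least squares, i.e. what Excel's regression computes.
theta_ls, *_ = np.linalg.lstsq(x, y, rcond=None)

print(theta_gd)
print(theta_ls)
```

On scaled features the two solutions agree closely; with raw features such as spindle speed in the hundreds, alpha must be tiny for stability, and 50000 iterations may not be enough to converge, so the two methods report different coefficients.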

0 Answers:

There are no answers yet.