While implementing the gradient descent algorithm for linear regression, the predictions my algorithm makes, and the resulting regression line, come out wrong. Could someone look over my implementation and help me? Also, how do I know what values of the learning rate and the number of iterations to choose for a given regression problem?
import matplotlib.pyplot as plt

# X and Y are assumed to be NumPy arrays holding the training inputs and targets
theta0 = 0  # first parameter (intercept)
theta1 = 0  # second parameter (slope)
alpha = 0.001  # learning rate (denoted by alpha)
num_of_iterations = 100  # total number of iterations performed by Gradient Descent
m = float(len(X))  # total number of training examples

for i in range(num_of_iterations):
    y_predicted = theta0 + theta1 * X
    derivative_theta0 = (1/m) * sum(y_predicted - Y)  # partial derivative of the MSE cost w.r.t. theta0
    derivative_theta1 = (1/m) * sum(X * (y_predicted - Y))  # partial derivative w.r.t. theta1
    temp0 = theta0 - alpha * derivative_theta0
    temp1 = theta1 - alpha * derivative_theta1
    theta0 = temp0  # simultaneous update of both parameters
    theta1 = temp1

print(theta0, theta1)

y_predicted = theta0 + theta1 * X
plt.scatter(X, Y)
plt.plot(X, y_predicted, color='red')
plt.show()
Answer 0 (score: 0)
Your learning rate is too high. I got it to work by lowering the learning rate to alpha = 0.0001.
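A minimal sketch (not part of the original answer) of how one might pick both values by tracking the cost after each update; X and Y are assumed to be NumPy arrays with your training data, and the gradient_descent helper below is hypothetical. With a well-chosen alpha the cost should fall steadily; if it oscillates or grows, alpha is too high, and if it is still dropping sharply at the last iteration, more iterations are needed.

import numpy as np

def gradient_descent(X, Y, alpha, num_of_iterations):
    """Run gradient descent and return the parameters plus the cost history."""
    theta0, theta1 = 0.0, 0.0
    m = float(len(X))
    costs = []
    for _ in range(num_of_iterations):
        error = (theta0 + theta1 * X) - Y
        costs.append((1 / (2 * m)) * np.sum(error ** 2))  # MSE cost at this iteration
        theta0 -= alpha * (1 / m) * np.sum(error)
        theta1 -= alpha * (1 / m) * np.sum(X * error)
    return theta0, theta1, costs

# Try a few candidate learning rates and keep the one whose cost decreases smoothly.
for alpha in (0.01, 0.001, 0.0001):
    *_, costs = gradient_descent(X, Y, alpha, num_of_iterations=100)
    print(alpha, costs[0], costs[-1])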