I wrote some beginner code that computes the coefficients of a simple linear model with the normal equation.
# Modules
import numpy as np
# Loading data set
X, y = np.loadtxt('ex1data3.txt', delimiter=',', unpack=True)
data = np.genfromtxt('ex1data3.txt', delimiter=',')
def normalEquation(X, y):
    m = int(np.size(data[:, 1]))
    # This is the parameter (2 x 1) vector that will
    # contain my minimized values
    theta = []
    # I create a bias_vector to add to my newly created X vector
    bias_vector = np.ones((m, 1))
    # I need to reshape my original X(m,) vector so that I can
    # manipulate it with my bias_vector; they need to share the same
    # dimensions.
    X = np.reshape(X, (m, 1))
    # I combine these two vectors together to get a (m, 2) matrix
    X = np.append(bias_vector, X, axis=1)
    # Normal Equation:
    # theta = inv(X^T * X) * X^T * y
    # For convenience I create a new, transposed X matrix
    X_transpose = np.transpose(X)
    # Calculating theta
    theta = np.linalg.inv(X_transpose.dot(X))
    theta = theta.dot(X_transpose)
    theta = theta.dot(y)
    return theta
p = normalEquation(X, y)
print(p)
Using the small data set here:
http://www.lauradhamilton.com/tutorial-linear-regression-with-octave
the code above gives me the coefficients [-0.34390603; 0.2124426] instead of [24.9660; 3.3058]. Can anyone help clarify where I am going wrong?
Answer 0 (score: 3)
You can implement the normal equation as follows:
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
print(y_predict)
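As a quick cross-check (my own addition, not part of the original answer): scikit-learn's LinearRegression solves the same least-squares problem, so its fitted intercept and slope should match theta_best up to floating-point noise:
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
print(lin_reg.intercept_, lin_reg.coef_)  # close to 4 and 3 for this synthetic data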
Answer 1 (score: 2)
Assume X is an m x (n+1) matrix in which x_0 is always 1, and y is an m-dimensional vector.
import numpy as np
step1 = np.dot(X.T, X)         # X^T * X
step2 = np.linalg.pinv(step1)  # pseudo-inverse also handles a singular X^T * X
step3 = np.dot(step2, X.T)     # pinv(X^T * X) * X^T
theta = np.dot(step3, y)       # if y is m x 1. If 1 x m, then use y.T
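A small usage sketch of these steps on made-up data (the shapes and numbers below are my own illustration, not from the answer):
import numpy as np
m = 5
X = np.c_[np.ones(m), np.arange(m)]  # m x (n+1) with x_0 = 1, here n = 1
y = 2 + 3 * np.arange(m)             # m-dimensional target vector
step1 = np.dot(X.T, X)
step2 = np.linalg.pinv(step1)
step3 = np.dot(step2, X.T)
theta = np.dot(step3, y)
print(theta)  # approximately [2. 3.]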
Answer 2 (score: 1)
Your implementation is correct. You just swapped X and y (look carefully at how they define x and y), which is why you get a different result.
Calling normalEquation(y, X) gives [24.96601443 3.30576144].
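With the question's script already run, a minimal sketch of that fix (assuming ex1data3.txt is the tutorial's data set, loaded exactly as in the question):
theta = normalEquation(y, X)  # arguments swapped relative to the question's call
print(theta)                  # [24.96601443  3.30576144]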
Answer 3 (score: 0)
Here is the normal equation in one line:
theta = np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
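As a side note (my own addition, not part of the answer): explicitly inverting X^T * X can be numerically unstable, and with the same X and y the following alternatives compute the same theta more robustly:
theta = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
# or let NumPy solve the least-squares problem directly:
theta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)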