I need to build a linear regression model in Python without using scikit-learn. You can ignore the part that handles the input, since it depends on the file I was given. I've included my entire code in case I'm doing something wrong somewhere.
import pandas as pd
import numpy as np
import matplotlib.pyplot as mlt
from sklearn.model_selection import train_test_split  # sklearn.cross_validation has been removed in newer versions
data = pd.read_csv("housing.csv", delimiter = ' ', skipinitialspace = True, names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'])
df_x = data.drop('MEDV', axis = 1)
df_y = data['MEDV']
x_train, x_test, y_train, y_test = train_test_split(df_x.values, df_y.values, test_size = 0.2, random_state = 4)
theta = np.zeros((1, 13))
In the code above I have just set up a parameter array called theta.
def costfn(x, y, theta):
    # half of the mean squared error over the training set
    j = np.sum((x.dot(theta.T) - y) ** 2) / (2 * len(y))
    return j
def gradient(x, y, theta, alpha, iterations):
    cost_history = [0] * iterations
    for i in range(iterations):
        h = theta.dot(x.T) #hypothesis
        loss = h - y
        #print(loss)
        g = loss.dot(x) / len(y)
        #print(g)
        theta = theta - alpha * g
        cost_history[i] = costfn(x, y, theta)
        #print(theta)
    return theta, cost_history
theta, cost_history = gradient(x_train, y_train, theta, 0.001, 1000)
#print(theta)
All of the lines I have commented out print nan arrays of the appropriate size.
The logic I'm using is similar to the one on this blog. Tell me if I'm wrong.
Answer 0 (score: 1)
I think your code generally works. What you're observing is most likely related to your alpha setting. It seems to be too high, so theta diverges. At some point it reaches inf or -inf, and after that you get NaN in the next iteration. I ran into the same problem.
You can verify this with a simple setup:
# output theta in your function
def gradient(x, y, theta, alpha, iterations):
    cost_history = [0] * iterations
    for i in range(iterations):
        h = theta.dot(x.T) #hypothesis
        #print('h:', h)
        loss = h - y
        #print('loss:', loss)
        g = loss.dot(x) / len(y)
        #print('g:', g)
        theta = theta - alpha * g
        print('theta:', theta)
        cost_history[i] = costfn(x, y, theta)
        #print(theta)
    return theta, cost_history
# set up example data with a simple linear relationship
# where we can play around with different numbers of parameters
# conveniently
# with some noise
num_params= 2 # how many params do you want to estimate (up to 5)
# take some fixed params (we only take num_params of them)
real_params= [2.3, -0.1, 8.5, -1.8, 3.2]
# now generate the data for the number of parameters chosen
x_train= np.random.randint(-100, 100, size=(80, num_params))
x_noise= np.random.randint(-100, 100, size=(80, num_params)) * 0.001
y_train= (x_train + x_noise).dot(np.array(real_params[:num_params]))
theta= np.zeros(num_params)
Now try a high learning rate:
theta, cost_history = gradient(x_train, y_train, theta, 0.1, 1000)
You will most likely see the exponents of the theta values grow larger and larger, until they eventually reach inf or -inf. After that you get NaN values.
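If you want to see exactly when that happens, a small sketch like the one below (separate from the gradient function above, and assuming the toy x_train, y_train and num_params generated earlier) aborts as soon as theta stops being finite:
# hypothetical helper, only for illustration: stop once theta contains inf/-inf/NaN
def gradient_with_check(x, y, theta, alpha, iterations):
    for i in range(iterations):
        theta = theta - alpha * (theta.dot(x.T) - y).dot(x) / len(y)
        if not np.isfinite(theta).all():
            print('diverged at iteration', i, ':', theta)
            break
    return theta
gradient_with_check(x_train, y_train, np.zeros(num_params), 0.1, 1000)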
However, if you set it to a low value like 0.00001, you will see it converge:
theta: [ 0.07734451 -0.00357339]
theta: [ 0.15208803 -0.007018 ]
theta: [ 0.22431803 -0.01033852]
theta: [ 0.29411905 -0.01353942]
theta: [ 0.36157275 -0.01662507]
theta: [ 0.42675808 -0.01959962]
theta: [ 0.48975132 -0.02246712]
theta: [ 0.55062617 -0.02523144]
...
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
which is quite close to the real parameters 2.3 and -0.1.
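As a sanity check on the toy data (a sketch that uses numpy's built-in least-squares solver rather than the gradient-descent code above), you can compare against the direct solution:
# direct least-squares fit for comparison with the converged theta
theta_exact, *_ = np.linalg.lstsq(x_train, y_train, rcond=None)
print(theta_exact)  # should also be close to [2.3, -0.1]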
So you could experiment with the code to find a learning rate at which the values converge faster and the risk of divergence is lower. You could also implement something like early stopping, so that you stop iterating over the samples once the error no longer changes or the change drops below a threshold.
For example, you could modify the function as follows:
def gradient(
        x,
        y,
        theta=None,
        alpha=0.1,
        alpha_factor=0.1 ** (1/5),
        change_threshold=1e-10,
        max_iterations=500,
        verbose=False):
    cost_history = list()
    if theta is None:
        # theta was not passed explicitly
        # so initialize it
        theta= np.zeros(x.shape[1])
    last_loss_sum= float('inf')
    len_y= len(y)
    for i in range(1, max_iterations+1):
        h = theta.dot(x.T) #hypothesis
        loss = h - y
        loss_sum= np.sum(np.abs(loss))
        if last_loss_sum <= loss_sum:
            # the loss didn't decrease
            # so decrease alpha
            alpha= alpha * alpha_factor
        if verbose:
            print(f'pass: {i:4d} loss: {loss_sum:.8f} / alpha: {alpha}')
        theta_old= theta
        g= loss.dot(x) / len_y
        if loss_sum <= last_loss_sum and loss_sum < float('inf'):
            # only apply the change if the loss is
            # finite to avoid infinite entries in theta
            theta = theta - alpha * g
            theta_change= np.sum(np.abs(theta_old - theta))
            if theta_change < change_threshold:
                # Maybe this seems a bit awkward, but
                # the comparison with change_threshold
                # takes the relationship between theta and g
                # into account. Note that g will not have
                # an effect if theta is orders of magnitude
                # larger than g, even if g itself is large.
                # (I mean if you consider g and theta elementwise)
                cost_history.append(costfn(x, y, theta))
                break
        cost_history.append(costfn(x, y, theta))
        last_loss_sum= loss_sum
    return theta, cost_history
These changes address early stopping, automatically adjusting alpha, and keeping theta from taking on infinite values. In the minimal case you only need to pass X and y; all other parameters get default values. Set verbose=True if you want to see how the loss decreases in each iteration.
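For example, with the toy data generated above (and the costfn from the question), a minimal call could look like this; the plot assumes the mlt alias for matplotlib.pyplot from the question's imports:
theta, cost_history = gradient(x_train, y_train, verbose=True)
print(theta)
# optional: plot how the cost develops over the iterations
mlt.plot(cost_history)
mlt.xlabel('iteration')
mlt.ylabel('cost')
mlt.show()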