Normalization of the thetas after linear regression by gradient descent

Time: 2019-11-24 08:36:39

Tags: machine-learning math gradient-descent

I have the following dataset:

km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290

After normalizing it, I run gradient descent, which yields the following thetas:

θ0 = 0.9362124793084768
θ1 = -0.9953762249792935
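
For reference, here is a minimal sketch of such a pipeline, assuming min-max normalization of both columns (an assumption, though it is consistent with the numbers printed below) and plain batch gradient descent; the variable names and hyperparameters are illustrative, not the asker's actual code:

# Min-max normalize both columns, then fit y_n = t0 + t1 * x_n with batch gradient descent.
km = [240000, 139800, 150500, 185530, 176000, 114800, 166800, 89000,
      144500, 84000, 82029, 63060, 74000, 97500, 67000, 76025,
      48235, 93000, 60949, 65674, 54000, 68500, 22899, 61789]
price = [3650, 3800, 4400, 4450, 5250, 5350, 5800, 5990,
         5999, 6200, 6390, 6390, 6600, 6800, 6800, 6900,
         6900, 6990, 7490, 7555, 7990, 7990, 7990, 8290]

def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values], lo, hi

x, km_min, km_max = min_max(km)
y, price_min, price_max = min_max(price)

t0, t1, lr, m = 0.0, 0.0, 0.1, len(x)
for _ in range(10000):
    # Simultaneous update of both parameters using the mean gradient of the squared error.
    errors = [t0 + t1 * xi - yi for xi, yi in zip(x, y)]
    t0 -= lr * sum(errors) / m
    t1 -= lr * sum(e * xi for e, xi in zip(errors, x)) / m

print(t0, t1)  # should come out close to the thetas above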

If I feed it a normalized mileage, I can predict the price correctly and then denormalize the predicted price, i.e.:

Asked price for a mileage of 50000km:
normalized mileage: 0.12483129971764294
normalized price: (mx + c) = 0.8119583714362707
real price: 7417.486843464296
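
For clarity, those three numbers follow from the normalized thetas like this (a small sketch; the theta and min/max variable names are mine):

theta0_n, theta1_n = 0.9362124793084768, -0.9953762249792935
km_min, km_max = 22899, 240000        # smallest and largest mileage in the dataset
price_min, price_max = 3650, 8290     # smallest and largest price in the dataset

mileage = 50000
x_n = (mileage - km_min) / (km_max - km_min)             # normalized mileage: 0.12483...
y_n = theta0_n + theta1_n * x_n                          # normalized price: 0.81195...
real_price = y_n * (price_max - price_min) + price_min   # real price: 7417.48...
print(x_n, y_n, real_price)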

What I am looking for is a way to convert the thetas back to their unnormalized values, but whatever formula I try, I cannot get it right. Is there a way to do this?

1 Answer:

Answer 0 (score: 0)

As usual, it took asking a question on Stack Overflow for me to work it out myself a little later...

It is just a system of two equations in two unknowns to solve, as you can see here (handwritten, sorry): https://ibb.co/178qWcQ
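
The handwritten derivation is only in the linked image, but the same result can be reached in closed form by substituting the min-max definitions into the normalized hypothesis; here is a sketch under that assumption (my notation, not the original code):

# Normalized model: y_n = t0n + t1n * x_n, where
#   x_n = (x - x_min) / (x_max - x_min)  and  y = y_n * (y_max - y_min) + y_min.
# Expanding y as a function of x and collecting terms gives the unnormalized thetas:
#   theta1 = t1n * (y_max - y_min) / (x_max - x_min)
#   theta0 = t0n * (y_max - y_min) + y_min - theta1 * x_min
t0n, t1n = 0.9362124793084768, -0.9953762249792935
x_min, x_max = 22899, 240000
y_min, y_max = 3650, 8290

theta1 = t1n * (y_max - y_min) / (x_max - x_min)
theta0 = t0n * (y_max - y_min) + y_min - theta1 * x_min
print(theta0, theta1)  # roughly 8481.17 and -0.02127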

Here is the Python code that performs the computation:

# Take two training points: their raw mileages and their normalized counterparts.
x0, x1 = self.training_set[0][0], self.training_set[1][0]
x0n, x1n = self.normalized_training_set[0][0], self.normalized_training_set[1][0]
# Predicted prices (still in normalized units) for those two points.
y0n, y1n = self.hypothesis(x0n), self.hypothesis(x1n)
p_diff = self.max_price - self.min_price
# Solve the two-equation system theta0 + theta1 * xi = denormalized prediction at xi.
theta0 = (x1 / (x1 - x0)) * (y0n * p_diff + self.min_price - (x0 / x1 * (y1n * p_diff + self.min_price)))
# Use the denormalized prediction at x0 (not the raw label) so the slope matches the fitted line.
y0 = y0n * p_diff + self.min_price
theta1 = (y0 - theta0) / x0
print(theta0, theta1)  # RESULT: 8481.172796984529 and roughly -0.02127
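
As a quick sanity check (a sketch with values hard-coded from above, not part of the original answer), the recovered line should reproduce the 50000 km prediction from the question:

theta0 = 8481.172796984529
theta1 = -0.9953762249792935 * (8290 - 3650) / (240000 - 22899)  # about -0.02127

print(theta0 + theta1 * 50000)  # about 7417.49, matching the denormalized prediction above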