I am trying to write two scripts to demonstrate locally weighted linear regression. In the first script I solve the matrix problem with NumPy, as shown below:
import numpy as np
from numpy import *
import matplotlib.pyplot as plt

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])   # feature row [1, x]
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        # Gaussian kernel weights centered at point i
        diffMat = xArr[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    # Solve the weighted normal equations (X^T W X) w = X^T W y
    xTx = xMat.T * (weights * xMat)
    if linalg.det(xTx) == 0.0:
        print("This matrix is singular, cannot do inverse")
    ws = xTx.I * (xMat.T * (weights * yMat))
    yHat[i] = xArr[i] * ws
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
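For reference, my understanding of what each pass of the outer loop computes is the standard locally weighted least-squares closed form (notation mine):

$$
W_i[j,j] = \exp\!\left(-\frac{(x_i - x_j)^2}{2k^2}\right), \qquad
\hat{w}(x_i) = (X^\top W_i X)^{-1} X^\top W_i y, \qquad
\hat{y}_i = [1,\ x_i]\,\hat{w}(x_i)
$$

so every query point gets its own weight matrix and its own fitted intercept and slope.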
In the second script I solve the same matrix problem with TensorFlow. That script looks like this:
import numpy as np
from numpy import *
import tensorflow as tf
import matplotlib.pyplot as plt

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
sess = tf.Session()
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])   # feature row [1, x]
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
A_tensor = tf.constant(xMat)
b_tensor = tf.constant(yMat)
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        # Gaussian kernel weights centered at point i
        diffMat = xMat[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    weights_tensor = tf.constant(weights)
    # Matrix inverse solution
    wA = tf.matmul(weights_tensor, A_tensor)
    tA_A = tf.matmul(tf.transpose(A_tensor), wA)
    tA_A_inv = tf.matrix_inverse(tA_A)
    wb = tf.matmul(weights_tensor, b_tensor)
    tA_wb = tf.matmul(tf.transpose(A_tensor), wb)
    solution = tf.matmul(tA_A_inv, tA_wb)
    sol_val = sess.run(solution)
    yHat[i] = sol_val[0][0]*xArr[i][1] + sol_val[1][0]
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
When I run them, the results differ. What makes the two results different? Is there something wrong in my scripts? Please help me.
Answer (score: 2)
The problem is in this line:

yHat[i] = sol_val[0][0]*xArr[i][1] + sol_val[1][0]

The solution vector is applied to the wrong entries of xArr[i]: sol_val[0][0] is the coefficient of the constant term xArr[i][0], and sol_val[1][0] is the coefficient of xArr[i][1], but here the intercept is multiplied by x and the slope is added on its own.
If you replace that line with

yHat[i] = sol_val[0][0]*xArr[i][0] + sol_val[1][0]*xArr[i][1]

it works correctly.
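Equivalently (my own addition, not part of the original answer), the same prediction can be written as a dot product between the feature row and the solution vector, which avoids this kind of indexing mistake. A minimal sketch with hypothetical stand-in values for one loop iteration:

import numpy as np

# Hypothetical values standing in for one iteration:
# xArr[i] = [1.0, x_i] and sol_val = [[intercept], [slope]] from sess.run(solution).
x_row = np.array([1.0, 0.37])
sol_val = np.array([[0.05], [0.98]])

# yHat[i] as a dot product: 1.0 * intercept + x_i * slope
y_hat_i = float(np.dot(x_row, sol_val))
print(y_hat_i)

In the loop of the script, this would amount to yHat[i] = float(np.dot(xArr[i], sol_val)).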
The complete working code is as follows:
import numpy as np
from numpy import *
import tensorflow as tf
import matplotlib.pyplot as plt

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
sess = tf.Session()
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
A_tensor = tf.constant(xMat)
b_tensor = tf.constant(yMat)
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        diffMat = xMat[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    weights_tensor = tf.constant(weights)
    # Matrix inverse solution
    wA = tf.matmul(weights_tensor, A_tensor)
    tA_A = tf.matmul(tf.transpose(A_tensor), wA)
    tA_A_inv = tf.matrix_inverse(tA_A)
    wb = tf.matmul(weights_tensor, b_tensor)
    tA_wb = tf.matmul(tf.transpose(A_tensor), wb)
    solution = tf.matmul(tA_A_inv, tA_wb)
    sol_val = sess.run(solution)
    #yHat[i] = sol_val[0][0]*xArr[i][1] + sol_val[1][0]            # original, incorrect line
    yHat[i] = sol_val[0][0]*xArr[i][0] + sol_val[1][0]*xArr[i][1]  # corrected line
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
The resulting plot now shows the red fitted curve over the scatter of the data, matching the NumPy script.
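As a side note (my own addition, not part of the original answer, and assuming TF 1.x as used above): forming tA_A_inv explicitly with tf.matrix_inverse is usually less numerically stable than solving the linear system directly with tf.matrix_solve. A minimal self-contained sketch with hypothetical toy values:

import numpy as np
import tensorflow as tf  # assuming TF 1.x, as in the scripts above

# Toy example: solve (A^T W A) w = A^T W y without forming an explicit inverse.
A = tf.constant(np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]]))  # design matrix [1, x]
W = tf.constant(np.eye(3))                                       # placeholder weights
y = tf.constant(np.array([[0.1], [0.6], [0.9]]))                 # targets as a column

AtWA = tf.matmul(tf.transpose(A), tf.matmul(W, A))
AtWy = tf.matmul(tf.transpose(A), tf.matmul(W, y))
w = tf.matrix_solve(AtWA, AtWy)  # solves the normal equations directly

with tf.Session() as sess:
    print(sess.run(w))  # [[intercept], [slope]]

In the loop above, this would amount to replacing tf.matrix_inverse plus the final tf.matmul with solution = tf.matrix_solve(tA_A, tA_wb).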