Why isn't my logistic regression implementation producing the correct contour plot of the coefficients?

Asked: 2016-06-11 19:30:44

Tags: python numpy machine-learning logistic-regression contour

I implemented logistic regression and used it on a data set. (This is the week 3 exercise of Coursera's ML course, which normally uses MATLAB and Octave, done here in Python, so this isn't cheating.)

I started from the implementation in sklearn to classify the data set used in week 3 of the course (http://pastie.org/10872959). Below is a small, reproducible example for anyone to try out what I used (it depends only on numpy and sklearn):

It takes the data set, splits it into a feature matrix and an output matrix, and then constructs 27 features from the original 2 (i.e., from x1 and x2 it builds x1, x2, x1^2, x1*x2, x2^2, ..., up through x2^6). I then use logistic regression from sklearn, but this does not give the desired contour plot (see below).

from sklearn.linear_model import LogisticRegression as expit  # note: 'expit' here is just an alias for LogisticRegression
import numpy as np

def thetaFunc(y, theta, x):
    # evaluate theta^T * features for scalar inputs x and y, using the
    # same term ordering as constructVariations below
    deg = 6

    spot = 0
    sum = 0
    for i in range(1, deg + 1):
        for j in range(i + 1):
            sum += theta[spot] * x**(i - j) * y**(j)
            spot += 1
    return sum


def constructVariations(X, deg):
    # build every monomial x1^(i-j) * x2^j for 1 <= i <= deg, 0 <= j <= i;
    # for deg = 6 that is 27 features (the constant term is added separately)
    features = np.zeros((len(X), 27))
    spot = 0
    spot = 0

    for i in range(1, deg + 1):
        for j in range(i + 1):

            features[:, spot] = X[:,0]**(i - j) * X[:,1]**(j)
            spot += 1

    return features

if __name__ == '__main__':
    data = np.loadtxt("ex2points.txt", delimiter = ",")
    X,Y = np.split(data, [len(data[0,:]) - 1], 1)
    X = constructVariations(X, 6)

    oneArray = np.ones((len(X),1))
    X = np.hstack((oneArray, X))
    trial = expit(solver = 'sag')
    trial = trial.fit(X = X,y = np.ravel(Y))
    print(trial.coef_)

    # everything below has been edited in

    from matplotlib import pyplot as plt

    # theta here was previously dumped from trial.coef_ (the pastie above)
    txt = open("RegLogTheta", "r").read()
    txt = txt.split()
    theta = np.array(txt, float)

    x = np.linspace(-1, 1.5, 100)
    y = np.linspace(-1,1.5,100)
    z = np.empty((100,100))


    # evaluate theta^T * features at every grid point; its zero contour
    # is (intended to be) the decision boundary
    xx,yy = np.meshgrid(x,y)
    for i in range(len(x)):
         for j in range(len(y)):
             z[i][j] = thetaFunc(yy[i][j], theta, xx[i][j])

    plt.contour(xx,yy,z, levels = [0])
    plt.show()

Here are the coefficients it produced for the general feature terms: http://pastie.org/10872957 (i.e., the fitted coefficients)

and the contour they produce:

[image: the contour plot produced by the code above]

One potential source of error is my misreading of the 7 x 4 coefficient matrix stored in trial.coef_. I believed those 28 values to be the coefficients of the 28 "variables". Above, I mapped the coefficients to the variations column by column: by that I mean [:][0] maps to the first 7 variations, [:][1] to the next 7, and so on, where my function constructVariations explains how the variations are systematically created. Now, the API maintains that an array of shape (n_classes, n_features) is stored in trial.coef_, so should I infer that fit has split the data into four classes? Or am I mishandling this in some other way?
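A quick way to inspect what fit actually produced (a minimal sketch, assuming the trial model from the script above):

# sketch: query the fitted model directly (assumes `trial` from above)
print(trial.coef_.shape)  # the real shape of the coefficient array
print(trial.classes_)     # the class labels fit actually found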

Update

My interpretation (and/or use) of the weights must be wrong.

Rather than relying on the predict built into sklearn, I tried to compute myself the values of x and y that set the following to 1/2:

h_theta(x, y) = 1 / (1 + exp(-theta^T * features(x, y))) = 1/2

which holds exactly when

theta^T * features(x, y) = sum over i = 1..6, j = 0..i of theta_k * x^(i-j) * y^j = 0

The values of theta are those found by printing trial.coef_, and x and y are scalars. Those x, y pairs are then plotted to give the contour.

The code I used (but did not originally include) attempts to do this. What is wrong with the math behind it?

1 Answer:

Answer 0 (score: 4)

  

> One potential source of error is my misreading of the 7 x 4 coefficient matrix stored in trial.coef_

This matrix is not 7 x 4, it is 1 x 28 (check print(trial.coef_.shape)): one coefficient for each of the 28 features (27 returned by constructVariations, plus the column of ones you added by hand).
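To see which monomial each of those coefficients multiplies, one can replay the loop order of constructVariations (a minimal sketch; theta[0] belongs to the manually added column of ones):

# sketch: map each coefficient index to its monomial x1^(i-j) * x2^j,
# following the exact loop order of constructVariations
spot = 1  # index 0 is the column of ones
for i in range(1, 7):
    for j in range(i + 1):
        print("coef_[0][%d] multiplies x1^%d * x2^%d" % (spot, i - j, j))
        spot += 1  # ends at 28, i.e. 27 polynomial terms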

  

> So should I infer that fit has split the data into four classes?

No, you misread the array: it has a single row (for binary classification there is no point in having two).
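That single row, together with intercept_, defines one linear score, and predict simply thresholds it at zero. A quick consistency check (a minimal sketch, assuming trial and the 28-column X from the script above):

# sketch: for binary logistic regression the decision function is
# X . coef_[0] + intercept_[0], and predict() thresholds it at 0
scores = X.dot(trial.coef_[0]) + trial.intercept_[0]
print(np.allclose(scores, trial.decision_function(X)))           # True
print(np.array_equal(trial.predict(X),
                     trial.classes_[(scores > 0).astype(int)]))  # True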

  

> Or am I mishandling this in some other way?

The code is fine; the interpretation is not. In particular, here is the actual decision boundary of your model, drawn by calling "predict" and plotting its contour:

from sklearn.linear_model import LogisticRegression as expit
import numpy as np

def constructVariations(X, deg):

    features = np.zeros((len(X), 27)) 
    spot = 0

    for i in range(1, deg + 1):
        for j in range(i + 1):

            features[:, spot] = X[:,0]**(i - j) * X[:,1]**(j)
            spot += 1

    return features

if __name__ == '__main__':
    data = np.loadtxt("ex2points.txt", delimiter = ",")
    X,Y = np.split(data, [len(data[0,:]) - 1], 1)
    rawX = np.copy(X)    
    X = constructVariations(X, 6)

    oneArray = np.ones((len(X),1))
    X = np.hstack((oneArray, X))
    trial = expit(solver = 'sag')
    trial = trial.fit(X = X,y = np.ravel(Y))
    print(trial.coef_)

    from matplotlib import pyplot as plt

    # evaluate the trained model over a dense grid: the color change in
    # the filled contour marks the learned decision boundary
    h = 0.01
    x_min, x_max = rawX[:, 0].min() - 1, rawX[:, 0].max() + 1
    y_min, y_max = rawX[:, 1].min() - 1, rawX[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # grid points need the same 28-column representation as the training
    # data: 27 polynomial features plus the manually added column of ones
    data = constructVariations(np.c_[xx.ravel(), yy.ravel()], 6)
    oneArray = np.ones((len(data),1))
    data = np.hstack((oneArray, data))
    Z = trial.predict(data)
    Z = Z.reshape(xx.shape)

    plt.figure()
    plt.scatter(rawX[:, 0], rawX[:, 1], c=np.ravel(Y), linewidth=0, s=50)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
    plt.show()

Update

In the code you provided, you forgot (in the visualization) that you added a column of "1"s to your data representation, so your thetas are one "off": theta[0] is the bias, theta[1] relates to your 0th variable, and so on.

def thetaFunc(y, theta, x):

    deg = 6

    # theta[0] multiplies the manually added column of ones (the bias),
    # so seed the sum with it and start the polynomial terms at theta[1]
    spot = 0
    sum = theta[spot]

    spot += 1
    for i in range(1, deg + 1):
        for j in range(i + 1):
            sum += theta[spot] * x**(i - j) * y**(j)
            spot += 1
    return sum

You also forgot about the intercept term of LogisticRegression itself, thus:

xx,yy = np.meshgrid(x,y)
for i in range(len(x)):
     for j in range(len(y)):
         z[i][j] = thetaFunc(yy[i][j], theta, xx[i][j])
# the model's full decision function is thetaFunc(...) + intercept_,
# so add the intercept before contouring at level 0
z += trial.intercept_

[image: the decision-boundary contour generated with your fixed code, shown in full below]

import numpy as np
from sklearn.linear_model import LogisticRegression as expit

def thetaFunc(y, theta, x):

    deg = 6

    spot = 0
    sum = theta[spot]

    spot += 1
    for i in range(1, deg + 1):
        for j in range(i + 1):
            sum += theta[spot] * x**(i - j) * y**(j)
            spot += 1
    return np.exp(-sum)  # exp(-theta^T x); see the intercept handling below


def constructVariations(X, deg):

    features = np.zeros((len(X), 27)) 
    spot = 0

    for i in range(1, deg + 1):
        for j in range(i + 1):

            features[:, spot] = X[:,0]**(i - j) * X[:,1]**(j)
            spot += 1

    return features

if __name__ == '__main__':
    data = np.loadtxt("ex2points.txt", delimiter = ",")
    X,Y = np.split(data, [len(data[0,:]) - 1], 1)

    rawX = np.copy(X)   # keep the raw x1, x2 columns for the scatter plot
    X = constructVariations(X, 6)

    oneArray = np.ones((len(X),1))
    X = np.hstack((oneArray, X))
    trial = expit(solver = 'sag')
    trial = trial.fit(X = X,y = np.ravel(Y))

    from matplotlib import pyplot as plt

    theta = trial.coef_.ravel()

    x = np.linspace(-1, 1.5, 100)
    y = np.linspace(-1,1.5,100)
    z = np.empty((100,100))


    xx,yy = np.meshgrid(x,y)
    for i in range(len(x)):
         for j in range(len(y)):
             z[i][j] = thetaFunc(yy[i][j], theta, xx[i][j])
    # boundary: theta^T x + b = 0, i.e. exp(-theta^T x) = exp(b); for a
    # small intercept b, exp(b) ≈ 1 + b, so comparing
    # exp(-theta^T x) - b against 1 approximates the same boundary
    z -= trial.intercept_

    plt.contour(xx,yy,z > 1,cmap=plt.cm.Paired, alpha=0.8)
    plt.scatter(rawX[:, 0], rawX[:, 1], c=np.ravel(Y), linewidth=0, s=50)
    plt.show()
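For reference, the exp(b) ≈ 1 + b approximation can be avoided entirely by contouring the model's own decision function at level 0 (a minimal sketch, reusing the variables from the script above):

grid = constructVariations(np.c_[xx.ravel(), yy.ravel()], 6)
grid = np.hstack((np.ones((len(grid), 1)), grid))
# decision_function returns coef . x + intercept; its zero level set is
# the exact learned boundary
zz = trial.decision_function(grid).reshape(xx.shape)
plt.contour(xx, yy, zz, levels=[0])
plt.show()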