Weights are not being updated while training logistic regression on the wine dataset

Asked: 2018-09-10 13:51:29

Tags: python-3.x machine-learning scikit-learn neural-network

I have used the wine dataset as shown below. The model is trained with the following Python code, but the weights are not getting updated, and I cannot figure out where the problem is.

Logistic Regression algorithm

The wine dataset is loaded and the model is trained as follows:

from sklearn import datasets
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

dataset = datasets.load_wine()
x = dataset.data
y = dataset.target
y = y.reshape(178,1)

x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.15,shuffle=True)
print(x_train.shape)
class log_reg():
    def __init__(self):
        pass
    def sigmoid(self,x):
        return 1 / (1 + np.exp(-x))
    def train(self,x,y,w1,w2,alpha,iterations):
        cost_history = [0] * iterations
        # one-hot encode the integer class labels into an (n_samples, 3) matrix
        Y_train = np.zeros([y.shape[0],3])
        for i in range(Y_train.shape[0]):
            for j in range(Y_train.shape[1]):
                if(y[i] == j):
                    Y_train[i,j] = 1
        for iteration in range(iterations):
            # forward pass through the two sigmoid layers
            z1 = x.dot(w1)
            a1 = self.sigmoid(z1)
            z2 = a1.dot(w2)
            a2 = self.sigmoid(z2)
            # softmax over the second layer's activations
            sig_sum = np.sum(np.exp(a2),axis=1)
            sig_sum = sig_sum.reshape(a2.shape[0],1)
            op = np.exp(a2) / sig_sum
            loss = (Y_train * np.log(op))
            # backpropagate the error and compute the weight gradients
            dl = (op-Y_train)
            dz1 = ((dl*(self.sigmoid(z2))*(1-self.sigmoid(z2))).dot(w2.T))*(self.sigmoid(z1))*(1-self.sigmoid(z1))
            dz2 = (dl * (self.sigmoid(z2))*(1-self.sigmoid(z2)))
            dw1 = x.T.dot(dz1)
            dw2 = a1.T.dot(dz2)
            # in-place weight update (modifies the arrays that were passed in)
            w1 += alpha * dw1
            w2 += alpha * dw2
            cost_history[iteration] = (np.sum(loss)/len(loss))
        return w1,w2,cost_history
    def predict(self,x,y,w1,w2):
        z1 = x.dot(w1)
        a1 = self.sigmoid(z1)
        z2 = a1.dot(w2)
        a2 = self.sigmoid(z2)
        sig_sum = np.sum(np.exp(a2),axis=1)
        sig_sum = sig_sum.reshape(a2.shape[0],1)
        op = np.exp(a2) / sig_sum
        y_preds = np.argmax(op,axis=1)
        acc = self.accuracy(y_preds,y)
        return y_preds,acc
    def accuracy(self,y_preds,y):
        y_preds = y_preds.reshape(len(y_preds),1)
        correct = (y_preds == y)
        accuracy = (np.sum(correct) / len(y)) * 100
        return (accuracy)

if __name__ == "__main__":
    network = log_reg()
    w1 = np.random.randn(14,4) * 0.01   # 13 wine features + 1 bias column
    w2 = np.random.randn(4,3) * 0.01
    # append a constant column of ones to serve as the bias input
    X_train = np.ones([x_train.shape[0],x_train.shape[1]+1])
    X_train[:,:-1] = x_train
    X_test = np.ones([x_test.shape[0],x_test.shape[1]+1])
    X_test[:,:-1] = x_test
    new_w1,new_w2,cost = network.train(X_train,y_train,w1,w2,0.0045,10000)
    y_preds,accuracy = network.predict(X_test,y_test,new_w1,new_w2)
    print(y_preds,accuracy)

I have used the following parameters: x -- training set, y -- target (output), w1 -- weights for the first layer, w2 -- weights for the second layer.

I am trying to train on the wine dataset, but the weights are not getting updated and I do not know where the problem is. Any help would be appreciated.

1 Answer:

Answer 0 (score: 0):

Your weights are being updated, but I think you cannot see them changing because you only print them after training has finished. Python passes NumPy arrays by reference, so when you pass in w1, the in-place updates change its value as well, and new_w1 and w1 end up being the same array. For example:

import numpy as np

x = np.array([1,2,3,4])

def change(x):
    x += 3      # in-place update: the caller's array is modified too
    return x

print(x)
change(x)
print(x)

The output of this is:

[1 2 3 4]
[4 5 6 7]
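
If you want w1 and new_w1 to remain separate objects so the update is visible from the outside, one option (my own sketch, not part of the original answer) is to replace the in-place += update in train() with an out-of-place assignment such as w1 = w1 + alpha * dw1. The difference between the two styles, shown with made-up arrays:

import numpy as np

w = np.array([1.0, 2.0, 3.0])
dw = np.array([0.1, 0.1, 0.1])

def update_in_place(w, dw, alpha=0.1):
    w += alpha * dw          # mutates the caller's array
    return w

def update_out_of_place(w, dw, alpha=0.1):
    w = w + alpha * dw       # builds a new array; the caller's stays untouched
    return w

a = w.copy()
print(update_in_place(a, dw) is a)        # True: same object, and a has changed

b = w.copy()
print(update_out_of_place(b, dw) is b)    # False: b keeps its original values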

I would also suggest adding a bias term and fixing your accuracy function, because I got an accuracy of 1000 with it.
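
An accuracy above 100% is typically a sign that the == comparison broadcast two differently shaped arrays, for example an (n,) prediction vector against an (n, 1) target column, producing an (n, n) matrix whose sum can far exceed n. A minimal, shape-safe drop-in for log_reg.accuracy (my own suggestion, assuming integer class labels) could look like this:

    def accuracy(self,y_preds,y):
        # flatten both arrays so (n,) vs (n,1) shapes cannot broadcast
        # into an (n, n) comparison matrix
        y_preds = np.asarray(y_preds).ravel()
        y = np.asarray(y).ravel()
        return np.mean(y_preds == y) * 100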

When I ran your code, the values of w1 and w2 did change (my output is shown as a screenshot in the original answer). The only thing I modified was the main block, using the original dataset (x_train without the extra bias column); try the same and let me know whether the weights still do not update:

if __name__ == "__main__":
    network = log_reg()
    # 13 raw wine features this time, since no bias column is appended
    w1 = np.random.randn(13,4) * 0.01
    w2 = np.random.randn(4,3) * 0.01
    print(w1)
    print(" ")
    print(w2)
    print(" ")
    new_w1,new_w2,cost = network.train(x_train,y_train,w1,w2,0.0045,10000)
    print(w1)
    print(" ")
    print(w2)
    print(" ")
    y_preds,accuracy = network.predict(x_test,y_test,new_w1,new_w2)
    print(y_preds,accuracy)
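
To confirm numerically that training actually moves the weights, rather than eyeballing the printed matrices, you could also snapshot them before the call and compare afterwards. A brief sketch reusing the names from the block above:

    w1_before, w2_before = w1.copy(), w2.copy()   # snapshot the initial weights
    new_w1,new_w2,cost = network.train(x_train,y_train,w1,w2,0.0045,10000)
    print("w1 changed:", not np.allclose(w1_before, new_w1))
    print("w2 changed:", not np.allclose(w2_before, new_w2))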