Logistic Regression Implementation in Python

Date: 2021-02-04 18:01:46

Tags: python numpy machine-learning

I tried to implement logistic regression in Python using only numpy, but the results are not satisfactory. The predictions seem wrong and the loss is not improving, so there is probably something wrong with the code. Does anyone know what would fix it? Thanks a lot!

Here is the algorithm:

import numpy as np


# training data and labels
X = np.concatenate((np.random.normal(0.25, 0.1, 50), np.random.normal(0.75, 0.1, 50)), axis=None)
Y = np.concatenate((np.zeros((50,), dtype=np.int32), np.ones((50,), dtype=np.int32)), axis=None)

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

# forward pass
def forward_pass(w, x):
    return logistic_sigmoid(w * x)

# gradient computation
def backward_pass(x, y, y_real):
    return np.sum((y - y_real) * x)

# computing loss
def loss(y, y_real):
    return -np.sum(y_real * np.log(y) + (1 - y_real) * np.log(1 - y))

# training
def train():
    w = 0.0
    learning_rate = 0.01
    i = 200
    test_number = 0.3

    for epoch in range(i):
        y = forward_pass(w, X)
        gradient = backward_pass(X, y, Y)
        w = w - learning_rate * gradient

        print(f'epoch {epoch + 1}, x = {test_number}, y = {forward_pass(w, test_number):.3f}, loss = {loss(y, Y):.3f}')


train()

1 Answer:

Answer 0 (score: 0)

At first glance, you are missing the intercept term (usually called b_0, or the bias) and its gradient update. Also, in backward_pass and in the loss computation you are not dividing by the number of data samples.
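A minimal sketch of what that correction could look like, reusing your synthetic data setup (the function and variable names here are my own, not from your post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same synthetic data as in the question: two 1-D Gaussian clusters
X = np.concatenate((rng.normal(0.25, 0.1, 50), rng.normal(0.75, 0.1, 50)))
Y = np.concatenate((np.zeros(50), np.ones(50)))

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def train(X, Y, lr=0.5, epochs=2000):
    # b is the intercept/bias term that was missing in the original code
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        y = sigmoid(w * X + b)
        # mean gradients: divide by the number of samples
        grad_w = np.sum((y - Y) * X) / n
        grad_b = np.sum(y - Y) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train(X, Y)
# With the bias, the decision boundary sigmoid(w*x + b) = 0.5
# can sit between the clusters, at x = -b / w
print(-b / w)
```

Without the bias, sigmoid(w * x) always crosses 0.5 at x = 0, so a model with only w can never place the decision boundary near 0.5 where it belongs for this data, no matter how long you train.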

You can see two examples of how to implement it from scratch here:

1: Example based on Andrew Ng's explanations in the Machine Learning course on Coursera

2: Implementation by Jason Brownlee from the Machine Learning Mastery website