Simple logistic regression

Date: 2017-10-29 10:54:12

Tags: python machine-learning linear-regression

I am trying to solve a simple two-dimensional classification problem with logistic regression. I have a feature matrix X_expanded of shape [826, 6]. To classify an object, we compute the probability that it belongs to class '1'. To predict this probability, we take the output of a linear model and pass it through the logistic (sigmoid) function. Here is the function that computes the probabilities.

def probability(X, w):
    """
    Given input features and weights
    return predicted probabilities of y==1 given x, P(y=1|x), see description above

    Don't forget to use expand(X) function (where necessary) in this and subsequent functions.

    :param X: feature matrix X of shape [n_samples,6] (expanded)
    :param w: weight vector w of shape [6] for each of the expanded features
    :returns: an array of predicted probabilities in [0,1] interval.
    """

    # TODO:<your code here>
    X = X_expanded
    m = X.shape[0]
    w = np.zeros((m,1))
    Z = np.dot(w.T,X)
    P = 1./(1+np.exp(-Z))

    return P

Running a simple test:

dummy_weights = np.linspace(-1, 1, 6)
ans_part1 = probability(X_expanded[:1, :], dummy_weights)[0]

But it always returns array([ 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]).

Any suggestions?

1 Answer:

Answer 0: (score: 1)

Since you have initialized the weights to zero, Z = np.dot(w.T,X) is 0, and the sigmoid function always returns 1 / (1 + exp(0)) = 0.5. You need to initialize the weights randomly. It can be done as follows:

dummy_weights = np.random.rand(m, 1)
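
Note also that, as written, the probability function in the question overwrites both of its arguments: it rebinds X to the global X_expanded and resets w to zeros, so even the nonzero dummy_weights passed into the test have no effect. Below is a minimal sketch (assuming X is already expanded to shape [n_samples, 6] and w has shape [6], as described in the question) of a version that uses the arguments it receives:

import numpy as np

def probability(X, w):
    # Sketch only: P(y=1|x) under a logistic model.
    # Assumes X is already expanded to shape [n_samples, 6]
    # and w is a weight vector of shape [6].
    Z = np.dot(X, w)                  # linear scores, shape [n_samples]
    return 1.0 / (1.0 + np.exp(-Z))   # element-wise sigmoid, values in (0, 1)

# Usage with the question's test input:
# dummy_weights = np.linspace(-1, 1, 6)
# probability(X_expanded[:1, :], dummy_weights)

With weights that are not all zero, Z is generally nonzero, so the predicted probabilities are no longer stuck at 0.5.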