Logit regression in StatsModels (Python)

Asked: 2015-12-11 18:56:21

Tags: python logistic-regression statsmodels predict

I am trying to run a logistic regression with a single independent variable, fit the model to the data, and then return the predicted probability for an arbitrary sample input.

In [153]: df[['Diff1', 'Win']]
Out[153]: 
   Diff1  Win
0    100    1
1    110    1
2     20    0  
3     80    1
4    200    1
5     25    0

In [154]: logit = sm.Logit(df['Win'], df['Diff1'])

In [155]: result=logit.fit()
Optimization terminated successfully.
         Current function value: 0.451400
         Iterations 6

                            Logit Regression Results                           
==============================================================================
Dep. Variable:                    Win   No. Observations:                    8
Model:                          Logit   Df Residuals:                        7
Method:                           MLE   Df Model:                            0
Date:                Fri, 11 Dec 2015   Pseudo R-squ.:                  0.3177
Time:                        13:49:07   Log-Likelihood:                -3.6112
converged:                       True   LL-Null:                       -5.2925
                                        LLR p-value:                       nan
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Diff1          0.0207      0.014      1.435      0.151        -0.008     0.049
==============================================================================

In [158]: result.predict(0)
Out[158]: array([ 0.5])

I am clearly using the predict function incorrectly, because an input of 0 should not yield 0.5 in this case; that is the result a logistic model that has not been fitted to the data would give.
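
For context, a minimal sketch of the arithmetic involved, assuming the model is specified as above (Diff1 as the only exog column, with no intercept term): the predicted probability at Diff1 = x is just the logistic function of coef * x, with coef taken from the summary table.

import numpy as np

# Manual check of what the no-intercept specification implies: the fitted
# probability at Diff1 = x is 1 / (1 + exp(-coef * x)), with coef taken from
# the summary table above.
coef = 0.0207
for x in (0, 100):
    prob = 1.0 / (1.0 + np.exp(-coef * x))
    print(x, prob)  # at x = 0 this is exactly 0.5, matching result.predict(0)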

I would just use simple OLS regression, but I want my model to be bounded to (0, 1).
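
For what it is worth, a minimal sketch of the usual statsmodels pattern for estimating an intercept as well as a slope (so the probability at Diff1 = 0 is fitted rather than forced to 0.5) is shown below; it reuses the df from above, and whether the optimizer converges on such a small sample is a separate question.

import statsmodels.api as sm

# Prepend an intercept column ('const') to the single regressor, so the model
# becomes logit(p) = b0 + b1 * Diff1 rather than b1 * Diff1 alone.
X = sm.add_constant(df['Diff1'])
logit = sm.Logit(df['Win'], X)
result = logit.fit()

# To predict at a new value of Diff1, the exog row must include the constant
# column in the same order as X, e.g. Diff1 = 0:
print(result.predict([1.0, 0.0]))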

0 Answers:

No answers yet.