Scikit logistic regression summary output?

Posted: 2016-05-21 05:09:42

Tags: python scikit-learn statsmodels

Is there a way to get a nice summary output for a scikit-learn logistic regression model, similar to the one in statsmodels, with all the p-values, standard errors, etc. in one table?

1 answer:

Answer 0: (score: 1)

As you and others have pointed out, this is a limitation of scikit-learn. Before discussing a scikit-based approach to your question below, the "best" option is to use statsmodels, as follows:

import statsmodels.api as sm 
smlog = sm.Logit(y,sm.add_constant(X)).fit()
smlog.summary()

Here X represents your input feature/predictor matrix and y the outcome variable. Statsmodels works well provided X has no highly correlated features, no low-variance features, no features that produce "perfect/quasi-perfect separation", and any categorical features are reduced to "n-1" levels, i.e. dummy coded (rather than "n" levels, i.e. one-hot encoded, as described here: dummy variable trap).
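For the "n-1" dummy coding mentioned above, here is a minimal sketch using pandas (the `color` column and its levels are made up purely for illustration):

```python
import pandas as pd

# Hypothetical categorical feature with 3 levels: "a", "b", "c"
df = pd.DataFrame({"color": ["a", "b", "c", "a"]})

# drop_first=True yields n-1 dummy columns (the reference level "a" is
# dropped), avoiding the dummy variable trap described above
dummies = pd.get_dummies(df, drop_first=True)
print(list(dummies.columns))  # two columns instead of three
```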

However, if the above is not feasible/practical, the code below implements a scikit-based approach that gives roughly equivalent results, in terms of feature coefficients/odds along with their standard errors and 95% CI estimates. In essence, it collects these results from distinct scikit logistic regression models trained on distinct test-train splits of your data. Again, make sure categorical features are dummy coded to n-1 levels (otherwise your scikit coefficients will be incorrect for categorical features).

#Instantiate logistic regression model with regularization turned OFF
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

log_nr = LogisticRegression(fit_intercept=True, penalty="none") #use penalty=None on scikit-learn >= 1.2

##Generate 5 distinct random numbers - as random seeds for 5 test-train splits
import random
randomlist = random.sample(range(1, 10000), 5)

##Create features column 
coeff_table = pd.DataFrame(X.columns, columns=["features"])

##Assemble coefficients over logistic regression models on 5 random data splits
#iterate over random states while keeping track of `i`
from sklearn.model_selection import train_test_split
for i, state in enumerate(randomlist):
    train_x, test_x, train_y, test_y = train_test_split(X, y, stratify=y,
        test_size=0.3, random_state=state) #5 test-train splits
    log_nr.fit(train_x, train_y) #fit logistic model
    coeff_table[f"coefficients_{i+1}"] = np.transpose(log_nr.coef_)

##Calculate mean and std error for model coefficients (from the 5 models above)
#restrict to the 5 coefficient columns so the string "features" column is excluded
coeff_table["mean_coeff"] = coeff_table.iloc[:, 1:6].mean(axis=1)
coeff_table["se_coeff"] = coeff_table.iloc[:, 1:6].sem(axis=1)

#Calculate 95% CI intervals for feature coefficients
coeff_table["95ci_se_coeff"] = 1.96 * coeff_table["se_coeff"]
coeff_table["coeff_95ci_LL"] = coeff_table["mean_coeff"] - coeff_table["95ci_se_coeff"]
coeff_table["coeff_95ci_UL"] = coeff_table["mean_coeff"] + coeff_table["95ci_se_coeff"]

Finally, (optionally) convert the coefficients to odds by exponentiating them as below. Odds ratios are my favorite output from logistic regression, and the code below appends them to your dataframe.

#Calculate odds ratios and 95% CI (LL = lower limit, UL = upper limit) intervals for each feature
coeff_table["odds_mean"] = np.exp(coeff_table["mean_coeff"])
coeff_table["95ci_odds_LL"] = np.exp(coeff_table["coeff_95ci_LL"])
coeff_table["95ci_odds_UL"] = np.exp(coeff_table["coeff_95ci_UL"])

This answer builds on a somewhat related reply by @pciunkiewicz: Collate model coefficients across multiple test-train splits from sklearn
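Putting the steps above together, here is a self-contained sketch on synthetic data. The `make_classification` dataset and the `feat_*` column names are illustrative stand-ins, not from the original question; a large `C` is used to approximate turning regularization off so the snippet runs across scikit-learn versions (on scikit-learn >= 1.2 you can pass `penalty=None` instead).

```python
import random

import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic binary-classification data standing in for X, y
Xa, ya = make_classification(n_samples=500, n_features=4, n_informative=3,
                             n_redundant=0, random_state=0)
X = pd.DataFrame(Xa, columns=[f"feat_{j}" for j in range(4)])
y = pd.Series(ya)

# Very large C ~= no regularization; works on old and new scikit-learn
log_nr = LogisticRegression(fit_intercept=True, C=1e12, max_iter=1000)

# Fit on 5 random test-train splits and collect the coefficients
coeff_table = pd.DataFrame(X.columns, columns=["features"])
for i, state in enumerate(random.sample(range(1, 10000), 5)):
    train_x, test_x, train_y, test_y = train_test_split(
        X, y, stratify=y, test_size=0.3, random_state=state)
    log_nr.fit(train_x, train_y)
    coeff_table[f"coefficients_{i+1}"] = log_nr.coef_.ravel()

# Mean, standard error and odds ratios across the 5 fits
coeff_table["mean_coeff"] = coeff_table.iloc[:, 1:6].mean(axis=1)
coeff_table["se_coeff"] = coeff_table.iloc[:, 1:6].sem(axis=1)
coeff_table["odds_mean"] = np.exp(coeff_table["mean_coeff"])
print(coeff_table[["features", "mean_coeff", "se_coeff", "odds_mean"]])
```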