I have a pandas DataFrame and I want to be able to predict the values of column A from the values in columns B and C. Here is a toy example:
import pandas as pd
df = pd.DataFrame({"A": [10, 20, 30, 40, 50],
                   "B": [20, 30, 10, 40, 50],
                   "C": [32, 234, 23, 23, 42523]})
Ideally, I would have something like ols(A ~ B + C, data = df), but when I look at the examples from algorithm libraries like scikit-learn, it appears to feed the data to the model with a list of rows instead of columns. This would require me to reformat the data into lists inside lists, which seems to defeat the purpose of using pandas in the first place. What is the most pythonic way to run an OLS regression (or any machine learning algorithm more generally) on data in a pandas DataFrame?
Answer 0 (score: 132)
I think you can almost do exactly what you thought would be ideal, using the statsmodels package, which was one of pandas' optional dependencies before pandas version 0.20.0 (it was used for a few things in pandas.stats).
>>> import pandas as pd
>>> import statsmodels.formula.api as sm
>>> df = pd.DataFrame({"A": [10,20,30,40,50], "B": [20, 30, 10, 40, 50], "C": [32, 234, 23, 23, 42523]})
>>> result = sm.ols(formula="A ~ B + C", data=df).fit()
>>> print(result.params)
Intercept    14.952480
B             0.401182
C             0.000352
dtype: float64
>>> print(result.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      A   R-squared:                       0.579
Model:                            OLS   Adj. R-squared:                  0.158
Method:                 Least Squares   F-statistic:                     1.375
Date:                Thu, 14 Nov 2013   Prob (F-statistic):              0.421
Time:                        20:04:30   Log-Likelihood:                -18.178
No. Observations:                   5   AIC:                             42.36
Df Residuals:                       2   BIC:                             41.19
Df Model:                           2
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept     14.9525     17.764      0.842      0.489       -61.481    91.386
B              0.4012      0.650      0.617      0.600        -2.394     3.197
C              0.0004      0.001      0.650      0.583        -0.002     0.003
==============================================================================
Omnibus:                          nan   Durbin-Watson:                   1.061
Prob(Omnibus):                    nan   Jarque-Bera (JB):                0.498
Skew:                          -0.123   Prob(JB):                        0.780
Kurtosis:                       1.474   Cond. No.                     5.21e+04
==============================================================================
Warnings:
[1] The condition number is large, 5.21e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
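Once fitted, the same result object can also be used for prediction on a DataFrame with matching column names. A minimal sketch continuing the session above (new_data is a made-up frame, just for illustration):

>>> new_data = pd.DataFrame({"B": [25, 35], "C": [40, 50]})
>>> result.predict(new_data)  # predicted values of A for the new rows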
Answer 1 (score: 67)
Note: pandas.stats has been removed with 0.20.0.

It's possible to do this with pandas.stats.ols:
>>> from pandas.stats.api import ols
>>> df = pd.DataFrame({"A": [10,20,30,40,50], "B": [20, 30, 10, 40, 50], "C": [32, 234, 23, 23, 42523]})
>>> res = ols(y=df['A'], x=df[['B','C']])
>>> res
-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <B> + <C> + <intercept>

Number of Observations:         5
Number of Degrees of Freedom:   3

R-squared:         0.5789
Adj R-squared:     0.1577

Rmse:             14.5108

F-stat (2, 2):     1.3746, p-value:     0.4211

Degrees of Freedom: model 2, resid 2

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             B     0.4012     0.6497       0.62     0.5999    -0.8723     1.6746
             C     0.0004     0.0005       0.65     0.5826    -0.0007     0.0014
     intercept    14.9525    17.7643       0.84     0.4886   -19.8655    49.7705
---------------------------------End of Summary---------------------------------
Note that you need to have the statsmodels package installed; it is used internally by the pandas.stats.ols function.
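For anyone on a pandas version where pandas.stats is gone, a rough equivalent of that call written against statsmodels directly might look like the sketch below; note that sm.OLS, unlike pandas.stats.ols, does not add an intercept unless you ask for one:

>>> import statsmodels.api as sm
>>> res = sm.OLS(df['A'], sm.add_constant(df[['B', 'C']])).fit()
>>> res.params  # const, B and C should match intercept, B and C above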
Answer 2 (score: 24)
I don't know if this is new in sklearn or pandas, but I'm able to pass the DataFrame directly to sklearn without converting it to a numpy array or any other data type.
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(df[['B', 'C']], df['A'])
>>> reg.coef_
array([ 4.01182386e-01, 3.51587361e-04])
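The fitted estimator also exposes the intercept, and predict accepts a DataFrame just like fit did. A small sketch, assuming the reg object from above:

>>> reg.intercept_                # ~14.95, matching the statsmodels intercept above
>>> reg.predict(df[['B', 'C']])   # fitted values for the training rows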
Answer 3 (score: 15)
"This would require me to reformat the data into lists inside lists, which seems to defeat the purpose of using pandas in the first place."

No, it doesn't. Just convert to a NumPy array:
>>> import numpy as np
>>> data = np.asarray(df)
This takes constant time because it just creates a view on your data. Then feed it to scikit-learn:
>>> from sklearn.linear_model import LinearRegression
>>> lr = LinearRegression()
>>> X, y = data[:, 1:], data[:, 0]
>>> lr.fit(X, y)
LinearRegression(copy_X=True, fit_intercept=True, normalize=False)
>>> lr.coef_
array([ 4.01182386e-01, 3.51587361e-04])
>>> lr.intercept_
14.952479503953672
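If the positional slicing feels fragile, scikit-learn also accepts labelled pandas objects directly (as answer 2 shows), so the same fit can be written by column name; a sketch under that assumption:

>>> X = df.drop(columns='A')   # every column except the target
>>> y = df['A']
>>> lr.fit(X, y).coef_         # same coefficients as the array-based fit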
Answer 4 (score: 3)
Statsmodels can build an OLS model with column references directly to a pandas DataFrame.

Short and sweet:
model = sm.OLS(df[y], df[x]).fit()
Code details and regression summary:
# imports
import pandas as pd
import statsmodels.api as sm
import numpy as np
# data
np.random.seed(123)
df = pd.DataFrame(np.random.randint(0,100,size=(100, 3)), columns=list('ABC'))
# assign dependent and independent / explanatory variables
variables = list(df.columns)
y = 'A'
x = [var for var in variables if var not in y]
# Ordinary least squares regression
model_Simple = sm.OLS(df[y], df[x]).fit()
# Add a constant term like so:
model = sm.OLS(df[y], sm.add_constant(df[x])).fit()
model.summary()
Output:
                            OLS Regression Results
==============================================================================
Dep. Variable:                      A   R-squared:                       0.019
Model:                            OLS   Adj. R-squared:                 -0.001
Method:                 Least Squares   F-statistic:                    0.9409
Date:                Thu, 14 Feb 2019   Prob (F-statistic):              0.394
Time:                        08:35:04   Log-Likelihood:                -484.49
No. Observations:                 100   AIC:                             975.0
Df Residuals:                      97   BIC:                             982.8
Df Model:                           2
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const         43.4801      8.809      4.936      0.000      25.996      60.964
B              0.1241      0.105      1.188      0.238      -0.083       0.332
C             -0.0752      0.110     -0.681      0.497      -0.294       0.144
==============================================================================
Omnibus:                       50.990   Durbin-Watson:                   2.013
Prob(Omnibus):                  0.000   Jarque-Bera (JB):                6.905
Skew:                           0.032   Prob(JB):                       0.0317
Kurtosis:                       1.714   Cond. No.                         231.
==============================================================================
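One caveat worth spelling out: without the constant term, statsmodels reports an uncentered R-squared (computed around zero rather than around the mean), so model_Simple and model above are not directly comparable. A quick check, assuming both objects from the code above:

# uncentered R-squared (fit without an intercept)
print(model_Simple.rsquared)
# centered R-squared (fit with a constant term)
print(model.rsquared)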
How to get the R-squared, coefficients and p-values directly:
# commands:
model.params
model.pvalues
model.rsquared
# demo:
In[1]:
model.params
Out[1]:
const    43.480106
B         0.124130
C        -0.075156
dtype: float64
In[2]:
model.pvalues
Out[2]:
const    0.000003
B        0.237924
C        0.497400
dtype: float64
In[3]:
model.rsquared
Out[3]:
0.0190
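Standard errors and confidence intervals come out the same way; a short sketch assuming the model object above:

# standard errors of the coefficients, as a Series
model.bse
# 95% confidence intervals, as a two-column DataFrame
model.conf_int()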
Answer 5 (score: 0)
B is not statistically significant; the data does not support drawing inferences from it. C does influence the probability of B.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

df = pd.DataFrame({"A": [10,20,30,40,50], "B": [20, 30, 10, 40, 50], "C": [32, 234, 23, 23, 42523]})
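# Crude outlier treatment: replace above-average values of C
# with the mean of the below-average values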
avg_c=df['C'].mean()
sumC=df['C'].apply(lambda x: x if x<avg_c else 0).sum()
countC=df['C'].apply(lambda x: 1 if x<avg_c else None).count()
avg_c2=sumC/countC
df['C']=df['C'].apply(lambda x: avg_c2 if x >avg_c else x)
print(df)
model_ols = smf.ols("A ~ B+C",data=df).fit()
print(model_ols.summary())
df[['B','C']].plot()
plt.show()
df2=pd.DataFrame()
df2['B']=np.linspace(10,50,10)
df2['C']=30
df3=pd.DataFrame()
df3['B']=np.linspace(10,50,10)
df3['C']=100
predB=model_ols.predict(df2)
predC=model_ols.predict(df3)
plt.plot(df2['B'],predB,label='predict B C=30')
plt.plot(df3['B'],predC,label='predict B C=100')
plt.legend()
plt.show()
print("A change in the probability of C affects the probability of B")
intercept=model_ols.params.loc['Intercept']
B_slope=model_ols.params.loc['B']
C_slope=model_ols.params.loc['C']
#Intercept 11.874252
#B 0.760859
#C -0.060257
print("Intercept {}\n B slope{}\n C slope{}\n".format(intercept,B_slope,C_slope))
#lower_conf,upper_conf=np.exp(model_ols.conf_int())
#print(lower_conf,upper_conf)
#print((1-(lower_conf/upper_conf))*100)
model_cov=model_ols.cov_params()
std_errorB = np.sqrt(model_cov.loc['B', 'B'])
std_errorC = np.sqrt(model_cov.loc['C', 'C'])
print('Standard Error: ', round(std_errorB, 4), round(std_errorC, 4))
#check for statistically significant
print("B z value {} C z value {}".format((B_slope/std_errorB),(C_slope/std_errorC)))
print("B feature is more statistically significant than C")
Output:
A change in the probability of C affects the probability of B
Intercept 11.874251554067563
B slope0.7608594144571961
C slope-0.060256845997223814
Standard Error: 0.4519 0.0793
B z value 1.683510336937001 C z value -0.7601036314930376
B feature is more statistically significant than C
z>2 is statistically significant
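For the |z| > 2 rule of thumb, the implied two-sided p-values can be computed under a normal approximation; a minimal sketch using scipy (the z values are taken from the output above):

from scipy.stats import norm

z_B, z_C = 1.6835, -0.7601           # z values from the output above
p_B = 2 * (1 - norm.cdf(abs(z_B)))   # two-sided p-value, ~0.09
p_C = 2 * (1 - norm.cdf(abs(z_C)))   # ~0.45
print(p_B, p_C)                      # neither clears the usual 0.05 cutoff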