Scikit-learn linear model: coef_ returns very high values for the features

Asked: 2015-12-07 07:36:32

Tags: python machine-learning scikit-learn

Problem statement: predict the weight of a delivery parcel based on the items a customer has ordered (e.g. boots, sneakers, etc.).

The dataframe I have consists of historical data, where the product_item_categories (e.g. boots, sneakers, etc.) make up the features and the weight is my 'y' variable to predict. Each row of the dataframe contains the number of each product_item_category the customer ordered.

Example: a customer orders 1 pair of boots and 1 pair of sneakers. The row looks like:

x1  x2  x3  x4  x5  x6  x7  x8  x9  x10 x11 x12 x13 x14 x15 x16 x17 x18 x19 x20 x21 x22 x23 x24 x25 x26 x27 x28 x29 x30 x31 x32 x33 x34 x35 x36 x37 x38 x39 x40 x41 x42 x43 x44 x45 x46 x47 y
1   0   0   0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   2   2.94

One of the features is items_total, here x47 (the total number of items the customer ordered).
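For concreteness, here is a minimal pandas sketch that builds the example row above (the column labels x1..x47 follow the question's layout; everything else is just illustration):

    import pandas as pd

    columns = [f"x{i}" for i in range(1, 48)] + ["y"]
    row = [0] * 48
    row[0] = 1      # x1: one pair of boots
    row[12] = 1     # x13: one pair of sneakers
    row[46] = 2     # x47: items_total = sum of the category counts
    row[47] = 2.94  # y: the parcel weight to predict

    df = pd.DataFrame([row], columns=columns)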

I created a linear model with

    from sklearn import linear_model

    regr_model = linear_model.LinearRegression()

After splitting the dataframe into a training set and a test set, I ran the model with

    regr_model.fit(x_train, y_train)
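For reference, here is a minimal runnable sketch of that workflow. The real historical data isn't shown in the question, so it generates a synthetic stand-in (the data-generating process, split ratio, and seeds are all illustrative assumptions):

    import numpy as np
    import pandas as pd
    from sklearn import linear_model
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the historical data: order counts for 46
    # product categories, plus items_total (x47) as their sum.
    counts = rng.integers(0, 3, size=(200, 46))
    items_total = counts.sum(axis=1, keepdims=True)
    per_category_weight = rng.uniform(0.1, 2.5, size=46)
    y = counts @ per_category_weight + rng.normal(0, 0.1, size=200)

    X = pd.DataFrame(np.hstack([counts, items_total]),
                     columns=[f"x{i}" for i in range(1, 48)])

    x_train, x_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    regr_model = linear_model.LinearRegression()
    regr_model.fit(x_train, y_train)

    # Inspect the coefficients, intercept, and R^2 score on the test set
    for name, coef in zip(X.columns, regr_model.coef_):
        print(name, coef)
    print("intercept:", regr_model.intercept_)
    print("score:", regr_model.score(x_test, y_test))

Because x47 is exactly the sum of x1..x46, the design matrix is rank-deficient and the coefficient vector is not uniquely determined; depending on the solver and the floating-point conditioning, this can surface as the enormous, mutually cancelling values shown below.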

When I look at the coefficients, I get the following output (formatted to be more readable):

1   feature x1  6494532107.689080 (this is the items_total feature)
2   feature x2  (-6494532105.548431)
3   feature x3  (-6494532105.956598)
4   feature x4  (-6494532105.987348)
5   feature x5  (-6494532106.081478)
6   feature x6  (-6494532106.139558)
7   feature x7  (-6494532106.163167)
8   feature x8  (-6494532106.326231)
9   feature x9  (-6494532106.360985)
10  feature x10 (-6494532106.507434)
11  feature x11 (-6494532106.678183)
12  feature x12 (-6494532106.711108)
13  feature x13 (-6494532106.906321)
14  feature x14 (-6494532106.916800)
15  feature x15 (-6494532106.941691)
16  feature x16 (-6494532107.049221)
17  feature x17 (-6494532107.071664)
18  feature x18 (-6494532107.076819)
19  feature x19 (-6494532107.095350)
20  feature x20 (-6494532107.124458)
21  feature x21 (-6494532107.208526)
22  feature x22 (-6494532107.291896)
23  feature x23 (-6494532107.315606)
24  feature x24 (-6494532107.319578)
25  feature x25 (-6494532107.322818)
26  feature x26 (-6494532107.337678)
27  feature x27 (-6494532107.345344)
28  feature x28 (-6494532107.347136)
29  feature x29 (-6494532107.374278)
30  feature x30 (-6494532107.403748)
31  feature x31 (-6494532107.405770)
32  feature x32 (-6494532107.411852)
33  feature x33 (-6494532107.469144)
34  feature x34 (-6494532107.470899)
35  feature x35 (-6494532107.471970)
36  feature x36 (-6494532107.489899)
37  feature x37 (-6494532107.495930)
38  feature x38 (-6494532107.504712)
39  feature x39 (-6494532107.522346)
40  feature x40 (-6494532107.557917)
41  feature x41 (-6494532107.561793)
42  feature x42 (-6494532107.562286)
43  feature x43 (-6494532107.601017)
44  feature x44 (-6494532107.603461)
45  feature x45 (-6494532107.686674)
46  feature x46 (-6494532107.843128)
47  feature x47 (-6494532107.910987)

The intercept is: 0.555702083558 and the model score is: 0.79.

When I remove items_total, I get much more meaningful coefficients:

1   feature x2  2.140582
2   feature x3  1.732328
3   feature x4  1.701661
4   feature x5  1.607465
5   feature x6  1.549196
6   feature x7  1.526227
7   feature x8  1.363067
8   feature x9  1.329225
9   feature x10 1.18109
10  feature x11 1.010639
11  feature x12 0.978123
12  feature x13 0.782569
13  feature x14 0.773164
14  feature x15 0.747479
15  feature x16 0.638743
16  feature x17 0.617082
17  feature x18 0.61257
18  feature x19 0.593665
19  feature x20 0.565309
20  feature x21 0.480105
21  feature x22 0.396592
22  feature x23 0.373675
23  feature x24 0.369643
24  feature x25 0.365989
25  feature x26 0.350971
26  feature x27 0.343381
27  feature x28 0.34158
28  feature x29 0.314405
29  feature x30 0.285344
30  feature x31 0.282827
31  feature x32 0.277007
32  feature x33 0.219727
33  feature x34 0.217814
34  feature x35 0.217466
35  feature x36 0.198526
36  feature x37 0.193277
37  feature x38 0.184332
38  feature x39 0.166745
39  feature x40 0.130655
40  feature x41 0.127573
41  feature x42 0.126665
42  feature x43 0.087371
43  feature x44 0.085545
44  feature x45 0.003045
45  feature x46 (-0.153778)
46  feature x47 (-0.221548)

The intercept and score of the model are the same. Can someone help me understand why the coefficients are so different when I remove the items_total column?

1 Answer:

Answer 0 (score: 0):

I think this is mostly a theoretical question, which would be better asked on https://stats.stackexchange.com/ or https://datascience.stackexchange.com/.

This phenomenon is called multicollinearity.
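One way to confirm the dependence in the data itself is a rank check: when x47 is exactly the sum of the other columns, the design matrix loses a rank. A sketch, reusing the synthetic X from the snippet in the question above:

    import numpy as np

    # 47 columns, but x47 = x1 + ... + x46, so the rank is at most 46
    print("columns:", X.shape[1])
    print("rank:", np.linalg.matrix_rank(X.to_numpy()))

    # The condition number also blows up for (near-)singular matrices
    print("condition number:", np.linalg.cond(X.to_numpy()))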

Let me give a better example to demonstrate the problem; it can be found on the Russian-language Wikipedia page on multicollinearity. Suppose you have three features x1, x2, x3 with the exact dependence x1 = x2 + x3, so the model looks like

    y = b1*x1 + b2*x2 + b3*x3 + e

Now add some arbitrary a to b1 and subtract that same a from b2 and b3:

    y = (b1 + a)*x1 + (b2 - a)*x2 + (b3 - a)*x3 + e

Because x1 = x2 + x3, the extra terms cancel out: a*x1 - a*x2 - a*x3 = 0.
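A tiny numeric check makes the equivalence concrete (the values of b1, b2, b3 and a are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x2, x3 = rng.random(5), rng.random(5)
    x1 = x2 + x3                    # the exact linear dependence

    b1, b2, b3, a = 2.0, -1.5, 0.7, 1000.0

    y_original = b1 * x1 + b2 * x2 + b3 * x3
    y_shifted = (b1 + a) * x1 + (b2 - a) * x2 + (b3 - a) * x3

    # a*x1 - a*x2 - a*x3 = a*(x1 - x2 - x3) = 0, so both agree
    print(np.allclose(y_original, y_shifted))   # True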

So after an arbitrary modification of the coefficients we end up with exactly the same model, and that is the problem: the individual coefficients are no longer identifiable. You should therefore avoid such strong correlation between features (your last feature, items_total, is the sum of all the others).
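In practice that means either dropping the redundant column, as you already did, or fitting a regularized model such as Ridge, which keeps the coefficients finite even with collinear features. A sketch, reusing the synthetic split from the question's snippet (alpha is illustrative):

    from sklearn.linear_model import LinearRegression, Ridge

    # Option 1: drop the redundant items_total column and refit
    ols = LinearRegression().fit(x_train.drop(columns=["x47"]), y_train)
    print("OLS without x47, max |coef|:", abs(ols.coef_).max())

    # Option 2: keep all columns but penalize large coefficients
    ridge = Ridge(alpha=1.0).fit(x_train, y_train)
    print("Ridge with x47, max |coef|:", abs(ridge.coef_).max())
    print("Ridge test score:", ridge.score(x_test, y_test))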