Kernel Ridge and simple Ridge with polynomial features

Date: 2018-09-29 23:00:36

Tags: python scikit-learn data-science

What is the difference between KernelRidge with a polynomial kernel (from sklearn.kernel_ridge) and using PolynomialFeatures + Ridge (from sklearn.linear_model)?

3 answers:

Answer 0 (score: 0)

The difference lies in the feature computation. PolynomialFeatures explicitly computes polynomial combinations of the input features up to the desired degree, while KernelRidge(kernel='poly') only considers a polynomial kernel (a polynomial representation of feature dot products), which is expressed in terms of the original features. This document gives a good general overview.

Regarding the computation, we can inspect the relevant parts of the source code:

The computation of the (training) kernel follows a similar procedure in both cases; compare Ridge and KernelRidge. The main difference is that Ridge explicitly considers the dot product between whatever (polynomial) features it has received, while for KernelRidge these polynomial features are generated implicitly during the computation. For example, consider a single feature x: with gamma = coef0 = 1, KernelRidge computes (x**2 + 1)**2 == x**4 + 2*x**2 + 1. If you now consider PolynomialFeatures instead, it provides the features x**2, x, 1, and the corresponding dot product is x**4 + x**2 + 1. Hence the dot products differ by a term x**2. Of course we could rescale the polynomial features to x**2, sqrt(2)*x, 1, while with KernelRidge(kernel='poly') we do not have that kind of flexibility. On the other hand, the difference probably does not matter (in most cases).
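
As a quick numeric check of this claim (a minimal sketch with toy values of my own, using only the public scikit-learn API): for a single feature, the degree-2 polynomial kernel with gamma = coef0 = 1 coincides exactly with the dot products of the rescaled explicit features 1, sqrt(2)*x, x**2.

import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel
from sklearn.preprocessing import PolynomialFeatures

# Toy single-feature data (values chosen arbitrarily for illustration).
x = np.array([[0.5], [1.0], [2.0]])

# Implicit computation: K(x, x') = (x*x' + 1)**2.
K_implicit = polynomial_kernel(x, degree=2, gamma=1, coef0=1)

# Explicit computation: build [1, x, x**2], rescale the linear term, take dot products.
xp = PolynomialFeatures(degree=2, include_bias=True).fit_transform(x)
xp[:, 1] *= np.sqrt(2)
K_explicit = xp @ xp.T

print(np.allclose(K_implicit, K_explicit))  # True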

Note that the computation of the dual coefficients is performed in a similar manner as well: Ridge and KernelRidge. In the end, KernelRidge keeps the dual coefficients, while Ridge computes the weights directly.

Now let's look at a more complete example:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.utils.extmath import safe_sparse_dot

np.random.seed(20181001)

a, b = 1, 4
x = np.linspace(0, 2, 100).reshape(-1, 1)
y = a*x**2 + b*x + np.random.normal(scale=0.2, size=(100,1))

poly = PolynomialFeatures(degree=2, include_bias=True)
xp = poly.fit_transform(x)
print('We can see that the new features are now [1, x, x**2]:')
print(f'xp.shape: {xp.shape}')
print(f'xp[-5:]:\n{xp[-5:]}', end='\n\n')
# Scale the `x` columns so we obtain similar results.
xp[:, 1] *= np.sqrt(2)

ridge = Ridge(alpha=0, fit_intercept=False, solver='cholesky')
ridge.fit(xp, y)

krr = KernelRidge(alpha=0, kernel='poly', degree=2, gamma=1, coef0=1)
krr.fit(x, y)

# Let's try to reproduce some of the involved steps for the different models.
ridge_K = safe_sparse_dot(xp, xp.T)
krr_K = krr._get_kernel(x)
print('The computed kernels are (almost) identical:')
print(f'Max. kernel difference: {np.abs(ridge_K - krr_K).max()}', end='\n\n')
print('Predictions slightly differ though:')
print(f'Max. difference: {np.abs(krr.predict(x) - ridge.predict(xp)).max()}', end='\n\n')

# Let's see if the fit changes if we provide `x**2, x, 1` instead of `x**2, sqrt(2)*x, 1`.
xp_2 = xp.copy()
xp_2[:, 1] /= np.sqrt(2)
ridge_2 = Ridge(alpha=0, fit_intercept=False, solver='cholesky')
ridge_2.fit(xp_2, y)
print('Using features "[x**2, x, 1]" instead of "[x**2, sqrt(2)*x, 1]" predictions are (almost) the same:')
print(f'Max. difference: {np.abs(ridge_2.predict(xp_2) - ridge.predict(xp)).max()}', end='\n\n')
print('Interpretability of the coefficients changes though:')
print(f'ridge.coef_[1:]: {ridge.coef_[0, 1:]}, ridge_2.coef_[1:]: {ridge_2.coef_[0, 1:]}')
print(f'ridge.coef_[1]*sqrt(2): {ridge.coef_[0, 1]*np.sqrt(2)}')
print(f'Compare with: a, b = ({a}, {b})')

plt.plot(x.ravel(), y.ravel(), 'o', color='skyblue', label='Data')
plt.plot(x.ravel(), ridge.predict(xp).ravel(), '-', label='Ridge', lw=3)
plt.plot(x.ravel(), krr.predict(x).ravel(), '--', label='KRR', lw=3)
plt.grid()
plt.legend()
plt.show()

From this we obtain:

We can see that the new features are now [1, x, x**2]:
xp.shape: (100, 3)
xp[-5:]:
[[1.         1.91919192 3.68329762]
 [1.         1.93939394 3.76124885]
 [1.         1.95959596 3.84001632]
 [1.         1.97979798 3.91960004]
 [1.         2.         4.        ]]

The computed kernels are (almost) identical:
Max. kernel difference: 1.0658141036401503e-14

Predictions slightly differ though:
Max. difference: 0.04244651134471766

Using features "[x**2, x, 1]" instead of "[x**2, sqrt(2)*x, 1]" predictions are (almost) the same:
Max. difference: 7.15642822779472e-14

Interpretability of the coefficients changes though:
ridge.coef_[1:]: [2.73232239 1.08868872], ridge_2.coef_[1:]: [3.86408737 1.08868872]
ridge.coef_[1]*sqrt(2): 3.86408737392841
Compare with: a, b = (1, 4)

Example plot
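
To make the earlier note about dual coefficients versus weights concrete, here is a minimal self-contained sketch (the toy data and hyperparameters are my own, not taken from the example above): KernelRidge keeps one dual coefficient per training sample and predicts through the kernel matrix, while Ridge predicts directly from its explicit weight vector.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import polynomial_kernel
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.uniform(0, 2, size=(20, 1))
y = X[:, 0]**2 + 3*X[:, 0] + rng.normal(scale=0.1, size=20)

# KernelRidge: predictions are K(X_new, X_train) @ dual_coef_.
krr = KernelRidge(alpha=0.1, kernel='poly', degree=2, gamma=1, coef0=1).fit(X, y)
K = polynomial_kernel(X, krr.X_fit_, degree=2, gamma=1, coef0=1)
print(np.allclose(K @ krr.dual_coef_, krr.predict(X)))  # True

# Ridge: predictions come directly from the explicit weight vector.
Xp = PolynomialFeatures(degree=2).fit_transform(X)
ridge = Ridge(alpha=0.1, fit_intercept=False).fit(Xp, y)
print(np.allclose(Xp @ ridge.coef_, ridge.predict(Xp)))  # True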

Answer 1 (score: 0)

Here is an example showing it:

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_friedman1
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import PolynomialFeatures

    # Friedman #1 regression data with 7 input features.
    plt.figure()
    plt.title('Complex regression problem with one input variable')
    X_F1, y_F1 = make_friedman1(n_samples=100,
                                n_features=7, random_state=0)

    print('\nNow we transform the original input data to add\n\
    polynomial features up to degree 2 (quadratic)\n')
    poly = PolynomialFeatures(degree=2)
    X_F1_poly = poly.fit_transform(X_F1)
    X_train, X_test, y_train, y_test = train_test_split(X_F1_poly, y_F1,
                                                        random_state=0)
    linreg = Ridge().fit(X_train, y_train)

    print('(poly deg 2 + ridge) linear model coeff (w):\n{}'
          .format(linreg.coef_))
    print('(poly deg 2 + ridge) linear model intercept (b): {:.3f}'
          .format(linreg.intercept_))
    print('(poly deg 2 + ridge) R-squared score (training): {:.3f}'
          .format(linreg.score(X_train, y_train)))
    print('(poly deg 2 + ridge) R-squared score (test): {:.3f}'
          .format(linreg.score(X_test, y_test)))

(poly deg 2 + ridge) linear model coeff (w):
[ 0.    2.23  4.73 -3.15  3.86  1.61 -0.77 -0.15 -1.75  1.6   1.37  2.52
  2.72  0.49 -1.94 -1.63  1.51  0.89  0.26  2.05 -1.93  3.62 -0.72  0.63
 -3.16  1.29  3.55  1.73  0.94 -0.51  1.7  -1.98  1.81 -0.22  2.88 -0.89]
(poly deg 2 + ridge) linear model intercept (b): 5.418
(poly deg 2 + ridge) R-squared score (training): 0.826
(poly deg 2 + ridge) R-squared score (test): 0.825

Answer 2 (score: 0)

I assume you already know how kernel ridge regression (KRR) and PolynomialFeatures + Ridge work; they are more or less the same thing. I will list some minor differences between them.

  1. You can turn off the bias feature in PolynomialFeatures and include it in Ridge instead. Ridge's regularization term does not penalize the bias, whereas for sklearn's KRR the penalty term always includes the bias term.

  2. You can scale the features generated by PolynomialFeatures before passing them to Ridge, which amounts to choosing a custom regularization strength for each individual polynomial feature, so PolynomialFeatures + Ridge is more flexible. By contrast, the polynomial kernel has only two parameters to tune, gamma and c_0; see polynomial kernel. (The first sketch after this list illustrates points 1 and 2.)

  3. The fitting and prediction times differ. In KRR you have to solve a linear system with the N x N kernel matrix, (K + alpha * I) a = y, whereas with explicit polynomial features you only have to solve a (least-squares) system with the N x (D + 1) design matrix A, i.e. A w = y, where N is the number of training samples and D is the degree of the polynomial.

  4. (This is a rather extreme case.) If two samples are (nearly) identical, the kernel matrix will be (nearly) singular, and when alpha (the regularization strength) is very small you will run into numerical stability problems, because K + alpha * I is then almost singular. You can only avoid this by using Ridge; why Ridge still works in this situation is explained in many machine learning textbooks. (The second sketch after this list illustrates the conditioning issue.)
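
A minimal sketch of points 1 and 2 (the toy data and parameter values are my own, not from the answer): dropping the bias column in PolynomialFeatures lets Ridge fit an unpenalized intercept, and rescaling an individual column effectively changes how strongly that feature is regularized.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x = rng.uniform(0, 2, size=(50, 1))
y = x[:, 0]**2 + 3*x[:, 0] + 5 + rng.normal(scale=0.1, size=50)

# Point 1: no bias column; the intercept is fitted by Ridge but not regularized.
xp = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)  # [x, x**2]
ridge = Ridge(alpha=1.0, fit_intercept=True).fit(xp, y)
print(ridge.intercept_, ridge.coef_)

# Point 2: shrinking a column forces its weight to grow, so the L2 penalty
# hits that feature harder (an implicit per-feature regularization strength).
xp_scaled = xp.copy()
xp_scaled[:, 1] /= 10.0
ridge_scaled = Ridge(alpha=1.0, fit_intercept=True).fit(xp_scaled, y)
print(ridge_scaled.coef_[1] / 10.0, ridge.coef_[1])  # differs: the rescaled feature is shrunk more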
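
And a rough numeric illustration of point 4 (again with toy data of my own): with many samples and a tiny alpha, the N x N matrix K + alpha * I that KRR has to solve with can become extremely ill-conditioned, while the small normal-equations matrix behind Ridge stays well behaved.

import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x = rng.uniform(0, 2, size=(200, 1))
x[1] = x[0]          # duplicate one sample on purpose (the scenario described in point 4)
alpha = 1e-10

# Dual (KRR) side: a 200 x 200 kernel matrix with only a tiny ridge on the diagonal.
K = polynomial_kernel(x, degree=2, gamma=1, coef0=1)
print(np.linalg.cond(K + alpha * np.eye(len(x))))

# Primal (Ridge) side: only a 3 x 3 system for the features [1, x, x**2].
xp = PolynomialFeatures(degree=2).fit_transform(x)
print(np.linalg.cond(xp.T @ xp + alpha * np.eye(xp.shape[1])))  # orders of magnitude smaller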