I am trying to reproduce, with a GPyTorch version, the solution of a GP regression done with the sklearn implementation. Unfortunately I cannot illustrate the issue with the original dataset. The test MSE of the sklearn solution is consistently about 40% lower than that of the PyTorch version, but I am also not very familiar with Torch. There is another question that deals with a similar problem, but it seems to be about a specific issue with multi-output GP regression and does not appear to apply here.
I tried to reproduce the problem with an artificial dataset that is completely different from the original data:
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as mse
import numpy as np
import matplotlib.pyplot as plt
train_size = 1000
test_size = 200
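# Synthetic data: targets are a noisy linear combination of 6 Gaussian features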
X_train = np.random.normal(loc = 0, scale = 5, size = (train_size, 6))
y_train = X_train[:,0] + 2*X_train[:,1]-3*X_train[:,2]+4*X_train[:,3] - 5*X_train[:,4] + 6*X_train[:,5] + np.random.normal(loc = 0, scale = 0.3, size = train_size)
X_test = np.random.normal(loc = 0, scale = 5, size = (test_size, 6))
y_test = X_test[:,0] + 2*X_test[:,1]-3*X_test[:,2]+4*X_test[:,3] - 5*X_test[:,4] + 6*X_test[:,5] + np.random.normal(loc = 0, scale = 0.3, size = test_size)
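# Naive baseline: predict the training mean everywhere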
naive_prediction = y_train.mean()
basic_mse = mse(y_test, np.repeat(naive_prediction, test_size) )
# Sklearn GP regression
import sklearn.gaussian_process as gp
kernel = gp.kernels.ConstantKernel(1.0, (1e-1, 1e3)) * gp.kernels.RBF(10.0, (1e-3, 1e3))
model = gp.GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10, alpha=0.1, normalize_y=False)
model.fit(X_train, y_train)
params = model.kernel_.get_params()
sk_pred, std = model.predict(X_test, return_std=True)
sk_mse = mse(sk_pred, y_test)
print('Sklearn MSE reduction : ', sk_mse/basic_mse)
print('Sklearn Lengthscale: ', params['k2__length_scale'])
# GPytorch Regression
import torch
import gpytorch
X_train_tensor = torch.Tensor(X_train)
y_train_tensor = torch.Tensor(y_train)
X_test_tensor = torch.Tensor(X_test)
y_test_tensor = torch.Tensor(y_test)
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(X_train_tensor, y_train_tensor, likelihood)
training_iter = 1000
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
    {'params': model.parameters()},  # Includes GaussianLikelihood parameters
], lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
all_losses = []
for i in range(training_iter):
    # Zero gradients from previous iteration
    optimizer.zero_grad()
    # Output from model
    output = model(X_train_tensor)
    # Calc loss and backprop gradients
    loss = -mll(output, y_train_tensor)
    loss.backward()
    all_losses.append(loss.item())
    optimizer.step()
plt.plot(all_losses)
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    prediction = likelihood(model(X_test_tensor))
    mean = prediction.mean
    # Get lower and upper predictive bounds
    lower, upper = prediction.confidence_region()
noise = model.likelihood.noise.item()
length_scale = model.covar_module.base_kernel.lengthscale.item()
torch_mse = mse(mean.numpy(), y_test)
print('Torch MSE reduction : ', torch_mse/basic_mse)
print('Torch Lengthscale: ', length_scale)
print('Torch Noise: ', noise)
Incidentally, GPyTorch actually performs better in this example. More specifically, what I am trying to understand is the difference in the noise variance. By experimenting I found that sklearn's alpha parameter, which essentially sets the noise level on the diagonal of the covariance matrix, has a large influence on the quality of the test MSE. If I understand it correctly, in GPyTorch this parameter is instead inferred via backprop. Apart from that, the two kernels look identical to me, provided the torch variant is given enough iterations to relax into a good solution.
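For reference, this is how I currently read the correspondence: sklearn's alpha is a fixed value added to the diagonal of the training covariance, while GPyTorch's GaussianLikelihood learns its noise term together with the kernel hyperparameters. My attempt at fixing the noise would be to replace the likelihood setup above with something like the following (I am not sure this is the intended way; raw_noise is just the parameter name I found when poking around the module):
# Attempt: initialise the likelihood noise at sklearn's alpha and freeze it,
# so Adam no longer updates it (not sure this is the canonical way)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise = 0.1  # same value as alpha in the sklearn model
likelihood.noise_covar.raw_noise.requires_grad_(False)
# Alternative I came across: a likelihood with fixed per-point observation noise
# likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(
#     noise=torch.full((train_size,), 0.1))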
My questions are:
How do I set and fix the noise covariance in the GPyTorch model? (Is the attempt sketched above the right way to do it?)
The inferred lengthscales seem to differ considerably between the two implementations, especially on the proprietary dataset of the original implementation. How can that be explained?
Are there any other factors that could explain the performance difference, and which aspects of the data would amplify it? (To narrow this down, a check I am considering is sketched below.)
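The idea of the check is to separate the effect of the hyperparameters from the rest of the implementation by plugging the values learned by GPyTorch back into sklearn with its optimizer switched off; I am assuming here that outputscale, lengthscale and noise map onto ConstantKernel, RBF length_scale and alpha respectively:
# Diagnostic sketch: reuse the GPyTorch hyperparameters in a fixed sklearn kernel
# to see whether the MSE gap comes from the hyperparameters or from elsewhere
outputscale = model.covar_module.outputscale.item()
fixed_kernel = gp.kernels.ConstantKernel(outputscale, constant_value_bounds='fixed') \
    * gp.kernels.RBF(length_scale, length_scale_bounds='fixed')
check_model = gp.GaussianProcessRegressor(kernel=fixed_kernel, alpha=noise, optimizer=None)
check_model.fit(X_train, y_train)
print('Sklearn with torch hyperparameters, MSE reduction: ',
      mse(check_model.predict(X_test), y_test) / basic_mse)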