Gradient descent algorithm for linear regression in R produces different results

Date: 2017-09-11 19:58:37

Tags: r algorithm plot gradient-descent

I am trying to implement linear regression from scratch in R, without using any packages or libraries, on the following data:

UCI Machine Learning Repository, Bike-Sharing-Dataset

The regression itself is straightforward; here is the code:

data <- read.csv("Bike-Sharing-Dataset/hour.csv")

# Select the usable features
data1 <- data[, c("season", "mnth", "hr", "holiday", "weekday", "workingday", "weathersit", "temp", "atemp", "hum", "windspeed", "cnt")]

# Split the data
trainingObs <- sample(nrow(data1), 0.70 * nrow(data1), replace = FALSE)

# Create the training dataset
trainingDS <- data1[trainingObs, ]

# Create the test dataset
testDS <- data1[-trainingObs, ]

x0 <- rep(1, nrow(trainingDS)) # column of 1's
x1 <- trainingDS[, c("season", "mnth", "hr", "holiday", "weekday", "workingday", "weathersit", "temp", "atemp", "hum", "windspeed")]

# create the x- matrix of explanatory variables
x <- as.matrix(cbind(x0,x1))

# create the y-vector of the dependent variable
y <- as.matrix(trainingDS$cnt)
m <- nrow(y)

# closed-form OLS solution via the normal equations: (X'X)^-1 X'y
solve(t(x) %*% x) %*% t(x) %*% y
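
As a quick sanity check (my addition, not part of the from-scratch exercise), these coefficients should agree with base R's lm(), which fits the same model via QR decomposition:

# Compare the normal-equation solution with lm() on the same training data
fit <- lm(cnt ~ ., data = trainingDS)
cbind(normal_eq = solve(t(x) %*% x) %*% t(x) %*% y, lm = coef(fit))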

The next step is to implement batch-update gradient descent, and this is where I am running into problems. I do not know where the errors come from or how to fix them. The code runs, but the values it produces are completely different from the regression results, and I am not sure why.
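
For reference, the batch update rule being implemented is, in matrix form (my notation, not from the original post):

theta <- theta - (alpha / n) * t(X) %*% (X %*% theta - y)

where alpha is the learning rate and all n training rows are used in every update.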

The two versions of batch-update gradient descent that I implemented are below (their results differ both from each other and from the regression results):

# Gradient descent 1: fits a simple regression y = m*x + c (treats x as a single numeric predictor)
gradientDesc <- function(x, y, learn_rate, conv_threshold, n, max_iter) {
  plot(x, y, col = "blue", pch = 20)
  m <- runif(1, 0, 1)
  c <- runif(1, 0, 1)
  yhat <- m * x + c
  MSE <- sum((y - yhat) ^ 2) / n
  converged = F
  iterations = 0
  while(converged == F) {
    ## Implement the gradient descent algorithm
    m_new <- m - learn_rate * ((1 / n) * (sum((yhat - y) * x)))
    c_new <- c - learn_rate * ((1 / n) * (sum(yhat - y)))
    m <- m_new
    c <- c_new
    yhat <- m * x + c
    MSE_new <- sum((y - yhat) ^ 2) / n
    if(MSE - MSE_new <= conv_threshold) {
      abline(c, m) 
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
    iterations = iterations + 1
    if(iterations > max_iter) { 
      abline(c, m) 
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
  }
  return(paste("MSE=", MSE))
}

# Gradient descent 2: multivariate, in matrix form (grad() reads m, and grad.descent() reads y, from the global environment)
grad <- function(x, y, theta) { # note that for readability, I redefined theta as a column vector
  gradient <-  1/m* t(x) %*% (x %*% theta - y) 
  return(gradient)
}
grad.descent <- function(x, maxit, alpha){
  theta <- matrix(rep(0, length=ncol(x)), ncol = 1)
  for (i in 1:maxit) {
    theta <- theta - alpha  * grad(x, y, theta)   
  }
  return(theta)
}

If anyone could explain why these two functions produce different results, I would greatly appreciate it. I also want to make sure that I am actually implementing gradient descent correctly.

Finally, how can I plot the results of the descent at different learning rates and overlay this on the results of the regression itself?
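
One way to sketch this (my own illustration, using the x and y defined above; track_mse() is a helper written just for this plot, not from the post):

# Record the MSE after every batch update, for several learning rates
track_mse <- function(x, y, alpha, iters) {
  n <- nrow(x)
  theta <- matrix(0, ncol(x), 1)    # start from the zero vector
  mse <- numeric(iters)
  for (i in seq_len(iters)) {
    theta <- theta - alpha / n * t(x) %*% (x %*% theta - y)
    mse[i] <- sum((y - x %*% theta)^2) / n
  }
  mse
}

rates <- c(0.0001, 0.0005, 0.001)   # illustrative; larger rates can diverge on unscaled data
iters <- 1000
ols_mse <- sum((y - x %*% (solve(t(x) %*% x) %*% t(x) %*% y))^2) / nrow(x)

mse_paths <- lapply(rates, function(a) track_mse(x, y, a, iters))
plot(NULL, xlim = c(1, iters), ylim = range(c(ols_mse, unlist(mse_paths)), finite = TRUE),
     xlab = "iteration", ylab = "MSE", log = "y")
for (j in seq_along(mse_paths)) lines(mse_paths[[j]], col = j + 1)
abline(h = ols_mse, lty = 2)        # MSE of the closed-form regression solution
legend("topright", legend = c(paste("alpha =", rates), "OLS"),
       col = c(seq_along(rates) + 1, 1), lty = c(rep(1, length(rates)), 2))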

EDIT: Here are the results of running the two algorithms with alpha = .005 and 10,000 iterations:

1)

> gradientDesc(trainingDS, y, 0.005, 0.001, 32, 10000)
[1] "Optimal intercept: 2183458.95872599 Optimal slope: 62417773.0184353"

2)

> print(grad.descent(x, 10000, .005))
                   [,1]
x0            8.3681113
season       19.8399837
mnth         -0.3515479
hr            8.0269388
holiday     -16.2429750
weekday       1.9615369
workingday    7.6063719
weathersit  -12.0611266
temp        157.5315413
atemp       138.8019732
hum        -162.7948299
windspeed    31.5442471

1 Answer:

Answer 0 (score: 1)

To give an example of how to write a function like this in a better way, consider the following for comparison:

gradientDesc <- function(x, y, learn_rate, conv_threshold, max_iter) {
  n <- nrow(x) 
  m <- runif(ncol(x), 0, 1) # m is the coefficient vector, of length ncol(x)
  yhat <- x %*% m # since x already contains a constant, no need to add another one

  MSE <- sum((y - yhat) ^ 2) / n

  converged = F
  iterations = 0

  while(converged == F) {
    m <- m - learn_rate * ( 1/n * t(x) %*% (yhat - y))
    yhat <- x %*% m
    MSE_new <- sum((y - yhat) ^ 2) / n

    if( abs(MSE - MSE_new) <= conv_threshold) {
      converged = T
    }
    iterations = iterations + 1
    MSE <- MSE_new

    if(iterations >= max_iter) break
  }
  return(list(converged = converged, 
              num_iterations = iterations, 
              MSE = MSE_new, 
              coefs = m) )
}

Now,

ols <- solve(t(x) %*% x) %*% t(x) %*% y
out <- gradientDesc(x, y, 0.005, 1e-7, 200000)

Here, x refers to the x defined in your first code block. Note the similarity between the coefficients:

> data.frame(ols, out$coefs)
                    ols   out.coefs
x0           33.0663095  35.2995589
season       18.5603565  18.5779534
mnth         -0.1441603  -0.1458521
hr            7.4374031   7.4420685
holiday     -21.0608520 -21.3284449
weekday       1.5115838   1.4813259
workingday    5.9953383   5.9643950
weathersit   -0.2990723  -0.4073493
temp        100.0719903 147.1157262
atemp       226.9828394 174.0260534
hum        -225.7411524 -225.2686640
windspeed    12.3671942   9.5792498

However, also note whether the run actually converged (out$converged in the returned list): you can improve the accuracy by increasing the number of iterations or by playing around with the step size. It may also help to scale the variables first.
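
On that last point, a minimal sketch of scaling (my addition, not from the answer; the step size and iteration count are illustrative): standardize the predictor columns before running the descent, then map the coefficients back to the original scale.

# Standardize all predictors (the intercept column x0 stays as is)
xs <- scale(x[, -1])
x_s <- cbind(x0 = 1, xs)
out_s <- gradientDesc(x_s, y, 0.05, 1e-7, 50000)

# Map the coefficients back to the original scale of x
b <- out_s$coefs[-1] / attr(xs, "scaled:scale")           # slopes on the original scale
a <- out_s$coefs[1] - sum(attr(xs, "scaled:center") * b)  # intercept on the original scale
c(x0 = a, b)

Because the standardized columns share a common scale, the same step size works for every coordinate and the descent typically converges in far fewer iterations.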