Limitations of Keras/TensorFlow in solving a linear regression task

Date: 2018-11-20 13:25:22

Tags: tensorflow machine-learning keras linear-regression

I tried to implement linear regression in Keras/TensorFlow, and the result really surprised me. The standard examples work on random data, but if we change the input data slightly, all of them stop working correctly.

I am trying to recover the coefficients of y = 0.5 * x1 + 0.5 * x2.

import numpy as np
from sklearn import preprocessing
from tensorflow import keras

np.random.seed(1443)
n = 100000
x = np.zeros((n, 2))

# Each feature is a scaled, *sorted* Poisson sample
x[:, 0] = sorted(preprocessing.scale(np.random.poisson(1000000, (n))))
x[:, 1] = sorted(preprocessing.scale(np.random.poisson(1000000, (n))))
y = (x[:, 0] + x[:, 1]) / 2

# A single Dense unit is exactly a linear regression model
model = keras.Sequential()
model.add(keras.layers.Dense(1, input_shape=(2,), dtype="float32"))
model.compile(loss='mean_squared_error', optimizer='sgd')

model.fit(x, y, epochs=1000, batch_size=64)
print(model.get_weights())

Results:

| epochs| batch_size |  bias      | x1         | x2
| ------+------------+------------+------------+-----------
| 1000  | 64         | -5.83E-05  | 0.90410435 | 0.09594361
| 1000  | 1024       | -5.71E-06  | 0.98739249 | 0.01258729
| 1000  | 10000      | -3.07E-07  | -0.2441376 | 1.2441349
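For comparison, the exact least-squares solution can be computed in closed form, so the target coefficients are known to machine precision. A minimal numpy sketch (assuming the x and y arrays built above):

import numpy as np

# Append an intercept column and solve min ||A w - y||^2 exactly
A = np.column_stack([np.ones(n), x])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w)  # expected: bias ~ 0, coefficients ~ [0.5, 0.5]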

My first thought was that this was a bug in Keras, so I tried the R/TensorFlow library:

library(tensorflow)

floatType <- "float32"
p <- 2L

# Linear model: Y_hat = X %*% W + b, with weights initialized to zero
X <- tf$placeholder(floatType, shape = shape(NULL, p), name = "x-data")
Y <- tf$placeholder(floatType, name = "y-data")
W <- tf$Variable(tf$zeros(list(p, 1L), dtype = floatType))
b <- tf$Variable(tf$zeros(list(1L), dtype = floatType))
Y_hat <- tf$add(tf$matmul(X, W), b)

# Mean squared error minimized with plain gradient descent
cost <- tf$reduce_mean(tf$square(Y_hat - Y))
generator <- tf$train$GradientDescentOptimizer(learning_rate = 0.01)
optimizer <- generator$minimize(cost)

session <- tf$Session()
session$run(tf$global_variables_initializer())

set.seed(1443)
n <- 10^5
# Same sorted-Poisson features as in the Keras version
x <- matrix(replicate(p, sort(scale(rpois(n, 10^6)))), nrow = n)
y <- matrix((x[,1] + x[,2]) / 2)

i <- 1
batch_size <- 10000
epoch_number  <- 1000
iterationNumber <- n*epoch_number / batch_size

while (iterationNumber > 0) {
  feed_dict <- dict(X = x[i:(i+batch_size-1), , drop = F], Y = y[i:(i+batch_size-1), , drop = F])
  session$run(optimizer, feed_dict = feed_dict)

  # Slide the batch window, wrapping around once the end of the data is reached
  i <- i + batch_size
  if (i > n - batch_size)
    i <- i %% batch_size

  iterationNumber <- iterationNumber - 1
}
# Compare the TF estimate with R's exact least-squares fit
r_model <- lm(y ~ x)
tf_coef <- c(session$run(b), session$run(W))
r_coef  <- r_model$coefficients
print(rbind(tf_coef, r_coef))

Results:

| epochs | batch_size | bias      | x1        | x2
| -------+------------+-----------+-----------+----------
| 2000   | 64         | -1.33E-06 | 0.500307  | 0.4996932
| 1000   | 1000       | 2.79E-08  | 0.5000809 | 0.499919
| 1000   | 10000      | -4.33E-07 | 0.5004921 | 0.499507
| 1000   | 100000     | 2.96E-18  | 0.5       | 0.5

TensorFlow finds the correct result only when the batch size equals the number of samples and the optimizer is plain SGD. With "adam" or "adagrad" the error is much larger.
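For what it's worth, the R setup that does converge starts W and b from zeros, while Keras's Dense layer uses a random initializer by default. A hedged sketch of the analogous full-batch, zero-initialized configuration in Keras (reusing x, y, and n from the code above):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(1, input_shape=(2,),
                       kernel_initializer='zeros',   # match the R code's tf$zeros init
                       bias_initializer='zeros')
])
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.SGD(0.01))
model.fit(x, y, epochs=1000, batch_size=n)  # full batch, as in the batch_size = 100000 row
print(model.get_weights())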

  1. For obvious reasons I cannot choose the hyperparameter batch_size = n. Can you suggest any approach to solve this problem with 1E-07 accuracy in Keras or TensorFlow?
  2. Why does TensorFlow find a better solution than Keras?

Update 1. Per the post by "today" below: shuffling the training dataset significantly improves the performance of the TensorFlow version:

# Shuffle rows so mini-batches no longer see the data in sorted order
shuffledIndex <- sample(1:nrow(x))
x <- x[shuffledIndex, ]
y <- y[shuffledIndex, , drop = FALSE]

For batch size = 2000:

| (Intercept)   | x1        | x2
| --------------+-----------+----------
| -1.130693e-09 | 0.5000004 | 0.4999989

1 Answer:

Answer 0 (score: 1)

The problem is that you are sorting the generated random numbers for each feature, so the two features end up very close to each other:

>>> np.mean(np.abs(x[:,0]-x[:,1]))
0.004125721684553685
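Indeed, after sorting, the two columns are almost perfectly collinear; a quick sketch that quantifies this (assuming the x array from the question):

import numpy as np

# Correlation between the two sorted features; a value this close to 1
# means the columns are nearly interchangeable
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])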

As a result, we effectively have:

y = (x1 + x2) / 2
 ~= (x1 + x1) / 2
  = x1
  = 0.5 * x1 + 0.5 * x1
  = 0.3 * x1 + 0.7 * x1
  = -0.3 * x1 + 1.3 * x1
  = 10.1 * x1 - 9.1 * x1
  = thousands of other possible combinations
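Since x1 ≈ x2, any weight pair (a, 1 - a) fits the data almost equally well, which is easy to verify numerically (a sketch, assuming the sorted x and y from the question):

import numpy as np

# Every pair of weights summing to 1 yields a near-zero MSE on the sorted data
for a in [0.5, 0.3, -0.3, 10.1]:
    y_hat = a * x[:, 0] + (1 - a) * x[:, 1]
    print(a, np.mean((y_hat - y) ** 2))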

In this situation, the solution Keras converges to actually depends on the initial values of the Dense layer's weights and bias. With different initial values you will get different results (and for some of them, the model may not converge at all):

# set the initial weight of Dense layer
model.layers[0].set_weights([np.array([[0], [1]]), np.array([0])])

# fit the model ...

# the final weights
model.get_weights()

[array([[0.00203656],
        [0.9981099 ]], dtype=float32),
 array([4.5520876e-05], dtype=float32)]    # because: y = 0 * x1 + 1 * x2 = x2 ~= (x1 + x2) / 2

# again set the weights to something different
model.layers[0].set_weights([np.array([[0], [0]]), np.array([1])])

# fit the model...

# the final weights
model.get_weights()

[array([[0.49986306],
        [0.50013727]], dtype=float32),
 array([1.4176634e-08], dtype=float32)]    # the one you were looking for!

However, if you don't sort the features (i.e., simply remove sorted), the converged weights are very likely to be very close to [0.5, 0.5].
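A sketch of that experiment, identical to the question's code except that the sorted calls are dropped (the epoch count here is an assumption; far fewer iterations should be needed once the collinearity is gone):

import numpy as np
from sklearn import preprocessing
from tensorflow import keras

np.random.seed(1443)
n = 100000
x = np.zeros((n, 2))

# Same scaled Poisson features, but *not* sorted, so the columns stay uncorrelated
x[:, 0] = preprocessing.scale(np.random.poisson(1000000, (n)))
x[:, 1] = preprocessing.scale(np.random.poisson(1000000, (n)))
y = (x[:, 0] + x[:, 1]) / 2

model = keras.Sequential()
model.add(keras.layers.Dense(1, input_shape=(2,), dtype="float32"))
model.compile(loss='mean_squared_error', optimizer='sgd')
model.fit(x, y, epochs=10, batch_size=64)
print(model.get_weights())  # weights close to [0.5, 0.5], bias near 0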