Parameters shoot to infinity after training for a while

Date: 2018-07-23 17:31:36

Tags: python-3.x tensorflow linear-regression tensorboard

This is my first time implementing linear regression in TensorFlow. Initially I tried a linear model, but after a few iterations of training my parameters shot off to infinity. So I changed the model to a quadratic one and tried training again, but after a few iterations the same thing happened.

As a result, tf.summary.histogram('Weights', W0) is receiving inf as its value, and similarly for W1 and b.

I wanted to inspect my parameters in TensorBoard (since I have never used it before), but I got this error.

I asked this question earlier, the slight difference being that I was using a linear model then, which ran into the same problem. (At that point I did not know it was because the parameters were going to infinity, since I was running the model in an IPython notebook; when I ran the program from the terminal it produced the error mentioned below, which helped me figure out that the problem was the parameters shooting off to infinity.) From the comments on that question I learned that it ran on someone else's PC, and his TensorBoard showed the parameters really do reach infinity.

Here is the link to the earlier question. I hope I have declared Y_ correctly in the program; if not, please correct me!

Here is the TensorFlow code:

import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt

boston=load_boston()
type(boston)
boston.feature_names

bd=pd.DataFrame(data=boston.data,columns=boston.feature_names)

bd['Price']=pd.DataFrame(data=boston.target)
np.random.shuffle(bd.values)


W0=tf.Variable(0.3)
W1=tf.Variable(0.2)
b=tf.Variable(0.1)
#print(bd.shape[1])

tf.summary.histogram('Weights', W0)
tf.summary.histogram('Weights', W1)
tf.summary.histogram('Biases', b)



dataset_input=bd.iloc[:, 0 : bd.shape[1]-1];
#dataset_input.head(2)

dataset_output=bd.iloc[:, bd.shape[1]-1]
dataset_output=dataset_output.values
dataset_output=dataset_output.reshape((bd.shape[0],1)) 
#converted (506,) to (506,1) because in pandas
#the shape was not changing and it was needed later in feed_dict


dataset_input=dataset_input.values  #only dataset_input is in DataFrame form and converting it into np.ndarray


dataset_input = np.array(dataset_input, dtype=np.float32) 
#making the datatype into float32 for making it compatible with placeholders

dataset_output = np.array(dataset_output, dtype=np.float32)

X=tf.placeholder(tf.float32, shape=(None,bd.shape[1]-1))
Y=tf.placeholder(tf.float32, shape=(None,1))

Y_=W0*X*X + W1*X + b    #Hope this equation is rightly written
#Y_pred = tf.add(tf.multiply(tf.pow(X, pow_i), W), Y_pred)
print(X.shape)
print(Y.shape)


loss=tf.reduce_mean(tf.square(Y_-Y))
tf.summary.scalar('loss',loss)

optimizer=tf.train.GradientDescentOptimizer(0.001)
train=optimizer.minimize(loss)

init=tf.global_variables_initializer()
sess=tf.Session()
sess.run(init)



wb_=[]
with tf.Session() as sess:
    summary_merge = tf.summary.merge_all()

    writer=tf.summary.FileWriter("Users/ajay/Documents",sess.graph)

    epochs=10
    sess.run(init)

    for i in range(epochs):
        s_mer=sess.run(summary_merge,feed_dict={X: dataset_input, Y: dataset_output})  #ERROR________ERROR
        sess.run(train,feed_dict={X:dataset_input,Y:dataset_output})

        #CHANGED
        sess.run(loss, feed_dict={X:dataset_input,Y:dataset_output})
        writer.add_summary(s_mer,i)

        #tf.summary.histogram(name="loss",values=loss)
        if(i%5==0):
            print(i, sess.run([W0,W1,b]))
            wb_.append(sess.run([W0,W1,b]))

print(writer.get_logdir())
print(writer.close())

I am getting this error:

 /anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
(?, 13)
(?, 1)
2018-07-22 02:04:24.826027: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
0 [-3833776.2, -7325.9595, -15.471448]
5 [inf, inf, inf]
Traceback (most recent call last):
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Infinity in summary histogram for: Biases
     [[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "LR.py", line 75, in <module>
    s_mer=sess.run(summary_merge,feed_dict={X: dataset_input, Y: dataset_output})  #ERROR________ERROR
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Infinity in summary histogram for: Biases
     [[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]

Caused by op 'Biases', defined at:
  File "LR.py", line 24, in <module>
    tf.summary.histogram('Biases', b)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/summary/summary.py", line 187, in histogram
    tag=tag, values=values, name=scope)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 283, in histogram_summary
    "HistogramSummary", tag=tag, values=values, name=name)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
    op_def=op_def)
  File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Infinity in summary histogram for: Biases
     [[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]

1 answer:

Answer 0 (score: 0)

I think this is caused by a learning rate that is too high for gradient descent. See Gradient descent explodes if learning rate is too large.

The loss actually grows larger and larger after every epoch.
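The blow-up can be reproduced with a tiny NumPy sketch (a hypothetical 1-D least-squares problem, not the Boston data): once the step size exceeds roughly 2 divided by the curvature of the loss, every gradient step overshoots the minimum and the parameter grows without bound, just as W0, W1 and b do in the output above.

```python
import numpy as np

# Hypothetical 1-D least squares: loss(w) = mean((x*w - y)^2).
# The curvature is 2*mean(x^2), so gradient descent diverges for lr > 1/mean(x^2).
x = np.array([10.0, 20.0, 30.0])   # large-magnitude inputs, like raw Boston features
y = np.array([1.0, 2.0, 3.0])      # the optimum is w = 0.1

def descend(lr, steps):
    w = 0.3
    for _ in range(steps):
        grad = 2 * np.mean((x * w - y) * x)  # d(loss)/dw
        w -= lr * grad
    return w

print(descend(lr=0.01, steps=10))     # overshoots on each step: |w| explodes
print(descend(lr=0.0001, steps=200))  # converges toward the optimum 0.1
```

The same mechanism explains why 0.001 was already far too large for this data.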

I changed

optimizer=tf.train.GradientDescentOptimizer(0.001)

to

optimizer=tf.train.GradientDescentOptimizer(0.0000000001)

and printed the loss after every epoch, by changing

sess.run(loss, feed_dict={X:dataset_input,Y:dataset_output})

to

print("loss",sess.run(loss, feed_dict={X:dataset_input,Y:dataset_output}))

in your code. The error goes away. The output is

(?, 13)
(?, 1)
loss =  44061484.0
0 [-0.08337769, 0.19926739, 0.099998444]
loss =  3373030.2
loss =  258605.05
loss =  20211.799
loss =  1964.4918
loss =  567.7717
5 [-0.0001616638, 0.19942635, 0.099998794]
loss =  460.862
loss =  452.67877
loss =  452.05255
loss =  452.00452
Users/ajay/Documents
None
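One common complement to shrinking the learning rate (my own suggestion here, not something the answer above tried) is standardizing the input columns: the raw Boston features span very different scales, which is why the rate had to be pushed down so far. A minimal NumPy sketch of that preprocessing step:

```python
import numpy as np

def standardize(X):
    """Scale each column to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0              # guard against constant columns
    return (X - mean) / std

# Toy matrix with columns on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
Xs = standardize(X)
print(Xs.mean(axis=0))  # each column now has mean ~0
print(Xs.std(axis=0))   # and standard deviation 1
```

In the question's code this would be applied to dataset_input before it is fed to the placeholder, after which a moderate rate such as 0.01 typically converges.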