TensorFlow: are all of Python's basic operators overridden in TensorFlow?

Posted: 2018-05-06 08:09:15

Tags: python tensorflow machine-learning

I am new to TensorFlow and tried to write a loss function (squared loss) using basic Python operators, but it does not work. Can anyone tell me where I went wrong? Thanks in advance.

n = x_data.shape[0]
L = (Y_pred-y)**2
loss = (1/n)*tf.reduce_sum(L)

When I run the corresponding session, I get loss = 0.0:

_ ,_m, _c, _l = session.run([optimizer,m,c,loss], feed_dict={x: x_data, y: y_data})

y is a placeholder.

loss = tf.reduce_mean(tf.squared_difference(Y_pred,y))

This, however, works perfectly fine.

Full code:

import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

#downloading dataset
!wget -nv -O /resources/data/PierceCricketData.csv https://ibm.box.com/shared/static/reyjo1hk43m2x79nreywwfwcdd5yi8zu.csv


df = pd.read_csv("/resources/data/PierceCricketData.csv")
df.head()

%matplotlib inline

x_data, y_data = (df["Chirps"].values,df["Temp"].values)

plt.plot(x_data, y_data, 'ro')
# label the axes
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Fahrenheit")


x = tf.placeholder(tf.float32, shape=x_data.shape)
y = tf.placeholder(tf.float32, shape=y_data.shape)
m = tf.Variable(3.0, name='m')
c = tf.Variable(2.0, name='c')

Y_pred = m*x+c



n = x_data.shape[0]
L = (Y_pred - y)**2
loss = (1/n)*tf.reduce_sum(L)

# loss = tf.reduce_mean(tf.squared_difference(Y_pred,y))


optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

session = tf.Session()
session.run(tf.global_variables_initializer())

convergenceTolerance = 0.0001
previous_m = np.inf
previous_c = np.inf

steps = {}
steps['m'] = []
steps['c'] = []

losses=[]

for k in range(100000):
    _ ,_m, _c, _l = session.run([optimizer,m,c,loss], feed_dict={x: x_data, y: y_data})



    steps['m'].append(_m)
    steps['c'].append(_c)
    losses.append(_l)
    if (np.abs(previous_m - _m) <= convergenceTolerance) or (np.abs(previous_c - _c) <= convergenceTolerance):

        print "Finished by Convergence Criterion"
        print k
        print _l
        break
    previous_m = _m, 
    previous_c = _c, 
print(losses)

The output I get is [0.0, 0.0]. Why is that?

1 Answer:

Answer 0 (score: 0)

Here is the official TensorFlow implementation of mean_squared_error:

# Excerpt from the TF 1.x losses module; Reduction and compute_weighted_loss
# are defined in the same file.
from tensorflow.python.framework import ops
from tensorflow.python.ops import math_ops
@tf_export("losses.mean_squared_error")
def mean_squared_error(labels, predictions, weights=1.0, scope=None,
                       loss_collection=ops.GraphKeys.LOSSES, 
                       reduction=Reduction.SUM_BY_NONZERO_WEIGHTS):
    if labels is None:
        raise ValueError("labels must not be None.")
    if predictions is None:
        raise ValueError("predictions must not be None.")
    with ops.name_scope(scope, "mean_squared_error",(predictions, labels, weights)) as scope:
        predictions = math_ops.to_float(predictions)
        labels = math_ops.to_float(labels)
        predictions.get_shape().assert_is_compatible_with(labels.get_shape())
        losses = math_ops.squared_difference(predictions, labels)
        return compute_weighted_loss(losses, weights, scope, loss_collection, reduction=reduction)

As you can see in the source code, you should make sure your tensors have the same dtype. Hope that answers your question.
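
For completeness, here is a minimal sketch of how the built-in loss could be dropped into the question's setup (TF 1.x graph mode). The toy x_data/y_data values below only stand in for the cricket dataset, and the learning rate is smaller than in the question so this tiny run stays numerically stable:

import numpy as np
import tensorflow as tf

# Toy stand-ins for the cricket data (chirps per 15 sec, temperature in Fahrenheit).
x_data = np.array([20.0, 16.0, 19.8, 18.4], dtype=np.float32)
y_data = np.array([88.6, 71.6, 93.3, 84.3], dtype=np.float32)

x = tf.placeholder(tf.float32, shape=x_data.shape)
y = tf.placeholder(tf.float32, shape=y_data.shape)
m = tf.Variable(3.0, name='m')
c = tf.Variable(2.0, name='c')
Y_pred = m * x + c

# Both tensors are float32 and the mean is taken inside the library call, so
# there is no hand-written 1/n factor (which, under Python 2 integer division,
# evaluates to 0 and silently zeroes out the loss).
loss = tf.losses.mean_squared_error(labels=y, predictions=Y_pred)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for _ in range(100):
        _, l = session.run([optimizer, loss], feed_dict={x: x_data, y: y_data})
    print(l)  # nonzero and decreasing, rather than stuck at 0.0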