I am trying to recreate a DNNRegressor model from tf.contrib.learn in plain TensorFlow, but my loss is 6 orders of magnitude higher. Can anyone point me in the right direction? I can't tell what is wrong or what is different. :/ The data is here, in case it helps: http://pastebin.com/BG6r6EF6

tf.contrib.learn code:
import numpy as np
import tensorflow as tf

data = np.loadtxt('training.csv',
                  delimiter=',', skiprows=1,
                  usecols=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17),
                  dtype=np.float32)
X_ = data[:, :-1]
Y_ = data[:, -1]
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=14)]
classifier = tf.contrib.learn.DNNRegressor(feature_columns=feature_columns,
                                           hidden_units=[7],
                                           optimizer=tf.train.RMSPropOptimizer(learning_rate=.001),
                                           activation_fn=tf.nn.relu)
classifier.fit(x=X_, y=Y_, max_steps=1000)
Plain TensorFlow code:
data = np.loadtxt('training.csv',
                  delimiter=',', skiprows=1,
                  usecols=(3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17),
                  dtype=np.float32)
X_ = data[:, :-1]
Y_ = data[:, -1]

n_features = 14
hidden_units = 7
n_classes = 1
lr = .001

X = tf.placeholder(tf.float32, [None, n_features])
Y = tf.placeholder(tf.float32, [None])
W = tf.Variable(tf.truncated_normal([n_features, hidden_units]))
W2 = tf.Variable(tf.truncated_normal([hidden_units, n_classes]))
b = tf.Variable(tf.zeros([hidden_units]))
b2 = tf.Variable(tf.zeros([n_classes]))
hidden1 = tf.nn.relu(tf.matmul(X, W) + b)
pred = tf.matmul(hidden1, W2) + b2

# I have tried a few variations of squared-error loss with no luck
loss = tf.nn.l2_loss(pred - Y)
#loss = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_instances)
#loss = tf.reduce_mean(tf.squared_difference(pred, Y))
optimizer = tf.train.RMSPropOptimizer(lr).minimize(loss)

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    for step in range(1000):
        _, loss_value = sess.run([optimizer, loss],
                                 feed_dict={X: X_, Y: Y_})
Update

I changed the loss to

loss = tf.reduce_mean(tf.squared_difference(pred, Y))

and now the two approaches report roughly the same loss (~200). However, the TensorFlow model is very inaccurate, while the DNNRegressor produces the expected output on validation data. The TensorBoard graphs are also very different.
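One thing worth checking before comparing the graphs: in the plain TensorFlow code, pred has shape [None, 1] (the output of the final matmul) while Y has shape [None]. Under broadcasting rules, pred - Y and tf.squared_difference(pred, Y) then produce an [N, N] matrix rather than an [N] vector, which silently changes what the loss measures. NumPy follows the same broadcasting rules, so the effect can be demonstrated without TensorFlow (the small arrays below are purely illustrative):

```python
import numpy as np

# pred comes out of the network as a column vector of shape (N, 1);
# the targets Y_ are a flat vector of shape (N,).
pred = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
y = np.array([1.0, 2.0, 3.0])            # shape (3,)

# Broadcasting expands the elementwise difference into an (N, N) matrix.
diff = pred - y
print(diff.shape)   # (3, 3), not (3,)

# Flattening pred restores the intended per-example difference.
flat_diff = pred[:, 0] - y
print(flat_diff.shape)   # (3,)
```

In the graph code this would correspond to something like tf.reshape(pred, [-1]) before computing the loss, or alternatively declaring Y as tf.placeholder(tf.float32, [None, 1]) so both tensors agree in shape.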
Answer 0 (score: 1)
I would use TensorBoard to compare the graphs of the two models. Have you tried that?