What my code does is fit this function: y = x * 0.1 + 0.3
Here is my code:
import tensorflow as tf

with tf.device('/gpu:0'):
    # create data
    x_data = tf.random_uniform(shape=[100], minval=0, maxval=1, dtype=tf.float32, seed=1)
    y_data = x_data * 0.1 + 0.3

    ### create tensorflow structure start ###
    Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    biases = tf.Variable(tf.zeros([1]))
    y = Weights * x_data + biases
    loss = tf.reduce_mean(tf.square(y - y_data))
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    ### create tensorflow structure end ###

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# tf.initialize_all_variables() is no longer valid as of
# 2017-03-02 if using tensorflow >= 0.12
if int(tf.__version__.split('.')[1]) < 12 and int(tf.__version__.split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

for step in range(200000):
    sess.run(train)
    if step % 200 == 0:
        print(step, sess.run(Weights), sess.run(biases))
I know that data generated on the CPU is stored in RAM, so if we compute on the GPU, extra transfer time is needed. What can I do to make this code run faster on the GPU, i.e., so that the GPU beats the CPU?
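For reference, here is a minimal sketch of the effect I am asking about (assuming TensorFlow 1.x on a machine with a CUDA-capable GPU; the shapes and iteration count are illustrative, not from my real workload). The regression above runs one tiny kernel per sess.run, so per-step launch and transfer overhead dominates; a single large op amortizes that overhead:

import time
import tensorflow as tf

def time_op(device):
    # Build and time the same large matmul pinned to the given device.
    with tf.Graph().as_default():
        with tf.device(device):
            a = tf.random_uniform([4000, 4000])
            b = tf.random_uniform([4000, 4000])
            c = tf.reduce_sum(tf.matmul(a, b))
        with tf.Session() as sess:
            sess.run(c)  # warm-up run (memory allocation, kernel setup)
            start = time.time()
            for _ in range(10):
                sess.run(c)
            return (time.time() - start) / 10

print('CPU:', time_op('/cpu:0'))
print('GPU:', time_op('/gpu:0'))

My understanding is that the GPU only pulls ahead when each sess.run does enough work (large tensors, few round trips through the Python loop), which is why I am asking how to restructure the training code above.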