I am using TensorFlow to train on the CIFAR-10 dataset. When I run the training loop, my PC freezes.
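The snippet assumes some earlier definitions that the post does not show. A minimal setup it could run against (the names and shapes here, x, y, w1, w2, learning_rate, and the mini_batch helper, are assumptions for illustration, not the asker's actual code):

import numpy as np
import tensorflow as tf  # TF 1.x

# Hypothetical definitions the snippet relies on; filter counts and
# stddev are arbitrary illustrative choices.
x = tf.placeholder(tf.float32, [None, 32, 32, 3])   # CIFAR-10 images
y = tf.placeholder(tf.float32, [None, 10])          # one-hot labels
w1 = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
w2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
learning_rate = 1e-3

def mini_batch(features, labels, batch_size):
    # Assumed helper: sample a random mini-batch from NumPy arrays.
    idx = np.random.choice(len(features), batch_size, replace=False)
    return features[idx], labels[idx]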
# forward propagation
# convolution layer 1
c1 = tf.nn.conv2d(x, w1, strides = [1,1,1,1], padding = 'SAME')
# activation function for c1: relu
r1 = tf.nn.relu(c1)
# maxpooling
p1 = tf.nn.max_pool(r1, ksize = [1,2,2,1], strides = [1,1,1,1], padding = 'SAME')
print('p1 shape: ',p1.shape)
# convolution layer 2
c2 = tf.nn.conv2d(p1, w2, strides = [1,1,1,1], padding='SAME')
# activation function for c2: relu
r2 = tf.nn.relu(c2)
# maxpooling
p2 = tf.nn.max_pool(r2, ksize = [1,2,2,1], strides = [1,2,2,1], padding = 'SAME')
print('p2 shape: ',p2.shape)
# fully connected layer
l1 = tf.contrib.layers.flatten(p2)
# fully connected layer
final = tf.contrib.layers.fully_connected(l1, 10, activation_fn = None)
print('output layer shape: ',final.shape)
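One detail worth noting in the prints: the first max-pool uses stride [1,1,1,1], so with 'SAME' padding it does not downsample at all; only the second pool halves the spatial size. Under the assumed setup above, the static shapes can be checked directly:

# Expected static shapes under the hypothetical filter counts (32, then 64).
assert p1.shape.as_list() == [None, 32, 32, 32]   # stride-1 pool: still 32x32
assert p2.shape.as_list() == [None, 16, 16, 64]   # stride-2 pool: 32x32 -> 16x16
assert final.shape.as_list() == [None, 10]        # one logit per CIFAR-10 class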
I am using softmax cross-entropy and the Adam optimizer:
# training and optimization
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = final, labels = y))
# using adam optimizer
optimize = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
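Side note: TensorFlow 1.5+ deprecates tf.nn.softmax_cross_entropy_with_logits in favor of the _v2 variant. A drop-in sketch (behavior is identical here, since the labels come from a placeholder and no gradient flows into them):

# Same loss with the non-deprecated op (TensorFlow 1.5+).
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=final, labels=y))
optimize = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)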
This is where it freezes:
# creating tensorflow session
se = tf.Session()
# initializing variables
se.run(tf.global_variables_initializer())
# training the graph
for i in range(1000):
    x_batch, y_batch = mini_batch(x_train, y_train, 110)
    se.run(optimize, {x: x_batch, y: y_batch})
    cost = se.run(cross_entropy, {x: x_train, y: y_train})
    print(cost)
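For scale: the CIFAR-10 training split holds 50,000 images, so the cost line above re-runs a full-dataset forward pass on every one of the 1,000 iterations. A lighter variant of the loop (a sketch; it fetches the batch loss in the same run call, so no second forward pass is needed) would be:

for i in range(1000):
    x_batch, y_batch = mini_batch(x_train, y_train, 110)
    # One run call performs the update and returns the batch loss.
    _, cost = se.run([optimize, cross_entropy], {x: x_batch, y: y_batch})
    if i % 100 == 0:
        print('step %d, batch loss %.4f' % (i, cost))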
Answer 0 (score: 1)
Well, it would be great if you also mentioned your PC configuration. However, the program you are running is not computationally hard, nor does it contain an infinite loop, so in my opinion the problem probably comes from your PC: you may have many applications running, so your Python process cannot get enough resources allocated, and that is what causes the freezing/hanging. It is not necessarily a code-related problem; this code runs fine on my MacBook Pro 2012.
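If the machine really is being starved, one mitigation (a sketch using TF 1.x's ConfigProto; the thread counts are arbitrary illustrative choices) is to cap how many CPU threads TensorFlow may take, so the rest of the system stays responsive:

# Cap TensorFlow's CPU thread pools so training does not saturate every core.
config = tf.ConfigProto(intra_op_parallelism_threads=2,
                        inter_op_parallelism_threads=2)
se = tf.Session(config=config)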