I'm running TensorFlow code in Jupyter on a 2018 MBP with 32 GB of RAM. The code comes from Chapter 9 of Geron's "Hands-On Machine Learning with Scikit-Learn and TensorFlow". The code runs fine until it reaches a call to .eval() or .run(), at which point the kernel dies.
https://github.com/ageron/handson-ml/blob/master/09_up_and_running_with_tensorflow.ipynb
The code is in the notebook linked above.
Cutting and pasting the code from the linked notebook into my own Jupyter notebook and running it kills the kernel in cell 19:
import numpy as np
from sklearn.datasets import fetch_california_housing
reset_graph()
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
    theta_value = theta.eval()
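For reference, here is the same normal-equation computation in plain NumPy (my own sketch, not from the notebook), which should make it easy to check whether the failure is specific to TensorFlow:

import numpy as np
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
m, n = housing.data.shape
X_np = np.c_[np.ones((m, 1)), housing.data]  # add the bias column, as in the cell above
y_np = housing.target.reshape(-1, 1)
# theta = (X^T X)^(-1) X^T y  -- the Normal Equation
theta_np = np.linalg.inv(X_np.T.dot(X_np)).dot(X_np.T).dot(y_np)
print(theta_np)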
and in cell 25:
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)
    best_theta = theta.eval()
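(For completeness: scaled_housing_data_plus_bias comes from an earlier cell of the linked notebook, defined roughly as follows; I include it so the cell above is self-contained.)

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]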
To find the problem, I tried to narrow it down to the mse.eval() and sess.run(training_op) calls in the last cell. The kernel is not killed if the last block is:
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =")
or if it is:
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        print('hi')
    best_theta = theta.eval()
But the kernel dies with either of the following two blocks:
(1)
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
(2)
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        sess.run(training_op)
I also tried this code in Spyder, and then reinstalled TensorFlow; the result is the same. Similar code elsewhere in Chapter 9 of Geron's book also kills my kernel. I'm enjoying the guidance of his book and would like to continue with it.
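For completeness, here is an even smaller check (my own sketch, not from the book) to see whether any sess.run call at all kills the kernel, independent of the housing data:

import tensorflow as tf

x = tf.constant(2.0)
with tf.Session() as sess:
    # if even this kills the kernel, the problem is the TF install itself, not the book's code
    print(sess.run(x * 3.0))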