I am trying to convert the Iris tutorial (https://www.tensorflow.org/get_started/estimator) to read its training data from .png files instead of .csv. It works when I use numpy_input_fn, but not when I build the input from a Dataset. I suspect input_fn() is returning the wrong type, but I don't really understand what it should return or how to implement it. The error is:
File "iris_minimal.py", line 27, in <module>
model_fn().train(input_fn(), steps=1)
...
raise TypeError('unsupported callable') from ex
TypeError: unsupported callable
The TensorFlow version is 1.3. Full code:
import tensorflow as tf
from tensorflow.contrib.data import Dataset, Iterator

NUM_CLASSES = 3

def model_fn():
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
    return tf.estimator.DNNClassifier([10, 20, 10], feature_columns, "tmp/iris_model", NUM_CLASSES)

def input_parser(img_path, label):
    one_hot = tf.one_hot(label, NUM_CLASSES)
    file_contents = tf.read_file(img_path)
    image_decoded = tf.image.decode_png(file_contents, channels=1)
    image_decoded = tf.image.resize_images(image_decoded, [2, 2])
    image_decoded = tf.reshape(image_decoded, [4])
    return image_decoded, one_hot

def input_fn():
    filenames = tf.constant(['images/image_1.png', 'images/image_2.png'])
    labels = tf.constant([0, 1])
    data = Dataset.from_tensor_slices((filenames, labels))
    data = data.map(input_parser)
    iterator = data.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

model_fn().train(input_fn(), steps=1)
Answer 0 (score: 5)
I noticed a few errors in your snippet:

- The train method accepts an input function itself, so you should pass input_fn, not the result of calling input_fn().
- Features must be passed to the estimator as a dict keyed by the feature column name, i.e. {'x': features}.
- DNNClassifier uses the SparseSoftmaxCrossEntropyWithLogits loss function. "Sparse" means it expects ordinal (integer) class labels rather than one-hot vectors, so your one-hot conversion is unnecessary (this question explains the differences between the cross-entropy losses in tf); a short numeric sketch of this point follows the list.
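To make the sparse-vs-one-hot point concrete, here is a minimal sketch of my own (not part of the original answer) with made-up logits: tf.losses.sparse_softmax_cross_entropy on plain integer labels and tf.losses.softmax_cross_entropy on the matching one-hot labels compute the same mean loss.

import tensorflow as tf

# Made-up scores for 2 examples and 3 classes, purely for illustration.
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
int_labels = tf.constant([0, 1])                  # what DNNClassifier expects
one_hot_labels = tf.one_hot(int_labels, depth=3)  # what the non-sparse loss expects

sparse_loss = tf.losses.sparse_softmax_cross_entropy(labels=int_labels, logits=logits)
dense_loss = tf.losses.softmax_cross_entropy(onehot_labels=one_hot_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run([sparse_loss, dense_loss]))    # both values are identical

With the labels kept as plain integers, try the following code: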
import tensorflow as tf
from tensorflow.contrib.data import Dataset

NUM_CLASSES = 3

def model_fn():
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4], dtype=tf.float32)]
    return tf.estimator.DNNClassifier([10, 20, 10], feature_columns, "tmp/iris_model", NUM_CLASSES)

def input_parser(img_path, label):
    file_contents = tf.read_file(img_path)
    image_decoded = tf.image.decode_png(file_contents, channels=1)
    image_decoded = tf.image.resize_images(image_decoded, [2, 2])
    image_decoded = tf.reshape(image_decoded, [4])
    label = tf.reshape(label, [1])       # keep the integer label, no one-hot encoding
    return image_decoded, label

def input_fn():
    filenames = tf.constant(['input1.jpg', 'input2.jpg'])
    labels = tf.constant([0, 1], dtype=tf.int32)
    data = Dataset.from_tensor_slices((filenames, labels))
    data = data.map(input_parser)
    data = data.batch(1)                 # add a batch dimension for the estimator
    iterator = data.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return {'x': features}, labels       # dict keyed by the feature column name

model_fn().train(input_fn, steps=1)      # pass the function itself, not input_fn()
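As a hypothetical follow-up of my own (not part of the original answer), assuming the TF 1.3 Estimator API, the same input_fn can be reused to evaluate the trained model and to get predictions; the prediction dicts report integer class ids, matching the sparse label format discussed above. The variable names here are placeholders.

classifier = model_fn()                            # keep a handle instead of chaining .train()
classifier.train(input_fn, steps=1)

metrics = classifier.evaluate(input_fn, steps=1)   # e.g. accuracy and average loss
print(metrics)

for prediction in classifier.predict(input_fn):
    print(prediction['class_ids'])                 # integer class ids, not one-hot vectors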