I have written out a simple TFRecords file containing three features and a label. Following the tutorials, it seems that to use these TFRecords I need to create a dataset, parse the examples, and then apply normalization and other operations via map(). If this is not the correct workflow, I would appreciate being corrected!
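For context, the file was written along these lines (a simplified sketch; all four fields are assumed to be stored as float features, which matches the parse spec below, and the sample values are made up):

import tensorflow as tf

def _float_feature(value):
    # Wrap a Python float in a tf.train.Feature
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

with tf.io.TFRecordWriter("dataset.tfrecords") as writer:
    # One hypothetical record; real values would come from the source data
    example = tf.train.Example(features=tf.train.Features(feature={
        'weight_pounds': _float_feature(7.25),
        'gestation_weeks': _float_feature(39.0),
        'plurality': _float_feature(1.0),
        'isMale': _float_feature(1.0),
    }))
    writer.write(example.SerializeToString())

The reading side then looks like this: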
import tensorflow as tf
from tensorflow import keras

dataset = tf.data.TFRecordDataset("dataset.tfrecords")
# Parse the protobuf
def _parse_function(proto):
    # Define your TFRecord schema again
    keys_to_features = {
        'weight_pounds': tf.io.FixedLenFeature([], tf.float32),
        'gestation_weeks': tf.io.FixedLenFeature([], tf.float32),
        'plurality': tf.io.FixedLenFeature([], tf.float32),
        'isMale': tf.io.FixedLenFeature([], tf.float32),
    }
    # Load one example
    parsed_features = tf.io.parse_example(proto, keys_to_features)
    # Turn your saved image string into an array
    #parsed_features['image'] = tf.decode_raw(
    #    parsed_features['image'], tf.uint8)
    return parsed_features
# Precomputed (mean, std) of each feature, used for normalization
hold_meanstd = {
    'weight_pounds': [7.234738, 1.330294],
    'gestation_weeks': [38.346464, 4.153269],
    'plurality': [1.035285, 0.196870]
}
def normalize(example):
    example['weight_pounds'] = (example['weight_pounds'] - hold_meanstd['weight_pounds'][0]) / hold_meanstd['weight_pounds'][1]
    example['gestation_weeks'] = (example['gestation_weeks'] - hold_meanstd['gestation_weeks'][0]) / hold_meanstd['gestation_weeks'][1]
    example['plurality'] = (example['plurality'] - hold_meanstd['plurality'][0]) / hold_meanstd['plurality'][1]
    label = example.pop('isMale')
    return (example, label)
dataset = tf.data.TFRecordDataset(["dataset.tfrecords"]).map(_parse_function)
dataset = dataset.map(normalize)
dataset = dataset.batch(64)
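A quick sanity check (not part of the original post) shows that each element of this dataset is a (dict of feature tensors, label) pair rather than a single tensor:

# Pull one batch and print its structure; the features arrive as a dict
for features, label in dataset.take(1):
    print({name: tensor.shape for name, tensor in features.items()})
    print(label.shape)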
Then, once I have this dataset, I figured I could feed it into a Keras model:
Dense = keras.layers.Dense
model = keras.Sequential([
    Dense(500, activation="relu", kernel_initializer='uniform',
          input_shape=(3,)),
    Dense(200, activation="relu"),
    Dense(100, activation="relu"),
    Dense(25, activation="relu"),
    Dense(1, activation="sigmoid")
])
optimizer = keras.optimizers.RMSprop(learning_rate=0.01)
# Compile Keras model
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[tf.keras.metrics.AUC()])
model.fit(dataset)
This raises the error:
ValueError: Layer sequential_1 expects 1 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=float32>, <tf.Tensor 'ExpandDims_1:0' shape=(None, 1) dtype=float32>, <tf.Tensor 'ExpandDims_2:0' shape=(None, 1) dtype=float32>]
The problem seems to be that the input dataset looks like three inputs instead of one. How can I get Keras to train on a TFRecords dataset?
Answer 0 (score: 2):
Since you specify input_shape=(3,) in the first Dense layer, your Keras model expects a tensor of shape (None, 3) as input (where None is the batch size). For example:
[
[0.0,0.0,0.0]
]
If we look at your tf.data.Dataset, we can see that it returns a dictionary. Each input will look like this:
{
    "weight_pounds": [0.0],
    "gestation_weeks": [0.0],
    "plurality": [0.0]
}
This is a bit different from the input_shape specified above!
To fix this, you have two options:
Option 1: rewrite normalize so it returns a single feature tensor instead of a dict:

def normalize(example):
    example['weight_pounds'] = (example['weight_pounds'] - hold_meanstd['weight_pounds'][0]) / hold_meanstd['weight_pounds'][1]
    example['gestation_weeks'] = (example['gestation_weeks'] - hold_meanstd['gestation_weeks'][0]) / hold_meanstd['gestation_weeks'][1]
    example['plurality'] = (example['plurality'] - hold_meanstd['plurality'][0]) / hold_meanstd['plurality'][1]
    label = example.pop('isMale')
    # Removing the dict struct: stack the three scalars into one (3,) tensor
    # (a plain Python list would still be treated as three separate inputs)
    data_input = tf.stack([example['weight_pounds'], example['gestation_weeks'], example['plurality']])
    return (data_input, label)
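With this version of normalize, rebuilding the pipeline as before yields feature batches of shape (None, 3) and labels of shape (None,), which matches input_shape=(3,). A sketch (the epoch count is just a placeholder):

dataset = tf.data.TFRecordDataset(["dataset.tfrecords"]).map(_parse_function)
dataset = dataset.map(normalize)
dataset = dataset.batch(64)

# The original Sequential model can now train directly on the dataset
model.fit(dataset, epochs=1)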
Option 2: keep the dictionary output (the original normalize that returns (example, label)) and let the model consume it through feature columns and a DenseFeatures layer:

from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.feature_column import numeric_column

feature_names = ['weight_pounds', 'gestation_weeks', 'plurality']
columns = [numeric_column(header) for header in feature_names]
model = keras.Sequential([
    DenseFeatures(columns),
    Dense(500, activation="relu", kernel_initializer='uniform'),
    Dense(200, activation="relu"),
    Dense(100, activation="relu"),
    Dense(25, activation="relu"),
    Dense(1, activation="sigmoid")
])
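With this approach the dataset keeps yielding (feature dict, label) pairs and the DenseFeatures layer maps the dict to a single dense tensor, so the rest of the original code works essentially unchanged. A sketch (the epoch count is again a placeholder):

dataset = tf.data.TFRecordDataset(["dataset.tfrecords"]).map(_parse_function)
dataset = dataset.map(normalize)   # the original normalize that returns (dict, label)
dataset = dataset.batch(64)

optimizer = keras.optimizers.RMSprop(learning_rate=0.01)
model.compile(loss='binary_crossentropy', optimizer=optimizer,
              metrics=[tf.keras.metrics.AUC()])
model.fit(dataset, epochs=1)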