Custom model (not tf.estimator) with tf.feature_column.input_layer over different datasets at once?

Asked: 2019-01-06 08:36:14

Tags: python tensorflow

I have run into a problem where I do not know how to define my network with tf.feature_column.input_layer so that it can run over two datasets at once. In a "traditional" layout I would simply use feed_dict and pass the training and test data manually through some input and output placeholders, but I thought it would be interesting to try to use input_layer instead.
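For contrast, here is a minimal sketch of the "traditional" placeholder layout referred to above; the shapes, random data, and layer sizes are illustrative and not taken from the original code:

import numpy as np
import tensorflow as tf

# Hypothetical placeholder-based layout (not from the question): batches are fed
# in explicitly via feed_dict instead of being read from a tf.data iterator.
x = tf.placeholder(tf.float32, shape=[None, 4], name='x')
y = tf.placeholder(tf.float32, shape=[None, 8], name='y')
logits = tf.layers.dense(x, 8)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
train = tf.train.AdamOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Any numpy arrays can be swapped in here, so training and test data
    # simply use different feed_dicts.
    sess.run(train, feed_dict={x: np.random.rand(32, 4), y: np.eye(8)[np.random.randint(8, size=32)]})
    sess.run(loss, feed_dict={x: np.random.rand(32, 4), y: np.eye(8)[np.random.randint(8, size=32)]})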

Datasets

features, labels = dataset_iterator(training_files, config)
features_test, labels_test = dataset_iterator(testing_files, config)
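dataset_iterator is not shown in the question. Purely as an assumption about what it returns (a features dict plus a labels tensor), a plausible tf.data-based version might look like:

def dataset_iterator(filenames, config):
    # Hypothetical helper -- the question does not include its definition.
    # Assumes TFRecord input and a parse function / settings supplied via `config`.
    dataset = tf.data.TFRecordDataset(filenames)
    dataset = dataset.map(config.parse_fn, num_parallel_calls=config.num_cpus)
    dataset = dataset.batch(config.batch_size)
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()  # features: dict of tensors, labels: tensor
    return features, labels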

Network

dense_tensor = tf.feature_column.input_layer(features=features, feature_columns=columns)
for units in [256, 16]:
    dense_tensor = tf.layers.dense(dense_tensor, units, tf.nn.relu)
logits = tf.layers.dense(dense_tensor, 8)

# Verification
correct_pred = tf.equal(tf.cast(logits, tf.int32), labels)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Training
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels))
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss_op)
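columns is also not defined in the question; it would be a list of tf.feature_column definitions whose keys match the features dict. The feature names below are made up, just to show the shape of such a list:

# Hypothetical feature columns -- the real names and types depend on the data.
columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'country', vocabulary_list=['US', 'UK', 'DE'])),
]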

My training process is as follows:

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    keep_iterating = True

    i = 0
    print('Accuracy: {}'.format(sess.run(accuracy)))
    while keep_iterating:
        i += 1
        try:
            _, loss_val, accuracy_val = sess.run([train_op, loss_op, accuracy])
            if i % 1000 == 1:
                print('Iteration: {}: Loss: {} Accuracy: {}'.format(i, loss_val, accuracy_val))
        except tf.errors.OutOfRangeError:
            print('Iteration: {}: Loss: {} Accuracy: {}'.format(i, loss_val, accuracy_val))
            keep_iterating = False
        except Exception as e:
            keep_iterating = False

To clarify: I am asking whether it is possible to feed something separate (features_test and labels_test) into

dense_tensor = tf.feature_column.input_layer(features=features, feature_columns=columns)

That way I could call train_op and have it run with the training iterator (features, labels), and call accuracy and have it run with the test iterator (features_test, labels_test).

Currently, calling accuracy still uses the features from the training iterator.

1 answer:

Answer 0 (score: 0)

So the solution was to do the following:

1) Swap in:

def train_func():
    return dataset_config(filenames=filename_list, batch_size=64, mapper=feature_proto.unpack, num_cpus=num_cpus)

def test_func():
    return dataset_config(filenames=evaluation_list, batch_size=4096, mapper=feature_proto.unpack, num_cpus=num_cpus)

2) Use:

is_training = tf.placeholder_with_default(True, shape=(), name='Is_Training')
features, labels = tf.cond(is_training, train_func, test_func)
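Because is_training is a placeholder_with_default, nothing has to be fed during training; feeding False flips the tf.cond to the test dataset. For example (assuming the session and ops from the question):

# Default (True): ops read from the training dataset
sess.run([train_op, loss_op])

# Override the default: accuracy now reads from the test dataset
test_accuracy = sess.run(accuracy, feed_dict={is_training: False})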

3) Modify the network input to:

dense_tensor = tf.feature_column.input_layer(features=features, feature_columns=columns)

4) Modify correct_pred to:

correct_pred = tf.equal(tf.cast(logits, tf.int32), labels)

so that it now uses whatever labels it is given.
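Putting it together, the training loop from the question could then evaluate on the test set without touching the graph. A rough sketch (the evaluation interval and print format are arbitrary):

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    i = 0
    try:
        while True:
            i += 1
            _, loss_val = sess.run([train_op, loss_op])
            if i % 1000 == 1:
                # Pull a batch from the test dataset instead of the training one
                test_acc = sess.run(accuracy, feed_dict={is_training: False})
                print('Iteration: {} Loss: {} Test accuracy: {}'.format(i, loss_val, test_acc))
    except tf.errors.OutOfRangeError:
        print('Training data exhausted after {} iterations'.format(i))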