I trained a simple feed-forward neural network using tf.layers.dense. However, after training the layers with an optimizer and a train op, I don't know how to use the trained layers to predict labels for new data that I want to evaluate. I have searched Stack Overflow and Google for how to do this, and the closest answer I found is Using a created tensorflow model for predicting. Is there a simpler way to use the trained layers, though, other than saving the model and loading it again?
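For reference, the save-and-restore route I am referring to looks roughly like this (just a minimal sketch with a stand-in one-layer network and an example checkpoint path, not my actual code):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
preds = tf.layers.dense(x, 2)              # stand-in for the trained network
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop would go here ...
    saver.save(sess, './model.ckpt')

# later (possibly in another script that rebuilds the same graph): restore and predict
with tf.Session() as sess:
    saver.restore(sess, './model.ckpt')
    new_data = np.random.random_sample([5, 3])
    print(sess.run(preds, feed_dict={x: new_data}))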
The network is being trained to predict the output concentrations of a chemical reaction. The reaction output can be modelled with coupled ODEs and solved easily that way, but I am trying to get a neural network to give an approximate solution.
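(By "coupled ODEs" I mean something like the following toy first-order A -> B system; this is only an illustrative scipy sketch, not my actual conc_out function.)

import numpy as np
from scipy.integrate import odeint

def reaction_odes(c, t, k):
    ca, cb = c
    # dCa/dt = -k*Ca, dCb/dt = k*Ca
    return [-k * ca, k * ca]

t = np.linspace(0.0, 10.0, 50)
conc = odeint(reaction_odes, [1.0, 0.0], t, args=(0.5,))
print(conc[-1])  # [Ca, Cb] at t = 10

Here is my actual code: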
import numpy as np
import pandas as pd
import tensorflow as tf

# Making data
# conc_out and trans come from earlier in my script (conc_out produces the concentration labels)
training_features = pd.DataFrame(data=np.random.random_sample([500, 3]), columns=['ca', 't', 'T'])
training_labels = conc_out(training_features, trans)
validation_features = pd.DataFrame(data=np.random.random_sample([30, 3]), columns=['ca', 't', 'T'])
validation_labels = conc_out(validation_features, trans)
def my_input_fn(features, targets, batch_size=1, num_epochs=None, shuffle=True):
    # Creating Dataset importing function, returning get_next from iterator
    # features = features.to_dict('list')
    ds = tf.data.Dataset.from_tensor_slices((features, targets))
    if shuffle:
        ds = ds.shuffle(buffer_size=10000)
    ds = ds.batch(batch_size).repeat(num_epochs)
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels
def nn(input_features, hidden_layers=[10]):
    net = tf.layers.dense(input_features, hidden_layers[0], activation=tf.nn.sigmoid)
    if len(hidden_layers) > 1:
        for i in range(len(hidden_layers) - 1):
            net = tf.layers.dense(net, hidden_layers[i + 1], activation=tf.nn.sigmoid)
    logits = tf.layers.dense(net, 2, activation=None)
    return logits
def train_nn_regression_model(
        learning_rate,
        epochs,
        batch_size,
        hidden_units,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):
    # Create the input function and build the network, loss and train op.
    features, labels = my_input_fn(training_examples, training_targets, batch_size=batch_size)
    predictions = nn(features, hidden_units)
    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=predictions, labels=labels))
    train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
    with tf.Session() as sess:
        # Train the neural network
        sess.run(tf.global_variables_initializer())
        training_predictions_before = pd.DataFrame(data=sess.run(predictions), columns=['TP_Ca', 'TP_Cb'])
        print(training_predictions_before.head(10))
        for epoch in range(epochs):
            epoch_loss = 0
            for _ in range(int(training_examples.shape[0] / batch_size)):
                _, loss_value = sess.run([train_op, loss])
                epoch_loss += loss_value
            print('Epoch : ', epoch + 1, ' out of ', epochs, ' . Epoch loss = ', epoch_loss)
        training_predictions = pd.DataFrame(data=sess.run(predictions), columns=['TP_Ca', 'TP_Cb'])
        print(training_predictions.head(10))
train_nn_regression_model(
    learning_rate=0.002,
    epochs=10,
    batch_size=20,
    hidden_units=[100],
    training_examples=training_features,
    training_targets=training_labels,
    validation_examples=validation_features,
    validation_targets=validation_labels)
When I run the code, training_predictions_before and training_predictions give exactly the same answer. Shouldn't training_predictions give a different answer after the train op has been run?
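My understanding is that running the train op should update the variables and therefore change the outputs, as in this tiny standalone toy (made-up data and a squared-error loss, just for illustration):

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.random_sample([4, 3]), dtype=tf.float32)
y = tf.constant(np.ones([4, 2]), dtype=tf.float32)
out = tf.layers.dense(x, 2)
loss = tf.reduce_mean(tf.square(out - y))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out))        # before training
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(out))        # after training: outputs differ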
Thanks!
Edit: I edited the code to make sure everything runs in the same session.
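To make it clearer what I am hoping for: ideally I would reuse the trained layers inside the same session to predict on new data, something along these lines (a rough sketch with hypothetical explicit layer names and a placeholder; I don't know whether this is the right way to wire it up):

# hypothetical variant of nn() with explicit layer names so a second call can
# reuse the trained weights
def nn_reusable(inputs, hidden_layers=[10], reuse=False):
    net = inputs
    for i, units in enumerate(hidden_layers):
        net = tf.layers.dense(net, units, activation=tf.nn.sigmoid,
                              name='hidden_%d' % i, reuse=reuse)
    return tf.layers.dense(net, 2, activation=None, name='output', reuse=reuse)

features, labels = my_input_fn(training_features, training_labels, batch_size=20)
predictions = nn_reusable(features, hidden_layers=[100])                # training graph
new_x = tf.placeholder(tf.float64, shape=[None, 3])
new_predictions = nn_reusable(new_x, hidden_layers=[100], reuse=True)   # shares the trained weights

# ...then, after the training loop but still inside the same tf.Session():
# sess.run(new_predictions, feed_dict={new_x: validation_features.values})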