To be clear: I don't want to load a pre-trained model in C++. I want to do the training in C++ and then use the resulting model.
Answer 0 (score: 3)
I have been trying to train with the C++ API for a few days now, and as far as I can tell there is no complete example for version 1.2. I haven't given up; if I manage to put together a working example (I'm trying to recognize digits from MNIST), I'll post it here. The way sessions are run changed in 1.2, and the documentation is very sparse.
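As far as I can tell, the main change is that graphs built with the C++ ops API are now run through a ClientSession, with fetches given as Output objects rather than node-name strings. A minimal sketch of that pattern (my own illustration against the 1.x headers, not an official example):

#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

int main() {
  Scope scope = Scope::NewRootScope();
  auto a = Const(scope, 2.0f);
  auto b = Const(scope, 3.0f);
  auto c = Mul(scope, a, b);

  // The ClientSession is bound to the scope's graph; fetches are the
  // Output objects themselves instead of node-name strings.
  ClientSession session(scope);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({c}, &outputs));
  std::cout << outputs[0].scalar<float>() << std::endl;  // prints 6
  return 0;
}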
If you are willing to use an older version of the C++ API, this example from the source tree should be enough:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/tutorials/example_trainer.cc
Answer 1 (score: 0)
The link below provides a good example of training a model from scratch; unfortunately, it does not show how to save the trained model. The author also explains the code. I have been using this: https://matrices.io/training-a-deep-neural-network-using-only-tensorflow-c/ (it was the first example of training a TensorFlow network in C++).
There are also some basic tests in my repository: https://github.com/PinkySan/TensorflowHandlingTests
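The core training code from that tutorial is reproduced below. It assumes two feed Tensors, x_data and y_data, already exist. Here is a minimal sketch of constructing them; the sample count and the random fill are my own assumption, chosen to match the 3-input / 1-output network below:

#include <cstdlib>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"

using namespace tensorflow;

// Eight hypothetical training samples: 3 input features and 1 target each.
Tensor x_data(DT_FLOAT, TensorShape({8, 3}));
Tensor y_data(DT_FLOAT, TensorShape({8, 1}));

// Fill both feeds with uniform random values in [0, 1).
auto x_flat = x_data.flat<float>();
for (int i = 0; i < x_flat.size(); ++i) x_flat(i) = std::rand() / float(RAND_MAX);
auto y_flat = y_data.flat<float>();
for (int i = 0; i < y_flat.size(); ++i) y_flat(i) = std::rand() / float(RAND_MAX);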
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/framework/gradients.h"
#include "tensorflow/cc/ops/standard_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

Scope scope = Scope::NewRootScope();

auto x = Placeholder(scope, DT_FLOAT);
auto y = Placeholder(scope, DT_FLOAT);

// weights init
auto w1 = Variable(scope, {3, 3}, DT_FLOAT);
auto assign_w1 = Assign(scope, w1, RandomNormal(scope, {3, 3}, DT_FLOAT));
auto w2 = Variable(scope, {3, 2}, DT_FLOAT);
auto assign_w2 = Assign(scope, w2, RandomNormal(scope, {3, 2}, DT_FLOAT));
auto w3 = Variable(scope, {2, 1}, DT_FLOAT);
auto assign_w3 = Assign(scope, w3, RandomNormal(scope, {2, 1}, DT_FLOAT));

// bias init
auto b1 = Variable(scope, {1, 3}, DT_FLOAT);
auto assign_b1 = Assign(scope, b1, RandomNormal(scope, {1, 3}, DT_FLOAT));
auto b2 = Variable(scope, {1, 2}, DT_FLOAT);
auto assign_b2 = Assign(scope, b2, RandomNormal(scope, {1, 2}, DT_FLOAT));
auto b3 = Variable(scope, {1, 1}, DT_FLOAT);
auto assign_b3 = Assign(scope, b3, RandomNormal(scope, {1, 1}, DT_FLOAT));

// layers (note the nested Tanh on layer_1; a single Tanh would be the
// more conventional choice)
auto layer_1 = Tanh(scope, Tanh(scope, Add(scope, MatMul(scope, x, w1), b1)));
auto layer_2 = Tanh(scope, Add(scope, MatMul(scope, layer_1, w2), b2));
auto layer_3 = Tanh(scope, Add(scope, MatMul(scope, layer_2, w3), b3));

// L2 regularization on the weights
auto regularization = AddN(scope,
                           std::initializer_list<Input>{L2Loss(scope, w1),
                                                        L2Loss(scope, w2),
                                                        L2Loss(scope, w3)});

// loss: mean squared error plus the scaled regularization term
auto loss = Add(scope,
                ReduceMean(scope, Square(scope, Sub(scope, layer_3, y)), {0, 1}),
                Mul(scope, Cast(scope, 0.01, DT_FLOAT), regularization));

// add the gradient operations to the graph
std::vector<Output> grad_outputs;
TF_CHECK_OK(AddSymbolicGradients(scope, {loss}, {w1, w2, w3, b1, b2, b3}, &grad_outputs));

// update the weights and biases with one gradient-descent step (learning rate 0.01)
auto apply_w1 = ApplyGradientDescent(scope, w1, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[0]});
auto apply_w2 = ApplyGradientDescent(scope, w2, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[1]});
auto apply_w3 = ApplyGradientDescent(scope, w3, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[2]});
auto apply_b1 = ApplyGradientDescent(scope, b1, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[3]});
auto apply_b2 = ApplyGradientDescent(scope, b2, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[4]});
auto apply_b3 = ApplyGradientDescent(scope, b3, Cast(scope, 0.01, DT_FLOAT), {grad_outputs[5]});

ClientSession session(scope);
std::vector<Tensor> outputs;

// initialize the weights and biases by running the assign nodes once
TF_CHECK_OK(session.Run({assign_w1, assign_w2, assign_w3, assign_b1, assign_b2, assign_b3}, nullptr));

// training steps (x_data and y_data are the feed Tensors built above)
for (int i = 0; i < 5000; ++i) {
  if (i % 100 == 0) {
    TF_CHECK_OK(session.Run({{x, x_data}, {y, y_data}}, {loss}, &outputs));
    std::cout << "Loss after " << i << " steps " << outputs[0].scalar<float>() << std::endl;
  }
  // nullptr because the outputs of the update ops are not needed
  TF_CHECK_OK(session.Run({{x, x_data}, {y, y_data}}, {apply_w1, apply_w2, apply_w3, apply_b1, apply_b2, apply_b3}, nullptr));
}
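The tutorial stops short of saving anything, but the trained values can at least be fetched back out once the loop finishes and written to disk by hand (a sketch, not a substitute for a real checkpoint format):

// Fetch the final variable values; outputs[0..5] then hold the trained
// weights and biases as Tensors that can be serialized manually.
TF_CHECK_OK(session.Run({w1, w2, w3, b1, b2, b3}, &outputs));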
Answer 2 (score: 0)
I think the best strategy is to train the model in Python, save it, and then load the frozen model in C++ for inference. I wrote instructions explaining how to freeze a checkpoint from TensorFlow (TFLearn) and load the frozen model for inference with the C/C++ API:
The instructions are too long to copy here.
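The gist of the approach, though: freeze the Python checkpoint into a single .pb file, then load it from C++ roughly as below. This is a sketch; the file name and the "input"/"output" node names are placeholders that depend entirely on how the model was exported.

#include <memory>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

using namespace tensorflow;

int main() {
  // Load the frozen GraphDef produced by the Python freezing step.
  GraphDef graph_def;
  TF_CHECK_OK(ReadBinaryProto(Env::Default(), "frozen_model.pb", &graph_def));

  // Create a plain Session and import the graph into it.
  std::unique_ptr<Session> session(NewSession(SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Feed a dummy input and fetch the result by node name.
  Tensor input(DT_FLOAT, TensorShape({1, 3}));
  input.flat<float>().setZero();
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));
  return 0;
}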