I have been trying to port a facenet classifier, originally written in Python, to C++, and for the most part it works well. I use OpenCV to read the images and convert them into TensorFlow tensors, but after running the graph, my output tensor is filled with NaN values.
Here is the relevant code:
string input_layer = "input:0";
string phase_train_layer = "phase_train:0";
string output_layer = "embeddings:0";

tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
    tensorflow::TensorShape({(long long) input_Images.size(), height, width, channels}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>();

for (size_t i = 0; i < input_Images.size(); i++) {
    Mat image = input_Images[i];
    // Note: this cast assumes the Mat already holds 32-bit floats (CV_32FC3);
    // if it is still 8-bit (CV_8UC3), call image.convertTo(image, CV_32FC3) first,
    // otherwise the tensor is filled with garbage.
    const float* source_data = (float*) image.data;
    for (int h = 0; h < image.rows; ++h) {
        const float* source_row = source_data + (h * image.cols * image.channels());
        for (int w = 0; w < image.cols; ++w) {
            const float* source_pixel = source_row + (w * image.channels());
            for (int c = 0; c < image.channels(); ++c) {
                input_tensor_mapped(i, h, w, c) = *(source_pixel + c);
            }
        }
    }
}

tensorflow::Tensor phase_tensor(tensorflow::DT_BOOL, tensorflow::TensorShape());
phase_tensor.scalar<bool>()() = false;

cout << phase_tensor.DebugString() << endl;
cout << input_tensor.DebugString() << endl;

std::vector<tensorflow::Tensor> outputs;
std::vector<std::pair<string, tensorflow::Tensor>> feed_dict = {
    {input_layer, input_tensor},
    {phase_train_layer, phase_tensor},
};

Status run_status = session->Run(feed_dict, {output_layer}, {}, &outputs);
if (!run_status.ok()) {
    LOG(ERROR) << "\tRunning model failed: " << run_status << "\n";
    return -1;
}

cout << outputs[0].DebugString() << endl;
Why is this happening?
Answer 0 (score: 0)
So, after being distracted for a long time, I finally got back to this little project. That meant I had forgotten why I did what I did and had to look at everything again from scratch. Having done that, I am fairly sure I found why I was getting NaN values, and why adding a prewhitening/normalization step fixes it. It is so basic that I am annoyed I did not catch it the first time.
All of the pre-trained graphs expect the input matrix to contain normalized values in [-1, 1]. The prewhitening/normalization step does exactly that:
// mean_pxl and stddev_pxl are the per-image pixel mean and standard deviation
// (e.g. obtained with cv::meanStdDev); Image must be a floating-point Mat.
Image = Image - cv::Vec3d(mean_pxl, mean_pxl, mean_pxl);
Image = Image / stddev_pxl;