I have a regression network with a EuclideanLoss layer; after training, the loss settles at a value of about 3.
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv"
  top: "conv2"
  convolution_param {
    num_output: 1 # <-- this is correct, right??
    kernel_size: 5
    pad: 2
    stride: 1
    ...
  }
}
layer {
  name: "relu"
  type: "ReLU"
  bottom: "conv2"
  top: "result"
  relu_param {
    negative_slope: 0.01
  }
}
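Note that the network's final activation is a leaky ReLU (negative_slope: 0.01), so its outputs are not constrained to [0, 1]; negative pre-activations leak through scaled down. A minimal NumPy sketch of what that layer computes (the function name is mine, not Caffe's):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # Leaky ReLU as in the prototxt above: positives pass through
    # unchanged, negatives are scaled by negative_slope.
    return np.where(x > 0, x, negative_slope * x)

x = np.array([-2.0, -0.5, 0.0, 0.7, 3.0])
print(leaky_relu(x))  # negatives damped by 0.01, positives unchanged
```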
My data are images of size 1 x 128 x 128, and my ground_truth is an image of the same size. I scale all values from [0, 255] -> [0, 1]. When I try to retrieve the predicted image as my result, I get a strange image that does not even look like the ground_truth. The python script I use to retrieve the output looks like this:
net.blobs['data'].data[...] = transformer.preprocess('data', img)
pred = net.forward()
output_blob = pred['result']
predicted_image_array = np.array(output_blob)
predicted_image_array = predicted_image_array.squeeze()
range_value = np.ptp(predicted_image_array)
min_value = predicted_image_array.min()
max_value = predicted_image_array.max()
# shift so the minimum is 0
predicted_image_array -= min_value
# scale into [0, 255], guarding against a constant image
if range_value != 0:
    predicted_image_array /= range_value
predicted_image_array *= 255
# cv2.imwrite expects an 8-bit image, not int64
predicted_image_array = predicted_image_array.astype(np.uint8)
cv2.imwrite('predicted_output.jpg', predicted_image_array)
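For reference, the scaling-and-casting step can be checked in isolation. This is a self-contained sketch (the helper name is mine); the key point is that cv2.imwrite wants uint8 data, and a zero-range (constant) output must be handled separately:

```python
import numpy as np

def to_uint8(arr):
    # Min-max scale an arbitrary float array into [0, 255] and cast
    # to uint8, the dtype cv2.imwrite expects for an 8-bit image.
    arr = arr.astype(np.float64)
    rng = np.ptp(arr)
    if rng == 0:
        # A constant image has no range to scale; return all zeros.
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((arr - arr.min()) / rng * 255).round().astype(np.uint8)

img = to_uint8(np.array([[0.1, 0.5], [0.9, 0.3]]))
print(img.dtype, img.min(), img.max())  # uint8 0 255
```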
Is a loss value of 3 too high, or is something wrong with my python script?
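As a back-of-the-envelope check, Caffe's EuclideanLoss computes 1/(2N) times the sum of squared differences over the batch, so a loss of 3 can be translated into a per-pixel error (assuming the loss is averaged per image and the targets are 1 x 128 x 128 in [0, 1]):

```python
import math

loss = 3.0
pixels = 128 * 128                 # pixels per 1 x 128 x 128 target
mse_per_pixel = 2 * loss / pixels  # undo the 1/2 factor, average over pixels
rmse = math.sqrt(mse_per_pixel)
print(rmse, rmse * 255)  # ~0.019 in [0, 1], i.e. roughly 5 gray levels
```

Under those assumptions the loss corresponds to an RMSE of only about 5 gray levels out of 255, which would not by itself explain a wildly wrong-looking output.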
Hint: when I use a SoftmaxWithLoss layer instead of the EuclideanLoss layer, and do not scale the values from [0, 255] -> [0, 1] but leave them as they are so that my labels come from [0, 255], I get a pretty decent result!