Error when using a pretrained model in DIGITS with a different dataset. How do I modify the layers for the new dataset?

Time: 2016-10-20 02:13:35

Tags: machine-learning computer-vision deep-learning caffe nvidia-digits

I am trying to use a pretrained model (VGG 16) in DIGITS, but I run into the following error.


ERROR: Check failed: error == cudaSuccess (2 vs. 0) out of memory

conv2_2 does not need backward computation.
relu2_1 does not need backward computation.
conv2_1 does not need backward computation.
pool1 does not need backward computation.
relu1_2 does not need backward computation.
conv1_2 does not need backward computation.
relu1_1 does not need backward computation.
conv1_1 does not need backward computation.
data does not need backward computation.
This network produces output label
This network produces output softmax
Network initialization done.
Solver scaffolding done.
Finetuning from /home/digits/digits/jobs/20161020-095911-9d01/model.caffemodel
Attempting to upgrade input file specified using deprecated V1LayerParameter: /home/digits/digits/jobs/20161020-095911-9d01/model.caffemodel
Successfully upgraded file specified using deprecated V1LayerParameter
Attempting to upgrade input file specified using deprecated input fields: /home/digits/digits/jobs/20161020-095911-9d01/model.caffemodel
Successfully upgraded file specified using deprecated input fields.
Note that future Caffe releases will only support input layers and not input fields.
Check failed: error == cudaSuccess (2 vs. 0)  out of memory

I have successfully uploaded deploy.prototxt, VGG_ILSVRC_16_layers.caffemodel and synset_words.txt to DIGITS, and I am testing it with my own dataset, which contains two classes.
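
Regarding the layer modification asked about in the title, the usual Caffe fine-tuning approach is to rename the final fully-connected layer and set its num_output to the number of classes, so that the pretrained ImageNet weights for that layer are not loaded. A minimal sketch, assuming the fc7/fc8 layer names from the standard VGG-16 prototxt rather than the actual file used here:

layer {
  name: "fc8_new"              # renamed so the pretrained fc8 weights are not copied over
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_new"
  inner_product_param {
    num_output: 2              # number of classes in the new dataset
  }
}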

1 answer:

Answer 0 (score: 2):

Sometimes the digits-server fails to free memory. If you are using Ubuntu, try restarting it with this command:

sudo restart nvidia-digits-server

If that does not work and you hit the same out-of-memory error again, you need to reduce the batch_size.
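
The batch_size lives in the data layers of the network's train_val.prototxt; in DIGITS it can also be changed through the batch size field of the model form. A minimal sketch of a reduced data layer, with placeholder values rather than the actual job settings:

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "train_lmdb"       # placeholder LMDB path
    batch_size: 16             # reduced from a larger value (e.g. 64) to fit in GPU memory
    backend: LMDB
  }
}

A smaller batch size lowers the activation memory the GPU must hold per iteration, at the cost of noisier gradients and longer epochs.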