How to get training accuracy via cross-validation in SVMlight

Asked: 2014-03-24 19:11:31

Tags: machine-learning svm cross-validation svmlight

I want to run cross-validation on my training set using SVMlight. It seems the option for this is -x 1 (though I'm not sure how many folds it uses...). The output is:

XiAlpha-estimate of the error: error<=31.76% (rho=1.00,depth=0)
XiAlpha-estimate of the recall: recall=>68.24% (rho=1.00,depth=0)
XiAlpha-estimate of the precision: precision=>69.02% (rho=1.00,depth=0)
Number of kernel evaluations: 56733
Computing leave-one-out **lots of gibberish here**
Retrain on full problem..............done.
Leave-one-out estimate of the error: error=12.46%
Leave-one-out estimate of the recall: recall=86.39%
Leave-one-out estimate of the precision: precision=88.82%
Actual leave-one-outs computed:  412 (rho=1.00)
Runtime for leave-one-out in cpu-seconds: 0.84

How do I get the accuracy? From the "estimate of the error"?

Thanks!
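A minimal sketch of pulling the leave-one-out error out of the console output shown above; the regular expression and variable names here are illustrative assumptions, not part of SVMlight itself:

```python
import re

# One line of SVMlight's console output, as shown above.
output = "Leave-one-out estimate of the error: error=12.46%"

# Extract the numeric error percentage from the "error=...%" field.
match = re.search(r"error=([\d.]+)%", output)
error_pct = float(match.group(1))
print(error_pct)  # 12.46
```

In practice you would run this over the captured stdout of svm_learn rather than a hard-coded string.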

1 Answer:

Answer 0 (score: 2)

These are contradictory concepts. Training error is the error on the training set, whereas cross-validation is used to estimate the validation error (on data that was not used for training).

Your output suggests you are using N folds, where N is the size of the training set. This is so-called "leave-one-out" validation (only 1 test point per fold!), which can overestimate the quality of your model. You should try 10 folds instead. Your accuracy is simply 1 − error.
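Applying "accuracy = 1 − error" to the leave-one-out figure reported above (a sketch; the 12.46 value is taken from the output in the question):

```python
# Leave-one-out error rate (%) reported by SVMlight in the question.
error_pct = 12.46

# Accuracy is the complement of the error rate.
accuracy_pct = round(100.0 - error_pct, 2)
print(accuracy_pct)  # 87.54
```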