I am using svmtrain in MATLAB with the MLP kernel, as follows:
mlp = svmtrain(train_data, train_label, 'Kernel_Function', 'mlp', 'ShowPlot', true);
But I get this error:
??? Error using ==> svmtrain at 470
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
What is the reason? I tried other kernels and got no errors. I even tried the answer from svmtrain - unable to solve the optimization problem, like this:
options = optimset('maxiter', 1000);
svmtrain(train_data, train_label, 'Kernel_Function', 'mlp', 'Method', 'QP', ...
    'quadprog_opts', options);
But I got the same error again. My training set is a simple 45×2 dataset containing data points from 2 classes.
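For reference, a minimal sketch of the kind of setup I am working with (the synthetic data below is just a stand-in for my real 45×2 training set; the 23/22 split per class is arbitrary):

% Hypothetical stand-in for my data: 45 points in 2-D, two classes
rng(1);                                               % fixed seed, for reproducibility
train_data  = [randn(23, 2) + 1; randn(22, 2) - 1];   % 45x2 feature matrix
train_label = [ones(23, 1); -ones(22, 1)];            % 45x1 class labels (+1 / -1)

% The call that fails for me with the mlp kernel
mlp = svmtrain(train_data, train_label, ...
    'Kernel_Function', 'mlp', 'ShowPlot', true);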
Answer 0 (score: 0)
The solution here does not really explain anything. The problem is that the quadratic programming method fails to converge on the optimization problem. The normal course of action would be to increase the number of iterations, but I have tested this on data of the same size with 1,000,000 iterations and it still fails to converge:
options = optimset('maxIter', 1000000);
mlp = svmtrain(data, labels, 'Kernel_Function', 'mlp', 'Method', 'QP', ...
    'quadprog_opts', options);
??? Error using ==> svmtrain at 576
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
My question is: do you have any particular reason to use quadratic programming over SMO for the optimization? Doing the same thing with SMO works fine:
mlp = svmtrain(data, labels, 'Kernel_Function', 'mlp', 'Method', 'SMO');
mlp =
SupportVectors: [40x2 double]
Alpha: [40x1 double]
Bias: 0.0404
KernelFunction: @mlp_kernel
KernelFunctionArgs: {}
GroupNames: [45x1 double]
SupportVectorIndices: [40x1 double]
ScaleData: [1x1 struct]
FigureHandles: []
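For completeness, a minimal sketch of how the SMO-trained struct would typically be used afterwards; test_data here is a hypothetical placeholder, not part of the original question:

% Predict labels for new samples with the trained model (test_data is hypothetical)
test_data   = randn(10, 2);                 % placeholder 10x2 test set
predictions = svmclassify(mlp, test_data);  % labels come back in the same form as 'labels'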