I am trying to develop a simple cost-prediction engine to estimate the price of oxy-fuel-cut steel parts from geometric information and our historical data. The utility is part of a larger application written in VB.NET, so I am forced to use that language.
All the information I have found about ML.NET is C#-based, and as far as I can tell the VB.NET implementation is not quite the same, so adapting the language has been a nightmare. The VB.NET flavor even seems to skip some trainers, lack some features and have less support.
First of all, since this is a numeric regression problem, I assumed the SDCA trainer was the best choice, so that is the approach I took. I have fed the system some "invented" data: using Excel I generated a "logical" cost (almost linear!) from 1000 random inputs. I would expect any regression predictor to handle that test data very accurately, at least far better than the real data it will eventually see!
This is my simplified code, which builds and trains the model from a .csv file and tests it with 4 inputs taken from it:
Imports System.Linq
Imports System.Windows.Forms
Imports Microsoft.ML
Imports Microsoft.ML.Data
Imports Microsoft.ML.Transforms

Public Class CShapeCostPrediction

    Public Class CShapeInput
        <ColumnName("AgeFrom1990"), LoadColumn(0)>
        Public Property AgeFrom1990 As Single
        <ColumnName("Area"), LoadColumn(1)>
        Public Property Area As Single
        <ColumnName("RectangularArea"), LoadColumn(2)>
        Public Property RectangularArea As Single
        <ColumnName("Thickness"), LoadColumn(3)>
        Public Property Thickness As Single
        <ColumnName("Perimeter"), LoadColumn(4)>
        Public Property Perimeter As Single
        <ColumnName("Cuts"), LoadColumn(5)>
        Public Property Cuts As Single
        <ColumnName("Cost"), LoadColumn(6)>
        Public Property CostReal As Single

        Public Sub New()
        End Sub

        'For testing
        Public Sub New(sAge As Single, sArea As Single, sRectArea As Single, sThick As Single, sPerim As Single, sCuts As Single)
            AgeFrom1990 = sAge
            Area = sArea
            RectangularArea = sRectArea
            Thickness = sThick
            Perimeter = sPerim
            Cuts = sCuts
        End Sub
    End Class

    Public Class CShapeOutput
        Public Property Score As Single
    End Class

    'Shared members
    Public Shared Context As MLContext
    Public Shared PredictionEngine As PredictionEngine(Of CShapeInput, CShapeOutput)

    'Main simplified testing workflow
    Public Shared Function Testing() As Boolean
        Context = New MLContext()
        Dim oTrainingDataView As IDataView = Context.Data.LoadFromTextFile(Of CShapeInput)(path:="D:\ShapeInfo.csv",
                                                                                           hasHeader:=True,
                                                                                           separatorChar:=";"c,
                                                                                           allowQuoting:=True,
                                                                                           allowSparse:=False)
        'Normalization. Reportedly required for the SDCA trainer
        Dim oNormalize As EstimatorChain(Of Transforms.NormalizingTransformer) = Context.Transforms.NormalizeMeanVariance("AgeFrom1990").
            Append(Context.Transforms.NormalizeMeanVariance("Area")).
            Append(Context.Transforms.NormalizeMeanVariance("RectangularArea")).
            Append(Context.Transforms.NormalizeMeanVariance("Thickness")).
            Append(Context.Transforms.NormalizeMeanVariance("Perimeter")).
            Append(Context.Transforms.NormalizeMeanVariance("Cuts"))
        'Concatenate the input columns into the "Features" vector
        Dim oConcatenate As EstimatorChain(Of ColumnConcatenatingTransformer) = oNormalize.Append(Context.Transforms.Concatenate("Features", "AgeFrom1990", "Area", "RectangularArea", "Thickness", "Perimeter", "Cuts"))
        'Trainer to predict the label from the features
        Dim oTrainer As Trainers.SdcaRegressionTrainer = Context.Regression.Trainers.Sdca(labelColumnName:="Cost", featureColumnName:="Features")
        Dim oTrainingPipeline As IEstimator(Of ITransformer) = oConcatenate.Append(oTrainer)
        Dim oTrainedModel As ITransformer = oTrainingPipeline.Fit(oTrainingDataView) 'Too fast!?
        Dim oCrossValidationResults As IEnumerable(Of TrainCatalogBase.CrossValidationResult(Of RegressionMetrics)) = Context.Regression.CrossValidate(oTrainingDataView, oTrainingPipeline, numberOfFolds:=5, labelColumnName:="Cost")
        'Average the cross-validation metrics and show them
        Dim dRSquared As Double = 0.0
        Dim dRootMeanSquaredError As Double = 0.0
        For Each oCVResult As TrainCatalogBase.CrossValidationResult(Of RegressionMetrics) In oCrossValidationResults
            dRSquared += oCVResult.Metrics.RSquared
            dRootMeanSquaredError += oCVResult.Metrics.RootMeanSquaredError
        Next
        Dim dCount As Double = CDbl(oCrossValidationResults.LongCount)
        dRSquared /= dCount
        dRootMeanSquaredError /= dCount
        MessageBox.Show(String.Format("R-Squared: {0:0.000}" & Environment.NewLine() & "Root Mean Squared Error (RMSE): {1:0.000}", dRSquared, dRootMeanSquaredError))
        'Save the model for later use
        If IO.File.Exists("D:\ShapeModel.zip") Then
            IO.File.Delete("D:\ShapeModel.zip")
        End If
        Context.Model.Save(oTrainedModel, oTrainingDataView.Schema, "D:\ShapeModel.zip")
        'Build the prediction engine
        PredictionEngine = Context.Model.CreatePredictionEngine(Of CShapeInput, CShapeOutput)(oTrainedModel)
        'Some testing using values taken from the training data
        Dim oTestInputs As New List(Of CShapeInput)
        oTestInputs.Add(New CShapeInput(26, 0.553079716, 1.624771712, 47, 4.905492266, 3)) 'Cost = 193.42
        oTestInputs.Add(New CShapeInput(40, 0.006435867, 0.018295898, 12, 0.495820115, 4)) 'Cost = 0.60
        oTestInputs.Add(New CShapeInput(26, 0.948809904, 3.598203278, 96, 7.049619315, 8)) 'Cost = 703.96
        oTestInputs.Add(New CShapeInput(5, 0.814014957, 1.391183561, 10, 3.985410019, 3)) 'Cost = 56.71
        'Predict
        Dim oTestOutputs As New List(Of CShapeOutput)
        oTestOutputs.Add(PredictionEngine.Predict(oTestInputs(0)))
        oTestOutputs.Add(PredictionEngine.Predict(oTestInputs(1)))
        oTestOutputs.Add(PredictionEngine.Predict(oTestInputs(2)))
        oTestOutputs.Add(PredictionEngine.Predict(oTestInputs(3)))
        MessageBox.Show(String.Format("Cost 1: {0:0.000}" & Environment.NewLine() & "Cost 2: {1:0.000}" & Environment.NewLine() & "Cost 3: {2:0.000}" & Environment.NewLine() & "Cost 4: {3:0.000}", oTestOutputs(0).Score, oTestOutputs(1).Score, oTestOutputs(2).Score, oTestOutputs(3).Score))
        Return True
    End Function
End Class
The first rows of the input data (the test file has 1000 rows):
AgeFrom1990;Area;RectangularArea;Thickness;Perimeter;Cuts;Cost
26.000;0.553;1.625;47.000;4.905;3.000;193.425
23.000;0.198;0.351;33.000;3.520;7.000;48.176
5.000;0.740;2.981;55.000;4.727;6.000;310.574
39.000;0.110;0.182;41.000;1.263;4.000;32.389
40.000;0.111;0.557;27.000;1.890;1.000;23.167
15.000;0.635;0.826;51.000;3.589;1.000;218.191
18.000;0.763;0.994;89.000;5.638;9.000;482.146
36.000;0.095;0.143;87.000;1.455;7.000;60.164
15.000;0.942;1.404;50.000;4.037;1.000;319.190
34.000;0.124;0.189;17.000;2.205;6.000;15.295
35.000;0.679;3.285;63.000;6.535;5.000;335.729
18.000;0.240;1.060;17.000;2.123;3.000;31.298
The metrics obtained are: R-Squared: 0.857, RMSE: 63.41.
However, the predictions are far from correct (expected / obtained):
Test 1: 193.4 / 192.8 (the first row of the .csv file; works fine)
Test 2: 0.6 / -156.7 (negative!)
Test 3: 703.9 / 555.9
Test 4: 56.7 / 128.5
The only accurate prediction is the one for the first row of the .csv file, so I am not sure whether the whole file is being read.
Also, the fitting process is really fast, about 1-2 seconds; it is the metrics evaluation that takes longer. That seems a bit odd, since I would expect fitting 1000 inputs to involve some real processing. And I get the same predictions if I skip the cross-validation step, so it does not seem necessary at all.
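The only extra sanity check I can think of is scoring the training data itself with the fitted model and computing plain (non-cross-validated) metrics on it. An untested sketch, reusing the Context and oTrainedModel names from the code above:

```vbnet
'Untested sketch: score the training data with the fitted model and compute
'plain regression metrics on it (no cross-validation involved)
Dim oPredictions As IDataView = oTrainedModel.Transform(oTrainingDataView)
Dim oMetrics As RegressionMetrics = Context.Regression.Evaluate(oPredictions, labelColumnName:="Cost")
MessageBox.Show(String.Format("Train R-Squared: {0:0.000}" & Environment.NewLine() &
                              "Train RMSE: {1:0.000}", oMetrics.RSquared, oMetrics.RootMeanSquaredError))
```

If I understand it correctly, these train-set metrics should be at least as good as the cross-validated ones, so a big gap here would point at something else being wrong.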
To be honest, my knowledge of all this is really rudimentary, and my code is the result of copying and adapting different snippets from different C# sources, so I am sure it is not acceptable.
For instance, I am not happy with the way I do the normalization and column concatenation, appending each result to a variable of a different return type. Everything I have found about this workflow is coded in C#, which skips those types with a more direct approach.
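Since EstimatorChain(Of TTransformer) implements IEstimator(Of ITransformer), I suspect the whole chain could be declared through that interface alone, which is presumably how the C# samples get away without the explicit types. An untested sketch of the same pipeline written that way:

```vbnet
'Untested sketch: the same pipeline typed only through the IEstimator interface,
'so no intermediate EstimatorChain(Of ...) variables are needed
Dim oPipeline As IEstimator(Of ITransformer) =
    Context.Transforms.NormalizeMeanVariance("AgeFrom1990").
    Append(Context.Transforms.NormalizeMeanVariance("Area")).
    Append(Context.Transforms.NormalizeMeanVariance("RectangularArea")).
    Append(Context.Transforms.NormalizeMeanVariance("Thickness")).
    Append(Context.Transforms.NormalizeMeanVariance("Perimeter")).
    Append(Context.Transforms.NormalizeMeanVariance("Cuts")).
    Append(Context.Transforms.Concatenate("Features", "AgeFrom1990", "Area", "RectangularArea", "Thickness", "Perimeter", "Cuts")).
    Append(Context.Regression.Trainers.Sdca(labelColumnName:="Cost", featureColumnName:="Features"))
Dim oModel As ITransformer = oPipeline.Fit(oTrainingDataView)
```

I have no idea whether this is considered good style, though, or whether it changes anything about the results.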
Any information would be much appreciated, since I have not been able to find anything so far.
Thanks a lot!