CNTK: evaluated model has two inputs (C++)

Asked: 2018-01-15 16:23:53

Tags: c++ machine-learning cntk

I have a project based on CNTK 2.3. I use the code from the integration tests to train an MNIST classifier, as follows:


That part works fine: I train the model and then save it to a model file. However, when I try to evaluate a simple image to test the model, something seems to be wrong with the model.

    auto device = DeviceDescriptor::GPUDevice(0);

    const size_t inputDim = sizeBlob * sizeBlob;
    const size_t numOutputClasses = numberOfClasses;
    const size_t hiddenLayerDim = 200;

    auto input = InputVariable({ inputDim }, CNTK::DataType::Float, L"features");

    auto scaledInput = ElementTimes(Constant::Scalar(0.00390625f, device), input);
    auto classifierOutput = FullyConnectedDNNLayer(scaledInput, hiddenLayerDim, device, std::bind(Sigmoid, _1, L""));
    auto outputTimesParam = Parameter(NDArrayView::RandomUniform<float>({ numOutputClasses, hiddenLayerDim }, -0.05, 0.05, 1, device));
    auto outputBiasParam = Parameter(NDArrayView::RandomUniform<float>({ numOutputClasses }, -0.05, 0.05, 1, device));
    classifierOutput = Plus(outputBiasParam, Times(outputTimesParam, classifierOutput), L"classifierOutput");

    auto labels = InputVariable({ numOutputClasses }, CNTK::DataType::Float, L"labels");
    auto trainingLoss = CNTK::CrossEntropyWithSoftmax(classifierOutput, labels, L"lossFunction");
    auto prediction = CNTK::ClassificationError(classifierOutput, labels, L"classificationError");

    // Test save and reload of model

    Variable classifierOutputVar = classifierOutput;
    Variable trainingLossVar = trainingLoss;
    Variable predictionVar = prediction;
    auto combinedNet = Combine({ trainingLoss, prediction, classifierOutput }, L"MNISTClassifier");
    //SaveAndReloadModel<float>(combinedNet, { &input, &labels, &trainingLossVar, &predictionVar, &classifierOutputVar }, device);

    classifierOutput = classifierOutputVar;
    trainingLoss = trainingLossVar;
    prediction = predictionVar;


    const size_t minibatchSize = 64;
    const size_t numSamplesPerSweep = 60000;
    const size_t numSweepsToTrainWith = 2;
    const size_t numMinibatchesToTrain = (numSamplesPerSweep * numSweepsToTrainWith) / minibatchSize;

    auto featureStreamName = L"features";
    auto labelsStreamName = L"labels";
    auto minibatchSource = TextFormatMinibatchSource(trainingSet, { { featureStreamName, inputDim },{ labelsStreamName, numOutputClasses } });

    auto featureStreamInfo = minibatchSource->StreamInfo(featureStreamName);
    auto labelStreamInfo = minibatchSource->StreamInfo(labelsStreamName);

    LearningRateSchedule learningRatePerSample = TrainingParameterPerSampleSchedule<double>(0.003125);
    auto trainer = CreateTrainer(classifierOutput, trainingLoss, prediction, { SGDLearner(classifierOutput->Parameters(), learningRatePerSample) });

    size_t outputFrequencyInMinibatches = 20;
    for (size_t i = 0; i < numMinibatchesToTrain; ++i)
    {
        auto minibatchData = minibatchSource->GetNextMinibatch(minibatchSize, device);
        trainer->TrainMinibatch({ { input, minibatchData[featureStreamInfo] },{ labels, minibatchData[labelStreamInfo] } }, device);
        PrintTrainingProgress(trainer, i, outputFrequencyInMinibatches);

        size_t trainingCheckpointFrequency = 100;
        if ((i % trainingCheckpointFrequency) == (trainingCheckpointFrequency - 1))
        {
            const wchar_t* ckpName = L"feedForward.net";
            //trainer->SaveCheckpoint(ckpName);
            //trainer->RestoreFromCheckpoint(ckpName);
        }
    }

    combinedNet->Save(g_dnnFile);

I run the same code in C# and it works fine. The difference I found is that modelFunc->Arguments() should have one argument, but it has two: it finds both the features and the labels as inputs, while I only need the features as the input, and it throws the following error:

[screenshot of the error message]

1 Answer:

Answer 0 (score: 1)

Look up the input and output variables by name instead of using modelFunc->Arguments()[0]:

Variable inputVar;
GetInputVariableByName(modelFunc, L"features", inputVar);

Variable outputVar;
GetOutputVaraiableByName(modelFunc, L"classifierOutput", outputVar);

GetInputVariableByName() and GetOutputVaraiableByName() come from https://github.com/Microsoft/CNTK/blob/v2.3.1/Tests/EndToEndTests/EvalClientTests/CNTKLibraryCPPEvalExamplesTest/EvalMultithreads.cpp#L316