Training a CNTK model in Python using NumPy arrays as input

Asked: 2018-06-29 05:13:18

Tags: python numpy cntk

I have been trying to rewrite a simple classifier in CNTK. However, every example I have come across uses a built-in Reader with an input map, and my data needs substantial modification once loaded, so I cannot use the data-loading approach most examples demonstrate. I came across code here that seemed to show how to train directly from NumPy arrays, but it does not actually train anything.

A minimal working example demonstrating the problem:

import cntk as C
import numpy as np
from cntk.ops import relu
from cntk.layers import Dense, Convolution2D

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)

for i in range(epochs):
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    train_summary = loss.train((X, Y), parameter_learners=[learner], callbacks=[progressPrinter])

Sample output:

Learning rate per 100 samples: 0.0018
Finished Epoch[1 of 20]: [Training] loss = 2.302410 * 100, metric = 0.00% * 100 0.835s (119.8 samples/s);
Finished Epoch[2 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.003s (  0.0 samples/s);
Finished Epoch[3 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.001s (  0.0 samples/s);
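(An aside, not part of the original post: the epoch-1 loss of ≈2.3024 is suspiciously close to ln(10) ≈ 2.3026, which is exactly what softmax cross-entropy reports for an untrained 10-class model with near-zero logits and a one-hot target, so the first minibatch did run; the later epochs simply processed 0 samples. A quick NumPy check of that figure, assuming a one-hot target:)

```python
import numpy as np

# Softmax cross-entropy for untrained (all-zero) logits over 10 classes:
# softmax of equal logits is uniform, so each class gets probability 1/10
# and the loss against a one-hot target is -log(1/10) = ln(10) ~= 2.3026.
logits = np.zeros(10)
probs = np.exp(logits) / np.exp(logits).sum()   # uniform: each 0.1
target = np.zeros(10)
target[3] = 1.0                                  # arbitrary one-hot label
loss = -np.sum(target * np.log(probs))
print(round(loss, 4))  # 2.3026
```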

The cause is probably obvious, but I cannot figure it out. Any ideas on how to fix this would be much appreciated!

1 Answer:

Answer 0 (score: 1)

It turns out the solution is actually quite simple: you can easily build the input dictionary yourself, no Reader required. Here is the complete code that fixes the training problem:

import cntk as C
import numpy as np
from cntk.ops import relu

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

# Simple conv + dense network
cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)
pe = C.classification_error(net, label_var)

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)
# The Trainer ties the model, criteria, learner and logging together
trainer = C.Trainer(net, (loss, pe), learner, progressPrinter)

for i in range(epochs):
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    # Feed the NumPy arrays directly via an input-variable -> array dict
    trainer.train_minibatch({input_var : X, label_var : Y})

    trainer.summarize_training_progress()
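(Editor's note, not from the original answer: with real data, label_var would normally be fed one-hot rows rather than the all-ones placeholder above. A minimal NumPy sketch for converting integer class labels, using a hypothetical helper name:)

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Convert integer class labels to a float32 one-hot matrix."""
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0   # one 1.0 per row
    return out

Y = to_one_hot([2, 0, 9], 10)
print(Y.shape)        # (3, 10)
print(Y[0].argmax())  # 2
```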