CNTK Binary Classifier

Time: 2017-08-10 02:27:33

Tags: machine-learning cntk

We are starting a project that uses CNTK to build a binary classifier.

Our dataset looks like this:

|attribs 1436000 24246.3124164245 |isMatch 1
|attribs 535000 21685.9351529239 |isMatch 1
|attribs 729000 8988.24232231086 |isMatch 1
|attribs 436000 4787.7521169184 |isMatch 1
|attribs 110000 38236394.456649 |isMatch 0
|attribs 808000 39512500.9870238 |isMatch 0
|attribs 108000 28432968.9161523 |isMatch 0
|attribs 816000 39512231.5629576 |isMatch 0

We are trying to determine whether a school bus stop matches the planned route. The first value is the delta time (in milliseconds) between the planned stop and the actual stop, and the second value is the delta distance (in millimeters) between the planned location and the actual location.

The problem I am running into (possibly a fundamental misunderstanding of how to use CNTK) is that no matter how I adjust the data, the hidden nodes, the batch size, or any other knob, I keep getting nearly identical results. I can evaluate the most absurd inputs and I consistently get 1.00.

How should I modify the data or the model to get more accurate results?

The full code is here:

import numpy as np
import cntk as C
from cntk import Trainer  # to train the NN
from cntk.learners import sgd, learning_rate_schedule, \
    UnitType
from cntk.ops import *  # input_variable() def
from cntk.logging import ProgressPrinter
from cntk.initializer import glorot_uniform
from cntk.layers import default_options, Dense
from cntk.io import CTFDeserializer, MinibatchSource, \
    StreamDef, StreamDefs, INFINITELY_REPEAT


def my_print(arr, dec):
    # print an array of float/double with dec decimals
    fmt = "%." + str(dec) + "f"  # like %.4f
    for i in range(0, len(arr)):
        print(fmt % arr[i] + '  ', end='')
    print("\n")


def create_reader(path, is_training, input_dim, output_dim):
    return MinibatchSource(CTFDeserializer(path, StreamDefs(
        features=StreamDef(field='attribs', shape=input_dim,
                           is_sparse=False),
        labels=StreamDef(field='isMatch', shape=output_dim,
                         is_sparse=False)
    )), randomize=is_training,
                           max_sweeps=INFINITELY_REPEAT if is_training else 1)


def save_weights(fn, ihWeights, hBiases,
                 hoWeights, oBiases):
    f = open(fn, 'w')
    for vals in ihWeights:
        for v in vals:
            f.write("%s\n" % v)
    for v in hBiases:
        f.write("%s\n" % v)
    for vals in hoWeights:
        for v in vals:
            f.write("%s\n" % v)
    for v in oBiases:
        f.write("%s\n" % v)
    f.close()


def do_demo():
    # create NN, train, test, predict
    input_dim = 2
    hidden_dim = 30
    output_dim = 1
    train_file = "trainData_cntk.txt"
    test_file = "testData_cntk.txt"
    input_Var = C.ops.input_variable(input_dim, np.float32)
    label_Var = C.ops.input_variable(output_dim, np.float32)
    print("Creating a 2-21 tanh softmax NN for Stop data ")
    with default_options(init=glorot_uniform()):
        hLayer = Dense(hidden_dim, activation=C.ops.tanh,
                       name='hidLayer')(input_Var)
        oLayer = Dense(output_dim, activation=C.ops.softmax,
                       name='outLayer')(hLayer)
    nnet = oLayer
    # ----------------------------------
    print("Creating a cross entropy mini-batch Trainer \n")
    ce = C.cross_entropy_with_softmax(nnet, label_Var)
    pe = C.classification_error(nnet, label_Var)
    fixed_lr = 0.05
    lr_per_batch = learning_rate_schedule(fixed_lr,
                                          UnitType.minibatch)
    learner = C.sgd(nnet.parameters, lr_per_batch)

    trainer = C.Trainer(nnet, (ce, pe), [learner])
    max_iter = 5000  # 5000 maximum training iterations
    batch_size = 100  # mini-batch size
    progress_freq = 1000  # print error every n minibatches
    reader_train = create_reader(train_file, True, input_dim,
                                 output_dim)
    my_input_map = {
        input_Var: reader_train.streams.features,
        label_Var: reader_train.streams.labels
    }
    pp = ProgressPrinter(progress_freq)
    print("Starting training \n")
    for i in range(0, max_iter):
        currBatch = reader_train.next_minibatch(batch_size,
                                                input_map=my_input_map)
        trainer.train_minibatch(currBatch)
        pp.update_with_trainer(trainer)
    print("\nTraining complete")
    # ----------------------------------
    print("\nEvaluating test data \n")
    reader_test = create_reader(test_file, False, input_dim,
                                output_dim)
    numTestItems = 200
    allTest = reader_test.next_minibatch(numTestItems,
                                         input_map=my_input_map)
    test_error = trainer.test_minibatch(allTest)
    print("Classification error on the test items = %f"
          % test_error)
    # ----------------------------------
    # make a prediction for an unknown stop
    unknown = np.array([[10000002000, 24275329.7232828]], dtype=np.float32)
    print("\nPredicting Stop Match for input features: ")
    my_print(unknown[0], 1)  # 1 decimal
    predicted = nnet.eval({input_Var: unknown})
    print("Prediction is: ")
    my_print(predicted[0], 3)  # 3 decimals
    # ---------------------------------
    print("\nTrained model input-to-hidden weights: \n")
    print(hLayer.hidLayer.W.value)
    print("\nTrained model hidden node biases: \n")
    print(hLayer.hidLayer.b.value)
    print("\nTrained model hidden-to-output weights: \n")
    print(oLayer.outLayer.W.value)
    print("\nTrained model output node biases: \n")
    print(oLayer.outLayer.b.value)
    save_weights("weights.txt", hLayer.hidLayer.W.value,
                 hLayer.hidLayer.b.value, oLayer.outLayer.W.value,
                 oLayer.outLayer.b.value)
    return 0  # success


def main():
    print("\nBegin Stop Match \n")
    np.random.seed(0)
    do_demo()  # all the work is done in do_demo()


if __name__ == "__main__":
    main()
# end script

1 Answer:

Answer 0 (score: 2):

I think the problem is that your output layer is using the softmax() activation function, but then you also use cross_entropy_with_softmax() as the loss function. So at training time your results are being evaluated as a softmax on top of a softmax.

Use activation=None in the output layer and see how your training goes.

In your prediction code you will then obviously have to apply softmax to your evaluation, e.g. C.ops.softmax(nnet).eval({input_Var: unknown}). Looking back at an example I wrote, I used C.softmax, but that may just be a namespace difference between the CNTK version I wrote that example against and the one you are using.
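Put together, and reusing the names from your code, the change might look something like this (an untested sketch; note that the output_dim issue I raise in the last postscript below still applies):

# Output layer with no activation; cross_entropy_with_softmax()
# applies the softmax itself during training.
oLayer = Dense(output_dim, activation=None,
               name='outLayer')(hLayer)
nnet = oLayer
ce = C.cross_entropy_with_softmax(nnet, label_Var)  # unchanged

# At prediction time the network no longer ends in a softmax,
# so apply one explicitly before reading off probabilities.
predicted = C.ops.softmax(nnet).eval({input_Var: unknown})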

PS: If you are doing binary classification you don't really need softmax anyway, since it is really meant for multi-class classification problems. It should still work in the binary case, though.
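For example, one softmax-free alternative (a sketch, assuming C.binary_cross_entropy is available in your CNTK version) is a single sigmoid output trained with binary cross entropy, which keeps your 0/1 labels exactly as they are:

# A sigmoid squashes the single output into (0, 1), and
# binary_cross_entropy() trains it directly against 0/1 labels.
oLayer = Dense(output_dim, activation=C.ops.sigmoid,
               name='outLayer')(hLayer)
nnet = oLayer
ce = C.binary_cross_entropy(nnet, label_Var)
# Note: classification_error() assumes one-hot outputs, so the
# error metric would need rethinking in this variant.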

PPS: During training it is useful to print out the loss after each minibatch so you can see whether gradient descent is converging. I suspect you will find that in your current model it is not.
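With your training loop that could be as simple as the following (trainer.previous_minibatch_loss_average is the CNTK 2.x property; I am assuming that is roughly the version you are on):

for i in range(0, max_iter):
    currBatch = reader_train.next_minibatch(batch_size,
                                            input_map=my_input_map)
    trainer.train_minibatch(currBatch)
    # print the average loss of the minibatch just processed
    print("minibatch %4d: loss = %0.4f"
          % (i, trainer.previous_minibatch_loss_average))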

PPPS: I just noticed that your variable output_dim is set to 1. I don't know what behavior you would get from softmax in that case. Usually softmax is applied to one-hot encoded outputs, so in the binary case you would have two outputs, giving the probability that the correct result is zero or one. Likewise, you would need to one-hot encode your ground truth before training. I can't say for certain whether your approach works, but it looks suspicious.
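For concreteness, here is a hedged sketch of the two-output variant. The re-encoded training lines below are hypothetical (your labels re-expressed as one-hot pairs), and since your create_reader() already takes output_dim as a parameter, only the constant, the label variable, and the data files need to change:

|attribs 1436000 24246.3124164245 |isMatch 0 1
|attribs 110000 38236394.456649 |isMatch 1 0

output_dim = 2  # one-hot: [1 0] = no match, [0 1] = match
label_Var = C.ops.input_variable(output_dim, np.float32)
oLayer = Dense(output_dim, activation=None,
               name='outLayer')(hLayer)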