How can I get the probability of each class when using Keras?

Asked: 2017-12-21 00:52:33

Tags: deep-learning keras

When I try to classify the Fashion-MNIST dataset with a CNN, I want to see the probability of each class for every test sample. My model code is as follows:



from keras.models import Sequential
from keras.layers import (InputLayer, BatchNormalization, Conv2D, MaxPool2D,
                          Dropout, Flatten, Dense)
from keras.initializers import Constant

model = Sequential()
model.add(InputLayer(input_shape = (28, 28, 1)))
model.add(BatchNormalization())

# First convolution block
model.add(Conv2D(64, (2, 2),
                 padding = 'same',
                 bias_initializer = Constant(0.01),
                 kernel_initializer = 'random_uniform'))
model.add(MaxPool2D(padding = 'same'))

# Second convolution block
model.add(Conv2D(64, (2, 2),
                 padding = 'same',
                 bias_initializer = Constant(0.01),
                 kernel_initializer = 'random_uniform'))
model.add(MaxPool2D(padding = 'same'))
model.add(Dropout(0.2))
model.add(Flatten())

# Fully connected head
model.add(Dense(128,
                activation = 'relu',
                bias_initializer = Constant(0.01),
                kernel_initializer = 'random_uniform'))

# 10-way softmax output: one probability per Fashion-MNIST class
model.add(Dense(10, activation = 'softmax'))

model.compile(loss = 'categorical_crossentropy',
              optimizer = 'adam',
              metrics = ['accuracy'])

model.summary()

history = model.fit(X_train,
                    y_train,
                    epochs = 20,
                    batch_size = 32,
                    validation_data = (X_test, y_test))
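
The code above assumes that X_train, y_train, X_test and y_test already exist. For reference, here is a minimal sketch of how they might be prepared, assuming the standard keras.datasets.fashion_mnist loader and one-hot labels via to_categorical (the exact preprocessing is my assumption, not part of the original post):

from keras.datasets import fashion_mnist
from keras.utils import to_categorical

# Load Fashion-MNIST, add a channel axis, and scale pixels to [0, 1]
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test  = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# One-hot encode the labels to match categorical_crossentropy
y_train = to_categorical(y_train, 10)
y_test  = to_categorical(y_test, 10)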

To display the probability of each class for every test sample, I use the following method:

# pre_classes: predicted class index per sample; pre: softmax probability vector per sample
pre_classes = model.predict_classes(X_test)
pre = model.predict(X_test)
for i, pre_c in enumerate(pre_classes[:1000]):
    print('Pre {}, True {}, Prob {}'.format(pre_c, y_test[i], pre[i]))

The result shows that the probabilities are not numbers between 0 and 1 but are all either 0 or 1, and I would like to know why.
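
As a sanity check (my own addition, not part of the original question), one way to inspect the raw softmax output is to print it with full precision and verify that each row sums to 1, since very confident predictions can be rounded to 0 and 1 in the default NumPy display:

import numpy as np

# Show more decimal places and avoid scientific notation in the printout
np.set_printoptions(precision=10, suppress=True)

print(pre[0])                # raw softmax vector for the first test sample
print(pre[0].sum())          # should be numerically very close to 1.0
print(pre[:10].max(axis=1))  # highest probability for each of the first 10 samples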

0 Answers:

No answers