TensorFlow: extracting classification predictions

Date: 2018-02-21 11:39:11

Tags: python-3.x tensorflow machine-learning classification prediction

I have a TensorFlow NN model for classifying one-hot-encoded group labels (the groups are mutually exclusive) that ends with (layerActivs[-1] being the activations of the final layer):

probs = sess.run(tf.nn.softmax(layerActivs[-1]),...)
classes = sess.run(tf.round(probs))
preds = sess.run(tf.argmax(classes))

tf.round is included to force any low probabilities to 0. If all of an observation's probabilities are below 50%, this means no class is predicted. For example, with 4 classes we could have probs[0,:] = [0.2, 0, 0, 0.4], so classes[0,:] = [0, 0, 0, 0] and preds[0] = 0 follows.
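
As a minimal standalone sketch of that pipeline (plain numpy here rather than the model's session calls), the example row above rounds to all zeros, yet argmax of an all-zero row still returns index 0:

import numpy as np

probs_row = np.array([[0.2, 0.0, 0.0, 0.4]])  # no class reaches 50%
classes_row = np.round(probs_row)             # [[0., 0., 0., 0.]]
preds_row = np.argmax(classes_row, axis=1)    # [0] -- looks like a class-0 prediction
print(classes_row, preds_row)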

Obviously this is ambiguous, because if we had probs[1,:] = [0.9, 0, 0.1, 0] the same result would follow: classes[1,:] = [1, 0, 0, 0] and preds[1] = 0. This is a problem when using TensorFlow's built-in metrics, because the functions cannot distinguish between no prediction and a prediction in class 0. The following code demonstrates the issue:

import numpy as np
import tensorflow as tf
import pandas as pd

''' prepare '''
classes = 6
n = 100

# simulate data
np.random.seed(42)
simY = np.random.randint(0,classes,n)     # pretend actual data
simYhat = np.random.randint(0,classes,n)  # pretend pred data
truth = np.sum(simY == simYhat)/n
tabulate = pd.Series(simY).value_counts()

# create placeholders
lab = tf.placeholder(shape=simY.shape, dtype=tf.int32)
prd = tf.placeholder(shape=simY.shape, dtype=tf.int32)
AM_lab = tf.placeholder(shape=simY.shape,dtype=tf.int32)
AM_prd = tf.placeholder(shape=simY.shape,dtype=tf.int32)

# create one-hot encoding objects
simYOH = tf.one_hot(lab,classes)

# create accuracy objects
acc = tf.metrics.accuracy(lab,prd)            # real accuracy with tf.metrics
accOHAM = tf.metrics.accuracy(AM_lab,AM_prd)  # OHE argmaxed to labels - expected to be correct

# now setup to pretend we ran a model & generated OHE predictions all unclassed
z = np.zeros(shape=(n,classes),dtype=float)
testPred = tf.constant(z)

''' run it all '''
# setup
sess = tf.Session()
sess.run([tf.global_variables_initializer(),tf.local_variables_initializer()])

# real accuracy with tf.metrics
ACC = sess.run(acc,feed_dict = {lab:simY,prd:simYhat})
# OHE argmaxed to labels - expected to be correct, but is it?
l,p = sess.run([simYOH,testPred],feed_dict={lab:simY})
p = np.argmax(p,axis=-1)
ACCOHAM = sess.run(accOHAM,feed_dict={AM_lab:simY,AM_prd:p})
sess.close()

''' print stuff '''
print('Accuracy')
print('-known truth: %0.4f'%truth)
print('-on unprocessed data: %0.4f'%ACC[1])
print('-on faked unclassed labels data (s.b. 0%%): %0.4f'%ACCOHAM[1])
print('----------\nTrue Class Freqs:\n%r'%(tabulate.sort_index()/n))

with the output:

Accuracy
-known truth: 0.1500
-on unprocessed data: 0.1500
-on faked unclassed labels data (s.b. 0%): 0.1100
----------
True Class Freqs:
0    0.11
1    0.19
2    0.11
3    0.25
4    0.17
5    0.17
dtype: float64

Note that the frequency for class 0 is the same as the faked accuracy...

I tried setting preds to np.nan for observations with no prediction, but tf.metrics.accuracy throws ValueError: cannot convert float NaN to integer; I also tried np.inf, but got OverflowError: cannot convert float infinity to integer.
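
For reference, a minimal sketch (plain Python, independent of the model code) of why both sentinel values fail once they have to be converted to integer class labels:

for sentinel in (float('nan'), float('inf')):
    try:
        int(sentinel)                         # integer labels cannot hold NaN/inf
    except (ValueError, OverflowError) as err:
        print(type(err).__name__ + ':', err)
# ValueError: cannot convert float NaN to integer
# OverflowError: cannot convert float infinity to integer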

How can I convert the rounded probabilities to class predictions while handling unclassed observations appropriately?

1 Answer:

Answer 0 (score: 0)

This has gone long enough without an answer, so I'll post my solution here as the answer. I convert the class probabilities to class predictions with a new function that has 3 main steps:

  1. Set any NaN probabilities to 0
  2. Set any probabilities at or below 1/num_classes to 0
  3. Extract the predicted classes with np.argmax(), then assign any still-unclassed observations to a uniformly selected class

The resulting vector of integer class labels can be passed to the tf.metrics functions. My function is below:

    def predFromProb(classProbs):
      '''
      Take in as input an (m x p) matrix of m observations' class probabilities in
      p classes and return an m-length vector of integer class labels (0...p-1). 
      Probabilities at or below 1/p are set to 0, as are NaNs; any unclassed
      observations are randomly assigned to a class.
      '''
      numClasses = classProbs.shape[1]
      # zero out class probs that are at or below chance, or NaN
      probs = classProbs.copy()
      probs[np.isnan(probs)] = 0
      probs = probs*(probs > 1/numClasses)
      # find any un-classed observations
      unpred = ~np.any(probs,axis=1)
      # get the predicted classes
      preds = np.argmax(probs,axis=1)
      # randomly classify un-classed observations
      rnds = np.random.randint(0,numClasses,np.sum(unpred))
      preds[unpred] = rnds
    
      return preds
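
A usage sketch, under the assumption that the placeholders and simulated data from the question's demo code are still in scope (accOHAM, AM_lab, AM_prd, simY, n, classes): all-zero probability rows now become random guesses instead of silently counting as class 0.

    rawProbs = np.zeros((n, classes))    # pretend model output: every row unclassed
    p = predFromProb(rawProbs)           # integer labels, randomly assigned per row

    sess = tf.Session()
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    ACC = sess.run(accOHAM, feed_dict={AM_lab: simY, AM_prd: p})
    sess.close()
    print('accuracy on all-unclassed predictions: %0.4f' % ACC[1])  # roughly 1/classes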