Getting InvalidArgumentError in softmax_cross_entropy_with_logits

Date: 2018-04-19 21:44:31

Tags: python tensorflow

I am very new to TensorFlow and am trying some experiments with the Iris dataset. I created the following model function (MWE):

def model_fn(features, labels, mode):
    net = tf.feature_column.input_layer(features, [tf.feature_column.numeric_column(key=key) for key in FEATURE_NAMES])

    logits = tf.layers.dense(inputs=net, units=3)

    loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(
        loss=loss,
        global_step=tf.train.get_global_step())

    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

Unfortunately I get the following error:

InvalidArgumentError: Input to reshape is a tensor with 256 values, but the requested shape has 1
 [[Node: Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg, Reshape/shape)]]

It seems to be some problem with the tensor shapes. However, both logits and labels have the same shape (256, 3), as required by the documentation. Both tensors have type float32.
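A minimal NumPy sketch (not the TF call itself, just its math) of what `tf.nn.softmax_cross_entropy_with_logits` computes may make the shapes clearer: it returns one loss value per example, so (256, 3) logits and labels yield a (256,) vector, not a scalar.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 3)).astype(np.float32)
labels = np.eye(3, dtype=np.float32)[rng.integers(0, 3, size=256)]

# Numerically stable log-softmax over the class axis.
shifted = logits - logits.max(axis=1, keepdims=True)
log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

# Cross entropy of each example against its one-hot label.
per_example_loss = -(labels * log_softmax).sum(axis=1)

print(per_example_loss.shape)  # (256,) -- a vector, one loss per example
```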

For completeness, here is the input function for the estimator:

import pandas as pd
import tensorflow as tf
import numpy as np

IRIS_DATA = "data/iris.csv"

FEATURE_NAMES = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
CLASS_NAME = ["class"]

COLUMNS = FEATURE_NAMES + CLASS_NAME

# read dataset
iris = pd.read_csv(IRIS_DATA, header=None, names=COLUMNS)

# encode classes
iris["class"] = iris["class"].astype('category').cat.codes

# train test split
np.random.seed(1)
msk = np.random.rand(len(iris)) < 0.8
train = iris[msk]
test = iris[~msk]

def iris_input_fn(batch_size=256, mode="TRAIN"):
    def prepare_input(data=None):

        # do mean normalization across all samples
        mu = np.mean(data)
        sigma = np.std(data)

        data = data - mu
        data = data / sigma
        is_nan = np.isnan(data)
        is_inf = np.isinf(data)
        if np.any(is_nan) or np.any(is_inf):
            print('data is not well-formed : is_nan {n}, is_inf: {i}'.format(n= np.any(is_nan), i=np.any(is_inf)))


        data = transform_data(data)
        return data

    def transform_data(data):
        data = data.astype(np.float32)
        return data


    def load_data():
        global train

        trn_all_data=train.iloc[:,:-1]
        trn_all_labels=train.iloc[:,-1]


        return (trn_all_data.astype(np.float32),
                trn_all_labels.astype(np.int32))

    data, labels = load_data()
    data = prepare_input(data)

    labels = tf.one_hot(labels, depth=3)

    labels = tf.cast(labels, tf.float32)
    dataset = tf.data.Dataset.from_tensor_slices((data.to_dict(orient="list"), labels))

    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    return dataset.make_one_shot_iterator().get_next()
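As a side note, the `tf.one_hot(labels, depth=3)` step in the input function can be sketched in plain NumPy (assuming, as in this dataset, three classes coded 0..2):

```python
import numpy as np

# Integer class codes, as produced by .astype('category').cat.codes
labels = np.array([0, 2, 1, 0], dtype=np.int32)

# Indexing an identity matrix by the codes yields the one-hot rows.
one_hot = np.eye(3, dtype=np.float32)[labels]
print(one_hot)
```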

Dataset from the UCI repo.

1 Answer:

Answer 0 (score: 0)

Solved the problem by replacing the loss function from the nn module:

loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with the loss function from the losses module:

loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)

or, alternatively, by reducing the per-example losses to their mean:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

The loss fed into the minimize method of GradientDescentOptimizer needs to be a scalar: a single value for the whole batch.

The problem was that I computed the softmax cross entropy for each element of the batch, which resulted in a tensor containing 256 (the batch size) cross-entropy values, and then tried to feed this tensor into the minimize method. Hence the error message:
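A NumPy sketch of the fix: averaging the 256 per-example losses collapses them to the single scalar that minimize expects (this is what `tf.reduce_mean` does to the output of `tf.nn.softmax_cross_entropy_with_logits`).

```python
import numpy as np

# Stand-in for the 256 per-example cross-entropy values.
per_example_loss = np.random.default_rng(1).random(256).astype(np.float32)

# Analogue of tf.reduce_mean: one scalar for the whole batch.
scalar_loss = per_example_loss.mean()

print(per_example_loss.shape, scalar_loss.shape)  # (256,) ()
```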

Input to reshape is a tensor with 256 values, but the requested shape has 1