For classification problems with many classes, the TensorFlow documentation suggests using sampled_softmax_loss instead of a plain softmax to reduce training runtime.
According to the docs and source (line 1180), the calling pattern for sampled_softmax_loss is:
tf.nn.sampled_softmax_loss(weights,          # Shape (num_classes, dim)     - floatXX
                           biases,           # Shape (num_classes)          - floatXX
                           labels,           # Shape (batch_size, num_true) - int64
                           inputs,           # Shape (batch_size, dim)      - floatXX
                           num_sampled,      # - int
                           num_classes,      # - int
                           num_true=1,
                           sampled_values=None,
                           remove_accidental_hits=True,
                           partition_strategy="mod",
                           name="sampled_softmax_loss")
It is unclear (at least to me) how to map a real-world problem onto the shapes this loss function requires. I think the 'inputs' field is the problem.
Here is a copy-paste-ready minimal working example that throws a matrix multiplication shape error when the loss function is called.
import tensorflow as tf
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# Dependent & Independent Variable Placeholders
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])  # one-hot labels
# Weights and Biases
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
# Super simple model builder
def tiny_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer
# Create the model
pred = tiny_perceptron(x, weights, biases)
# Set up loss function inputs and inspect their shapes
w = tf.transpose(weights['out'])
b = biases['out']
labels = tf.reshape(tf.argmax(y, 1), [-1,1])
inputs = pred
num_sampled = 3
num_true = 1
num_classes = n_classes
print('Shapes\n------\nw:\t%s\nb:\t%s\nlabels:\t%s\ninputs:\t%s' % (w.shape, b.shape, labels.shape, inputs.shape))
# Shapes
# ------
# w: (10, 256) # Requires (num_classes, dim) - CORRECT
# b: (10,) # Requires (num_classes) - CORRECT
# labels: (?, 1) # Requires (batch_size, num_true) - CORRECT
# inputs: (?, 10) # Requires (batch_size, dim) - Not sure
loss_function = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=w,
    biases=b,
    labels=labels,
    inputs=inputs,
    num_sampled=num_sampled,
    num_true=num_true,
    num_classes=num_classes))
The last line triggers a ValueError stating that you can't multiply tensors of shape (?,10) and (?,256). As a general rule, I agree with that statement. The full error is shown below:
ValueError: Dimensions must be equal, but are 10 and 256 for 'sampled_softmax_loss_2/MatMul_1' (op: 'MatMul') with input shapes: [?,10], [?,256].
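The mismatch makes sense once you look at what sampled_softmax_loss computes internally. Below is a rough sketch modelled on _compute_sampled_logits in the TensorFlow source; it is a simplification for illustration, not the verbatim implementation:

import tensorflow as tf

# Rows of `weights` for the sampled classes are gathered into sampled_w,
# which is then multiplied against `inputs`; both must share the same
# `dim` width.
batch_size, dim, num_sampled = 32, 256, 3
inputs_ok = tf.placeholder("float", [batch_size, dim])       # (batch_size, dim)
sampled_w = tf.placeholder("float", [num_sampled, dim])      # (num_sampled, dim)
sampled_logits = tf.matmul(inputs_ok, sampled_w, transpose_b=True)  # (batch_size, num_sampled)
# Passing the 10-wide model output `pred` as `inputs` makes this matmul
# (?,10) x (?,256), which is exactly the ValueError above.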
If the 'dim' value in the TensorFlow docs is meant to stay constant throughout, then either the 'weights' or the 'inputs' variable going into the loss function is incorrect.
Any ideas would be great; I'm completely stumped on how to use this loss function correctly, and it would have a huge impact on training time for the model we're using it on (500k classes). Thanks!
EDIT:
The example shown above can be made to run without errors by playing with the parameters and ignoring the inputs that the sampled_softmax_loss calling pattern expects. If you do so, it yields a model that trains, with a corresponding impact on prediction accuracy (as you would expect).
Answer 0 (score: 0):
The key is to pass the right shapes for the weights, biases, inputs, and labels. The weights shape passed to sampled_softmax_loss is different from the usual case. For example, with
logits = xw + b
call sampled_softmax_loss like this:
sampled_softmax_loss(weights=tf.transpose(w), biases=b, inputs=x, ...)
and not like this:
sampled_softmax_loss(weights=w, biases=b, inputs=logits, ...)
That is, pass the activation x that feeds the output layer, not the logits, so that the weights have shape (num_classes, dim) and the inputs have shape (batch_size, dim).
Also, the labels must not be a one-hot representation. If your labels are one-hot encoded, pass labels=tf.reshape(tf.argmax(labels_one_hot, 1), [-1,1]) instead.
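Applying that advice to the question's MWE, a minimal corrected sketch looks like the following (reusing the question's variable names and assuming the TF 1.x API):

import tensorflow as tf

n_hidden_1 = 256
n_input = 784
n_classes = 10

x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])  # one-hot labels

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Hidden representation: shape (batch_size, dim) with dim = n_hidden_1 = 256
layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))

# Pass the hidden activation (not the logits) as `inputs`, transpose the
# output weights to (num_classes, dim), and use sparse integer labels.
loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=tf.transpose(weights['out']),          # (10, 256)
    biases=biases['out'],                          # (10,)
    labels=tf.reshape(tf.argmax(y, 1), [-1, 1]),   # (batch_size, 1), int64
    inputs=layer_1,                                # (batch_size, 256)
    num_sampled=3,
    num_true=1,
    num_classes=n_classes))

# At evaluation time, score against all classes with the full softmax:
logits = tf.matmul(layer_1, weights['out']) + biases['out']
pred = tf.nn.softmax(logits)

With inputs of width 256 matching the transposed weights' dim, the internal matmul shapes line up and the ValueError goes away.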