I am trying to understand this code from lightaime's GitHub page. It is a vectorized softmax method. What confuses me is "softmax_output[range(num_train), list(y)]".
What does this expression mean?
import numpy as np

def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized implementation.
    Inputs have dimension D, there are C classes, and we operate on minibatches of N examples.
    Inputs:
    W: A numpy array of shape (D, C) containing weights.
    X: A numpy array of shape (N, D) containing a minibatch of data.
    y: A numpy array of shape (N,) containing training labels; y[i] = c means that X[i] has label c, where 0 <= c < C.
    reg: (float) regularization strength
    Returns a tuple of:
    loss as single float
    gradient with respect to weights W; an array of same shape as W
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)
    num_classes = W.shape[1]
    num_train = X.shape[0]

    # Class scores for every example: shape (N, C).
    scores = X.dot(W)
    # Shift each row by its maximum for numerical stability before exponentiating.
    shift_scores = scores - np.max(scores, axis=1).reshape(-1, 1)
    # Row-wise softmax probabilities: shape (N, C).
    softmax_output = np.exp(shift_scores) / np.sum(np.exp(shift_scores), axis=1).reshape(-1, 1)
    # Cross-entropy loss: pick the probability of the correct class in each row.
    loss = -np.sum(np.log(softmax_output[range(num_train), list(y)]))
    loss /= num_train
    loss += 0.5 * reg * np.sum(W * W)

    # Gradient with respect to the scores: softmax output minus the one-hot labels.
    dS = softmax_output.copy()
    dS[range(num_train), list(y)] += -1
    # Backpropagate through scores = X.dot(W) and add the regularization gradient.
    dW = (X.T).dot(dS)
    dW = dW / num_train + reg * W
    return loss, dW
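A minimal sketch (not from the original post) for exercising the function: it assumes numpy and uses small random inputs with the shapes given in the docstring, comparing the vectorized loss against a naive per-example loop.

np.random.seed(0)
N, D, C = 5, 4, 3                      # arbitrary small sizes
X = np.random.randn(N, D)
W = np.random.randn(D, C)
y = np.random.randint(C, size=N)
reg = 0.1

loss, dW = softmax_loss_vectorized(W, X, y, reg)

# Naive reference: accumulate -log(probability of the correct class) per example.
naive_loss = 0.0
for i in range(N):
    s = X[i].dot(W)
    s -= s.max()                       # same stability shift as the vectorized code
    p = np.exp(s) / np.exp(s).sum()
    naive_loss += -np.log(p[y[i]])
naive_loss = naive_loss / N + 0.5 * reg * np.sum(W * W)

print(loss, naive_loss)                # the two values should agree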
Answer 0 (score: 0)
This expression means: slice the softmax_output array of shape (N, C), extracting from it only the values that correspond to the training labels y.

A two-dimensional numpy.array can be sliced with two lists containing appropriate values (i.e. they should not cause an index error). range(num_train) creates an index for the first axis, which allows the second index, list(y), to select a specific value in each row. You can find this in the numpy documentation for indexing.

The first index, range(num_train), has a length equal to the first dimension of softmax_output (= N). It points at each row of the matrix; for each row, the target value is then selected by the corresponding entry of the second part of the index, list(y).

Example:
softmax_output = np.array(  # dummy values, not softmax
    [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [10, 11, 12]]
)
num_train = 4 # length of the array
y = [2, 1, 0, 2]  # labels; values for indexing along the second axis
softmax_output[range(num_train), list(y)]
Out:
[3, 5, 7, 12]
So it picks the third element from the first row, the second from the second row, the first from the third row, and so on. That is how it works.
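Equivalently (a sketch added for illustration, not from the original answer), the same selection can be written as an explicit loop over the rows, which makes the pairing of row index and label visible:

import numpy as np

softmax_output = np.array(
    [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [10, 11, 12]]
)
y = [2, 1, 0, 2]

# One element per row; the column is chosen by the corresponding label y[i].
picked = np.array([softmax_output[i, y[i]] for i in range(len(y))])
print(picked)  # [ 3  5  7 12]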
(P.S. Did I misunderstand you? Are you interested in the "why" rather than the "how"?)
Answer 1 (score: 0)