Vectorize for loop in Python

Time: 2018-05-04 21:39:10

Tags: python python-3.x jit numba

I have the following loop in which I compute a softmax transform for batches of different sizes, as shown below:

import numpy as np

def softmax(Z, arr):
    """
    :param Z:   numpy array of any shape (output from hidden layer)
    :param arr: numpy array of shape (num_batches, 3); columns 1 and 2 hold
                the start and end column index of each batch
    :return A:  output of multinum_logit(Z, arr), same shape as Z
    :return cache: returns Z as well, useful during back propagation
    """
    A = np.zeros(Z.shape)
    for i in range(arr.shape[0]):
        start, end = arr[i, 1], arr[i, 2] + 1
        # shift by the batch maximum for numerical stability
        shiftx = Z[:, start:end] - np.max(Z[:, start:end])
        A[:, start:end] = np.exp(shiftx) / np.exp(shiftx).sum()
    cache = Z
    return A, cache

Since this for loop is not vectorized, it is the bottleneck of my code. What are possible ways to make it faster? I tried numba's @jit, which makes it faster but not fast enough. I would like to know whether there is another way to speed it up, or to vectorize/parallelize it.

Sample input data for the function:

Z = np.random.random([1, 10000])
arr = np.zeros([100, 3])
arr[:, 0] = 1
# split the 10000 columns of Z into 100 equal batches and record each
# batch's start index (column 1) and end index (column 2)
temp = int(Z.shape[1] / arr.shape[0])
for i in range(arr.shape[0]):
    arr[i, 1] = i * temp
    arr[i, 2] = (i + 1) * temp - 1
arr = arr.astype(int)
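
For reference, the numba attempt mentioned in the question might look roughly like the sketch below (the exact decorator options the author used are not shown, so this is an assumption). With @njit(parallel=True), prange distributes the batch loop across threads, since each batch is an independent column slice of Z:

import numpy as np
from numba import njit, prange  # assumes numba is installed

@njit(parallel=True)
def softmax_numba(Z, arr):
    # Each batch is an independent column slice of Z, so the batches
    # can be processed in parallel.
    A = np.zeros_like(Z)
    for i in prange(arr.shape[0]):
        start, end = arr[i, 1], arr[i, 2] + 1
        block = Z[:, start:end]
        shiftx = block - np.max(block)   # shift for numerical stability
        e = np.exp(shiftx)
        A[:, start:end] = e / e.sum()
    return A

A = softmax_numba(Z, arr)  # Z and arr as defined in the sample above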

Edit

I forgot to emphasize that my number of classes varies. For example, batch 1 may have 10 classes and batch 2 may have 15 classes. So I pass an array arr that keeps track of which columns of Z belong to batch 1, and so on. These batches are not the same as batches in traditional neural network frameworks.

In the example above, arr keeps track of the starting index and ending index of each batch. So the denominator in the softmax function is just the sum over those observations whose indices lie between the start index and the end index.
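
To make the bookkeeping concrete, for the sample input above (100 batches of 100 columns each) the first rows of arr are (flag, start index, end index):

>>> arr[:3]
array([[  1,   0,  99],
       [  1, 100, 199],
       [  1, 200, 299]])

So for batch 0 the denominator np.exp(shiftx).sum() runs over columns 0 through 99 only.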

1 Answer:

Answer 0 (score: 0)

Here is a vectorized softmax function. It is the implementation from an assignment of Stanford's cs231n course on convolutional networks.

The function takes the optimizable parameters, the input data, the targets, and a regularizer as inputs. (You can disregard the regularizer, as it references another class exclusive to certain cs231n assignments.)

It returns the loss and the gradients with respect to the parameters.
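
In symbols, with $s_i = x_i W$ the score row of example $i$ and $N$ the number of training examples, the code below computes

$$L(W) = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s_{i,y_i}}}{\sum_{j}e^{s_{i,j}}} + \frac{\mathrm{reg}}{2}\sum_{k,c}W_{k,c}^{2},
\qquad
\frac{\partial L}{\partial W} = \frac{1}{N}\,X^{\top}(P - Y) + \mathrm{reg}\,W,$$

where $P$ is the matrix of row-wise softmax probabilities and $Y$ is the one-hot encoding of $y$.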

def softmax_loss_vectorized(W, X, y, reg):
  """
  Softmax loss function, vectorized version.
  Inputs and outputs are the same as softmax_loss_naive.
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  num_train = X.shape[0]

  # Class scores for every example: shape (N, C).
  scores = X.dot(W)

  # Shift each row by its maximum for numerical stability.
  shift_scores = scores - np.amax(scores, axis=1).reshape(-1, 1)

  # Row-wise softmax probabilities.
  softmax = np.exp(shift_scores) / np.sum(np.exp(shift_scores), axis=1).reshape(-1, 1)

  # Cross-entropy loss: average negative log-probability of the correct class.
  loss = -np.sum(np.log(softmax[range(num_train), list(y)]))
  loss /= num_train
  loss += 0.5 * reg * np.sum(W * W)

  # Gradient: subtract 1 from the probability of the correct class,
  # then backpropagate through the linear scores.
  dSoftmax = softmax.copy()
  dSoftmax[range(num_train), list(y)] += -1

  dW = (X.T).dot(dSoftmax)
  dW = dW / num_train + reg * W

  return loss, dW
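
A quick usage sketch with made-up shapes (N = 5 examples, D = 4 features, C = 3 classes; these numbers are illustrative, not from the original post):

N, D, C = 5, 4, 3
X = np.random.random([N, D])
y = np.random.randint(0, C, size=N)
W = 0.01 * np.random.random([D, C])
loss, dW = softmax_loss_vectorized(W, X, y, reg=0.1)
print(loss, dW.shape)  # scalar loss, gradient of shape (4, 3)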

For comparison, here is a naive (non-vectorized) implementation of the same approach.

def softmax_loss_naive(W, X, y, reg):
  """
  Softmax loss function, naive implementation (with loops)
  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.
  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength
  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  loss = 0.0
  dW = np.zeros_like(W)

  num_train = X.shape[0]
  num_classes = W.shape[1]

  for i in range(num_train):
      scores = X[i].dot(W)

      # Shift by the maximum score for numerical stability.
      shift_scores = scores - max(scores)

      # Negative log-probability of the correct class.
      loss_i = -shift_scores[y[i]] + np.log(sum(np.exp(shift_scores)))
      loss += loss_i

      # Accumulate the gradient one class at a time.
      for j in range(num_classes):
          softmax = np.exp(shift_scores[j]) / sum(np.exp(shift_scores))
          if j == y[i]:
              dW[:, j] += (-1 + softmax) * X[i]
          else:
              dW[:, j] += softmax * X[i]

  loss /= num_train
  loss += 0.5 * reg * np.sum(W * W)

  dW = dW / num_train + reg * W

  return loss, dW
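
Assuming both functions and the sample W, X, y from the sketch above are defined, a quick consistency check should show the two implementations agree up to floating-point error:

loss_v, dW_v = softmax_loss_vectorized(W, X, y, reg=0.1)
loss_n, dW_n = softmax_loss_naive(W, X, y, reg=0.1)
print(abs(loss_v - loss_n) < 1e-8)   # True
print(np.allclose(dW_v, dW_n))       # True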

Source