Where is `_softmax_cross_entropy_with_logits` defined in TensorFlow?

Time: 2017-12-27 06:11:25

Tags: python tensorflow neural-network deep-learning ack

I want to see how softmax_cross_entropy_with_logits_v2() is implemented. It calls _softmax_cross_entropy_with_logits(), but I cannot find where the latter is defined. Does anyone know how to find its definition?

$ ack '\b_softmax_cross_entropy_with_logits\b'
tensorflow/compiler/tests/binary_ops_test.py
176:          gen_nn_ops._softmax_cross_entropy_with_logits,

tensorflow/python/kernel_tests/xent_op_test.py
52:      loss, backprop = gen_nn_ops._softmax_cross_entropy_with_logits(
75:        loss, backprop = gen_nn_ops._softmax_cross_entropy_with_logits(
93:                              gen_nn_ops._softmax_cross_entropy_with_logits,
135:        gen_nn_ops._softmax_cross_entropy_with_logits(
141:        gen_nn_ops._softmax_cross_entropy_with_logits([0., 1., 2., 3.],

tensorflow/python/ops/nn_ops.py
1803:    cost, unused_backprop = gen_nn_ops._softmax_cross_entropy_with_logits(

2 Answers:

Answer 0 (Score: 6)

kmario23's answer is correct: basically, whenever you see a reference to a gen_* package, it means automatically generated Python code.

In this case, it is gen_nn_ops.py:

def _softmax_cross_entropy_with_logits(features, labels, name=None):
  r"""Computes softmax cross entropy cost and gradients to backpropagate.

  Inputs are the logits, not probabilities.

  Args:
    features: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
      batch_size x num_classes matrix
    labels: A `Tensor`. Must have the same type as `features`.
      batch_size x num_classes matrix
      The caller must ensure that each batch of labels represents a valid
      probability distribution.
    name: A name for the operation (optional).

  Returns:
    A tuple of `Tensor` objects (loss, backprop).

    loss: A `Tensor`. Has the same type as `features`. Per example loss (batch_size vector).
    backprop: A `Tensor`. Has the same type as `features`. backpropagated gradients (batch_size x num_classes matrix).
  """
  _ctx = _context.context()
  if _ctx.in_graph_mode():
    _, _, _op = _op_def_lib._apply_op_helper(
        "SoftmaxCrossEntropyWithLogits", features=features, labels=labels,
        name=name)
    _result = _op.outputs[:]
    _inputs_flat = _op.inputs
    _attrs = ("T", _op.get_attr("T"))
  else:
    _attr_T, _inputs_T = _execute.args_to_matching_eager([features, labels], _ctx)
    (features, labels) = _inputs_T
    _attr_T = _attr_T.as_datatype_enum
    _inputs_flat = [features, labels]
    _attrs = ("T", _attr_T)
    _result = _execute.execute(b"SoftmaxCrossEntropyWithLogits", 2,
                               inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
                               name=name)
  _execute.record_gradient(
      "SoftmaxCrossEntropyWithLogits", _inputs_flat, _attrs, _result, name)
  _result = _SoftmaxCrossEntropyWithLogitsOutput._make(_result)
  return _result
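
For context, here is a minimal sketch of calling this generated wrapper directly, the same way the kernel tests in the ack output above do. This is an illustration under the TensorFlow 1.x graph-mode API; the tensor values are made up:

import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

logits = tf.constant([[1.0, 2.0, 3.0]])  # batch_size x num_classes
labels = tf.constant([[0.0, 0.0, 1.0]])  # each row must be a valid probability distribution
loss, backprop = gen_nn_ops._softmax_cross_entropy_with_logits(logits, labels)

with tf.Session() as sess:
    print(sess.run([loss, backprop]))  # per-example loss and the gradient w.r.t. the logits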

But since this function is a wrapper around a native C++ implementation, you may be interested in the actual C++ code. It lives in tensorflow/core/kernels/xent_op.cc and covers both CPU and GPU:

template <typename Device, typename T>
class SoftmaxXentWithLogitsOp : public OpKernel {
 public:
  explicit SoftmaxXentWithLogitsOp(OpKernelConstruction* context)
      : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    const Tensor& logits_in = context->input(0);
    const Tensor& labels_in = context->input(1);
    OP_REQUIRES(context, logits_in.IsSameSize(labels_in),
                errors::InvalidArgument(
                    "logits and labels must be same size: logits_size=",
                    logits_in.shape().DebugString(), " labels_size=",
                    labels_in.shape().DebugString()));
    OP_REQUIRES(context, TensorShapeUtils::IsMatrix(logits_in.shape()),
                errors::InvalidArgument("logits must be 2-dimensional"));
    // As we already tested that both inputs have the same shape no need to
    // check that "labels" is a matrix too.

    // loss is 1-D (one per example), and size is batch_size.

    Tensor scratch;
    OP_REQUIRES_OK(
        context, context->allocate_temp(DataTypeToEnum<T>::value,
                                        TensorShape({logits_in.dim_size(0), 1}),
                                        &scratch));

    Tensor* loss_out = nullptr;
    OP_REQUIRES_OK(context,
                   context->allocate_output(
                       0, TensorShape({logits_in.dim_size(0)}), &loss_out));
    Tensor* back_out = nullptr;
    // Try to reuse the logits_in buffer for the backprop output.
    OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
                                {0}, 1, logits_in.shape(), &back_out));
    functor::XentFunctor<Device, T> functor;
    functor(context->eigen_device<Device>(), logits_in.matrix<T>(),
            labels_in.matrix<T>(), scratch.matrix<T>(), loss_out->vec<T>(),
            back_out->matrix<T>());
  }
};

If you want to dig deeper, the main call is on the last line: functor(...), where functor is XentFunctor<Device, T>. The actual logic is dispatched to the third-party Eigen library. See this very similar question, which shows how deep this eventually goes.
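
For reference, here is a minimal NumPy sketch of the math that functor ultimately computes: the per-example loss and the gradient softmax(logits) - labels. This illustrates the formula only, not the actual Eigen implementation:

import numpy as np

def softmax_xent(logits, labels):
    # Shift by the row max so exp() does not overflow (the standard log-sum-exp trick).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    loss = -(labels * log_softmax).sum(axis=1)   # per-example loss, shape (batch_size,)
    backprop = np.exp(log_softmax) - labels      # gradient w.r.t. the logits
    return loss, backprop

logits = np.array([[1.0, 2.0, 3.0]])
labels = np.array([[0.0, 0.0, 1.0]])
print(softmax_xent(logits, labels))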

Answer 1 (Score: 3)

You won't find its implementation on GitHub, because the source code is generated automatically by the bazel build during TensorFlow installation. You can find the generated source in your installation directory at:

tensorflow/python/ops/gen_nn_ops.py

The actual implementation is written in C++. See also the source code for gen_nn_ops.
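
If you are not sure where that directory is, one quick way to locate the generated file from Python is the standard library's inspect module (assuming a normal pip install):

import inspect
from tensorflow.python.ops import gen_nn_ops

# Prints the on-disk path of the generated module,
# e.g. .../site-packages/tensorflow/python/ops/gen_nn_ops.py
print(inspect.getsourcefile(gen_nn_ops))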