Sampled Softmax in a Keras Model

Date: 2019-06-30 00:24:58

Tags: python tensorflow keras sampled-softmax

Some approaches I have considered:

Subclass the Model class: Sampled softmax in tensorflow keras

Subclass the Layer class: How can I use TensorFlow's sampled softmax loss function in a Keras model?

Of these two approaches, the Model approach is cleaner, since the Layer approach is somewhat hacky: it pushes the targets in as part of the input and then works through a multi-output model; a sketch of this hack is shown below.
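
For illustration, here is a minimal sketch of that "layer" approach. The class name, weight shapes, and constructor arguments below are illustrative, not taken from the linked answer:

import tensorflow as tf

class SampledSoftmaxLayer(tf.keras.layers.Layer):
    def __init__(self, num_classes, num_sampled, **kwargs):
        super().__init__(**kwargs)
        self.num_classes = num_classes
        self.num_sampled = num_sampled

    def build(self, input_shape):
        # tf.nn.sampled_softmax_loss expects weights of shape (num_classes, dim)
        dim = int(input_shape[0][-1])
        self.out_weights = self.add_weight(
            name="out_weights", shape=(self.num_classes, dim))
        self.out_biases = self.add_weight(
            name="out_biases", shape=(self.num_classes,), initializer="zeros")
        super().build(input_shape)

    def call(self, inputs):
        # hidden: (batch, dim); targets arrive as a *model input*, which is the hack
        hidden, targets = inputs
        return tf.nn.sampled_softmax_loss(
            weights=self.out_weights,
            biases=self.out_biases,
            labels=tf.reshape(targets, (-1, 1)),
            inputs=hidden,
            num_sampled=self.num_sampled,
            num_classes=self.num_classes)

The model then takes the targets as a second input and is compiled with a pass-through loss such as lambda y_true, loss: loss, which is exactly the clumsiness described above.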

Where I need some help is in subclassing the Model class. Specifically: 1) Unlike the first approach, I would like to take in an arbitrary number of layers, just as we do in specifying a standard Keras model. For example,

class LanguageModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
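
A minimal sketch of how such a constructor could accept an arbitrary stack of layers (the layer_stack argument and attribute name are illustrative):

import tensorflow as tf

class LanguageModel(tf.keras.Model):
    def __init__(self, layer_stack, **kwargs):
        super().__init__(**kwargs)
        # e.g. [Embedding(...), LSTM(...), ...]; assigning the list to an
        # attribute lets Keras track the layers automatically
        self.layer_stack = layer_stack

    def call(self, inputs):
        x = inputs
        for layer in self.layer_stack:
            x = layer(x)
        return x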

2) I would like to incorporate the following code into the Model class, but have the Model class recognize that

def call(self, y_true, input):
    """ Reshape y_true and input so that they fit each other. """
    input = tf.reshape(input, (-1, self.hidden_size))
    y_true = tf.reshape(y_true, (-1, 1))
    weights = tf.Variable(...)  # placeholder; shape (num_classes, hidden_size)
    biases = tf.Variable(...)   # placeholder; shape (num_classes,)
    loss = tf.nn.sampled_softmax_loss(
        weights=weights,
        biases=biases,
        labels=y_true,
        inputs=input,
        ...,  # num_sampled, num_classes, etc. elided
        partition_strategy="div")
    # full-softmax path (e.g. for evaluation):
    logits = tf.matmul(input, tf.transpose(weights))
    logits = tf.nn.bias_add(logits, biases)
    y_pred = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=y_true,
        logits=logits)

3) I think I need some pointers to which parts of the Model class in the functional API I should mess with, given that I know I have to write a custom loss function like the one above. I guess the issue is accessing the weights inside the tf.nn.sampled_softmax_loss function.
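
On the weight-access question, a small self-contained sketch (shapes are illustrative) of reading the output layer's parameters from a built functional model, which is essentially what the answer below relies on:

import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(4)(inputs)
model = tf.keras.Model(inputs, outputs)

output_layer = model.layers[-1]
kernel, bias = output_layer.weights  # kernel: (8, 4), bias: (4,)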

1 Answer:

Answer 0 (score: 2):

The simplest approach I can come up with is to define a loss that ignores the result of the output layer.

Full Colab here: https://colab.research.google.com/drive/1Rp3EUWnBE1eCcaisUju9TwSTswQfZOkS

The loss function. Note that it assumes that the output layer is a Dense(activation='softmax') layer, and that it ignores y_pred. Thus, during training/evaluation, where this loss is used, the actual output of the Dense layer is a no-op.

The output layer is only used when making predictions.

class SampledSoftmaxLoss(object):
  """ The loss function implements the Dense layer matmul and activation
  when in training mode.
  """
  def __init__(self, model):
    self.model = model
    output_layer = model.layers[-1]
    self.input = output_layer.input      # tensor feeding the output Dense layer
    self.weights = output_layer.weights  # [kernel, bias] of the Dense layer

  def loss(self, y_true, y_pred, **kwargs):
    # y_pred is deliberately ignored; convert one-hot y_true to class ids
    labels = tf.argmax(y_true, axis=1)
    labels = tf.expand_dims(labels, -1)
    loss = tf.nn.sampled_softmax_loss(
        weights=self.weights[0],
        biases=self.weights[1],
        labels=labels,
        inputs=self.input,
        num_sampled = 3,
        num_classes = 4,
        partition_strategy = "div",
    )
    return loss
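
The answer's accompanying model code is not reproduced above. As a hypothetical sketch, a model consistent with the num_classes = 4 used in the loss class could be wired up like this (layer sizes are illustrative, and y_true passed to fit() is assumed to be one-hot, since the loss takes tf.argmax over it):

inputs = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(input_dim=4, output_dim=8)(inputs)
x = tf.keras.layers.LSTM(4)(x)  # last layer's input dim == num_classes; see the note below
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

loss_calculator = SampledSoftmaxLoss(model)
model.compile(optimizer="adam", loss=loss_calculator.loss)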

Note that SampledSoftmaxLoss requires that the input of the last model layer have the same dimensions as the number of classes.
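
One way to read this constraint (a plausible interpretation; the answer does not spell it out): a Keras Dense layer stores its kernel with shape (input_dim, units), while tf.nn.sampled_softmax_loss expects weights of shape (num_classes, dim), with dim matching the last dimension of inputs. Passing the kernel through unchanged therefore only shape-checks when input_dim equals num_classes. For example:

import tensorflow as tf

dense = tf.keras.layers.Dense(4)    # units == num_classes == 4
dense.build(input_shape=(None, 4))  # so input_dim must also be 4
assert dense.kernel.shape.as_list() == [4, 4]  # (input_dim, units)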