Introducing a new activation function with TensorFlow in Python 2.7

Asked: 2018-12-18 17:11:29

Tags: python python-2.7 tensorflow deep-learning

I want to introduce a new activation function in TensorFlow under Python 2.7. I tried by following a few references, but most of them are implemented in Python 3. I rewrote the code for Python 2, but I run into the same error every time.

class Mylayer(tf.keras.layers.Layer):
    def __init__(self, output_units, *args, **kwargs):
        self.output_units = output_units
        super(Mylayer, self).__init__(**kwargs)

    def build(self, input_shape):
        super(Mylayer, self).build(input_shape)
        self.kernel = self.add_variable("kernel",
                                        shape=[input_shape[-1].value, self.output_units])

    def call(self, x):
        if x < 0:
            newfx = 0
        else:
            newfx = x * 1.5
        return tf.matmul(newfx, self.kernel)

This is my activation function.

self.inputs = tf.placeholder(shape=[1,4], dtype=tf.float32)
self.weights1 = tf.Variable(tf.truncated_normal([4,4]))
self.bias1 = tf.Variable(tf.zeros(shape=[1,4]))
self.weights2 = tf.Variable(tf.truncated_normal([4,4]))
self.bias2 = tf.Variable(tf.zeros(shape=[1, 4]))
self.weights3 = tf.Variable(tf.truncated_normal([4,1]))
self.bias3 = tf.Variable(tf.zeros([1,1]))
self.layer1 = tf.tanh(tf.matmul(self.inputs, self.weights1) + self.bias1)
self.layer2 = tf.tanh(tf.matmul(self.layer1, self.weights2) + self.bias2)
self.layer3 = Mylayer(tf.matmul(self.layer2, self.weights3) + self.bias3)
self.output_layer = self.layer3

This is my network. The session and the rest are omitted.

Traceback (most recent call last):
 File "capture.py", line 832, in <module>
   options = readCommand( sys.argv[1:] ) # Get game components based on input
 File "capture.py", line 683, in readCommand
   redAgents = loadAgents(True, options.red, nokeyboard, redArgs)
 File "capture.py", line 755, in loadAgents
   return createTeamFunc(indices[0], indices[1], isRed, **args)
 File "/Users/Yuay/research/code1/Pacman-Tournament-Agent/code.py", line 131, in createTeam
   return [eval(first)(firstIndex, **kwargs), eval(second)(secondIndex, **kwargs)]
 File "/Users/Yuay/research/code1/Pacman-Tournament-Agent/code.py", line 212, in __init__
   self.loss = tf.reduce_sum(tf.square((self.nextQ - self.output_layer)))
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 869, in binary_op_wrapper
   y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1050, in convert_to_tensor
   as_ref=False)
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
   ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
   return constant(v, dtype=dtype, name=name)
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
   value, dtype=dtype, shape=shape, verify_shape=verify_shape))
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 442, in make_tensor_proto
   _AssertCompatible(values, dtype)
 File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 353, in _AssertCompatible
   (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected float32, got <my.Mylayer object at 0x10385dd50> of type 'Mylayer' instead.

This is the error.

I understand that what gets returned is the object itself rather than a float, but I don't know how to fix it. I don't know enough about TensorFlow and deep learning. In Keras, however, this kind of implementation works fine.

1 Answer:

Answer 0 (score: 0)

To create a custom layer, first instantiate the custom layer object, and then create the layer by calling that object. The argument passed at initialization is the number of outputs, so let's say it is 10:

custom_layer = Mylayer(10)

Then you can use custom_layer to build the layer:

self.layer3 = custom_layer(tf.matmul(self.layer2, self.weights3) + self.bias3)

Note: as written, this will raise an error, because tf.matmul(self.layer2, self.weights3) + self.bias3 is a tensor of shape (1, 1). The tensor needs to be evaluated before checking the condition x < 0, or you can use tf.cond instead.

Edit: Here is how to obtain newfx with the condition. tf.cond must return the same tensor type from both the true and the false branch, so a t_false tensor is created for the case where the element is less than zero.

def call(self, x):
    # tensor of zeros returned by the true branch (assumes numpy is imported as np)
    t_false = tf.convert_to_tensor(np.array([[0]]), dtype=tf.float32)
    # check the single element of the (1, 1) input tensor
    newfx = tf.cond(x[0, 0] < 0, lambda: t_false, lambda: x * 1.5)
    return tf.matmul(newfx, self.kernel)
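
As a side note, not part of the original answer: if the intent is an element-wise activation rather than a decision based on the single element x[0, 0], a sketch using tf.where (assuming the same TF 1.x graph-mode setup as in the question) could look like this:

def call(self, x):
    # element-wise: 0 where x < 0, otherwise 1.5 * x
    newfx = tf.where(x < 0, tf.zeros_like(x), x * 1.5)
    return tf.matmul(newfx, self.kernel)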