Thanks for taking a look at my question.

For example, suppose the final output is the sum of two matrices A and B, like this:

output = keras.layers.add([A, B])

Now I want to introduce a new parameter x to change the output. I want

newoutput = A*x + B*(1-x)

where x is a trainable parameter in my network. How can I do this? Please help me, thanks a lot!
Edit (partial code):
conv1 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(input)
drop1 = Dropout(0.5)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(drop1)
conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
drop2 = Dropout(0.5)(conv2)
up1 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop2))
# the line I want to change:
merge = add([drop2, up1])
# this layer simply adds drop2 and up1; now I want a trainable parameter x to adjust the weight of those two layers
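To make the goal concrete, here is a minimal NumPy-only sketch of the arithmetic being asked for, with made-up 2x2 matrices standing in for the drop2 and up1 feature maps (here x is a plain float, not yet a trainable variable):

```python
import numpy as np

# Hypothetical small feature maps standing in for drop2 and up1.
A = np.array([[1.0, 2.0], [3.0, 4.0]])  # "drop2"
B = np.array([[5.0, 6.0], [7.0, 8.0]])  # "up1"

x = 0.25  # the mixing weight; in the network this should become trainable

# Elementwise weighted sum: x*A + (1-x)*B
merged = x * A + (1 - x) * B
print(merged)
```

The question is how to turn that fixed `x = 0.25` into a parameter that gradient descent can update.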
I tried to write the layer, but I still have some problems:

1. How do I use my own layer? Like this:

merge = Mylayer()(drop2,up1)

or some other way?

2. What does out_dim mean? These inputs are all 3-D tensors; what would out_dim be here?

Thanks... T.T
Edit 2 (solved):
from keras import backend as K
from keras.engine.topology import Layer
import numpy as np
from keras.layers import add

class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self._x = K.variable(0.5)
        self.trainable_weights = [self._x]
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        result = add([self._x * A, (1 - self._x) * B])
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0]
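To see why a scalar mixing weight like self._x is learnable at all, here is a NumPy-only sketch (outside Keras, with made-up 2x2 matrices and a made-up target) that minimizes a squared error over x by hand; Keras performs the analogous update automatically once the variable appears in trainable_weights:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
# Pretend the "true" mix is x = 0.8 and generate the target from it.
target = 0.8 * A + 0.2 * B

x = 0.5  # same starting point as K.variable(0.5) in the layer
lr = 0.01
for _ in range(500):
    pred = x * A + (1 - x) * B
    # d/dx of 0.5 * sum((pred - target)**2) is sum((pred - target) * (A - B))
    grad = np.sum((pred - target) * (A - B))
    x -= lr * grad

print(round(x, 3))
```

After training, x recovers the mixing weight used to build the target, which is exactly the behavior wanted from the custom layer.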
Answer 0 (score: 2):
You have to create a custom class that inherits from Layer and create the trainable parameter with self.add_weight(...). You can find examples of this here and there. For your example, the layer would look somewhat like this:
from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self._A = self.add_weight(name='A',
                                  shape=(input_shape[1], self.output_dim),
                                  initializer='uniform',
                                  trainable=True)
        self._B = self.add_weight(name='B',
                                  shape=(input_shape[1], self.output_dim),
                                  initializer='uniform',
                                  trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return K.dot(x, self._A) + K.dot(1 - x, self._B)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
Edit: based solely on the names, I (wrongly) assumed that x is the layer input and that you want to optimize A and B. But, as you said, you want to optimize x. To do that, you can do something like this:
from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable scalar weight for this layer.
        self._x = self.add_weight(name='x',
                                  shape=(1,),
                                  initializer='uniform',
                                  trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        # self._x is a scalar, so the weighted sum is elementwise
        # (K.dot would fail on a shape-(1,) weight against a 4-D feature map)
        return self._x * A + (1 - self._x) * B

    def compute_output_shape(self, input_shape):
        return input_shape[0]
Edit 2: you can call the layer with

merge = MyLayer()([drop2, up1])