I am writing a test model in Keras in which I want to do some math that depends on the numerical values of a layer's output and its derivatives.
I am using the TensorFlow backend, and I use K.function to get the output values of a Lambda layer and of derivative layers. However, strange errors occur if I choose a power function such as x**2 as the function inside the Lambda layer. If I change x**2 to sin(x), it works fine.
My code is below: I first define the model, then use backend.function to get the layer outputs.
import numpy as np
from keras.models import Model
from keras.layers import Input, Layer, Lambda
from keras import backend as K
x = Input(shape=(1,))
# the Lambda layer
c = Lambda(lambda x: x**2)(x) # this will cause an error
#c = Lambda(lambda x: K.sin(x))(x) # but this works fine
class dc_layer(Layer):
    def __init__(self, *args, **kwargs):
        self.is_placeholder = True
        super(dc_layer, self).__init__(*args, **kwargs)

    def call(self, inputs):
        x = inputs[0]
        c0 = inputs[1]
        c1 = K.gradients(c0, x)
        return c1
# the derivatives of the lambda layer
c1 = dc_layer()([x,c])
c2 = dc_layer()([x,c1])
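Analytically, with c = x**2 the first derivative is c1 = 2x and the second derivative c2 is the constant 2, so these are the values the derivative layers should produce. A quick NumPy sketch of the expected numbers (plain NumPy, independent of Keras):

```python
import numpy as np

x_data = np.linspace(0, 1, 6)    # same input grid used below
c  = x_data ** 2                 # Lambda layer: x^2
c1 = 2 * x_data                  # first derivative of x^2
c2 = np.full_like(x_data, 2.0)   # second derivative is constant 2

print(c1)  # [0.  0.4 0.8 1.2 1.6 2. ]
print(c2)  # [2. 2. 2. 2. 2. 2.]
```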
Then I use backend.function to define a function to get the layer outputs, and run it in a Jupyter notebook:
# define a function to get the derivatives
get_layer_outputs = K.function([x],[c2])
x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)
which throws:
InvalidArgumentError: data[0].shape = [1] does not start with indices[0].shape = [2]
But if I look at the c1 layer,

# define a function to get the derivatives
get_layer_outputs = K.function([x],[c1])
x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)

it works fine. I guess I went wrong somewhere in how I use K.function. Any solution/suggestion would be greatly appreciated.

=======================================================

Even with a very simple model that uses only Dense layers, K.function gave me an error:
x = Input(shape=(1,))
h = Dense(10,activation='sigmoid')(x)
c = Dense(1)(h)
get_layer_outputs = K.function([x],[c])
x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)
Now I am thoroughly confused about how to use K.function correctly. Please help if you have any ideas. Thanks!
Answer 0 (score: 0)
This works for me. Your x_data was the problem: Input(shape=(1,)) expects each sample to be a length-1 vector, so x_data needs shape (6, 1) rather than (6,):
import numpy as np
from keras.models import Model
from keras.layers import Input, Layer, Lambda, Dense
from keras import backend as K
x = Input(shape=(1,))
# the Lambda layer
c = Lambda(lambda x: x**2)(x) # the power function that previously caused the error
#c = Lambda(lambda x: K.sin(x))(x) # but this works fine
class dc_layer(Layer):
    def __init__(self, *args, **kwargs):
        self.is_placeholder = True
        super(dc_layer, self).__init__(*args, **kwargs)

    def call(self, inputs):
        x = inputs[0]
        c0 = inputs[1]
        c1 = K.gradients(c0, x)
        return c1
# the derivatives of the lambda layer
c1 = dc_layer()([x,c]) # in Keras 2.0.2 need to unpack results, Keras 2.2.4 seems fine.
c2 = dc_layer()([x,c1])
# define a function to get the derivatives
get_layer_outputs = K.function([x],[c2])
x_data = np.linspace(0,1,6)[:,None] # reshape to (6,1): one length-1 vector per sample, not a 0-D scalar
val = get_layer_outputs([x_data])[0]
print(val)
Output:
[[2.]
[2.]
[2.]
[2.]
[2.]
[2.]]
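The essential change is the [:, None] reshape. A minimal NumPy sketch of what it does to the input shape (plain NumPy, no Keras needed):

```python
import numpy as np

x_data = np.linspace(0, 1, 6)
print(x_data.shape)     # (6,): each of the 6 samples is a 0-D scalar

x_2d = x_data[:, None]  # add the feature axis that Input(shape=(1,)) expects
print(x_2d.shape)       # (6, 1): 6 samples, each a length-1 vector
```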