I am trying to design an autoencoder for activity classification on 3-channel input (tri-axial accelerometer data).
The input tensor has shape [None, 200, 3]
([batch size, window size, number of channels]), and in the first layer I simply want to reduce the input dimension to [None, 150, 3].
This is the code that creates the placeholder and the first layer:
import tensorflow as tf

def denseLayer(inputVal, weight, bias):
    return tf.nn.relu(tf.matmul(inputVal, weight) + bias)

x = tf.placeholder(dtype=tf.float32, shape=[None, 200, 3])  # Input tensor
wIn = tf.get_variable(name='wIn',
                      initializer=tf.truncated_normal(stddev=0.1, dtype=tf.float32, shape=[200, 150]))
bIn = tf.get_variable(name='bIn',
                      initializer=tf.constant(value=0.1, shape=[150, 3], dtype=tf.float32))
firstLayer = denseLayer(x, weight=wIn, bias=bIn)
This code of course results in an error (because x and wIn have different ranks), and I cannot figure out what shape the wIn variable needs to have in order to get the desired shape for firstLayer.
Answer (score: 1):
I think this does what you want:
import tensorflow as tf

def denseLayer(inputVal, weight, bias):
    # Each input "channel" uses the corresponding set of weights
    value = tf.einsum('nic,ijc->njc', inputVal, weight) + bias
    return tf.nn.relu(value)

# Input tensor
x = tf.placeholder(dtype=tf.float32, shape=[None, 200, 3])
# Weights and biases have three "channels" each
wIn = tf.get_variable(name='wIn',
                      shape=[200, 150, 3],
                      initializer=tf.truncated_normal_initializer(stddev=0.1))
bIn = tf.get_variable(name='bIn',
                      shape=[150, 3],
                      initializer=tf.constant_initializer(value=0.1))
firstLayer = denseLayer(x, weight=wIn, bias=bIn)
print(firstLayer)
# Tensor("Relu:0", shape=(?, 150, 3), dtype=float32)
Here wIn can be seen as three sets of [200, 150] parameters, one applied to each input channel. I think tf.einsum is the simplest way to implement that in this case.
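For example, here is a minimal check (a sketch, assuming TensorFlow 1.x and the variables defined above; the dummy batch and the NumPy comparison are only for illustration) that runs the graph on random accelerometer-shaped data and confirms that the einsum contraction matches a plain per-channel matmul:

import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy batch of 4 windows with shape [batch, 200, 3]
    batch = np.random.randn(4, 200, 3).astype(np.float32)
    out = sess.run(firstLayer, feed_dict={x: batch})
    print(out.shape)  # (4, 150, 3)

    # Apply each [200, 150] weight slice to its own channel with a plain matmul
    w_val, b_val = sess.run([wIn, bIn])
    manual = np.stack(
        [np.maximum(batch[..., c] @ w_val[..., c] + b_val[:, c], 0.0)
         for c in range(3)],
        axis=-1)
    print(np.allclose(out, manual, atol=1e-5))  # True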