Multi-Feature Causal CNN - Keras Implementation

Date: 2019-04-25 13:46:09

Tags: python keras deep-learning conv-neural-network lstm

I am currently using a basic LSTM to make regression predictions, and I would like to implement a causal CNN, since it should be more computationally efficient.

I am struggling to figure out how to reshape my current data so that it fits the causal CNN cell while representing the same data/timestep relationship, and what the dilation rate should be set to.

My current data has the shape (number of examples, lookback, features), and here is a basic example of the LSTM NN I am using.

from keras.models import Sequential
from keras.layers import LSTM, Activation, Dense

lookback = 20   #  height -- timeseries
n_features = 5  #  width  -- features at each timestep

# Build an LSTM to perform regression on time series input/output data
model = Sequential()
model.add(LSTM(units=256, return_sequences=True, input_shape=(lookback, n_features)))
model.add(Activation('elu'))

model.add(LSTM(units=256, return_sequences=True))
model.add(Activation('elu'))

model.add(LSTM(units=256))
model.add(Activation('elu'))

model.add(Dense(units=1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(X_train, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val, y_val),
          verbose=1, shuffle=True)

prediction = model.predict(X_test)

I then created a new CNN model (though not a causal one, because according to the Keras documentation 'causal' padding is only an option for Conv1D, not Conv2D). If I understand correctly, with multiple features I need to use Conv2D rather than Conv1D, but if I set Conv2D(padding='causal') I get the following error: Invalid padding: causal
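A minimal sketch of that failure (the exact exception text can vary across Keras versions, so take the message in the comment as indicative only):

from keras.models import Sequential
from keras.layers import Conv2D

try:
    m = Sequential()
    # 'causal' padding is only accepted by Conv1D, so this raises an error
    m.add(Conv2D(32, 3, padding='causal', input_shape=(20, 5, 1)))
except ValueError as e:
    print(e)  # e.g. "Invalid padding: causal"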

In any case, I was also able to fit the data with the new shape (number of examples, lookback, features, 1) and run the following model using Conv2D layers:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

lookback = 20   #  height -- timeseries
n_features = 5  #  width  -- features at each timestep

model = Sequential()
# 'same' padding keeps the narrow feature (width) dimension from shrinking
# below the kernel size after pooling
model.add(Conv2D(128, 3, activation='elu', padding='same', input_shape=(lookback, n_features, 1)))
model.add(MaxPool2D())
model.add(Conv2D(128, 3, activation='elu', padding='same'))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(X_train, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val, y_val),
          verbose=1, shuffle=True)

prediction = model.predict(X_test)

However, as I understand it, this does not propagate the data causally; it simply treats the whole (lookback, features, 1) block as an image.

Is there a way to reshape data with multiple features so that it fits a Conv1D(padding='causal') layer, or to somehow run the same data and input shape as with Conv2D but using 'causal' padding?

2 Answers:

Answer 0 (score: 1)

I believe you can have causal padding with dilation for any number of input features. Here is the solution I would propose.

The TimeDistributed layer is key here.

From the Keras documentation: "This wrapper applies a layer to every temporal slice of an input. The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension."

For our purposes, we want this layer to apply "something" to each feature, so we move the features to the temporal index, which is 1.

The Conv1D documentation is also relevant.

Particularly the part about channels: "The ordering of the dimensions in the inputs. 'channels_last' corresponds to inputs with shape (batch, steps, channels) (the default format for temporal data in Keras)."

from tensorflow.python.keras import Sequential, backend
from tensorflow.python.keras.layers import GlobalMaxPool1D, Activation, MaxPool1D, Flatten, Conv1D, Reshape, TimeDistributed, InputLayer, Dense

backend.clear_session()
lookback = 20
n_features = 5

filters = 128

model = Sequential()
model.add(InputLayer(input_shape=(lookback, n_features, 1)))
# Causal layers are first applied to the features independently

model.add(Reshape(target_shape=(n_features, lookback, 1)))
# After reshape 5 input features are now treated as the temporal layer 
# for the TimeDistributed layer

# When Conv1D is applied to each input feature, it thinks the shape of the layer is (20, 1)
# with the default "channels_last", therefore...

# 20 times steps is the temporal dimension
# 1 is the "channel", the new location for the feature maps

model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0)))
# You could add pooling here if you want. 
# If you want interaction between features AND causal/dilation, then apply later
model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1)))
model.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2)))


# Stack the feature maps on top of each other so that each time step can
# look at all features produced earlier
model.add(Reshape(target_shape=(lookback, n_features * filters)))  # (20 time steps, 5 features * 128 filters)
# Causal layers are now applied across all 5 input features jointly
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0))
model.add(MaxPool1D())
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1))
model.add(MaxPool1D())
model.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2))
model.add(GlobalMaxPool1D())
model.add(Dense(units=1, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')

model.summary()

Final model summary

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
reshape (Reshape)            (None, 5, 20, 1)          0         
_________________________________________________________________
time_distributed (TimeDistri (None, 5, 20, 128)        512       
_________________________________________________________________
time_distributed_1 (TimeDist (None, 5, 20, 128)        49280     
_________________________________________________________________
time_distributed_2 (TimeDist (None, 5, 20, 128)        49280     
_________________________________________________________________
reshape_1 (Reshape)          (None, 20, 640)           0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 20, 128)           245888    
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 10, 128)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 10, 128)           49280     
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 5, 128)            0         
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 5, 128)            49280     
_________________________________________________________________
global_max_pooling1d (Global (None, 128)               0         
_________________________________________________________________
dense (Dense)                (None, 1)                 129       
=================================================================
Total params: 443,649
Trainable params: 443,649
Non-trainable params: 0
_________________________________________________________________
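For completeness, here is a hedged sketch of how the training call from the question could feed this model. It assumes X_train and X_val still have the 3-D shape (number of examples, lookback, features), so only a trailing channel axis needs to be added; if they were already reshaped to 4-D for the Conv2D experiment, the np.newaxis step can be skipped.

import numpy as np

# Add a trailing channel axis so the arrays match the model's
# (lookback, n_features, 1) input shape.
X_train_4d = X_train[..., np.newaxis]   # -> (num_examples, 20, 5, 1)
X_val_4d = X_val[..., np.newaxis]

model.fit(X_train_4d, y_train,
          epochs=50, batch_size=64,
          validation_data=(X_val_4d, y_val),
          verbose=1, shuffle=True)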

EDIT:

"Why do you need to reshape and use n_features as the temporal layer?"

The reason n_features initially needs to be in the temporal position is that Conv1D with dilation and causal padding only works with one feature at a time, and because of how the TimeDistributed layer is implemented.

From their documentation: "Consider a batch of 32 samples, where each sample is a sequence of 10 vectors of 16 dimensions. The batch input shape of the layer is then (32, 10, 16), and the input_shape, not including the samples dimension, is (10, 16).

You can then use TimeDistributed to apply a Dense layer to each of the 10 timesteps, independently:"

By applying the TimeDistributed layer to each feature separately, the scope of the problem is reduced as if there were only a single feature (which readily allows dilation and causal padding). With 5 features, they first need to be handled separately.
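As a quick shape check of the quoted documentation example, here is a minimal sketch (the Dense output size of 8 follows the docs and is only for illustration):

from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Dense, InputLayer, TimeDistributed

demo = Sequential()
demo.add(InputLayer(input_shape=(10, 16)))   # 10 timesteps, each a 16-dimensional vector
demo.add(TimeDistributed(Dense(8)))          # the same Dense weights are applied to each of the 10 steps
demo.summary()                               # output shape: (None, 10, 8)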

  • After your edit, this suggestion still applies.

  • As far as the network is concerned, it makes no difference whether the InputLayer is kept separate or folded into the first layer; if it solves your problem, you can certainly put it into the first CNN layer instead.

Answer 1 (score: 0)

In Conv1D, causal padding is used together with dilated convolutions. For Conv2D, you can use the dilation_rate argument of the Conv2D class, which you must assign a tuple of 2 integers. For more information, read the Keras documentation here.
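A minimal sketch of that argument (the filter count and kernel size are placeholders, not taken from the answer):

from tensorflow.python.keras.layers import Conv2D

# Dilated (atrous) 2-D convolution: dilation_rate takes a tuple of two
# integers, one rate per spatial dimension. This gives dilation, but not
# causal padding, which Conv2D does not support.
layer = Conv2D(filters=64, kernel_size=3, padding='same', dilation_rate=(2, 2))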