I want to build an LSTM model with embeddings for my categorical features. I currently have numerical features and some categorical features, such as location, that cannot be one-hot encoded (e.g. with pd.get_dummies()) due to computational complexity, even though that is what I originally intended to do.
Let's imagine an example:
import pandas as pd

data = {
    'user_id': [1, 1, 1, 1, 2, 2, 3],
    'time_on_page': [10, 20, 30, 20, 15, 10, 40],
    'location': ['London', 'New York', 'London', 'New York', 'Hong Kong', 'Tokyo', 'Madrid'],
    'page_id': [5, 4, 2, 1, 6, 8, 2]
}
d = pd.DataFrame(data=data)
print(d)
   user_id  time_on_page   location  page_id
0        1            10     London        5
1        1            20   New York        4
2        1            30     London        2
3        1            20   New York        1
4        2            15  Hong Kong        6
5        2            10      Tokyo        8
6        3            40     Madrid        2
Let's look at people visiting a website. I am tracking numerical data such as time on page. The categorical data includes: location (over 1000 unique values), page_id (over 1000 unique values), and author_id (over 100 unique values). The simplest solution would be to one-hot encode everything and feed it into an LSTM with variable sequence lengths, each timestep corresponding to one page view.

The DataFrame above would generate 7 training samples with variable sequence lengths. For example, for user_id=2 I would have 2 training samples:

[ ROW_INDEX_4 ] and [ ROW_INDEX_4, ROW_INDEX_5 ]
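For illustration, these prefix samples could be built along these lines (a rough sketch; the variable name samples is mine):

samples = []
for _, g in d.groupby('user_id', sort=False):
    for end in range(1, len(g) + 1):        # every prefix of this user's history
        samples.append(g.iloc[:end])
# user_id=2 contributes [row 4] and [rows 4, 5]; 4 + 2 + 1 = 7 samples in total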
Let X be the training data, and consider the first training sample X[0]. Its categorical features are the columns X[0][:, n:] (the first n columns hold the numerical features).
Before creating the sequences, I factorized the categorical variables into [0, 1, ..., number_of_cats-1] using pd.factorize(), so the data in X[0][:, n:] are integers corresponding to category indices.
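For example, a minimal sketch of this factorization step (the *_idx column names are my own):

for col in ['location', 'page_id']:
    d[col + '_idx'], uniques = pd.factorize(d[col])
# d['location_idx'] is now [0, 1, 0, 1, 2, 3, 4]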
Do I need to create a separate Embedding for each categorical feature, i.e. one embedding for each of x_*n, x_*n+1, ..., x_*m? If so, how would I put that into Keras code?
model = Sequential()
model.add(Embedding(?, ?, input_length=variable)) # How do I feed the data into this embedding? Only the categorical inputs.
model.add(LSTM())
model.add(Dense())
model.add(Activation('sigmoid'))
model.compile()
model.fit_generator() # fits the `X[i]` one by one, as variable-length sequences.
My solution:

Something like this: I could train a Word2Vec model on each individual categorical feature (columns n..m) to vectorize any given value. E.g. 'London' would be vectorized in 3 dimensions. Let's say I use 3-dimensional embeddings. Then I would put everything back into the X matrix, which would now have n + 3(m-n) columns, and train the LSTM model on it?
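For instance, a rough sketch of that Word2Vec idea with gensim (assuming gensim >= 4, and treating each user's location history as one "sentence"):

from gensim.models import Word2Vec

# each user's sequence of visited locations acts as one "sentence"
location_sentences = d.groupby('user_id')['location'].apply(list).tolist()
w2v = Word2Vec(location_sentences, vector_size=3, min_count=1, window=2)
print(w2v.wv['London'])   # a 3-dimensional vector for 'London'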
I just think there should be an easier/smarter way.
Answer 0 (score: 3)
As you mentioned, one solution is to one-hot encode the categorical data (or even use them as-is, in index-based format) and feed them along with the numerical data to an LSTM layer. Of course, you could also use two LSTM layers here: one to process the numerical data and another to process the categorical data (in one-hot or index-based format), and then merge their outputs.
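For illustration, a minimal sketch of that two-branch variant (the layer sizes and one-hot width are my own assumptions):

from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

n_steps, n_num, n_cat_onehot = 100, 10, 2100   # assumed sizes; ~2100 one-hot columns for the counts quoted above

num_in = Input(shape=(n_steps, n_num))
cat_in = Input(shape=(n_steps, n_cat_onehot))  # one-hot encoded categoricals
merged = concatenate([LSTM(32)(num_in), LSTM(32)(cat_in)])  # one LSTM per branch, outputs merged
out = Dense(1, activation='sigmoid')(merged)
model = Model([num_in, cat_in], out)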
Another solution is to have a separate embedding layer for each of the categorical features. Each embedding layer may have its own embedding dimension (and, as suggested above, you may use more than one LSTM layer to process the numerical and categorical features separately):
from keras.layers import Input, Embedding, TimeDistributed, LSTM, Reshape, concatenate
from keras.models import Model

num_cats = 3            # number of categorical features
n_steps = 100           # number of timesteps in each sample
n_numerical_feats = 10  # number of numerical features in each sample
cat_size = [1000, 500, 100]    # number of categories in each categorical feature
cat_embd_dim = [50, 10, 100]   # embedding dimension for each categorical feature

numerical_input = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')

cat_inputs = []
for i in range(num_cats):
    cat_inputs.append(Input(shape=(n_steps, 1), name='cat' + str(i+1) + '_input'))

cat_embedded = []
for i in range(num_cats):
    embed = TimeDistributed(Embedding(cat_size[i], cat_embd_dim[i]))(cat_inputs[i])
    cat_embedded.append(embed)

cat_merged = concatenate(cat_embedded)
cat_merged = Reshape((n_steps, -1))(cat_merged)
merged = concatenate([numerical_input, cat_merged])
lstm_out = LSTM(64)(merged)

model = Model([numerical_input] + cat_inputs, lstm_out)
model.summary()
Here is the model summary:
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
cat1_input (InputLayer)         (None, 100, 1)       0
__________________________________________________________________________________________________
cat2_input (InputLayer)         (None, 100, 1)       0
__________________________________________________________________________________________________
cat3_input (InputLayer)         (None, 100, 1)       0
__________________________________________________________________________________________________
time_distributed_1 (TimeDistrib (None, 100, 1, 50)   50000       cat1_input[0][0]
__________________________________________________________________________________________________
time_distributed_2 (TimeDistrib (None, 100, 1, 10)   5000        cat2_input[0][0]
__________________________________________________________________________________________________
time_distributed_3 (TimeDistrib (None, 100, 1, 100)  10000       cat3_input[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 100, 1, 160)  0           time_distributed_1[0][0]
                                                                 time_distributed_2[0][0]
                                                                 time_distributed_3[0][0]
__________________________________________________________________________________________________
numeric_input (InputLayer)      (None, 100, 10)      0
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 100, 160)     0           concatenate_1[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 100, 170)     0           numeric_input[0][0]
                                                                 reshape_1[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM)                   (None, 64)           60160       concatenate_2[0][0]
==================================================================================================
Total params: 125,160
Trainable params: 125,160
Non-trainable params: 0
__________________________________________________________________________________________________
However, you could try another solution: use just one embedding layer for all the categorical features. It involves some preprocessing, though: you need to re-index all the categories so that they are distinct from each other. For example, the categories in the first categorical feature would be numbered from 1 to size_first_cat, the categories in the second categorical feature from size_first_cat + 1 to size_first_cat + size_second_cat, and so on. In this solution, however, all the categorical features would share the same embedding dimension, since we are using only one embedding layer.
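A minimal sketch of this shared-embedding variant, assuming an Embedding layer that accepts an extra input axis (tf.keras does, appending the embedding dimension as a trailing axis); the offsets and shared_embd_dim are my own choices, reusing the sizes defined above:

import numpy as np
from keras.layers import Input, Embedding, LSTM, Reshape, concatenate
from keras.models import Model

offsets = np.cumsum([0] + cat_size[:-1])  # [0, 1000, 1500]
total_cats = sum(cat_size)                # 1600 categories over all features
shared_embd_dim = 32                      # one dimension shared by all features

# assume X_cat[:, :, i] has been shifted by offsets[i] so the index ranges don't overlap

numerical_input = Input(shape=(n_steps, n_numerical_feats))
cat_input = Input(shape=(n_steps, num_cats))
embedded = Embedding(total_cats, shared_embd_dim)(cat_input)        # (batch, n_steps, num_cats, dim)
embedded = Reshape((n_steps, num_cats * shared_embd_dim))(embedded) # flatten the per-feature embeddings
merged = concatenate([numerical_input, embedded])
lstm_out = LSTM(64)(merged)
model = Model([numerical_input, cat_input], lstm_out)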
Update: Now that I think about it, you could also reshape the categorical features, either in the data preprocessing stage or in the model itself, to get rid of the TimeDistributed layers and the Reshape layer (and this may increase the training speed as well):
numerical_input = Input(shape=(n_steps, n_numerical_feats), name='numeric_input')

cat_inputs = []
for i in range(num_cats):
    cat_inputs.append(Input(shape=(n_steps,), name='cat' + str(i+1) + '_input'))

cat_embedded = []
for i in range(num_cats):
    embed = Embedding(cat_size[i], cat_embd_dim[i])(cat_inputs[i])
    cat_embedded.append(embed)

cat_merged = concatenate(cat_embedded)
merged = concatenate([numerical_input, cat_merged])
lstm_out = LSTM(64)(merged)

model = Model([numerical_input] + cat_inputs, lstm_out)
As for fitting the model, you need to feed each input layer its own corresponding numpy array, for example:

X_tr_numerical = X_train[:, :, :n_numerical_feats]

# extract the categorical features: you could use a for loop for this as well.
# note that we reshape the categorical features to make them consistent with the updated solution
X_tr_cat1 = X_train[:, :, cat1_idx].reshape(-1, n_steps)
X_tr_cat2 = X_train[:, :, cat2_idx].reshape(-1, n_steps)
X_tr_cat3 = X_train[:, :, cat3_idx].reshape(-1, n_steps)
# don't forget to compile the model ...
# fit the model
model.fit([X_tr_numerical, X_tr_cat1, X_tr_cat2, X_tr_cat3], y_train, ...)
# or you can use input layer names instead
model.fit({'numeric_input': X_tr_numerical,
           'cat1_input': X_tr_cat1,
           'cat2_input': X_tr_cat2,
           'cat3_input': X_tr_cat3}, y_train, ...)
There would be no difference if you wanted to use fit_generator() instead; your generator just needs to yield the same list (or dict) of input arrays along with the targets.
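For instance, a minimal sketch of such a generator (sample_generator, X_num_list, and X_cat_lists are hypothetical names; it yields one sample per batch so sequence lengths can vary between samples, which also requires declaring the time dimension as None in the Input shapes, e.g. Input(shape=(None, n_numerical_feats))):

import numpy as np

def sample_generator(X_num_list, X_cat_lists, y):
    # yields a single sample per batch, keyed by input layer name
    while True:
        for i in range(len(y)):
            inputs = {'numeric_input': X_num_list[i][np.newaxis]}   # add a batch axis
            for j in range(num_cats):
                inputs['cat' + str(j + 1) + '_input'] = X_cat_lists[j][i][np.newaxis]
            yield inputs, np.array([y[i]])

model.fit_generator(sample_generator(X_num_list, X_cat_lists, y_train),
                    steps_per_epoch=len(y_train))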
Answer 1 (score: 0)
Another solution I can think of is to combine the numerical features (after normalization) and the categorical features even before feeding them into the LSTM. During backpropagation the gradients will flow through both branches by default, but only the embedding layers have trainable weights on the input side, so only they will be updated there.