Mapping entity embeddings back to the original categorical values

Date: 2019-03-25 17:25:33

Tags: python machine-learning keras deep-learning nlp

I am using a Keras Embedding layer to create the entity embeddings popularized by the 3rd place entry in the Kaggle Rossmann Store Sales competition, but I am not sure how to map the embeddings back to the actual categorical values. Let's look at a very basic example:

In the code below, I create a dataset with two numeric features and one categorical feature.

import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from keras.models import Model
from keras.layers import Input, Dense, Concatenate, Reshape, Dropout
from keras.layers import Embedding

# create some fake data
data, labels = make_classification(n_classes=2, class_sep=2, n_informative=2,
                                   n_redundant=0, flip_y=0, n_features=2,
                                   n_clusters_per_class=1, n_samples=100,
                                   random_state=10)

cat_col = np.random.choice(a=[0,1,2,3,4], size=100)

data = pd.DataFrame(data)
data[2] = cat_col
embed_cols = [2]

# converting data to list of lists, as the network expects to
# see the data in this format
def preproc(df):
    data_list = []

    # convert cols to list of lists
    for c in embed_cols:
        vals = np.unique(df[c])
        # map each raw categorical value to an integer index; here the values
        # are already 0..4, so the mapping is the identity, but in general the
        # Embedding layer expects indices in the range [0, num_categories)
        val_map = {}
        for i in range(len(vals)):
            val_map[vals[i]] = vals[i]
        data_list.append(df[c].map(val_map).values)

    # the rest of the columns
    other_cols = [c for c in df.columns if (not c in embed_cols)]
    data_list.append(df[other_cols].values)
    return data_list

data = preproc(data)
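
As a quick sanity check (an illustrative snippet added here, not part of the original setup), preproc returns a list of two arrays, one for each model input defined below:

# data is now a list of two arrays, one per model input
print(len(data))        # 2
print(data[0].shape)    # (100,)  -> categorical column, fed to the Embedding input
print(data[1].shape)    # (100, 2) -> numeric columns, fed to the Dense input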

The categorical column has 5 unique values:

print("Unique Values: ", np.unique(data[0]))
Out[01]: array([0, 1, 2, 3, 4])

It is then fed into a Keras model with an embedding layer:

inputs = []
embeddings = []

input_cat_col = Input(shape=(1,))
embedding = Embedding(5, 3, input_length=1, name='cat_col')(input_cat_col)
embedding = Reshape(target_shape=(3,))(embedding)
inputs.append(input_cat_col)
embeddings.append(embedding)


# add the remaining two numeric columns from the 'data array' to the network
input_numeric = Input(shape=(2,))
embedding_numeric = Dense(8)(input_numeric)
inputs.append(input_numeric)
embeddings.append(embedding_numeric)

x = Concatenate()(embeddings)
output = Dense(1, activation='sigmoid')(x)

model = Model(inputs, output)
model.compile(loss='binary_crossentropy', optimizer='adam')

history = model.fit(data, labels,
                    epochs=10,
                    batch_size=32,
                    verbose=1,
                    validation_split=0.2)

I can get the actual embeddings by retrieving the weights of the embedding layer:

embeddings = model.get_layer('cat_col').get_weights()[0]
print("Unique Values: ", np.unique(data[0]))
print("3 Dimensional Embedding: \n", embeddings)

Unique Values:  [0 1 2 3 4]
3 Dimensional Embedding: 
 [[ 0.02749949  0.04238378  0.0080842 ]
 [-0.00083209  0.01848664  0.0130044 ]
 [-0.02784528 -0.00713446 -0.01167112]
 [ 0.00265562  0.03886909  0.0138318 ]
 [-0.01526615  0.01284053 -0.0403452 ]]

However, I am not sure how to map these back to the categorical values. Is it safe to assume that the weights are ordered by the category's integer index, e.g. 0 = [ 0.02749949  0.04238378  0.0080842 ]?

1 Answer:

Answer 0 (score: 1):

Yes, the weights of the embedding layer correspond to the words indexed by their integer values, i.e. row 0 of the embedding layer's weight array corresponds to the word with index 0, and so on. You can think of the embedding layer as a lookup table in which the nth row corresponds to the nth word (except that the embedding layer is a trainable layer, not just a static lookup table).

inputs = Input(shape=(1,))
embedding = Embedding(5, 3, input_length=1, name='cat_col')(inputs)
model = Model(inputs, embedding)

# feed each of the 5 category indices through the (untrained) embedding layer
x = np.array([0, 1, 2, 3, 4]).reshape(5, 1)
labels = np.zeros((5, 1, 3))  # unused here; the model is never trained

print(model.predict(x))
print(model.get_layer('cat_col').get_weights()[0])

# the layer's output for index i is exactly row i of its weight matrix
assert np.array_equal(model.predict(x).reshape(-1), model.get_layer('cat_col').get_weights()[0].reshape(-1))

model.predict(x):

[[[-0.01862894,  0.0021644 ,  0.04706952]],
 [[-0.03891206,  0.01743075, -0.03666048]],
 [[-0.01799501,  0.01427511, -0.00056203]],
 [[ 0.03703432, -0.01952349,  0.04562894]],
 [[-0.02806044, -0.04623617, -0.01702447]]]

model.get_layer('cat_col').get_weights()[0]:

[[-0.01862894,  0.0021644 ,  0.04706952],
 [-0.03891206,  0.01743075, -0.03666048],
 [-0.01799501,  0.01427511, -0.00056203],
 [ 0.03703432, -0.01952349,  0.04562894],
 [-0.02806044, -0.04623617, -0.01702447]]
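
Given that ordering, one way to map each original categorical value back to its learned vector is to pair the sorted unique category values with the rows of the weight matrix. Below is a minimal sketch using the question's variables (the name value_to_embedding is just illustrative; if the raw values were not already 0..n-1, you would reuse the same value-to-index mapping applied in preproc):

# the (5, 3) embedding matrix, one row per category index
weights = model.get_layer('cat_col').get_weights()[0]

# the categories were encoded as the integers 0..4, so pairing the sorted
# unique values with the rows of the matrix recovers the mapping
unique_vals = np.unique(cat_col)                    # array([0, 1, 2, 3, 4])
value_to_embedding = {v: weights[i] for i, v in enumerate(unique_vals)}

print(value_to_embedding[0])   # 3-dimensional embedding vector for category 0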