XGBoost feature importance: how to get the original variable names back after encoding

Date: 2018-03-14 22:18:45

Tags: python encoding xgboost

I am following a DataCamp course on classification with XGBoost. The data is processed as follows:

import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split

X, y = df.iloc[:,:-1], df.iloc[:,-1]

# Create a boolean mask for categorical columns: check if df.dtypes == object
categorical_mask = (X.dtypes == object)

# Get list of categorical column names
categorical_columns = X.columns[categorical_mask].tolist()

# Create LabelEncoder object: le
le = LabelEncoder()

# Apply LabelEncoder to categorical columns
X[categorical_columns] = X[categorical_columns].apply(lambda x: le.fit_transform(x))

# Create OneHotEncoder: ohe
ohe = OneHotEncoder(categorical_features=categorical_mask, sparse=False)

# Apply OneHotEncoder to categorical columns - output is no longer a dataframe: X_encoded is a NumPy array
X_encoded = ohe.fit_transform(X)

testy = pd.DataFrame(X_encoded)

X_train, X_test, y_train, y_test= train_test_split(testy, y, test_size=0.2, random_state=123)

DM_train = xgb.DMatrix(X_train, label=y_train)
DM_test = xgb.DMatrix(X_test, label=y_test)
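
For reference, the column names are lost at ohe.fit_transform, which returns a plain NumPy array. A minimal sketch of an alternative, assuming it is acceptable to deviate from the course code: pd.get_dummies one-hot encodes the raw categorical columns directly on the DataFrame and produces descriptive column names of the form column_value.

# Alternative sketch (not the course's approach): one-hot encode the raw
# categorical columns with pandas, which keeps everything in a DataFrame
# and names each new column "<original_column>_<category_value>".
X_raw = df.iloc[:, :-1]                        # the un-encoded features
X_dummies = pd.get_dummies(X_raw, columns=categorical_columns)
print(X_dummies.columns.tolist())              # readable names survive the encoding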

I tune the hyperparameters with a cross-validated grid search and fit the model on X_train and y_train.

I then fit the model with the tuned parameters and create a feature importance plot:

model.fit(X_train,y_train)

xgb.plot_importance(model, importance_type = 'gain')

This is the output:

[Feature Importance Plot]
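
The labels come out as f0, f1, ... because the booster never saw real column names. A rough sketch of how names can be attached when building the DMatrix, assuming a feature_names list is on hand whose order matches the encoded columns (the objective and number of boosting rounds below are placeholders):

# Sketch: xgb.DMatrix accepts a feature_names list, and plot_importance
# then labels the bars with those names instead of f0, f1, ...
DM_train_named = xgb.DMatrix(X_train.values, label=y_train, feature_names=feature_names)
booster = xgb.train({'objective': 'binary:logistic'}, DM_train_named, num_boost_round=50)
xgb.plot_importance(booster, importance_type='gain')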

How can I map these features back to the original data? I am confused because I used both LabelEncoder() and OneHotEncoder().

Any help is much appreciated.

1 Answer:

Answer 0 (score: 1)

I solved the problem by using DictVectorizer instead:

X, y = df.iloc[:,:-1], df.iloc[:,-1]

# Import DictVectorizer
from sklearn.feature_extraction import DictVectorizer

# Convert X into a list of per-row dictionaries using .to_dict("records"): df_dict
df_dict = X.to_dict("records")
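
# (Illustration with hypothetical columns "color" and "price": df_dict[0]
#  would look like {'color': 'red', 'price': 10.0} - one dictionary per row,
#  keyed by column name.)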

# Create the DictVectorizer object: dv
dv = DictVectorizer(sparse=False)

# Apply dv to df_dict: X_encoded
X_encoded = dv.fit_transform(df_dict)

X_encoded = pd.DataFrame(X_encoded)

X_train, X_test, y_train, y_test= train_test_split(X_encoded, y, test_size=0.2, random_state=123)
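
One way to keep readable names around (a minimal sketch, assuming the split frames are pandas DataFrames): DictVectorizer stores the output column names, in column order, in dv.feature_names_, so they can be attached to the split DataFrames before fitting.

# Attach the DictVectorizer's column names so the model sees real feature
# names instead of 0, 1, 2, ...
X_train.columns = dv.feature_names_
X_test.columns = dv.feature_names_

With string column names on X_train, recent XGBoost versions pick them up through the scikit-learn wrapper, so plot_importance labels the bars directly; otherwise the same list can be passed explicitly via xgb.DMatrix(..., feature_names=dv.feature_names_).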

Now fit the model and plot the feature importances:

[Feature Importance Plot]

Finally, you have to look up the names:

# Use pprint to make the vocabulary easier to read
import pprint
pprint.pprint(dv.vocabulary_)
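
A rough sketch of that lookup, assuming the fitted estimator is called model as above and the scikit-learn wrapper exposes get_booster(): dv.vocabulary_ maps each feature name to its column index, so inverting it translates the generic f<index> labels back into readable names.

# Invert the vocabulary: column index -> feature name
index_to_name = {idx: name for name, idx in dv.vocabulary_.items()}

# get_score() returns keys like 'f12' when the booster has no feature names;
# strip the leading 'f' and look the index up in the inverted vocabulary.
raw_importance = model.get_booster().get_score(importance_type='gain')
readable_importance = {index_to_name[int(key[1:])]: score
                       for key, score in raw_importance.items()}
pprint.pprint(readable_importance)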

If anyone knows how to use the vocabulary dictionary to look up the feature names and put them on the plot, I would greatly appreciate your input.