I have been a very happy user of LightGBM models, since I have large datasets with dozens of features and a million rows, many of the columns categorical.
I love the way LightGBM accepts a pandas dataframe with categorical features declared by a simple astype('category'), without any one-hot encoding.
I also have some float columns, which I am trying to convert into categorical bins, both to speed up convergence and to enforce boundaries on the decision points.
The problem is that binning the float columns with pd.cut
makes the fit method fail with ValueError: Circular reference detected.
There is a similar question here, and the traceback there indeed mentions the JSON encoder, but I have no DateTime columns, which is what the answer to that question points at. I suspect LightGBM may not support pd.cut categories, but I could not find anything about this in the documentation.
Reproducing the issue does not require a big dataset. Here is a toy example in which I build a dataset of 100 rows and 10 columns: 5 columns are integers, which I convert to categorical with astype('category'), and 5 columns are floats. Keeping the floats as floats works fine; converting one or more of the float columns into categoricals with pd.cut makes the fit function throw the error.
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
rows = 100
fcols = 5
ccols = 5
# Let's define some ascii readable names for convenience
fnames = ['Float_'+str(chr(97+n)) for n in range(fcols)]
cnames = ['Cat_'+str(chr(97+n)) for n in range(ccols)]
# The dataset is built by concatenation of the float and the int blocks
dff = pd.DataFrame(np.random.rand(rows,fcols),columns=fnames)
dfc = pd.DataFrame(np.random.randint(0,20,(rows,ccols)),columns=cnames)
df = pd.concat([dfc,dff],axis=1)
# Target column with random output
df['Target'] = (np.random.rand(rows)>0.5).astype(int)
# Conversion into categorical
df[cnames] = df[cnames].astype('category')
df['Float_a'] = pd.cut(x=df['Float_a'],bins=10)
# Dataset split
X = df.drop('Target',axis=1)
y = df['Target'].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
# Model instantiation
lgbmc = lgb.LGBMClassifier(objective='binary',
                           boosting_type='gbdt',
                           is_unbalance=True,
                           metric=['binary_logloss'])
lgbmc.fit(X_train,y_train)
Here is the error, which does not appear if there are no pd.cut columns.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-207-751795a98846> in <module>
4 metric = ['binary_logloss'])
5
----> 6 lgbmc.fit(X_train,y_train)
7
8 prob_pred = lgbmc.predict(X_test)
~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\sklearn.py in fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks)
740 verbose=verbose, feature_name=feature_name,
741 categorical_feature=categorical_feature,
--> 742 callbacks=callbacks)
743 return self
744
~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\sklearn.py in fit(self, X, y, sample_weight, init_score, group, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_group, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks)
540 verbose_eval=verbose, feature_name=feature_name,
541 categorical_feature=categorical_feature,
--> 542 callbacks=callbacks)
543
544 if evals_result:
~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\engine.py in train(params, train_set, num_boost_round, valid_sets, valid_names, fobj, feval, init_model, feature_name, categorical_feature, early_stopping_rounds, evals_result, verbose_eval, learning_rates, keep_training_booster, callbacks)
238 booster.best_score[dataset_name][eval_name] = score
239 if not keep_training_booster:
--> 240 booster.model_from_string(booster.model_to_string(), False).free_dataset()
241 return booster
242
~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\basic.py in model_to_string(self, num_iteration, start_iteration)
2064 ptr_string_buffer))
2065 ret = string_buffer.value.decode()
-> 2066 ret += _dump_pandas_categorical(self.pandas_categorical)
2067 return ret
2068
~\AppData\Local\conda\conda\envs\py36\lib\site-packages\lightgbm\basic.py in _dump_pandas_categorical(pandas_categorical, file_name)
299 pandas_str = ('\npandas_categorical:'
300 + json.dumps(pandas_categorical, default=json_default_with_numpy)
--> 301 + '\n')
302 if file_name is not None:
303 with open(file_name, 'a') as f:
~\AppData\Local\conda\conda\envs\py36\lib\json\__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
~\AppData\Local\conda\conda\envs\py36\lib\json\encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~\AppData\Local\conda\conda\envs\py36\lib\json\encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
ValueError: Circular reference detected
Answer 0 (score: 1)
As in here, your problem is related to JSON serialization. The serializer does not like the labels of the categories created by pd.cut (labels that look like "(0.109, 0.208]").
You can override the generated labels by using the labels optional parameter of the cut function (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html).
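To see what the serializer is choking on, here is a quick check (a minimal sketch, not part of the original answer): without the labels argument, pd.cut produces categories that are pd.Interval objects, which the standard json encoder cannot serialize as-is.
import numpy as np
import pandas as pd
# Without labels, the bin categories are pd.Interval objects such as (0.109, 0.208]
s = pd.cut(pd.Series(np.random.rand(100)), bins=10)
print(s.cat.categories)  # IntervalIndex of pd.Interval objects, not plain strings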
In your example, you can replace the line:
df['Float_a'] = pd.cut(x=df['Float_a'],bins=10)
with the following lines:
bins = 10
df['Float_a'] = pd.cut(x=df['Float_a'], bins=bins, labels=[f'bin_{i}' for i in range(bins)])
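This way each bin gets a plain string name ('bin_0', 'bin_1', ...), which the JSON encoder can handle. If you would rather keep the interval boundaries visible in the category names, an alternative sketch (my addition, not part of the original answer; it relies on rename_categories accepting a callable, available since pandas 0.23) is to bin first and then cast the generated Interval categories to strings:
# Bin as before, then rename the Interval categories to their string form,
# so the labels become plain strings that should serialize cleanly
df['Float_a'] = pd.cut(x=df['Float_a'], bins=10)
df['Float_a'] = df['Float_a'].cat.rename_categories(str)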