I am trying to cluster some text documents using scikit-learn. I'm trying out both DBSCAN and MeanShift, and want to determine which hyperparameters (e.g. bandwidth for MeanShift and eps for DBSCAN) work best for the kind of data I'm using (news articles).
I have some testing data which consists of pre-labeled clusters. I have been trying to use scikit-learn's GridSearchCV, but I don't understand how (or whether) it can be applied in this case, since it needs the test data to be split, whereas I want to run the evaluation on the entire dataset and compare the results to the pre-labeled data.
I have been trying to specify a scoring function that compares the estimator's labels to the true labels, but of course it doesn't work, because only a sample of the data has been clustered, not all of it.
What would be an appropriate approach here?
Answer 0 (score: 1)
Have you considered implementing the search yourself?
Implementing a for loop is not particularly difficult. Even if you want to optimize two parameters, it is still fairly easy, along the lines of the sketch below.
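For example, since there is pre-labeled test data available here, a plain loop that clusters the full dataset for each candidate value and scores the result against the known labels already does what GridSearchCV cannot. A minimal sketch for MeanShift's bandwidth (assuming X is your feature matrix and y_true holds the pre-assigned cluster labels; both names are placeholders):

import numpy as np
from sklearn.cluster import MeanShift
from sklearn.metrics import adjusted_rand_score

# Candidate bandwidths; the range here is arbitrary and should be
# adapted to the scale of your feature space.
bandwidths = np.linspace(0.1, 2.0, 20)

best_score, best_bandwidth = -1.0, None
for bw in bandwidths:
    # Cluster the entire dataset with this bandwidth
    labels = MeanShift(bandwidth=bw).fit_predict(X)
    # Compare to the pre-labeled clusters; ARI is permutation-invariant,
    # so the cluster ids do not need to match the label ids
    score = adjusted_rand_score(y_true, labels)
    if score > best_score:
        best_score, best_bandwidth = score, bw

print(f"best bandwidth: {best_bandwidth} (ARI = {best_score:.3f})")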
For both DBSCAN and MeanShift, I advise understanding your similarity measure first. It makes more sense to choose the parameters based on an understanding of that measure, rather than through parameter optimization to match some labels (which carries a high risk of overfitting).
In other words, at which distance are two articles supposed to be clustered together?
If this distance varies too much from one data point to another, these algorithms will fail badly, and you may need to find a normalized distance function so that the actual similarity values become meaningful again. TF-IDF is the standard for text, but mostly in a retrieval context; it may work much worse in a clustering context.
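One quick way to answer that distance question is to look at the actual distribution of pairwise distances before picking eps or bandwidth. A small sketch (assuming docs is a list of the raw article texts; the name is a placeholder):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Vectorize the articles with TF-IDF
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(docs)

# Pairwise cosine distances between all documents (an n x n dense matrix,
# so only practical for corpora of moderate size)
dist = cosine_distances(X_tfidf)

# Inspect the off-diagonal distances to see whether a natural
# "same cluster" distance scale exists at all
off_diag = dist[np.triu_indices_from(dist, k=1)]
print("distance percentiles:", np.percentile(off_diag, [5, 25, 50, 75, 95]))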
Also beware that MeanShift (similarly to k-means) needs to recompute coordinates; on text data, this may yield undesirable results, where the updated coordinates actually become worse rather than better.
Answer 1 (score: 0)
The following function for DBSCAN might help. I've written it to iterate over the hyperparameters eps and min_samples, and included optional arguments for the minimum and maximum number of clusters. Since DBSCAN is unsupervised, I have not included an evaluation parameter.
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN


def dbscan_grid_search(X_data, lst, clst_count,
                       eps_space = (0.5,), min_samples_space = (5,),
                       min_clust = 0, max_clust = 10):
    """
    Performs a hyperparameter grid search for DBSCAN.

    Parameters:
    * X_data            = data used to fit the DBSCAN instance
    * lst               = a list to store the results of the grid search
    * clst_count        = a list to store the per-cluster point counts
                          (a Counter) of each qualifying result
    * eps_space         = the range of values to try for the eps parameter
    * min_samples_space = the range of values to try for the min_samples parameter
    * min_clust         = the minimum number of clusters required after a search
                          iteration for the result to be appended to lst
    * max_clust         = the maximum number of clusters allowed after a search
                          iteration for the result to be appended to lst

    Example:

    # Loading libraries
    from sklearn import datasets
    from sklearn.preprocessing import StandardScaler
    import numpy as np

    # Loading the iris dataset
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    # Scaling the X data
    dbscan_scaler = StandardScaler()
    dbscan_X_scaled = dbscan_scaler.fit_transform(X)

    # Setting empty lists in the global environment
    dbscan_clusters = []
    cluster_count = []

    # Running the grid search
    dbscan_grid_search(X_data = dbscan_X_scaled,
                       lst = dbscan_clusters,
                       clst_count = cluster_count,
                       eps_space = np.arange(0.1, 5, 0.1),
                       min_samples_space = np.arange(1, 50, 1),
                       min_clust = 3,
                       max_clust = 6)
    """
    # Starting a tally of total iterations
    n_iterations = 0

    # Looping over each combination of hyperparameters
    for eps_val in eps_space:
        for samples_val in min_samples_space:

            # Fitting DBSCAN and predicting cluster labels for the whole dataset
            dbscan_grid = DBSCAN(eps = eps_val,
                                 min_samples = samples_val)
            clusters = dbscan_grid.fit_predict(X = X_data)

            # Counting the number of points in each cluster
            cluster_sizes = Counter(clusters)

            # Number of clusters found, excluding the noise label (-1)
            n_clusters = len(set(clusters)) - (1 if -1 in clusters else 0)

            # Increasing the iteration tally with each run of the loop
            n_iterations += 1

            # Appending to the lists whenever the cluster-count criteria are met
            if min_clust <= n_clusters <= max_clust:
                lst.append([eps_val, samples_val, n_clusters])
                clst_count.append(cluster_sizes)

    # Printing grid search summary information
    print(f"Search complete.\nYour list is now of length {len(lst)}.")
    print(f"Hyperparameter combinations checked: {n_iterations}.\n")