I am using this clustering algorithm to cluster lat and lon points, based on the pre-written code given at http://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html.
The code is below; my file contains over 4000 lat and lon points. However, I want to adjust this code so that it only defines a cluster as points that are within 0.000020 of each other, because I want my clusters to be at almost street level.
Currently I am getting 11 clusters, whereas in theory I should be getting at least 100. I have tried tweaking and changing different numbers, but to no avail.
print(__doc__)
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler
##############################################################################
# Generate sample data
input = np.genfromtxt(open("dataset_import_noaddress.csv","rb"),delimiter=",", skip_header=1)
coordinates = np.delete(input, [0,1], 1)
X, labels_true = make_blobs(n_samples=4000, centers=coordinates, cluster_std=0.0000005,
                            random_state=0)
X = StandardScaler().fit_transform(X)
##############################################################################
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
% metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels))
##############################################################################
# Plot result
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = 'k'

    class_member_mask = (labels == k)

    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
Answer 0 (score: 2)
You seem to be changing only the data generation:
X, labels_true = make_blobs(n_samples=4000, centers=coordinates, cluster_std=0.0000005,
                            random_state=0)
rather than the clustering algorithm:
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
            ^^^^^^^ almost your complete data set?
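As an illustration of changing the algorithm rather than the data generation, a minimal sketch could fit DBSCAN directly on the real coordinates with a much smaller eps, skipping make_blobs and StandardScaler entirely (the eps value of 0.000020 from the question is only a starting point and would still need tuning):
# Fit DBSCAN on the actual lat/lon points instead of synthetic blobs.
# eps is in the same (degree) units as the raw coordinates here.
db = DBSCAN(eps=0.000020, min_samples=10).fit(coordinates)
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)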
For geographic data, make sure to use haversine (great-circle) distance rather than Euclidean distance. The Earth is more like a sphere than a flat Euclidean world.
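A sketch of that, assuming the columns of coordinates are [latitude, longitude]: scikit-learn's haversine metric expects the input in radians, and eps must then also be an angular distance, i.e. a distance in metres divided by the Earth's radius of roughly 6371000 m:
import numpy as np
from sklearn.cluster import DBSCAN

# The haversine metric expects [lat, lon] in radians.
coords_rad = np.radians(coordinates)

# Angular eps: e.g. 20 metres on the surface / Earth's mean radius (~6371 km).
eps_rad = 20.0 / 6371000.0

db = DBSCAN(eps=eps_rad, min_samples=10, metric='haversine',
            algorithm='ball_tree').fit(coords_rad)
labels = db.labels_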