Clustered data outputs irregular plots

Time: 2011-10-16 11:23:55

Tags: matlab sorting random cluster-analysis normalization

OK, I will try to explain what I am trying to achieve and how I went about it, and then explain why I tried this approach.

I have the raw KDD Cup 1999 data; it has 494k rows and 42 columns.

My goal is to try to cluster this data in an unsupervised way. From a previous question:

clustering and matlab

I received this feedback:

  

For starters, you need to normalize the attributes to the same scale: when computing the Euclidean distance as part of step 3 in your method, features with values such as 239 and 486 will dominate over features with small values like 0.05, thus ruining the results.

     

Another point to keep in mind is that too many attributes can be a bad thing (the curse of dimensionality). Thus you should look into feature selection or dimensionality reduction techniques.
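A small illustration of the first point (in NumPy rather than the question's MATLAB; the values 239/486/0.05 are taken from the feedback, everything else is made up for demonstration):

```python
import numpy as np

# Two points whose first feature lives in the hundreds while the
# second is around 0.05, as described in the feedback above.
a = np.array([239.0, 0.05])
b = np.array([486.0, 0.07])

# The raw Euclidean distance is driven almost entirely by the first feature.
d_raw = np.linalg.norm(a - b)

# After z-scoring each column, both features contribute comparably.
X = np.vstack([a, b])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
d_scaled = np.linalg.norm(Xz[0] - Xz[1])

print(d_raw)     # ~247.0, dominated by the large-scale feature
print(d_scaled)  # sqrt(8) ~ 2.83, both features weigh equally
```

This is exactly why unnormalized attributes distort k-means, which is built on this distance.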

So the first thing I did was address feature selection, following this paper: http://narensportal.com/papers/datamining-classification-algorithm.aspx#_sec-2-1

After selecting the necessary features, the data looks like this:

(image: the selected features)

So for clustering I dropped the discrete values, keeping 3 columns of numeric data, and then went on to remove duplicate rows (see junk, index and unique on a matrix (how to keep matrix format)), which reduced the 3-column data from 494k rows to 67k. That was done like this:

[M,ind] = unique(data, 'rows', 'first');  %# unique rows, keeping the first occurrence
[~,ind] = sort(ind);                      %# permutation that restores original row order
M = M(ind,:);
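The same order-preserving deduplication can be sketched in NumPy (illustrative, not the original MATLAB; the sample matrix is made up):

```python
import numpy as np

# Toy stand-in for the 3-column data, with duplicate rows.
data = np.array([[1, 2], [3, 4], [1, 2], [5, 6], [3, 4]])

# np.unique with return_index gives the first occurrence of each unique
# row; sorting those indices restores the original row order, mirroring
# the unique(...,'rows','first') / sort(ind) pair above.
_, ind = np.unique(data, axis=0, return_index=True)
M = data[np.sort(ind)]

print(M)  # [[1 2] [3 4] [5 6]]
```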

Then I used a random permutation to reduce the dataset from 67k rows to 1000, as follows:

m = 1000;
n = 3;

%# pick random rows
indX = randperm( size(M,1) );
indX = indX(1:m);

%# pick random columns
indY = randperm( size(M,2) );
indY = indY(1:n);

%# filter data
data = M(indX,indY);
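An equivalent NumPy sketch of this subsampling step (illustrative only; the random matrix stands in for the deduplicated data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(67000, 3))  # stand-in for the deduplicated matrix
m, n = 1000, 3

# Random permutations truncated to the first m / n entries give a sample
# of rows and columns without replacement, like randperm above.
indX = rng.permutation(M.shape[0])[:m]  # random rows
indY = rng.permutation(M.shape[1])[:n]  # random columns
data = M[np.ix_(indX, indY)]            # cross-index rows x columns

print(data.shape)  # (1000, 3)
```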

So now I have a file containing the 3 features I selected; I removed the duplicate records and used a random permutation to further reduce the dataset. My last goal was to normalize this data, which I did like this:

normalized_data = data/norm(data);

Then I used the following K-means script:

%% generate clusters
K = 4;

%% cluster
opts = statset('MaxIter', 500, 'Display', 'iter');
[clustIDX, clusters, interClustSum, Dist] = kmeans(data, K, 'options',opts, ...
'distance','sqEuclidean', 'EmptyAction','singleton', 'replicates',3);

%% plot data+clusters
figure, hold on
scatter3(data(:,1),data(:,2),data(:,3), 50, clustIDX, 'filled')
scatter3(clusters(:,1),clusters(:,2),clusters(:,3), 200, (1:K)', 'filled')
hold off, xlabel('x'), ylabel('y'), zlabel('z')
%% plot clusters quality
figure
[silh,h] = silhouette(data, clustIDX);
avrgScore = mean(silh);

%% Assign data to clusters
% calculate distance (squared) of all instances to each cluster centroid
numObservarations = size(data,1);    % number of instances
D = zeros(numObservarations, K);     % init distances
for k=1:K
%d = sum((x-y).^2).^0.5
D(:,k) = sum( ((data - repmat(clusters(k,:),numObservarations,1)).^2), 2);
end

% find for all instances the cluster closest to it
[minDists, clusterIndices] = min(D, [], 2);
% compare it with what you expect it to be
sum(clusterIndices == clustIDX)
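The distance loop above can be sketched in NumPy, where broadcasting replaces the `repmat` (illustrative sketch with made-up data and centroids, not the original MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 3))   # stand-in for the normalized data
clusters = rng.normal(size=(4, 3))  # stand-in for the K=4 centroids

# (1000,1,3) - (1,4,3) broadcasts to (1000,4,3); summing over the last
# axis gives the squared Euclidean distance of every point to each centroid.
D = ((data[:, None, :] - clusters[None, :, :]) ** 2).sum(axis=2)

cluster_indices = D.argmin(axis=1)  # closest centroid per point
min_dists = D.min(axis=1)

print(D.shape)  # (1000, 4)
```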

But my results still look like those in the original question I asked: clustering and matlab

Here is the data when plotted:

(images: the plotted clusters)

Can anyone help with this problem? Is the approach I am using wrong, or is something missing?

2 answers:

Answer 0 (score: 2)

Thanks to the help from cyborg and Amro, I realized that instead of building my own preprocessing I should keep the dimensionality as it is, and I finally managed to get some clustered data!

Output!

(image: the resulting clusters)

Of course I still have some outliers, but if I could get rid of them and plot the chart from -0.2 to 0.2, I believe it would look even better. Compared with the original attempt, it seems I am getting there!

    %% load data
    %# read the list of features
    fid = fopen('kddcup.names','rt');
    C = textscan(fid, '%s %s', 'Delimiter',':', 'HeaderLines',1);
    fclose(fid);

    %# determine type of features
    C{2} = regexprep(C{2}, '.$','');              %# remove "." at the end
    attribNom = [ismember(C{2},'symbolic');true]; %# nominal features

    %# build format string used to read/parse the actual data
    frmt = cell(1,numel(C{1}));
    frmt( ismember(C{2},'continuous') ) = {'%f'}; %# numeric features: read as number
    frmt( ismember(C{2},'symbolic') ) = {'%s'};   %# nominal features: read as string
    frmt = [frmt{:}];
    frmt = [frmt '%s'];                           %# add the class attribute

    %# read dataset
    fid = fopen('kddcup.data_10_percent_corrected','rt');
    C = textscan(fid, frmt, 'Delimiter',',');
    fclose(fid);

    %# convert nominal attributes to numeric
    ind = find(attribNom);
    G = cell(numel(ind),1);
    for i=1:numel(ind)
        [C{ind(i)},G{i}] = grp2idx( C{ind(i)} );
    end

    %# all numeric dataset
    fulldata = cell2mat(C);
    %% dimensionality reduction
    columns = 42;
    [U,S,V] = svds(fulldata,columns);
    %% randomly select dataset
    rows = 5000;
    %# pick random rows
    indX = randperm( size(fulldata,1) );
    indX = indX(1:rows);
    %# pick random columns
    indY = randperm( size(fulldata,2) );
    indY = indY(1:columns);
    %# filter data
    data = U(indX,indY);
    %% apply normalization method to every cell
    data = data./repmat(sqrt(sum(data.^2)),size(data,1),1);
    %% generate sample data
    K = 4;
    numObservarations = 5000;
    dimensions = 42;
    %% cluster
    opts = statset('MaxIter', 500, 'Display', 'iter');
    [clustIDX, clusters, interClustSum, Dist] = kmeans(data, K, 'options',opts, ...
    'distance','sqEuclidean', 'EmptyAction','singleton', 'replicates',3);
    %% plot data+clusters
    figure, hold on
    scatter3(data(:,1),data(:,2),data(:,3), 5, clustIDX, 'filled')
    scatter3(clusters(:,1),clusters(:,2),clusters(:,3), 100, (1:K)', 'filled')
    hold off, xlabel('x'), ylabel('y'), zlabel('z')
    %% plot clusters quality
    figure
    [silh,h] = silhouette(data, clustIDX);
    avrgScore = mean(silh);
    %% Assign data to clusters
    % calculate distance (squared) of all instances to each cluster centroid
    D = zeros(numObservarations, K);     % init distances
    for k=1:K
    %d = sum((x-y).^2).^0.5
    D(:,k) = sum( ((data - repmat(clusters(k,:),numObservarations,1)).^2), 2);
    end
    % find for all instances the cluster closest to it
    [minDists, clusterIndices] = min(D, [], 2);
    % compare it with what you expect it to be
    sum(clusterIndices == clustIDX)

Answer 1 (score: 1)

You have a problem with your normalization: data/norm(data);. What you probably want to do instead is use data_normed = data./repmat(sqrt(sum(data.^2)),size(data,1),1). This computes the norm of each column of data, replicates that answer to the original size of data, and then divides data by its column norms.
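The suggested column-wise normalization can be sketched in NumPy (illustrative, not the answer's original MATLAB; broadcasting plays the role of the `repmat`):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(100, 3))  # stand-in for the data matrix

col_norms = np.sqrt((data ** 2).sum(axis=0))  # one L2 norm per column
data_normed = data / col_norms                # divide each column by its norm

# Every column of the result now has unit L2 norm.
print(np.linalg.norm(data_normed, axis=0))  # ~[1. 1. 1.]
```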

Notes:

A better way to reduce the number of feature dimensions is [U,S,V]=svd(data); U=U(:,1:m), or for sparse data [U,S,V]=svds(data,m). It may lose some information, but it is much better than randomly selecting features.
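That SVD-based reduction can be sketched in NumPy (illustrative; the data and the target dimensionality m=3 are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(500, 42))  # stand-in for the 42-column dataset
m = 3                              # target dimensionality (assumed)

# Thin SVD: U has orthonormal columns; keeping the first m of them
# mirrors the answer's U = U(:,1:m).
U, S, Vt = np.linalg.svd(data, full_matrices=False)
reduced = U[:, :m]

print(reduced.shape)  # (500, 3)
```

Unlike picking random columns, this keeps the directions of greatest variance in the data.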