Linear discriminant analysis (error: index 1 is out of bounds)

Time: 2019-03-11 04:41:14

Tags: python-3.x scikit-learn dimensionality-reduction linear-discriminant

I have a dataset. The first 10 numbers in each row are my features (one, two, ..., ten) and the last column is my target (there are only two targets, MID and HIGH). The data is saved in txt format (data.txt), for example:

200000,400000,5000000,100000,5000000,50000,50000,300000,3333,1333,MID
200000,100000,500000,100000,5000000,5000,50000,300000,2000,1333,MID
100000,400000,5000000,100000,5000000,5000,50000,300000,2000,3333,MID
400000,200000,50000000,100000,5000000,5000,50000,300000,3333,3333,MID
200000,200000,5000000,100000,5000000,5000,50000,300000,3333,1333,HIGH
200000,100000,500000,10000000,5000000,50000,50000,300000,3333,3333,HIGH
100000,200000,500000,100000,5000000,50000,50000,300000,3333,666,HIGH
200000,100000,500000,1000000,5000000,50000,50000,300000,3333,666,HIGH
200000,100000,5000000,1000000,5000000,50000,5000,300000,3333,1333,HIGH

I have tried to implement an LDA analysis based on the available tutorials. I also used StandardScaler for normalization, because columns nine and ten are in different units than the first eight columns. Here is what I tried:

import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import math
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('data.txt', header=None)
df.columns=['one','two','three','four','five','six','seven','eight','nine','ten','class']
X = df.ix[:,0:10].values
y = df.ix[:,10].values
X_std = StandardScaler().fit_transform(X)

lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit_transform(X_std,y)

with plt.style.context('seaborn-whitegrid'):
    plt.figure(figsize=(8, 6))
    for lab, col in zip(('MID', 'HIGH'),
                        ('blue', 'red')):
        plt.scatter(X_r2[y==lab, 0],
                    X_r2[y==lab, 1],
                    label=lab,s=100,
                    c=col)
    plt.xlabel('LDA 1')
    plt.ylabel('LDA 2')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.savefig('Results.png', format='png', dpi=1200)
    plt.show()

I get this error:

line 32, in <module>
    X_r2[y==lab, 1],
IndexError: index 1 is out of bounds for axis 1 with size 1

Does anyone know how I can fix this? Thanks in advance for your help.

1 Answer:

Answer 0 (score: 2):

When the target variable has only two unique values, LDA produces only one component even if you ask for n_components=2, because LDA can find at most min(n_classes - 1, n_features) discriminant directions; with two classes that maximum is 1.

From the documentation:

n_components : int, optional
    Number of components for dimensionality reduction.
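
For the original two-class data this means LDA can return only a single column, so one alternative to adding a third class is to request n_components=1 and plot that single axis. A minimal sketch, assuming the same data.txt shown in the question:

import pandas as pd
from matplotlib import pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

# Load and standardize the original two-class data
df = pd.read_csv('data.txt', header=None)
df.columns = ['one','two','three','four','five','six','seven','eight','nine','ten','class']
X = df.iloc[:, 0:10].values
y = df.iloc[:, 10].values
X_std = StandardScaler().fit_transform(X)

# With two classes LDA can produce at most one discriminant axis
lda = LinearDiscriminantAnalysis(n_components=1)
X_r1 = lda.fit_transform(X_std, y)
print(X_r1.shape)  # (n_samples, 1)

# Plot the single axis, one horizontal strip per class
plt.figure(figsize=(8, 4))
for lab, col in zip(('MID', 'HIGH'), ('blue', 'red')):
    plt.scatter(X_r1[y == lab, 0],
                [0] * (y == lab).sum(),  # dummy vertical coordinate
                label=lab, s=100, c=col)
plt.xlabel('LDA 1')
plt.yticks([])
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()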

So, if you add a row like the following to the dataset,

200000,400000,5000000,100000,5000000,50000,50000,300000,3333,1333,LOW

and update the code for the additional class in y:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from matplotlib import pyplot as plt

# Read the same data.txt, which now also contains the LOW row
df = pd.read_csv('data.txt', header=None)
df.columns=['one','two','three','four','five','six','seven','eight','nine','ten','class']
X = df.iloc[:,0:10].values
y = df.iloc[:,10].values
X_std = StandardScaler().fit_transform(X)

lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit_transform(X_std,y)

with plt.style.context('seaborn-whitegrid'):
    plt.figure(figsize=(8, 6))
    for lab, col in zip(('MID', 'HIGH','LOW'),
                        ('blue', 'red','green')):
        plt.scatter(X_r2[y==lab, 0],
                    X_r2[y==lab, 1],
                    label=lab,s=100,
                    c=col)
    plt.xlabel('LDA 1')
    plt.ylabel('LDA 2')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.savefig('Results.png', format='png', dpi=1200)
    plt.show()

it will generate the following plot!

[Resulting plot: MID, HIGH and LOW samples scattered in the LDA 1 vs. LDA 2 plane]
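
As a quick sanity check before plotting (a continuation of the three-class code above, so lda and X_r2 are the variables defined there), you can confirm that two discriminant axes were actually produced:

print(X_r2.shape)                      # expected (n_samples, 2) once three classes are present
print(lda.explained_variance_ratio_)   # share of between-class variance captured by each axis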