Wine quality dataset analysis

Asked: 2018-08-27 10:55:38

Tags: python-3.x machine-learning classification pca multiclass-classification

I have a dataset that describes wine quality in terms of factors such as acidity, density, and pH. I have attached the link below, which will show you the wine quality dataset. Based on this dataset, we need to use a multiclass classification algorithm to analyze it with training and test data. Please correct me if I'm wrong.

Wine_Quality.csv dataset

https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/

I also applied Principal Component Analysis (PCA) to this dataset. Below is the code I used:

# -*- coding: utf-8 -*-
"""
Created on Sun Aug 26 14:14:44 2018

@author: 1022316
"""

# Wine Quality testing
#Multiclass classification - PCA

#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

#importing the Dataset
dataset = pd.read_csv(r'C:\Machine learning\winequality-red_1.csv')  # raw string avoids backslash escape issues on Windows paths
X = dataset.iloc[:, 0:11].values
y = dataset.iloc[:, 11].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

#Applying the PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2 )
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)  # use transform (not fit_transform) so the test set reuses the PCA fitted on the training set
explained_variance = pca.explained_variance_ratio_

# Alternative: Decision Tree classifier (left commented out)
#from sklearn.tree import DecisionTreeClassifier
#classifier = DecisionTreeClassifier(max_depth = 2).fit(X_train, y_train)
#y_pred = classifier.predict(X_test)

#classifier = LogisticRegression(random_state = 0)
#classifier.fit(X_train, y_train)

#Fitting the Logistic Regression model to the training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)


#Predicting the Test set results
y_pred = classifier.predict(X_test)


# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

Please let me know whether I used the right algorithm for this dataset. Also, as I see it, we have 9 classes into which this dataset will be divided. Please also let me know how to visualize and plot the data across the different classes.

1 Answer:

Answer (score: 1):


Based on this dataset, we need to use a multiclass classification algorithm to analyze it with training and test data. Please correct me if I'm wrong.

Correct.


Please let me know whether I used the right algorithm for this dataset.

Yes. However, a more systematic way to apply them is: first use PCA to visually explore the separability of the classes and the relative informativeness of its components (you are using the first two). Then apply logistic regression both in the original high-dimensional feature space and in the reduced PCA feature space.

#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

#importing the Dataset
dataset = pd.read_csv('winequality-red.csv', sep=';') # https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
sns.countplot(dataset['quality'])

Observation: 6 classes and a high class imbalance (6, possibly because the dataset used here differs from the one on the page you shared).
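Given the class imbalance noted above, here is a minimal sketch of one way to account for it; this is my own suggestion, not part of the original answer: stratify the train/test split on the quality label and let logistic regression reweight the classes.

# Hypothetical sketch: stratified split plus class weighting for the imbalanced quality labels
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = dataset.iloc[:, 0:11].values   # features, as in the question's code
y = dataset.iloc[:, 11].values     # quality label

# stratify=y keeps the per-class proportions similar in the train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# class_weight='balanced' reweights classes inversely to their frequency
clf = LogisticRegression(random_state=0, class_weight='balanced', max_iter=1000)
clf.fit(X_train, y_train)
print('balanced model test accuracy:', clf.score(X_test, y_test))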


Also, as I see it, we will have 9 classes into which the dataset is divided. Please also let me know how to visualize and plot the data across the different classes.

# Separate features and target (as in the question's code)
X = dataset.iloc[:, 0:11].values
y = dataset.iloc[:, 11].values

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)

#Applying the PCA
from sklearn.decomposition import PCA
fig = plt.figure(figsize=(12,6))
pca = PCA()
pca_all = pca.fit_transform(X)
pca1 = pca_all[:, 0]
pca2 = pca_all[:, 1]

fig.add_subplot(1,2,1)
plt.bar(np.arange(pca.n_components_), 100*pca.explained_variance_ratio_)
plt.title('Relative information content of PCA components')
plt.xlabel("PCA component number")
plt.ylabel("PCA component variance % ")

fig.add_subplot(1,2,2)
plt.scatter(pca1, pca2, c=y, marker='x', cmap='jet')
plt.title('Class distributions')
plt.xlabel("PCA Component 1")
plt.ylabel("PCA Component 2")

[Figure: bar chart of the relative information content (explained variance %) per PCA component, and scatter plot of the class distributions over PCA components 1 and 2]
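As a follow-up to the explained-variance bar plot, a minimal sketch (reusing the pca object and scaled X fitted above) of choosing the number of components that retain a given share of the variance; the 95% threshold is my own illustrative choice, not part of the original answer.

# How many components are needed to keep roughly 95% of the variance?
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.argmax(cum_var >= 0.95)) + 1
print(n_keep, 'components retain', round(100 * cum_var[n_keep - 1], 1), '% of the variance')

# Equivalently, scikit-learn can pick the number of components directly
pca95 = PCA(n_components=0.95)
X_pca95 = pca95.fit_transform(X)
print('components kept by PCA(n_components=0.95):', pca95.n_components_)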

There are many metrics for quantifying multiclass classification performance. Here, accuracy is used:

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split

#Fiiting the Logistic Regression model to the training set
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

classifier = LogisticRegression(random_state = 0)

# PCA 2D space
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(data=pca_all).iloc[:,0:2], y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_pca_2d = accuracy_score(y_test, y_pred)

# PCA 3D space
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(data=pca_all).iloc[:,0:3], y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_pca_3d = accuracy_score(y_test, y_pred)

# Original (full) feature space
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_original = accuracy_score(y_test, y_pred)

plt.figure()
sns.barplot(x=['pca 2D space', 'pca 3D space', 'original space'], y=[accuracy_pca_2d, accuracy_pca_3d, accuracy_original])
plt.ylabel('accuracy')

[Figure: bar chart of accuracy in the PCA 2D space, PCA 3D space, and original feature space]

This shows that classifying in the reduced PCA 2D space hurts performance, at least by this metric and with this setup.
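Since plain accuracy can be misleading with the class imbalance observed above, here is a minimal sketch (my own addition, not part of the original answer) of two alternative views on the same predictions; y_test and y_pred are those from the original-space fit above.

# Per-class precision/recall/F1 and an imbalance-aware accuracy variant
from sklearn.metrics import classification_report, balanced_accuracy_score

print(classification_report(y_test, y_pred))
print('balanced accuracy:', balanced_accuracy_score(y_test, y_pred))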

To visualize the confusion matrix, you can use this. Applied to the original feature space:

[Figure: confusion matrix of the classifier in the original feature space]
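Since the linked helper is not reproduced here, below is a minimal sketch of one way to plot the confusion matrix as an annotated seaborn heatmap; this is my own illustration, not the code behind the link.

# Plot the confusion matrix of the original-space predictions as a heatmap
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

labels = np.unique(np.concatenate([y_test, y_pred]))  # quality scores that actually occur
cm = confusion_matrix(y_test, y_pred, labels=labels)

plt.figure(figsize=(6, 5))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted quality')
plt.ylabel('True quality')
plt.title('Confusion matrix (original feature space)')
plt.show()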