Kernel dies when using XGBoost in Spyder on Windows 10

Asked: 2019-04-24 09:50:44

Tags: python-3.x windows xgboost

I am trying to use xgboost on my laptop to solve a simple classification problem in the Spyder IDE. Here is the code:

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

from sklearn.datasets import make_classification

# Creating data
X, y = make_classification(n_classes=2, class_sep=0,
                           weights=[0.05, 0.95], n_clusters_per_class=2, n_features=3,
                           n_samples=10000, n_informative=2, n_redundant=0, n_repeated=0)


# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
X = X[:, 1:]

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

# Fitting XGBoost to the Training set
from xgboost import XGBClassifier
classifier = XGBClassifier()
# HERE IS THE PROBLEM !!!
classifier.fit(X_train, y_train)

# Predicting the Test set results
y_pred = classifier.predict(X_test)
print("ok")
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
accuracies.std()

The problem is at the fit step: the kernel always dies there.

I installed xgboost on Windows 10 with pip install xgboost in the Anaconda prompt. What can I do? Notably, in a VM running Ubuntu 16.04 I do not have this problem.
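As a hedged suggestion (not part of the original question), a minimal isolation test can help narrow down whether the crash comes from the Windows xgboost install itself or from this particular dataset. The sketch below fits XGBClassifier on a tiny random dataset with a single thread and prints the installed xgboost version; running it from a plain Anaconda prompt (python test_xgb.py, a hypothetical file name) instead of inside Spyder usually shows the low-level error that the IDE hides when the kernel dies.

# Hypothetical isolation test: fit XGBoost on a tiny dataset with one
# thread, so memory pressure and multithreading are ruled out.
import numpy as np
import xgboost
from xgboost import XGBClassifier

print("xgboost version:", xgboost.__version__)

# Small, fixed-size random data just to exercise the fit call
X_small = np.random.rand(100, 3)
y_small = np.random.randint(0, 2, size=100)

clf = XGBClassifier(n_estimators=10, n_jobs=1)
clf.fit(X_small, y_small)   # if this also kills the process, the install is suspect
print(clf.predict(X_small[:5]))

If this tiny fit also crashes, the problem is likely the xgboost build or its runtime dependencies on Windows rather than the code above; if it succeeds, the issue is more likely specific to the larger dataset or to Spyder's kernel.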

0 Answers

There are no answers yet.