I just created an artificial neural network with Keras, and I want to pass it to the Scikit-learn function cross_val_score so that it is trained on the X_train and y_train of a dataset.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 16, kernel_initializer = 'uniform', activation = 'relu', input_dim = 30))
    classifier.add(Dense(units = 16, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier
classifier = KerasClassifier(build_fn = build_classifier, batch_size=25, epochs = 10)
results = cross_val_score(classifier, X_train, y_train, cv=10, n_jobs=-1)
The output I get is Epoch 1/1 repeated 4 times (I have 4 cores) and nothing else, because after that it hangs and the computation never finishes. I tested n_jobs = -1 with other Scikit-learn algorithms and it works fine. I am not using a GPU, only the CPU.
To test the code, just add the following standardized dataset:
import pandas as pd
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
df = pd.DataFrame(data['data'])
target = pd.DataFrame(data['target'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df, target, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
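As a baseline, a minimal serial run (a sketch, not part of the original code above) looks like this; with n_jobs = 1 everything stays in the main process, so multiprocessing cannot be the cause of a hang there. I flatten y_train with .values.ravel() only because it is a one-column DataFrame here:
# Serial baseline: n_jobs = 1 keeps all 10 folds in the main process.
results = cross_val_score(classifier, X_train, y_train.values.ravel(), cv=10, n_jobs=1)
print("mean accuracy: %.3f (+/- %.3f)" % (results.mean(), results.std()))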
After playing around with n_jobs (setting it to 1, 2, 3 or -1) I got some strange results, for example Epoch 1/1 repeated only 3 times instead of 4 (even with n_jobs = -1), or repeated 3 times and then this is what I got from the kernel:
Process ForkPoolWorker-33:
Traceback (most recent call last):
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/pool.py", line 108, in worker
task = get()
File "/home/myname/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/pool.py", line 362, in get
return recv()
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/myname/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
This is probably something to do with multiprocessing, but I don't know how to fix it.
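For reference, a workaround I have seen suggested elsewhere for this kind of TensorFlow-plus-fork hang (untested here, so treat it purely as an assumption) is to import Keras only inside build_classifier, so that each joblib worker performs its own backend import instead of reusing the parent's forked state:
def build_classifier():
    # Importing inside the function means every forked worker does its own
    # Keras/TensorFlow import rather than inheriting the parent's session state.
    from keras.models import Sequential
    from keras.layers import Dense
    classifier = Sequential()
    classifier.add(Dense(units = 16, kernel_initializer = 'uniform', activation = 'relu', input_dim = 30))
    classifier.add(Dense(units = 16, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier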
Answer 0 (score: 0)
The code above works fine for me. Please upgrade your modules.
Step 1) pip install --upgrade tensorflow
Step 2) pip install keras
I tried it and it works with the TensorFlow backend.
I have:
In [7]: sklearn.__version__
Out[7]: '0.19.1'
In [8]: keras.__version__
Out[8]: '2.2.4'
And:
import keras
/anaconda2/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
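A quick way to double-check which versions ended up installed after the upgrade (a small snippet of my own, not from the original answer):
import sklearn
import keras
import tensorflow as tf

# Print the relevant package versions to compare against the ones listed above.
print("scikit-learn: %s" % sklearn.__version__)
print("Keras: %s" % keras.__version__)
print("TensorFlow: %s" % tf.__version__)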
Answer 1 (score: 0)
I switched to sklearn version 0.20.1.
n_jobs now works properly, since the commands run and finish in less time than with n_jobs = 1.
That said:
1) There is no significant improvement in computation time with n_jobs = 2 or higher
2) In some cases I get this warning:
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 2 concurrent workers.
/home/my_name/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/externals/loky/process_executor.py:706:
UserWarning: A worker stopped while some jobs were given to the executor.
This can be caused by a too short worker timeout or by a memory leak.
"timeout or by a memory leak.", UserWarning
One last remark: in the Jupyter notebook, the interactive epoch-by-epoch output of the neural network is no longer displayed when n_jobs != 1; it shows up in the terminal instead (!?)
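If the LokyBackend warning keeps appearing, one thing worth trying (a sketch under my own assumptions, not something from the original answer) is selecting the joblib backend explicitly around the cross_val_score call; the threading backend avoids forking the TensorFlow state altogether, at the cost of running the folds under the GIL:
from joblib import parallel_backend  # pip install joblib if it is not already available
from sklearn.model_selection import cross_val_score

# Run the folds with threads instead of forked loky worker processes.
with parallel_backend('threading'):
    results = cross_val_score(classifier, X_train, y_train, cv=10, n_jobs=2)
print(results.mean(), results.std())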