Distributed computing with the StarCluster IPython parallel plugin

Time: 2015-04-23 20:08:38

Tags: scikit-learn jupyter-notebook ipython ipython-parallel starcluster

I am running StarCluster with the IPython plugin. When I run KMeans clustering from the IPython notebook using the load-balanced view, the master always sits at 100% CPU usage while the other EC2 instances never take on any load.

I have tried a large dataset and 20 nodes, with the same result: all the load stays on the master. I also tried targeting node001 with a direct view, but even then the master took all the load.

Have I misconfigured something? Do I need to set DISABLE_QUEUE to True in the config? How do I distribute the load across all instances?

[Screenshot: htop for the master and node001]

Template file

[cluster iptemplate]
KEYNAME = ********
CLUSTER_SIZE = 2
CLUSTER_USER = ipuser
CLUSTER_SHELL = bash
REGION = us-west-2

NODE_IMAGE_ID = ami-04bedf34
NODE_INSTANCE_TYPE = m3.medium
#DISABLE_QUEUE = True
PLUGINS = pypackages,ipcluster

[plugin ipcluster]
SETUP_CLASS = starcluster.plugins.ipcluster.IPCluster
ENABLE_NOTEBOOK = True
NOTEBOOK_PASSWD = *****

[plugin ipclusterstop]
SETUP_CLASS = starcluster.plugins.ipcluster.IPClusterStop

[plugin ipclusterrestart]
SETUP_CLASS = starcluster.plugins.ipcluster.IPClusterRestartEngines

[plugin pypackages]
setup_class = starcluster.plugins.pypkginstaller.PyPkgInstaller
packages = scikit-learn, psutil, scikit-image, numpy, pyzmq

[plugin opencvinstaller]
setup_class = ubuntu.PackageInstaller
pkg_to_install = cmake

[plugin pkginstaller]
SETUP_CLASS = starcluster.plugins.pkginstaller.PackageInstaller
# list of apt-get installable packages
PACKAGES =  python-mysqldb

Code

from IPython import parallel
clients = parallel.Client()
rc = clients.load_balanced_view()

def clustering(X_digits):
    # fit a 20-cluster KMeans and return the cluster centers
    from sklearn.cluster import KMeans
    kmeans = KMeans(20)
    mu_digits = kmeans.fit(X_digits).cluster_centers_
    return mu_digits

rc.block = True
rc.apply(clustering, X_digits)
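
One way to check whether tasks actually land on the worker nodes is to ask every engine for its hostname (a minimal sketch, assuming the cluster and client from the code above):

import socket
from IPython import parallel

clients = parallel.Client()
# a direct view over all engines; each engine reports its own hostname
print(clients[:].apply_sync(socket.gethostname))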

1 answer:

Answer 0 (score: 1)

I have only just been learning StarCluster/IPython myself, but this gist seems to agree with @thomas-k's comment that you need to structure your code so it can be passed to a load-balanced map:

https://gist.github.com/pprett/3989337

cv = KFold(X.shape[0], K, shuffle=True, random_state=0)

# instantiate the tasks - K times the number of grid cells
# FIXME use generator to limit memory consumption or do fancy
# indexing in _parallel_grid_search.
tasks = [(i, k, estimator, params, X[train], y[train], X[test], y[test])
         for i, params in enumerate(grid) for k, (train, test)
         in enumerate(cv)]

# distribute tasks on ipcluster
rc = parallel.Client()
lview = rc.load_balanced_view()
results = lview.map(_parallel_grid_search, tasks)
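
Applied to the KMeans job from the question, the same pattern would mean splitting the work into several independent tasks, e.g. one fit per random restart, so the load-balanced view has something to distribute across engines. This is only a sketch under that assumption; X_digits and the per-seed split are illustrative, not part of the gist:

from IPython import parallel

def kmeans_task(args):
    # one independent KMeans fit per task, so the scheduler can spread
    # the fits across engines instead of running one big job on one engine
    X, seed = args
    from sklearn.cluster import KMeans
    km = KMeans(n_clusters=20, n_init=1, random_state=seed)
    km.fit(X)
    return km.inertia_, km.cluster_centers_

rc = parallel.Client()
lview = rc.load_balanced_view()

# ten restarts submitted as ten tasks; the load-balanced view farms them out
results = lview.map(kmeans_task, [(X_digits, seed) for seed in range(10)], block=True)

# keep the centers from the best (lowest-inertia) restart
best_inertia, best_centers = min(results, key=lambda r: r[0])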