How do I run large Mahout fuzzy k-means clustering without running out of memory?

Asked: 2013-04-19 20:36:45

Tags: hadoop cluster-analysis mahout k-means

I'm running Mahout 0.7 fuzzy k-means clustering on Amazon's EMR (AMI 2.3.1) and I'm running out of memory.

  • My overall question: what's the easiest way to get this working?

Here's one invocation:

./bin/mahout fkmeans \
  --input s3://.../foo/vectors.seq \
  --output s3://.../foo/fuzzyk2 \
  --numClusters 128 \
  --clusters s3://.../foo/initial_clusters/ \
  --maxIter 20 \
  --m 2 \
  --method mapreduce \
  --distanceMeasure org.apache.mahout.common.distance.TanimotoDistanceMeasure

More detailed questions:

  • How can I tell how much memory I'm using? I'm on c1.xlarge instances. If I believe the AWS docs, that sets mapred.child.java.opts = -Xmx512m.

  • How can I tell how much memory I need? I can try different sizes, but that gives me no idea of the size of problem I can actually handle.

  • How do I change my memory usage? Start a different job flow with a different class of machine? Try setting mapred.child.java.opts?

  • My dataset doesn't seem that big. Is it?

vectors.seq is a collection of 50225 sparse vectors (50225 things related to 124420 people), with 1.2M relationships in total.

This post says to set --method mapreduce, which I am doing, and which is the default.

This post says that all of the clusters are held in memory in every mapper and reducer. That would be 4 * 124420 = 498K things, which doesn't seem too bad either.
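
(Though, looking at the OutOfMemoryError below, which is thrown from OpenIntDoubleHashMap.rehash inside AbstractCluster.observe, each cluster's running-sum vector seems to fill in as it observes inputs, so the worst case may be closer to numClusters * cardinality. A rough back-of-envelope, where the bytes-per-entry figure is only my guess for an open-addressing int-to-double hash map including JVM overhead:)

128 clusters * 124,420 dimensions = 15,925,760 potential entries
15,925,760 entries * ~26 bytes/entry ≈ 400 MB of cluster state per task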

Here's the stack trace:

13/04/19 18:12:53 INFO mapred.JobClient: Job complete: job_201304161435_7034
13/04/19 18:12:53 INFO mapred.JobClient: Counters: 7
13/04/19 18:12:53 INFO mapred.JobClient:   Job Counters 
13/04/19 18:12:53 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=28482
13/04/19 18:12:53 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/04/19 18:12:53 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/04/19 18:12:53 INFO mapred.JobClient:     Rack-local map tasks=4
13/04/19 18:12:53 INFO mapred.JobClient:     Launched map tasks=4
13/04/19 18:12:53 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/04/19 18:12:53 INFO mapred.JobClient:     Failed map tasks=1
Exception in thread "main" java.lang.InterruptedException: Cluster Iteration 1 failed processing s3://.../foo/fuzzyk2/clusters-1
        at org.apache.mahout.clustering.iterator.ClusterIterator.iterateMR(ClusterIterator.java:186)
        at org.apache.mahout.clustering.fuzzykmeans.FuzzyKMeansDriver.buildClusters(FuzzyKMeansDriver.java:288)
        at org.apache.mahout.clustering.fuzzykmeans.FuzzyKMeansDriver.run(FuzzyKMeansDriver.java:221)
        at org.apache.mahout.clustering.fuzzykmeans.FuzzyKMeansDriver.run(FuzzyKMeansDriver.java:110)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.mahout.clustering.fuzzykmeans.FuzzyKMeansDriver.main(FuzzyKMeansDriver.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:187)

Here's part of a mapper log:

2013-04-19 18:10:38,734 INFO org.apache.hadoop.fs.s3native.NativeS3FileSystem (main): Received IOException while reading '.../foo/vectors.seq', attempting to reopen.
java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
        at com.sun.net.ssl.internal.ssl.InputRecord.readV3Record(InputRecord.java:405)
        at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:360)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:798)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
        at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
        at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:187)
        at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:164)
        at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
        at java.io.FilterInputStream.read(FilterInputStream.java:116)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:291)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
        at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2060)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2194)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:68)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:540)
        at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
2013-04-19 18:10:38,737 INFO org.apache.hadoop.fs.s3native.NativeS3FileSystem (main): Stream for key '.../foo/vectors.seq' seeking to position '62584'
2013-04-19 18:10:42,619 INFO org.apache.hadoop.mapred.TaskLogsTruncater (main): Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-04-19 18:10:42,730 INFO org.apache.hadoop.io.nativeio.NativeIO (main): Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2013-04-19 18:10:42,730 INFO org.apache.hadoop.io.nativeio.NativeIO (main): Got UserName hadoop for UID 106 from the native implementation
2013-04-19 18:10:42,733 FATAL org.apache.hadoop.mapred.Child (main): Error running child : java.lang.OutOfMemoryError: Java heap space
        at org.apache.mahout.math.map.OpenIntDoubleHashMap.rehash(OpenIntDoubleHashMap.java:434)
        at org.apache.mahout.math.map.OpenIntDoubleHashMap.put(OpenIntDoubleHashMap.java:387)
        at org.apache.mahout.math.RandomAccessSparseVector.setQuick(RandomAccessSparseVector.java:139)
        at org.apache.mahout.math.AbstractVector.assign(AbstractVector.java:560)
        at org.apache.mahout.clustering.AbstractCluster.observe(AbstractCluster.java:253)
        at org.apache.mahout.clustering.AbstractCluster.observe(AbstractCluster.java:241)
        at org.apache.mahout.clustering.AbstractCluster.observe(AbstractCluster.java:37)
        at org.apache.mahout.clustering.classify.ClusterClassifier.train(ClusterClassifier.java:158)
        at org.apache.mahout.clustering.iterator.CIMapper.map(CIMapper.java:55)
        at org.apache.mahout.clustering.iterator.CIMapper.map(CIMapper.java:18)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)

1 Answer:

Answer 0 (score: 1)

Yes, you're running out of memory. As far as I know, the "memory intensive workloads" bootstrap action was deprecated long ago, so it may do nothing. See the note on that page.

By default, a c1.xlarge should use 384MB per mapper. Once you subtract all the JVM overhead, space for splits and combining, and so on, you probably don't have much left.

You set Hadoop parameters in a bootstrap action. If you're using the console, choose the "Configure Hadoop" action and set something like --site-key-value mapred.map.child.java.opts=-Xmx1g.
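
For example, with the old elastic-mapreduce CLI that might look roughly like the following (a sketch; the bootstrap-action path and -s argument syntax here are from memory, so check the EMR docs for your CLI version):

./elastic-mapreduce --create --alive \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-s,mapred.map.child.java.opts=-Xmx1g"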

(If you're doing this programmatically and run into any trouble, contact me offline; I can provide snippets from Myrrix, since it tunes EMR clusters heavily to speed up its recommendation/clustering jobs.)

You can set mapred.map.child.java.opts to control mappers separately from reducers. You can also dial down the number of mappers per machine to free up more room, or choose a high-memory instance. I usually find m1.xlarge to be optimal on EMR for its price-to-I/O ratio, and because most jobs end up I/O-bound anyway.
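
Putting those knobs together, the relevant mapred-site keys would look something like this (Hadoop 1.x-era property names, as used on AMI 2.x; the slot count is just an illustrative value):

mapred.map.child.java.opts=-Xmx1g          # heap for map tasks only
mapred.reduce.child.java.opts=-Xmx512m     # heap for reduce tasks only
mapred.tasktracker.map.tasks.maximum=4     # fewer concurrent mappers per node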