I am trying to install Apache Spark on a Raspberry Pi 1 Model B+.
Once I start the spark shell and run the command:
val l = sc.parallelize(List()).collect
I get the following exception:
scala> val l = sc.parallelize(List()).collect
15/03/22 19:52:44 INFO SparkContext: Starting job: collect at <console>:21
15/03/22 19:52:44 INFO DAGScheduler: Got job 0 (collect at <console>:21) with 1 output partitions (allowLocal=false)
15/03/22 19:52:44 INFO DAGScheduler: Final stage: Stage 0(collect at <console>:21)
15/03/22 19:52:44 INFO DAGScheduler: Parents of final stage: List()
15/03/22 19:52:44 INFO DAGScheduler: Missing parents: List()
15/03/22 19:52:44 INFO DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at <console>:21), which has no missing parents
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGILL (0x4) at pc=0x9137c074, pid=3596, tid=2415826032
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b132) (build 1.8.0-b132)
# Java VM: Java HotSpot(TM) Client VM (25.0-b70 mixed mode linux-arm )
# Problematic frame:
# C [snappy-unknown-b62d2fa0-8fdd-4b4b-8c2c-2f24ddaeee74-libsnappyjava.so+0x1074] _init+0x1a7
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/pi/spark-1.3.0-bin-hadoop2.4/bin/hs_err_pid3596.log
./spark-shell: line 55: 3596 Segmentation fault "$FWDIR"/bin/spark-submit --class org.apache.spark.repl.Main "${SUBMISSION_OPTS[@]}" spark-shell "${APPLICATION_OPTS[@]}"
When starting the spark shell, I allowed memory to spill to disk:
./spark-shell --conf StorageLevel=MEMORY_AND_DISK
But I still get the same exception.
When the spark shell starts, there are 267 MB of memory available:
15/03/22 17:09:49 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
Should that be enough memory to run Spark commands in the shell?
Is this the correct command to start the spark shell so that memory that does not fit is spilled to disk: ./spark-shell --conf StorageLevel=MEMORY_AND_DISK ?
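For reference, a minimal sketch of how a storage level is usually attached to a specific RDD from inside the shell (this assumes persist() is the intended mechanism rather than a --conf flag; the RDD contents below are made up purely for illustration):
import org.apache.spark.storage.StorageLevel
// Illustrative only: partitions that do not fit in memory are written to disk
// instead of being recomputed.
val rdd = sc.parallelize(1 to 1000)
rdd.persist(StorageLevel.MEMORY_AND_DISK)
rdd.count()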
Update
I tried:
./spark-shell --conf spark.driver.memory=256m
val l = sc.parallelize(List()).collect
but got the same result.
Answer 0 (score: 5)
Try using the --driver-memory option to set the memory for the driver process. For example:
./spark-shell --driver-memory 2g
This gives the driver 2 GB of memory.
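As a side note (an assumption about how the driver JVM is launched, not something confirmed in the question): spark.driver.memory passed through --conf is generally not applied in client mode because the driver JVM has already started by the time the setting is read, which may be why the spark.driver.memory=256m attempt above changed nothing. The --driver-memory flag, or an entry in conf/spark-defaults.conf inside the spark-1.3.0-bin-hadoop2.4 directory shown in the log, is read before the JVM is launched. A sketch of the config-file variant, with 512m as a placeholder value sized for the Pi's RAM rather than a recommendation:
# conf/spark-defaults.conf
spark.driver.memory    512m
After saving the file, restarting ./spark-shell should pick the setting up without any extra flags.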