Reading snappy data in Spark

Date: 2015-08-10 12:30:29

Tags: apache-spark snappy

I am trying to read snappy-compressed data in Spark, but the following fails in spark-shell:

scala> val rdd = sc.textFile("/var/scratch/bigdatabenchmark/5nodes/uservisits/000000_0.snappy")
//so far so good
scala> rdd.first

15/08/10 14:25:43 INFO FileInputFormat: Total input paths to process : 1
15/08/10 14:25:43 INFO SparkContext: Starting job: first at <console>:24
15/08/10 14:25:43 INFO DAGScheduler: Got job 0 (first at <console>:24) with 1 output partitions (allowLocal=true)
15/08/10 14:25:43 INFO DAGScheduler: Final stage: Stage 0(first at <console>:24)
15/08/10 14:25:43 INFO DAGScheduler: Parents of final stage: List()
15/08/10 14:25:43 INFO DAGScheduler: Missing parents: List()
15/08/10 14:25:43 INFO DAGScheduler: Submitting Stage 0 (/var/scratch/bigdatabenchmark/5nodes/uservisits/000000_0.snappy MapPartitionsRDD[1] at textFile at <console>:21), which has no missing parents
15/08/10 14:25:43 INFO MemoryStore: ensureFreeSpace(2712) called with curMem=232395, maxMem=5556991426
15/08/10 14:25:43 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 5.2 GB)
15/08/10 14:25:43 INFO MemoryStore: ensureFreeSpace(1717) called with curMem=235107, maxMem=5556991426
15/08/10 14:25:43 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1717.0 B, free 5.2 GB)
15/08/10 14:25:43 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:58597 (size: 1717.0 B, free: 5.2 GB)
15/08/10 14:25:43 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/08/10 14:25:43 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
15/08/10 14:25:43 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (/var/scratch/bigdatabenchmark/5nodes/uservisits/000000_0.snappy MapPartitionsRDD[1] at textFile at <console>:21)
15/08/10 14:25:43 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/08/10 14:25:43 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1332 bytes)
15/08/10 14:25:43 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/08/10 14:25:43 INFO HadoopRDD: Input split: file:/var/scratch/bigdatabenchmark/5nodes/uservisits/000000_0.snappy:0+28977286
15/08/10 14:25:43 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/08/10 14:25:43 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/08/10 14:25:43 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/08/10 14:25:43 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/08/10 14:25:43 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/08/10 14:25:43 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
        at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:190)
        at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176)
        at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:110)
        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
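
The UnsatisfiedLinkError on NativeCodeLoader.buildSupportsSnappy() means the Hadoop native library (libhadoop, which in turn loads libsnappy) was not found on the JVM's java.library.path when the task ran. As a minimal diagnostic sketch (standard Hadoop/JVM calls, nothing assumed beyond the setup above), the following can be run in the same spark-shell:

scala> // true only if libhadoop.so was actually loaded by this JVM
scala> org.apache.hadoop.util.NativeCodeLoader.isNativeCodeLoaded()
scala> // the directories this JVM searches for native libraries
scala> System.getProperty("java.library.path")

A false result from the first call would be consistent with the error above.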

I have tried to include the snappy native library reference in each of the following:

  • LD_LIBRARY_PATH
  • JAVA_LIBRARY_PATH
  • SPARK_LIBRARY_PATH

by including these in spark-env.sh (a sketch of the attempted entries follows below).
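
For concreteness, a minimal sketch of the kind of spark-env.sh entries attempted; the /usr/lib/hadoop/lib/native directory is an assumption and should be replaced with wherever libhadoop.so and libsnappy.so actually live on the cluster:

# spark-env.sh -- assumed native-library location, adjust to the actual install
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/hadoop/lib/native
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/lib/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/usr/lib/hadoop/lib/native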

0 Answers:

No answers yet.