org.apache.spark.SparkException: Task failed while writing rows

Asked: 2017-09-06 04:01:33

Tags: hadoop apache-spark

In my Spark job, many tasks are failing.

I found that these errors appear when I turn speculation on.
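
For context, speculation was enabled along the lines of the sketch below (a minimal Spark 2.x example in Scala; the app name and the tuning values are illustrative assumptions, not the job's actual settings):

    // Minimal sketch of turning on speculation (Spark 2.x, Scala).
    // The appName and the interval/multiplier/quantile values are
    // illustrative assumptions; spark.speculation=true is the key setting.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("my-job")                             // hypothetical name
      .config("spark.speculation", "true")           // speculatively re-launch slow-running tasks
      .config("spark.speculation.interval", "100ms") // how often to check for tasks to speculate
      .config("spark.speculation.multiplier", "1.5") // task must be this much slower than the median
      .config("spark.speculation.quantile", "0.75")  // fraction of tasks that must finish before checking
      .getOrCreate()
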

Below is the full error:

  

    org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1352)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
        Suppressed: java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
            at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2194)
            at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2175)
            at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2292)
            at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
            at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
            at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:467)
            at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:117)
            at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
            at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
            at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
            at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
            at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1361)
            ... 9 more

0 Answers:

No answers yet.