Spark SQL dataframe count gives java.lang.ArrayIndexOutOfBoundsException

Asked: 2018-11-26 14:36:25

Tags: scala apache-spark apache-spark-sql

I am creating a dataframe with Apache Spark version 2.3.1. When I try to count the dataframe, I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 12, analitik11.{hostname}, executor 1): java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
  at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
  at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
  at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
  ... 49 elided
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
  at org.apache.spark.sql.vectorized.ColumnarBatch.column(ColumnarBatch.java:98)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.datasourcev2scan_nextBatch_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

We use com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder to connect to Hive and read tables from it. The code that produces the dataframe is as follows:

    import org.apache.spark.sql.functions.{greatest, least, max}

    // Build a HiveWarehouseSession to read the managed Hive table through LLAP
    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()

    // Read the edges for period 201801, keeping only rows with at least one non-zero flag
    val edgesTest = hive.executeQuery(
      "select trim(s_vno) as src, trim(a_vno) as dst, share, administrator, account, all_share " +
      "from ebyn.babs_edges_2018 " +
      "where (share <> 0 or administrator <> 0 or account <> 0 or all_share <> 0) and trim(date) = '201801'")

    // Self-join on src to pair destinations sharing a source, normalize each pair
    // with greatest/least, then aggregate the flags per pair
    val share_org_edges = edgesTest.alias("df1")
      .join(edgesTest.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(
        greatest("df1.dst", "df2.dst").as("src"),
        least("df1.dst", "df2.dst").as("dst"))
      .agg(
        max("df1.share").as("share"),
        max("df1.administrator").as("administrator"),
        max("df1.account").as("account"),
        max("df1.all_share").as("all_share"))
      .persist

    share_org_edges.count

The table properties are as follows:

CREATE TABLE `EBYN.BABS_EDGES_2018`(                                         
   `date` string,                                                            
   `a_vno` string,                                                            
   `s_vno` string,                                                            
   `amount` double,                                                        
   `num` int,                                                            
   `share` int,                                                               
   `share_ratio` int,                                                            
   `administrator` int,                                                            
   `account` int,                                                            
   `share-all` int)                                                        
 COMMENT 'Imported by sqoop on 2018/10/11 11:10:16'                           
 ROW FORMAT SERDE                                                             
   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'                       
 WITH SERDEPROPERTIES (                                                       
   'field.delim'='',                                                         
   'line.delim'='\n',                                                         
   'serialization.format'='')                                                
 STORED AS INPUTFORMAT                                                        
   'org.apache.hadoop.mapred.TextInputFormat'                                 
 OUTPUTFORMAT                                                                 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'               
 LOCATION                                                                     
   'hdfs://ggmprod/warehouse/tablespace/managed/hive/ebyn.db/babs_edges_2018' 
 TBLPROPERTIES (                                                              
   'bucketing_version'='2',                                                   
   'transactional'='true',                                                    
   'transactional_properties'='insert_only',                                  
   'transient_lastDdlTime'='1539245438')                            

2 Answers:

Answer 0 (score: 1)

Problem

edgesTest is a dataframe whose logical plan contains a single DataSourceV2Relation node. That DataSourceV2Relation node holds a mutable HiveWarehouseDataSourceReader, which is used to read the Hive table. The edgesTest dataframe is used twice: once as df1 and once as df2.
During Spark's logical plan optimization, column pruning runs twice on the same mutable HiveWarehouseDataSourceReader instance. The second pruning sets its own required columns, overwriting those set by the first.
During execution, the reader fires the same query twice against the Hive warehouse, both times with only the columns required by the second pruning. The code generated by Spark then fails to find the columns it expects in the Hive query result.
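To make the failure mode concrete, here is a minimal, self-contained Scala sketch of the mechanism. It is not the actual HiveWarehouseConnector code; MockReader and its members are hypothetical names used only to illustrate how a reader with mutable pruning state, shared by both sides of a self-join, loses the first pruning result:

    // Simplified model of a DataSourceV2-style reader whose pruning state is mutable.
    // Everything here is illustrative; it is not the HWC implementation.
    class MockReader(allColumns: Seq[String]) {
      // Mutable "required columns" state, overwritten on every pruning call
      var requiredColumns: Seq[String] = allColumns
      def pruneColumns(cols: Seq[String]): Unit = { requiredColumns = cols }
      def buildQuery(): String = s"SELECT ${requiredColumns.mkString(", ")} FROM ebyn.babs_edges_2018"
    }

    object PruningClobberDemo extends App {
      // One reader instance backs both sides of the self-join
      val reader = new MockReader(Seq("src", "dst", "share", "administrator", "account", "all_share"))

      reader.pruneColumns(Seq("src", "dst", "share", "administrator", "account", "all_share")) // df1 branch
      reader.pruneColumns(Seq("src", "dst"))                                                   // df2 branch overwrites it

      // Both scans now run with the df2 column set; generated code on the df1 side that
      // asks the columnar batch for column index 2 ("share") falls off the end of the batch.
      println(reader.buildQuery()) // SELECT src, dst FROM ebyn.babs_edges_2018
    }

With the query in the question, a batch holding only the two columns needed by the df2 branch is consistent with the ArrayIndexOutOfBoundsException: 2 thrown from ColumnarBatch.column in the stack trace above.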

Solution

Spark 2.4
DataSourceV2 was improved, in particular by SPARK-23203 "DataSourceV2 should use immutable trees".

Spark 2.3
Disable column pruning in the HiveWarehouseConnector data source reader.

Hortonworks has already fixed this issue, as stated in the HDP 3.1.5 Release Notes.
The fix can be found in their HiveWarehouseConnector github repository:

    if (useSpark23xReader) {
      LOG.info("Using reader HiveWarehouseDataSourceReaderForSpark23x with column pruning disabled");
      return new HiveWarehouseDataSourceReaderForSpark23x(params);
    } else if (disablePruningPushdown) {
      LOG.info("Using reader HiveWarehouseDataSourceReader with column pruning and filter pushdown disabled");
      return new HiveWarehouseDataSourceReader(params);
    } else {
      LOG.info("Using reader PrunedFilteredHiveWarehouseDataSourceReader");
      return new PrunedFilteredHiveWarehouseDataSourceReader(params);
    }

Additionally, the HDP 3.1.5 Hive integration doc specifies:

To prevent data correctness issues in this release, pruning and projection pushdown is disabled by default.
...
To prevent these issues and ensure correct results, do not enable pruning and pushdowns.
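For clusters where that default has not yet landed, the kill switch referenced in the integration doc (and used in the answer below) can also be set from Scala before building the HiveWarehouseSession. This is only a hedged sketch; whether the flag is honored depends on the HWC build that is deployed:

    // Hedged sketch: disable HWC pruning/projection pushdown before building the session.
    // The property name comes from the HDP Hive integration documentation quoted above;
    // verify it against the HWC version in use, since older builds ignore it.
    spark.conf.set("spark.datasource.hive.warehouse.disable.pruning.and.pushdowns", "true")

    // The same property can be passed at submit time instead, e.g.
    //   spark-shell --conf spark.datasource.hive.warehouse.disable.pruning.and.pushdowns=true
    val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()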

Answer 1 (score: 0)

I ran into the same problem, and it did not work for me even after disabling pruning/pushdown.

The documentation for pruning and pushdowns can be found at https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/integrating-hive/content/hive-read-write-operations.html

In Python, I set: spark.conf.set('spark.datasource.hive.warehouse.disable.pruning.and.pushdowns', 'true')

But that did not work. Instead, I found a solution/workaround, which is to persist one of the tables (the one I identified as problematic):

    df1 = df.filter(xx).join(xx).persist()
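Applied to the code in the question, this workaround amounts to materializing edgesTest once before the self-join, so that both aliases read from the cached data instead of triggering two differently pruned DataSourceV2 scans. The sketch below is an untested adaptation that assumes the same imports and session as the question's code; edgesCached is an illustrative name:

    // Materialize the Hive read once; the self-join then runs against the cached
    // partitions rather than firing two pruned scans at the warehouse.
    val edgesCached = edgesTest.persist()
    edgesCached.count() // optional: force the cache to fill before the self-join

    val share_org_edges = edgesCached.alias("df1")
      .join(edgesCached.alias("df2"), "src")
      .where("df1.dst <> df2.dst")
      .groupBy(greatest("df1.dst", "df2.dst").as("src"),
               least("df1.dst", "df2.dst").as("dst"))
      .agg(max("df1.share").as("share"),
           max("df1.administrator").as("administrator"),
           max("df1.account").as("account"),
           max("df1.all_share").as("all_share"))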

My guess from the documentation is that Spark pushes the projection down to the parent dataframe, and the error occurs when joining dfs derived from the same dataframe; can someone explain this?

Also, let me know if it works for you.