Spark 2.3.1 insertInto of a partitioned table (S3) runs many queries before the actual write

Asked: 2019-01-03 16:55:07

Tags: scala apache-spark hive

I have a very simple Spark job that writes to S3. The table has 3 different partition keys and many partition values (some of them keep growing every hour).

I am using the following code:

dataframe.select(reorderFields:_*).write.mode(SaveMode.Overwrite).insertInto(tableName)

At first this code worked very well, but as the table grew larger it became slower and slower.
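For context, here is a minimal, self-contained sketch of the write path described above. The session setup, table names and reorderFields are placeholders I am assuming, not details from the original job:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.{SaveMode, SparkSession}

// Hypothetical session setup; the real job's configuration is not shown in the question.
val spark = SparkSession.builder()
  .appName("partitioned-insert-into-s3")
  .enableHiveSupport()                        // needed so insertInto resolves the Hive table
  .getOrCreate()

// Placeholder names standing in for the question's dataframe, reorderFields and tableName.
val tableName = "my_db.events"                              // partitioned Hive table backed by S3
val reorderFields = spark.table(tableName).columns.map(col) // align column order with the target table
val dataframe = spark.table("my_db.events_staging")

// The write from the question: overwrite semantics against a partitioned Hive table on S3.
dataframe.select(reorderFields: _*)
  .write
  .mode(SaveMode.Overwrite)
  .insertInto(tableName)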

When I turned on debug logging, I found that a large number of metastore reads of the table configuration were being executed before the DataFrame computation had even started.

Log:

2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@5470ec7e"
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Native:58 [DEBUG]: SELECT `A0`.`COLUMN_NAME`,`A0`.`ORDER`,`A0`.`INTEGER_IDX` AS NUCORDER0 FROM `SORT_COLS` `A0` WHERE `A0`.`SD_ID` = <297323> AND `A0`.`INTEGER_IDX` >= 0 ORDER BY NUCORDER0
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Retrieve:58 [DEBUG]: Execution Time = 1 ms
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "org.datanucleus.store.rdbms.ParamLoggingPreparedStatement@325b1c61"
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "parameters" is replaced by a SCO wrapper of type "org.datanucleus.store.types.backed.Map" [cache-values=true, lazy-loading=true, queued-operations=false, allow-nulls=true]
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "parameters" loading contents to SCO wrapper from the datastore
2019-01-03 16:50:58 [main] DataNucleus.Connection:58 [DEBUG]: Connection found in the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@236ec794 [conn=com.jolbox.bonecp.ConnectionHandle@712106b5, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@132e3594 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@72c9ebfa]
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@250ebae4"
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Native:58 [DEBUG]: SELECT `A0`.`PARAM_KEY`,`A0`.`PARAM_VALUE` FROM `SD_PARAMS` `A0` WHERE `A0`.`SD_ID` = <297323> AND `A0`.`PARAM_KEY` IS NOT NULL
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Retrieve:58 [DEBUG]: Execution Time = 1 ms
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "org.datanucleus.store.rdbms.ParamLoggingPreparedStatement@798a320"
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "skewedColNames" is replaced by a SCO wrapper of type "org.datanucleus.store.types.backed.List" [cache-values=true, lazy-loading=true, queued-operations=false, allow-nulls=true]
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "skewedColNames" loading contents to SCO wrapper from the datastore
2019-01-03 16:50:58 [main] DataNucleus.Connection:58 [DEBUG]: Connection found in the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@236ec794 [conn=com.jolbox.bonecp.ConnectionHandle@712106b5, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@132e3594 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@72c9ebfa]
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@540637b0"

I tried reconfiguring my Hive settings with the following parameters:

sparkConf.set("hive.auto.convert.join.noconditionaltask.size","200M")
sparkConf.set("hive.auto.convert.join.noconditionaltask","true")
sparkConf.set("hive.optimize.sort.dynamic.partition","false")
sparkConf.set("spark.sql.hive.convertMetastoreParquet.mergeSchema","false")
sparkConf.set("parquet.enable.summary-metadata","false")

I also added the following to hive.xml:

  <property>
    <name>hive.stats.autogather</name>
    <value>false</value>
  </property>

But it still behaves the same way.

I am not using HDFS.

I would appreciate any suggestions.

0 Answers:

No answers yet.