How to fix a read timeout exception in the Spark Cassandra connector

Date: 2019-05-30 12:08:01

Tags: apache-spark cassandra azure-databricks

I am using Spark 2.4 and Scala 2.11 on the Azure Databricks platform, with DSE 6.0.7 and Spark Cassandra Connector version 2.4.0.

I get an error when taking the count of a table with roughly 100 million records. One of the applications needs the exact row count. Here is my code:

val count = spark.read
  .format("org.apache.spark.sql.cassandra")
  .option("table", tableName)
  .option("keyspace", keyspace)
  .load()
  .count()
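
For reference, the connector's driver-side read timeout and input split size can also be tuned before running the same count. Below is a minimal sketch; the option names are taken from the Spark Cassandra Connector 2.x configuration reference, and the values are illustrative assumptions I have not verified against this cluster (they can equally be supplied through the cluster's Spark config or spark-submit --conf):

// Minimal sketch (values illustrative, not verified on this cluster):
// raise the connector's per-request read timeout and shrink the token-range
// splits so each count(*) query covers less data, then rerun the same count.
spark.conf.set("spark.cassandra.read.timeoutMs", "300000")      // per-request read timeout (ms)
spark.conf.set("spark.cassandra.connection.timeoutMs", "30000") // connection setup timeout (ms)
spark.conf.set("spark.cassandra.input.split.sizeInMB", "64")    // smaller splits per Spark partition

val countWithTunedTimeouts = spark.read
  .format("org.apache.spark.sql.cassandra")
  .option("table", tableName)
  .option("keyspace", keyspace)
  .load()
  .count()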

Here is the exception thrown by the count above:

java.io.IOException: Exception during execution of SELECT count(*) FROM "mykeyspace"."mytable" WHERE token("id") > ? AND token("id") <= ?   ALLOW FILTERING: [/host:9042] Timed out waiting for server response
  at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:350)
  at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
  at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:367)
  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
  at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:634)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
  at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
  at org.apache.spark.scheduler.Task.run(Task.scala:112)
  at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1432)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.OperationTimedOutException: [/host:9042] Timed out waiting for server response 
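
For comparison, the connector's RDD API also exposes cassandraCount(), which, as I understand it, runs the count through the connector's own token-range scans rather than a Spark SQL aggregate. A minimal sketch follows, assuming the standard connector import; it is still subject to the same server-side read timeouts:

import com.datastax.spark.connector._

// Minimal sketch: count through the connector's RDD API. cassandraCount()
// pushes the counting into the connector's per-token-range reads, so it
// bypasses the SQL aggregation path but can still hit the same read timeouts.
val rddCount: Long = spark.sparkContext
  .cassandraTable(keyspace, tableName)
  .cassandraCount()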

0 Answers:

There are no answers yet.