A Spark job fetches data from HBase and ingests it into SnappyData 1.1.0. The Spark bundled with SnappyData 1.1.0 is started as a standalone cluster (Snappy and Spark share the cluster), and jobs are submitted to Spark through the Spark REST API.
The SnappyData 1.1.0 cluster stays stable for about a week. Once the number of columnar tables reaches 20-30, the ingestion jobs fail with the exception below. Total resource usage is under 50%. At peak, each table can be about 10 GB (1 billion rows, 25 columns).
Exception details:
Caused by: java.sql.SQLException: (SQLState=40XL1 Severity=30000) (Server=sw4/10.49.2.117[1527] Thread=ThriftProcessor-57) A lock could not be obtained within the time requested
at io.snappydata.thrift.SnappyDataService$executeUpdate_result$executeUpdate_resultStandardScheme.read(SnappyDataService.java:8244)
at io.snappydata.thrift.SnappyDataService$executeUpdate_result$executeUpdate_resultStandardScheme.read(SnappyDataService.java:8221)
at io.snappydata.thrift.SnappyDataService$executeUpdate_result.read(SnappyDataService.java:8160)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at io.snappydata.thrift.SnappyDataService$Client.recv_executeUpdate(SnappyDataService.java:285)
at io.snappydata.thrift.SnappyDataService$Client.executeUpdate(SnappyDataService.java:269)
at io.snappydata.thrift.internal.ClientService.executeUpdate(ClientService.java:976)
at io.snappydata.thrift.internal.ClientStatement.executeUpdate(ClientStatement.java:687)
at io.snappydata.thrift.internal.ClientStatement.executeUpdate(ClientStatement.java:221)
at org.apache.spark.sql.sources.JdbcExtendedUtils$.executeUpdate(jdbcExtensions.scala:84)
at org.apache.spark.sql.execution.columnar.impl.BaseColumnFormatRelation.createActualTables(ColumnFormatRelation.scala:376)
at org.apache.spark.sql.sources.NativeTableRowLevelSecurityRelation$class.createTable(interfaces.scala:444)
at org.apache.spark.sql.execution.columnar.JDBCAppendableRelation.createTable(JDBCAppendableRelation.scala:46)
at org.apache.spark.sql.execution.columnar.impl.DefaultSource.createRelation(DefaultSource.scala:191)
at org.apache.spark.sql.execution.columnar.impl.DefaultSource.createRelation(DefaultSource.scala:71)
at org.apache.spark.sql.execution.columnar.impl.DefaultSource.createRelation(DefaultSource.scala:41)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:328)
at org.apache.spark.sql.execution.command.CreateDataSourceTableCommand.run(createDataSourceTables.scala:73)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$doExecute$1.apply(CodegenSparkFallback.scala:175)
at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$doExecute$1.apply(CodegenSparkFallback.scala:175)
at org.apache.spark.sql.execution.CodegenSparkFallback.executeWithFallback(CodegenSparkFallback.scala:113)
at org.apache.spark.sql.execution.CodegenSparkFallback.doExecute(CodegenSparkFallback.scala:175)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.SnappySession.createTableInternal(SnappySession.scala:1259)
at org.apache.spark.sql.SnappySession.createTable(SnappySession.scala:990)
at com.pw.smp.csa.SuspiciousActivityDetection$.runjob(SuspiciousActivityDetection.scala:318)
at com.pw.smp.csa.SuspiciousActivityDetection$.main(SuspiciousActivityDetection.scala:142)
at com.pw.smp.csa.SuspiciousActivityDetection.main(SuspiciousActivityDetection.scala)
... 6 more
Caused by: java.rmi.ServerException: Server Stack: java.sql.SQLTransactionRollbackException(40XL1): A lock could not be obtained within the time requested
at com.pivotal.gemfirexd.internal.iapi.error.StandardException.newException(StandardException.java:456)
at com.pivotal.gemfirexd.internal.engine.locks.GfxdLocalLockService.getLockTimeoutException(GfxdLocalLockService.java:295)
at com.pivotal.gemfirexd.internal.engine.locks.GfxdDRWLockService.getLockTimeoutException(GfxdDRWLockService.java:727)
at com.pivotal.gemfirexd.internal.engine.distributed.utils.GemFireXDUtils.lockObject(GemFireXDUtils.java:1350)
at com.pivotal.gemfirexd.internal.impl.sql.catalog.GfxdDataDictionary.lockForWriting(GfxdDataDictionary.java:632)
at com.pivotal.gemfirexd.internal.impl.sql.catalog.GfxdDataDictionary.startWriting(GfxdDataDictionary.java:562)
at com.pivotal.gemfirexd.internal.impl.sql.catalog.GfxdDataDictionary.startWriting(GfxdDataDictionary.java:507)
at com.pivotal.gemfirexd.internal.impl.sql.execute.CreateTableConstantAction.executeConstantAction(CreateTableConstantAction.java:297)
at com.pivotal.gemfirexd.internal.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:64)
at com.pivotal.gemfirexd.internal.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:593)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:2179)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:1289)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:1006)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.executeUpdate(EmbedStatement.java:503)
at io.snappydata.thrift.server.SnappyDataServiceImpl.executeUpdate(SnappyDataServiceImpl.java:1794)
at io.snappydata.thrift.SnappyDataService$Processor$executeUpdate.getResult(SnappyDataService.java:1535)
at io.snappydata.thrift.SnappyDataService$Processor$executeUpdate.getResult(SnappyDataService.java:1519)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at io.snappydata.thrift.server.SnappyDataServiceImpl$Processor.process(SnappyDataServiceImpl.java:201)
at io.snappydata.thrift.server.SnappyThriftServerThreadPool$WorkerProcess.run(SnappyThriftServerThreadPool.java:270)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.snappydata.thrift.server.SnappyThriftServer$1.lambda$newThread$0(SnappyThriftServer.java:143)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.gemstone.gemfire.cache.LockTimeoutException: Lock timeout for object: DefaultGfxdLockable@a534854:GfxdDataDictionary, for lock: GfxdReentrantReadWriteLock@77629235,QSync@3630b21a[name=GfxdDataDictionary][Readers=0], requested for owner: Owner(member=10.49.2.117(29205):5551,XID=2667,ownerThread=Thread[ThriftProcessor-57,5,SnappyThriftServer Threads],vmCreatorThread=Thread[ThriftProcessor-57,5,SnappyThriftServer Threads])
at com.pivotal.gemfirexd.internal.engine.locks.GfxdLocalLockService.getLockTimeoutRuntimeException(GfxdLocalLockService.java:290)
at com.pivotal.gemfirexd.internal.engine.locks.GfxdLocalLockService.getLockTimeoutException(GfxdLocalLockService.java:296)
... 22 more
at io.snappydata.thrift.common.ThriftExceptionUtil.newSQLException(ThriftExceptionUtil.java:109)
at io.snappydata.thrift.internal.ClientStatement.executeUpdate(ClientStatement.java:696)
... 42 more
Answer 0 (score: 0)
It looks like your app is trying to create a table while the data dictionary is locked. Is your app doing other concurrent work at the same time?
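Since the failure is a CREATE TABLE blocked on the GfxdDataDictionary write lock (SQLState 40XL1), one common mitigation when DDL may run concurrently is to retry the statement after a backoff instead of failing the whole job. A minimal sketch of such a retry wrapper is below; the class, method names, and retry parameters are illustrative assumptions, not SnappyData API:

```java
import java.sql.SQLException;

public class DdlRetry {
    // SQLState reported by SnappyData/GemFireXD when a lock cannot be
    // obtained within the configured timeout (the 40XL1 in the trace above).
    static final String LOCK_TIMEOUT_STATE = "40XL1";

    // A DDL action (e.g. a CREATE TABLE via JDBC) that may throw SQLException.
    interface Ddl<T> {
        T run() throws SQLException;
    }

    // Run `ddl` up to maxAttempts times, backing off between attempts when
    // the failure is a lock timeout; any other SQLException is rethrown.
    static <T> T withRetry(Ddl<T> ddl, int maxAttempts, long backoffMillis)
            throws SQLException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                return ddl.run();
            } catch (SQLException e) {
                if (!LOCK_TIMEOUT_STATE.equals(e.getSQLState()) || attempt >= maxAttempts) {
                    throw e; // not a lock timeout, or retries exhausted
                }
                Thread.sleep(backoffMillis * attempt); // linear backoff before retrying
            }
        }
    }
}
```

In the job this would wrap the `SnappySession.createTable` / `CREATE TABLE` call. The alternative is to avoid the contention entirely by serializing all DDL through a single thread or job, so that only one statement at a time competes for the data dictionary lock.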