org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed to connect to EndPoint

Time: 2017-08-30 14:07:56

Tags: hive apache-storm

I have tried the following:

  1. Completed the coding as described at http://storm.apache.org/releases/2.0.0-SNAPSHOT/storm-hive.html

  2. Then changed hive-conf.xml on all nodes following http://www.openkb.info/2015/06/hive-transaction-feature-in-hive-10.html

  3. I am facing an issue with Storm and Hive streaming. The error I am getting is:

      

    org.apache.storm.hive.common.HiveWriter$ConnectFailure: Failed to connect to EndPoint {metaStoreUri='thrift://base1.rolta.com:9083', database='default', table='table_mqtt', partitionVals=[2017/08/242]}
        at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:80) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveUtils.makeHiveWriter(HiveUtils.java:50) ~[stormjar.jar:?]
        at org.apache.storm.hive.bolt.HiveBolt.getOrCreateWriter(HiveBolt.java:271) ~[stormjar.jar:?]
        at org.apache.storm.hive.bolt.HiveBolt.execute(HiveBolt.java:114) [stormjar.jar:?]
        at org.apache.storm.daemon.executor$fn__9364$tuple_action_fn__9366.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.daemon.executor$mk_task_receiver$fn__9285.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.disruptor$clojure_handler$reify__8798.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.daemon.executor$fn__9364$fn__9377$fn__9430.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.util$async_loop$fn__656.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
    Caused by: org.apache.storm.hive.common.HiveWriter$TxnBatchFailure: Failed to acquire transaction batch from EndPoint: {metaStoreUri='thrift://base1.rolta.com:9083', database='default', table='table_mqtt', partitionVals=[2017/08/242]}
        at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:264) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
        ... 13 more
    Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://base1.rolta.com:9083', database='default', table='table_mqtt', partitionVals=[2017/08/242]}
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:575) ~[stormjar.jar:?]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
        ... 13 more
    Caused by: org.apache.thrift.transport.TTransportException
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[stormjar.jar:?]
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[stormjar.jar:?]
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378) ~[stormjar.jar:?]
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297) ~[stormjar.jar:?]
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204) ~[stormjar.jar:?]
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) ~[stormjar.jar:?]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3781) ~[stormjar.jar:?]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3768) ~[stormjar.jar:?]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1736) ~[stormjar.jar:?]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:570) ~[stormjar.jar:?]
        at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:544) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveWriter.nextTxnBatch(HiveWriter.java:259) ~[stormjar.jar:?]
        at org.apache.storm.hive.common.HiveWriter.<init>(HiveWriter.java:72) ~[stormjar.jar:?]
        ... 13 more
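    Since the innermost failure is a raw TTransportException while the client tries to acquire a lock from the metastore, it is worth double-checking that the transaction settings from step 2 actually took effect in hive-site.xml on every node, and that the metastore was restarted afterwards. A minimal sketch of the properties Hive transactions need (the property names are standard Hive settings; the values are common starting points, not taken from the question):

    ```xml
    <!-- hive-site.xml: settings required for Hive ACID / streaming ingest.
         Tune the compactor worker count for your cluster size. -->
    <property>
      <name>hive.support.concurrency</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
    </property>
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>1</value>
    </property>
    <property>
      <name>hive.enforce.bucketing</name>
      <value>true</value>
    </property>
    ```

    Streaming ingest additionally requires the target table (table_mqtt here) to be bucketed, stored as ORC, and created with transactional=true.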

    Below is my pom:

    <dependencies>
    <dependency> 
       <groupId>joda-time</groupId> 
       <artifactId>joda-time</artifactId> 
       <version>2.9.9</version> 
    </dependency> 
    <dependency> 
       <groupId>org.apache.storm</groupId> 
       <artifactId>storm-core</artifactId> 
       <version>1.0.1</version> 
       <scope>provided</scope> 
    </dependency>
     <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hive</artifactId>
    <version>1.0.1</version>
    </dependency>
    <dependency>
      <groupId>org.eclipse.paho</groupId>
      <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
      <version>1.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.3.3</version>
    </dependency>
     <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.2.0</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.2.0</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    </dependencies>
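
    One thing worth checking in this pom: storm-hive 1.0.1 pulls in its own (older) Hive client libraries, while the worker's storm-core build (1.0.1.2.5.3.0-37 in the trace) is from HDP 2.5.x, which ships Hive 1.2.1. A mismatch between the hcatalog-streaming client bundled into stormjar.jar and the metastore server is a plausible cause of a bare TTransportException. A hypothetical override, assuming an HDP 2.5.x cluster (the 1.2.1 version is an assumption and must be matched to your cluster's actual Hive version):

    ```xml
    <!-- Hypothetical: pin the Hive streaming client to the cluster's Hive
         version (assumed 1.2.1 for HDP 2.5.x; verify against your cluster). -->
    <dependency>
        <groupId>org.apache.hive.hcatalog</groupId>
        <artifactId>hive-hcatalog-streaming</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-metastore</artifactId>
        <version>1.2.1</version>
    </dependency>
    ```

    The hadoop-client/hadoop-hdfs 2.2.0 entries are similarly older than the Hadoop that HDP 2.5.x ships and may deserve the same alignment.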
    

    Can anyone help me with suggestions?

0 Answers:

No answers