java.io.IOException: Failed to open thrift connection to Cassandra at 127.0.0.1:9160

Asked: 2015-09-28 01:25:30

Tags: cassandra apache-spark

I get the error below when calling any method on a DataFrame, for example DataFrame.show(). It only happens when the call is made from Tomcat; when I run the same code from a test case, it works fine.

Exception trace:

    attributes: keyspace_name, durable_writes, strategy_class, strategy_options
    2015-09-27 17:21:52 [http-apr-8080-exec-2] INFO  c.d.d.c.Cluster - New Cassandra host /127.0.0.1:9042 added
    2015-09-27 17:21:52 [http-apr-8080-exec-2] INFO  c.d.s.c.c.CassandraConnector - Connected to Cassandra cluster: Test Cluster
    2015-09-27 17:21:52 [http-apr-8080-exec-2] ERROR c.s.j.s.c.ContainerResponse - The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
    java.io.IOException: Failed to open thrift connection to Cassandra at 127.0.0.1:9160
        at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:139) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:145) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at com.datastax.spark.connector.cql.CassandraConnector.withCassandraClientDo(CassandraConnector.scala:151) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner.partitions(CassandraRDDPartitioner.scala:131) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:120) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) ~[spark-core_2.10-1.3.1.jar:1.3.1]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) ~[spark-core_2.10-1.3.1.jar:1.3.1]
        at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.5.jar:na]
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217) ~[spark-core_2.10-1.3.1.jar:1.3.1]
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32) ~[spark-core_2.10-1.3.1.jar:1.3.1]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219) ~[spark-core_2.10-1.3.1.jar:1.3.1]
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217) ~[spark-core_2.10-1.3.1.jar:1.3.1]

Additional trace:

    Caused by: java.lang.NoSuchMethodError: org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(Ljava/lang/String;I)Lorg/apache/thrift/transport/TTransport;
        at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createThriftClient(CassandraConnectionFactory.scala:41) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        at com.datastax.spark.connector.cql.CassandraConnector.createThriftClient(CassandraConnector.scala:134) ~[spark-cassandra-connector_2.10-1.3.0-M1.jar:1.3.0-M1]
        ... 103 common frames omitted
    2015-09-27 17:21:52 [http-apr-8080-exec-2] DEBUG c.p.w.j.WebApplicationWrapper -
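A NoSuchMethodError like the one in the Caused-by trace generally means the class that was loaded at runtime (here org.apache.cassandra.thrift.TFramedTransportFactory) came from a different jar version than the one the connector was compiled against, which is common in Tomcat deployments where the webapp classpath differs from the test classpath. As a minimal diagnostic sketch (the class WhichJar and its method are hypothetical names, not part of any library), one can ask the JVM which jar actually supplied a class:

```java
import java.security.CodeSource;

// Sketch: report which jar a class was loaded from, to spot duplicate or
// mismatched versions on the classpath. WhichJar is a hypothetical helper.
public class WhichJar {
    static String locate(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            // Bootstrap (JDK) classes have no CodeSource.
            return src == null ? "bootstrap classloader (JDK class)"
                               : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // In the failing Tomcat deployment one would check the class
        // named in the Caused-by trace:
        System.out.println(
            locate("org.apache.cassandra.thrift.TFramedTransportFactory"));
    }
}
```

If the printed jar path is not the Cassandra version the connector expects, the classpath in Tomcat (e.g. WEB-INF/lib versus shared libs) is the place to look.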

0 Answers:

No answers yet.