Error when trying to connect to a Cassandra database from Spark Streaming

Date: 2016-05-25 12:48:22

Tags: java apache-spark cassandra apache-kafka spark-streaming

I am working on a project that uses Spark Streaming, Apache Kafka and Cassandra, with the streaming-kafka integration. In Kafka I have a producer sending data with this configuration:

props.put("metadata.broker.list", KafkaProperties.ZOOKEEPER);
props.put("bootstrap.servers", KafkaProperties.SERVER);
props.put("client.id", "DemoProducer");

where ZOOKEEPER = localhost:2181 and SERVER = localhost:9092.
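For completeness, here is the producer configuration as a self-contained sketch. The KafkaProperties class is not shown in the question, so its constants are inlined here as assumptions:

```java
import java.util.Properties;

// Minimal sketch of the producer configuration from the question.
// ZOOKEEPER and SERVER are assumed constants mirroring the (not shown)
// KafkaProperties class.
class ProducerConfigSketch {
    static final String ZOOKEEPER = "localhost:2181";
    static final String SERVER = "localhost:9092";

    static Properties buildProducerProps() {
        Properties props = new Properties();
        // Note: "metadata.broker.list" is the legacy producer key and normally
        // points at brokers, not ZooKeeper; reproduced here as in the question.
        props.put("metadata.broker.list", ZOOKEEPER);
        props.put("bootstrap.servers", SERVER);
        props.put("client.id", "DemoProducer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProducerProps().getProperty("bootstrap.servers"));
    }
}
```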

Once I send the data, I can receive and consume it with Spark. My Spark configuration is:

SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
sparkConf.set("spark.cassandra.connection.host", "localhost");
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

After that, I try to store this data in the Cassandra database. But when I try to open a session with:

CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
Session session = connector.openSession();

I get the following error:

Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:182)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:161)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:70)
at org.kakfa.spark.ConsumerData.main(ConsumerData.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

For Cassandra, I use the default configuration:

start_native_transport: true
native_transport_port: 9042
- seeds: "127.0.0.1"
cluster_name: 'Test Cluster'
rpc_address: localhost
rpc_port: 9160
start_rpc: true

I can connect to Cassandra from the command line with cqlsh localhost, which greets me with:

Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh>

I also ran nodetool status, which shows me:

http://pastebin.com/ZQ5YyDyB

To run Cassandra, I invoke bin/cassandra -f.

What I am trying to run is:

try (Session session = connector.openSession()) {
        System.out.println("dentro del try");
        session.execute("DROP KEYSPACE IF EXISTS test");
        System.out.println("dentro del try - 1");
        session.execute("CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
        System.out.println("dentro del try - 2");
        session.execute("CREATE TABLE test.users (id TEXT PRIMARY KEY, name TEXT)");
        System.out.println("dentro del try - 3");
    }
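To eventually write the consumed records into that table, the Java connector maps a bean to the table columns. Below is a minimal sketch of such a bean for test.users (id TEXT, name TEXT); the class name and fields are illustrative, not taken from the question:

```java
import java.io.Serializable;

// Hypothetical bean matching the test.users table (id TEXT, name TEXT).
class User implements Serializable {
    private String id;
    private String name;

    public User() {}  // no-arg constructor required for row mapping
    public User(String id, String name) { this.id = id; this.name = name; }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        System.out.println(new User("1", "alice").getName());
    }
}
```

With spark-cassandra-connector-java on the classpath, an RDD of such beans can then be saved with `javaFunctions(rdd).writerBuilder("test", "users", mapToRow(User.class)).saveToCassandra();`.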

My pom.xml file looks like this:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.6.0-M1</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.6.0-M2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>
    <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.1.0-alpha2</version>
    </dependency>

    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20160212</version>
    </dependency>
</dependencies>

I don't know why I can't connect to Cassandra from Spark. Is the configuration wrong, or am I doing something else wrong?

Thanks!

1 Answer:

Answer 0 (score: 0)


com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces

That error indicates an old driver being used against a newer Cassandra version. Looking at the POM file, the spark-cassandra-connector dependencies are declared twice: once at version 1.6.0-M2 (GOOD) and once at 1.1.0-alpha2 (too old).

Remove the references to the old 1.1.0-alpha2 dependencies from your configuration:

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.1.0-alpha2</version>
</dependency>
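After the removal, the POM should keep only one version of each connector artifact — the 1.6 line already present in the question:

```xml
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.6.0-M2</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.6.0-M1</version>
</dependency>
```

Running `mvn dependency:tree -Dincludes=com.datastax.spark` confirms which connector version actually ends up on the classpath.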