I am trying to create a consumer-producer application.
The application's producer will generate some data on a specific topic. The consumer will consume this data from the same topic, process it with the Spark API, and store it in a Cassandra table.
The incoming data arrives in the following string format:
100 = NO | 101 = III | 102 = 0.0771387731911 | 103 = -0.7076915761 100 = NO | 101 = AAA | 102 = 0.8961325446464 | 103 = -0.5465463154
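For reference, each record appears to be a pipe-delimited list of `key = value` fields. A minimal, self-contained sketch of parsing one such record (field names and structure inferred from the sample above):

```python
def parse_record(line):
    """Split a pipe-delimited record like '100 = NO | 101 = III | ...'
    into a list of (key, value) string pairs."""
    pairs = []
    for field in line.split("|"):
        key, _, value = field.partition("=")
        pairs.append((key.strip(), value.strip()))
    return pairs

record = "100 = NO | 101 = III | 102 = 0.0771387731911 | 103 = -0.7076915761"
print(parse_record(record))
# [('100', 'NO'), ('101', 'III'), ('102', '0.0771387731911'), ('103', '-0.7076915761')]
```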
I created the consumer in the following way:
from kafka import KafkaConsumer
from StringIO import StringIO
import pandas as pd
from cassandra.cluster import Cluster
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

def main():
    sc = SparkContext(appName="StreamingContext")
    ssc = StreamingContext(sc, 3)
    kafka_stream = KafkaUtils.createStream(ssc, "localhost:2181", "sample-kafka-app", {"NO-topic": 1})
    raw = kafka_stream.flatMap(lambda kafkaS: [kafkaS])
    clean = raw.map(lambda xs: xs[1].split("|"))
    my_row = clean.map(lambda x: {
        "pk": "uuid()",
        "a": x[0],
        "b": x[1],
        "c": x[2],
        "d": x[3],
    })
    my_row.saveToCassandra("users", "data")
    ssc.start()
    ssc.awaitTermination()

if __name__ == "__main__":
    main()
The Cassandra table structure:
cqlsh:users> select * from data;
pk | a | b | c | d
----+---+---+---+---
CREATE TABLE users.data (
pk uuid PRIMARY KEY,
a text,
b text,
c text,
d text
)
I am facing the following error:
Traceback (most recent call last):
File "consumer_no.py", line 84, in <module>
main()
File "consumer_no.py", line 53, in main
my_row.saveToCassandra("users", "data")
AttributeError: 'TransformedDStream' object has no attribute 'saveToCassandra'
17/04/04 14:29:22 INFO SparkContext: Invoking stop() from shutdown hook
Am I implementing the setup described above in the right way? If not, please suggest how to achieve it; if yes, what is wrong or missing in the code above?
Answer 0 (score: 0)
Instead of trying to save the TransformedDStream directly to Cassandra, you should save each RDD from that DStream to Cassandra.
Your code should work if you do the following:
my_row.foreachRDD(lambda x: x.saveToCassandra("users", "data"))
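Note that `saveToCassandra` on an RDD is provided by a Spark-Cassandra connector package (such as `pyspark-cassandra`), not by stock pyspark. If that package is not available, a hedged alternative sketch writes each RDD with the Python `cassandra-driver` inside `foreachRDD`; the host, keyspace, and table names below are taken from the question, and `row_from_fields` also replaces the literal string `"uuid()"` from the question with a real client-side UUID:

```python
import uuid

def row_from_fields(fields):
    # Build one row dict from a raw record string; generate the primary key
    # on the client instead of passing the literal string "uuid()".
    parts = [f.strip() for f in fields.split("|")]
    return {"pk": uuid.uuid4(), "a": parts[0], "b": parts[1],
            "c": parts[2], "d": parts[3]}

def save_partition(rows):
    # Runs on the executors: open one session per partition and insert
    # each row. Keyspace "users" and table "data" come from the question.
    from cassandra.cluster import Cluster  # third-party driver, assumed installed
    session = Cluster(["localhost"]).connect("users")
    insert = session.prepare(
        "INSERT INTO data (pk, a, b, c, d) VALUES (?, ?, ?, ?, ?)")
    for r in rows:
        session.execute(insert, (r["pk"], r["a"], r["b"], r["c"], r["d"]))

# In main(), instead of my_row.saveToCassandra("users", "data"):
# my_row = kafka_stream.map(lambda kv: row_from_fields(kv[1]))
# my_row.foreachRDD(lambda rdd: rdd.foreachPartition(save_partition))
```

Opening one session per partition (rather than per record) keeps the connection overhead bounded while still running the writes on the executors rather than the driver.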