I am running a Python script to collect data from news providers, and I hook that script into the flume.conf file as the source command:
newsAgent.sources = r1
newsAgent.sinks = spark
newsAgent.channels = MemChannel
# Describe/configure the source
newsAgent.sources.r1.type = exec
newsAgent.sources.r1.command = python path_to/data_collector.py
# Describe the sink
newsAgent.sinks.spark.type = avro
newsAgent.sinks.spark.channel = memoryChannel
newsAgent.sinks.spark.hostname = localhost
newsAgent.sinks.spark.port = 4040
# Use a channel which buffers events in memory
newsAgent.channels.MemChannel.type = memory
newsAgent.channels.MemChannel.capacity = 10000
newsAgent.channels.MemChannel.transactionCapacity = 100
# Bind the source and sink to the channel
newsAgent.sources.r1.channels = MemChannel
newsAgent.sinks.spark.channel = MemChannel
The Python script runs fine in isolation and I can see the JSON data being printed. But when I execute it through Flume and sink the data to Spark, I get the warning messages below.
18/08/04 07:36:20 WARN HttpParser: Illegal character 0x0 in state=START
for buffer HeapByteBuffer@5ae61d8b[p=1,l=8192,c=8192,r=8191]= . {\x00<<<\x00\x00\x01\x00\x00\x00\x06\x00\x00\x000\x86\xAa\xDa\xE2\xC4T...ing town", "sum>>>}
18/08/04 07:36:20 WARN HttpParser: bad HTTP parsed: 400 Illegal character 0x0 for HttpChannelOverHttp@46691f53{r=0,c=false,a=IDLE,uri=null}
import json
import feedparser
from bs4 import BeautifulSoup

def process():
    # news_source maps a provider name to its feed URL
    for k, v in news_source.items():
        feeds = feedparser.parse(v)
        for e in feeds.entries:
            doc = json.dumps(
                {"news_provider": k, "title": e.title.strip(),
                 "summary": BeautifulSoup(e.summary, 'lxml').text.strip(),
                 "id": e.id.strip(),
                 "published": e.published if e.has_key('published') else None})
            print("%s" % doc)
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

def func():
    sc = SparkContext(master="local[*]", appName="App")
    ssc = StreamingContext(sc, 300)
    # receive events from the Flume Avro sink and decode each event body as JSON
    flume_strm = FlumeUtils.createStream(ssc, "localhost", 9999)
    lines = flume_strm.map(lambda v: json.loads(v[1]))
    lines.pprint()
    ssc.start()
    ssc.awaitTermination()
bin/flume-ng agent --conf conf --conf-file libexec/conf/test.conf --name Agent -Dflume.root.logger=INFO,console
spark-submit --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 path_to/streaming_script.py
I cannot get rid of these warning messages, and I expect the same JSON data to be printed in the Spark logs via pprint(), so that I can process it later.
Am I missing any specific configuration for reading the streamed content? Do I need to specify any particular encoder?
Any help is appreciated.
Answer 0 (score: 0)
I must have watched the same tutorial as you. I tried many different options, most of them without success, but I found a workaround: use the exec source in flume.conf and call the script exactly the way you do. However, in your Python script, write the data to a file, and then 'cat' that file just before the script (data_collector.py) finishes.
I believe this is because the exec source expects the data to be 'streamed' to it, and merely printing the output does not work.
My setup is very similar to yours:
stream.py (logic removed for readability):
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

if __name__ == "__main__":
    sc = SparkContext(appName="test")
    ssc = StreamingContext(sc, 30)
    stream = FlumeUtils.createStream(ssc, "127.0.0.1", 55555)
    stream.pprint()
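The snippet ends at pprint() because the rest of the logic was removed; to actually run it, the streaming context still needs to be started and kept alive, for example:
    ssc.start()
    ssc.awaitTermination()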
And here is my data_collector.py (note the 'cat' command on the last line):
#! /usr/bin/python
import requests
import random

class RandResp():
    def __init__(self):
        self.url = "https://swapi.co/api/people/"
        self.rand = str(random.randint(0, 17))
        self.r = requests.get(self.url + self.rand)

    def get_r(self):
        return(self.r.text)

if __name__ == "__main__":
    import os
    with open("exec.txt", "w") as file_in:
        file_in.write(RandResp().get_r())
    os.system("cat exec.txt")
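The final cat is what actually hands the data to the Flume exec source; the file is only a staging step. If you prefer to stay in Python, a hypothetical variant (simplified payload, same write-then-replay idea) could re-emit the staged file on stdout itself:
#! /usr/bin/python
# Hypothetical variant of data_collector.py: same staging file, but replayed
# on stdout from Python instead of shelling out to "cat".
import json
import sys

if __name__ == "__main__":
    record = json.dumps({"title": "example", "summary": "placeholder"})  # stand-in for the real payload
    with open("exec.txt", "w") as staging:
        staging.write(record + "\n")
    with open("exec.txt") as staging:
        sys.stdout.write(staging.read())  # the exec source reads this output
    sys.stdout.flush()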
And here is my flume.conf:
# list sources, sinks and channels in the agent
agent.sources = tail-file
agent.channels = c1
agent.sinks=avro-sink
# define the flow
agent.sources.tail-file.channels = c1
agent.sinks.avro-sink.channel = c1
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
# define source and sink
agent.sources.tail-file.type = exec
agent.sources.tail-file.command = python /home/james/Desktop/testing/data_collector.py
agent.sources.tail-file.channels = c1
agent.sinks.avro-sink.type = avro
agent.sinks.avro-sink.hostname = 127.0.0.1
agent.sinks.avro-sink.port = 55555
So basically, in my data_collector.py I just run whatever logic is needed, write the result to a file called exec.txt, and then immediately 'cat' that file. It works... good luck!
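For reference, assuming this config is saved as flume.conf and the streaming script as stream.py (paths below are placeholders), they can be launched the same way as in your question, just with this setup's agent name:
bin/flume-ng agent --conf conf --conf-file path_to/flume.conf --name agent -Dflume.root.logger=INFO,console
spark-submit --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 path_to/stream.py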