Streaming Twitter data into an S3 bucket using Firehose

Date: 2020-09-22 22:28:46

Tags: python amazon-s3 twitter boto3 amazon-kinesis-firehose

I am trying to stream data from Twitter into an AWS S3 bucket. The good news is that I can get the data to stream into my bucket, but it arrives in chunks of roughly 20 KB (I think this may be due to some Firehose setting), and it is not saved as JSON even though I parse each message as JSON in my Python code with json.loads. Rather than being saved as JSON, the objects in my S3 bucket have no file extension and names that are long strings of alphanumeric characters. I think it may have something to do with the parameters used in client.put_record().

Any help is greatly appreciated!

Please find my code below, which was taken from GitHub here:


from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
import json
import boto3
import time


#Variables that contain the user credentials to access the Twitter API
consumer_key = "MY_CONSUMER_KEY"
consumer_secret = "MY_CONSUMER_SECRET"
access_token = "MY_ACCESS_TOKEN"
access_token_secret = "MY_SECRET_ACCESS_TOKEN"


#This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):

    def on_data(self, data):
        tweet = json.loads(data)
        try:
            if 'extended_tweet' in tweet.keys():
                #print (tweet['text'])
                message_lst = [str(tweet['id']),
                       str(tweet['user']['name']),
                       str(tweet['user']['screen_name']),
                       tweet['extended_tweet']['full_text'],
                       str(tweet['user']['followers_count']),
                       str(tweet['user']['location']),
                       str(tweet['geo']),
                       str(tweet['created_at']),
                       '\n'
                       ]
                message = '\t'.join(message_lst)
                print(message)
                client.put_record(
                    DeliveryStreamName=delivery_stream,
                    Record={'Data': message}
                )
            elif 'text' in tweet.keys():
                #print (tweet['text'])
                message_lst = [str(tweet['id']),
                       str(tweet['user']['name']),
                       str(tweet['user']['screen_name']),
                       tweet['text'].replace('\n',' ').replace('\r',' '),
                       str(tweet['user']['followers_count']),
                       str(tweet['user']['location']),
                       str(tweet['geo']),
                       str(tweet['created_at']),
                       '\n'
                       ]
                message = '\t'.join(message_lst)
                print(message)
                client.put_record(
                    DeliveryStreamName=delivery_stream,
                    Record={'Data': message}
                )
        except Exception as e:
            print(e)
        return True

    def on_error(self, status):
        print (status)
        
        
        
        
        
if __name__ == '__main__':

    #This handles Twitter authentication and the connection to the Twitter Streaming API
    listener = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)

    #tweets = Table('tweets_ft',connection=conn)
    client = boto3.client('firehose', 
                          region_name='us-east-1',
                          aws_access_key_id='MY ACCESS KEY',
                          aws_secret_access_key='MY SECRET KEY' 
                          )

    delivery_stream = 'my_firehose'
    #The stream.filter call below filters the Twitter stream to capture tweets matching the given keywords
    #stream.filter(track=['trump'], stall_warnings=True)
    while True:
        try:
            print('Twitter streaming...')
            stream = Stream(auth, listener)
            stream.filter(track=['brexit'], languages=['en'], stall_warnings=True)
        except Exception as e:
            print(e)
            print('Disconnected...')
            time.sleep(5)
            continue   
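As an aside on the JSON question: Firehose concatenates records into an S3 object byte-for-byte as they are sent, so to end up with valid JSON in S3 each record needs to be a newline-terminated JSON string rather than a tab-separated line. A minimal sketch of that serialization, reusing the field names from the code above (the put_record call itself would be unchanged):

```python
import json

def tweet_to_record(tweet):
    """Serialize selected tweet fields as one newline-terminated JSON line.

    Firehose concatenates records exactly as sent, so the trailing
    newline is what separates objects in the resulting S3 file.
    """
    payload = {
        'id': tweet['id'],
        'name': tweet['user']['name'],
        'screen_name': tweet['user']['screen_name'],
        # prefer the extended tweet's full text when present
        'text': tweet.get('extended_tweet', {}).get('full_text',
                                                    tweet.get('text', '')),
        'followers_count': tweet['user']['followers_count'],
        'location': tweet['user']['location'],
        'geo': tweet['geo'],
        'created_at': tweet['created_at'],
    }
    return json.dumps(payload) + '\n'

# Inside on_data this would replace the '\t'.join(...) message:
# client.put_record(DeliveryStreamName=delivery_stream,
#                   Record={'Data': tweet_to_record(tweet)})
```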

2 Answers:

Answer 0: (score: 0)

S3 compression may have been enabled for the Firehose. If you want the raw JSON data stored in your bucket, make sure compression is disabled:

[screenshot: Firehose delivery stream S3 compression setting]

You may also have transformations applied on the Firehose that encode or otherwise transform the JSON messages into another format.
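If you want to verify the compression setting programmatically rather than in the console, the delivery stream's destination description includes a CompressionFormat field. A small sketch, with the boto3 call shown in comments and only the response handling as code (the stream name is a placeholder):

```python
def compression_format(describe_response):
    """Pull the S3 CompressionFormat out of a DescribeDeliveryStream response.

    'UNCOMPRESSED' means raw records land in S3 as-is; values such as
    'GZIP', 'ZIP', or 'Snappy' mean the objects are compressed before
    delivery, which would explain unreadable-looking file contents.
    """
    destination = describe_response['DeliveryStreamDescription']['Destinations'][0]
    return destination['ExtendedS3DestinationDescription']['CompressionFormat']

# With a live client (AWS credentials configured), this would be used as:
# import boto3
# client = boto3.client('firehose', region_name='us-east-1')
# response = client.describe_delivery_stream(DeliveryStreamName='my_firehose')
# print(compression_format(response))
```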

Answer 1: (score: 0)

So it turns out the files were in a readable format after all; I just had to open the files from S3 with Firefox and I could see their contents. The file-size issue was caused by the Firehose buffer settings: I had them set to the lowest values, which is why the files were being delivered in such small chunks.
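For reference, the chunk-size behaviour described above is controlled by the destination's BufferingHints: Firehose flushes an object to S3 when either the size or the interval threshold is reached, whichever comes first. A sketch of the relevant configuration fragment as it would appear in a create_delivery_stream or update_destination call (the bucket ARN is a placeholder; the values are the documented minimums, which produce the small, frequent files seen here):

```python
# BufferingHints fragment for an ExtendedS3DestinationConfiguration.
# SizeInMBs ranges 1-128 and IntervalInSeconds 60-900: with both set
# to the minimum, a low-volume stream flushes a small object roughly
# every minute, which matches the ~20 KB files observed in S3.
extended_s3_destination = {
    'BucketARN': 'arn:aws:s3:::my-tweet-bucket',  # placeholder bucket
    'BufferingHints': {
        'SizeInMBs': 1,           # flush after 1 MB of data ...
        'IntervalInSeconds': 60,  # ... or after 60 seconds
    },
    'CompressionFormat': 'UNCOMPRESSED',
}

# Raising SizeInMBs / IntervalInSeconds yields fewer, larger objects.
```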