I am a Python beginner, and I am finding it hard to figure out the simple Tweepy Streaming API.
Basically, I am trying to do the following:
Stream tweets in Portuguese.
Display the sentiment of each tweet.
I am not able to stream tweets by language. Can someone help me figure out what I am doing wrong?
import tweepy
from textblob import TextBlob

### I have the keys updated on those variables
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
API = tweepy.API(auth)

class MyStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        print("--------------------")
        print(status.text)
        analysis = TextBlob(status.text)
        if analysis.sentiment.polarity > 0:
            print("sentiment is Positive")
        elif analysis.sentiment.polarity == 0:
            print("sentiment is Neutral")
        else:
            print("sentiment is Negative")
        print("--------------------\n")

myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth=API.auth, listener=myStreamListener, tweet_mode='extended', lang='pt')
myStream.filter(track=['trump'])
Sample output is:
RT @SAGEOceanTweets: Innovation Hack Week 2019: @nesta_uk is exploring the possibility of holding a hack week in 2019, focused on state-of-�
But it stops after a few tweets and I get this error:
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode
character '\U0001f4ca' in position 76: character maps to <undefined>
[Finished in 85.488s]
Also, these tweets are not in Portuguese. How can I stream continuously, get tweets in Portuguese, and run sentiment analysis on them?
Could you please also guide me on how to stream tweets by language and then analyze the sentiment with TextBlob.
Thanks
Answer 0 (score: 0)
The code below can help you achieve your goal:
It collects data from Twitter and analyzes the sentiment. However, if you want to develop sentiment analysis for Portuguese, you should use word embeddings from a Word2Vec model trained on the Portuguese Wikipedia; that is the only way to do it reliably. NLTK and Gensim work better for English, and NLTK's support for Portuguese is very limited.
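As a rough illustration of that suggestion (a minimal sketch, not part of the original answer's script; 'pt_wiki_word2vec.txt' is a placeholder for whichever pre-trained Portuguese Word2Vec file you actually download), loading the vectors with Gensim looks like this:

# Minimal sketch: loading pre-trained Portuguese word embeddings with Gensim.
# 'pt_wiki_word2vec.txt' is a placeholder file name, not a real download.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format('pt_wiki_word2vec.txt')

# Each word maps to a dense vector; similar words get similar vectors.
print(vectors['bom'][:5])             # first dimensions of "bom" (good)
print(vectors.most_similar('feliz'))  # nearest neighbours of "feliz" (happy)

With vectors like these you could average each tweet's word vectors into features for a Portuguese sentiment classifier. The data-collection script itself: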
from nltk import sent_tokenize, word_tokenize, pos_tag
from nltk.stem import WordNetLemmatizer
import nltk
import numpy as np
import re

import tweepy
from tweepy import OAuthHandler
from tweepy import Stream
from tweepy.streaming import StreamListener

consumer_key = '12345'
consumer_secret = '12345'
access_token = '123-12345'
access_secret = '12345'

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)

# Collect tweets through the REST search API and strip URLs from each one
number_tweets = 100
data = []
for status in tweepy.Cursor(api.search, q="trump").items(number_tweets):
    try:
        URLless_string = re.sub(r'\w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))*', '', status.text)
        data.append(URLless_string)
    except Exception:
        pass

# Tokenize, lemmatize, and part-of-speech tag the collected text
# (the bare expressions below display their values in a notebook/REPL)
lemmatizer = WordNetLemmatizer()
text = data
sentences = sent_tokenize(str(text))
sentences2 = sentences
sentences2
tokens = word_tokenize(str(text))
tokens = [lemmatizer.lemmatize(token) for token in tokens]
len(tokens)
tagged_tokens = pos_tag(tokens)
tagged_tokens
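Finally, to come back to the streaming part of your question: in Tweepy 3.x the language restriction goes on Stream.filter() through its languages parameter; the lang keyword on the Stream constructor in your code is not what filters the stream. A minimal sketch, reusing the auth handler defined above and TextBlob as in your question:

import tweepy
from textblob import TextBlob

class PortugueseListener(tweepy.StreamListener):
    def on_status(self, status):
        # Use the full text when the tweet is longer than 140 characters
        if hasattr(status, 'extended_tweet'):
            text = status.extended_tweet['full_text']
        else:
            text = status.text
        polarity = TextBlob(text).sentiment.polarity
        # Encoding explicitly avoids the UnicodeEncodeError from the question,
        # which comes from printing emoji to a Windows (cp1252) console;
        # setting PYTHONIOENCODING=utf-8 before running is an alternative fix.
        print(text.encode('utf-8', errors='replace'))
        print('polarity:', polarity)

    def on_error(self, status_code):
        return False  # disconnect on errors such as 420 (rate limited)

stream = tweepy.Stream(auth=auth, listener=PortugueseListener())
stream.filter(track=['trump'], languages=['pt'])

Note that TextBlob's built-in sentiment analyzer is trained on English, so polarity scores on Portuguese text will be rough; that is exactly why the Portuguese word embeddings above are recommended for anything serious.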