How to get all tweets for a hashtag with tweepy?

Time: 2017-07-06 12:16:10

Tags: python twitter tweepy tweetstream

I'm trying to fetch every public tweet that contains a given hashtag, but my code never gets past 299 tweets.

I would also like to fetch only tweets from a specific time range, May 2015 to July 2016. Is there any way to do that in the main flow, or should I write separate code for it?

Here is my code:

# if this is the first time, create a new array which
# will store the max id of the tweets for each keyword
if not os.path.isfile("max_ids.npy"):
    max_ids = np.empty(len(keywords))
    # every value is initialized as -1 so the first run starts from the beginning
    max_ids.fill(-1)
else:
    max_ids = np.load("max_ids.npy")  # loads the previous max ids

# if any new keywords were added, extend the max_ids array so it
# has an entry for every keyword
if len(keywords) > len(max_ids):
    new_indexes = np.empty(len(keywords) - len(max_ids))
    new_indexes.fill(-1)
    max_ids = np.append(arr=max_ids, values=new_indexes)

count = 0
for i in range(len(keywords)):
    since_date="2015-01-01"
    sinceId = None
    tweetCount = 0
    maxTweets = 5000000000000000000000  # maximum tweets to find per keyword
    tweetsPerQry = 100
    searchQuery = "#{0}".format(keywords[i])
    while tweetCount < maxTweets:
        if max_ids[i] < 0:
            if not sinceId:
                new_tweets = api.search(q=searchQuery, count=tweetsPerQry)
            else:
                new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
                                        since_id=sinceId)
        else:
            if not sinceId:
                # bug fix: use this keyword's max id (as an int), not the whole array
                new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
                                        max_id=str(int(max_ids[i]) - 1))
            else:
                new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
                                        max_id=str(int(max_ids[i]) - 1),
                                        since_id=sinceId)
        if not new_tweets:
            print("Keyword: {0}      No more tweets found".format(searchQuery))
            break
        for tweet in new_tweets:
            count += 1
            print(count)

            file_write.write(
                       .
                       .
                       .
                         )

            item = {
                .
                .
                .
                .
                .
            }

            # instead of using mongo's id for _id, using tweet's id
            raw_data = tweet._json
            raw_data["_id"] = tweet.id
            raw_data.pop("id", None)

            try:
                db["Tweets"].insert_one(item)
            except pymongo.errors.DuplicateKeyError as e:
                print("Already exists in 'Tweets' collection.")
            try:
                db["RawTweets"].insert_one(raw_data)
            except pymongo.errors.DuplicateKeyError as e:
                print("Already exists in 'RawTweets' collection.")

        tweetCount += len(new_tweets)
        print("Downloaded {0} tweets".format(tweetCount))
        max_ids[i] = new_tweets[-1].id

np.save(arr=max_ids, file="max_ids.npy")  # save so the next run continues mining where this one left off
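Stepping back from the API specifics, the while loop above implements the standard max_id walk-backwards pagination. The pattern can be sketched in isolation with a fake search function (`fake_search` and `TWEET_IDS` are hypothetical stand-ins, only there to make the walk-backwards logic visible and testable without the Twitter API):

```python
# Hypothetical stand-in for api.search: returns up to `count` tweet ids
# at or below max_id, newest first -- mimicking Twitter's ordering.
TWEET_IDS = list(range(1000, 1100))  # fake tweet ids, oldest..newest

def fake_search(count, max_id=None):
    pool = sorted(TWEET_IDS, reverse=True)  # newest first
    if max_id is not None:
        pool = [t for t in pool if t <= max_id]
    return pool[:count]

def collect_all(per_page=10):
    collected = []
    max_id = None
    while True:
        page = fake_search(count=per_page, max_id=max_id)
        if not page:
            break
        collected.extend(page)
        # next page: everything strictly older than the oldest tweet seen,
        # which is exactly what max_id=str(last_id - 1) does in the code above
        max_id = page[-1] - 1
    return collected

all_tweets = collect_all()
```

Each iteration moves the `max_id` cursor just below the oldest tweet already collected, so no tweet is fetched twice and the loop terminates when a page comes back empty.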

4 Answers:

Answer 0 (score: 2)

Take a look at this: https://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html

And try this:

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET)
api = tweepy.API(auth)

for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items():
    # Do something
    pass

In your case, since you have a maximum number of tweets you want to fetch, you can follow the linked tutorial and do:

import tweepy

MAX_TWEETS = 5000000000000000000000

auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET)
api = tweepy.API(auth)

for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(MAX_TWEETS):
    # Do something
    pass

If you only want tweets newer than a given ID, you can pass that parameter (since_id) as well.
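As for the question's May 2015 to July 2016 range: Twitter's search accepts `since:` and `until:` operators inside the query string itself, so the `q` argument for the Cursor call above can carry the date bounds. A minimal sketch of assembling such a query (pure string building; note the standard search API only indexes roughly the last 7 days, so very old ranges may return nothing regardless):

```python
def build_query(hashtag, since=None, until=None):
    # Assemble a Twitter search query; since:/until: are search
    # operators that go inside q itself, with dates as YYYY-MM-DD.
    parts = ['#{0}'.format(hashtag)]
    if since:
        parts.append('since:{0}'.format(since))
    if until:
        parts.append('until:{0}'.format(until))
    return ' '.join(parts)

q = build_query('python', since='2015-05-01', until='2016-07-31')
```

The resulting string can then be passed as `q=` to `api.search` or `tweepy.Cursor`.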

Answer 1 (score: 0)

Looking at the Twitter API documentation, it may only allow parsing about 300 tweets this way. I'd suggest forgetting the API and using a streaming request instead; the API is an implementation of requests with limits.

Answer 2 (score: 0)

Sorry, my comment got too long. :)

Sure :) Check this example: an advanced search for the #data keyword from May 2015 to July 2016 gives this URL: https://twitter.com/search?l=&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd

import requests

session = requests.session()
keyword = 'data'
date1 = '2015-05-01'
date2 = '2016-07-31'  # bug fix: quote the date so it is a string, not arithmetic
# bug fix: interpolate the variables instead of embedding their names in the literal
url = ('https://twitter.com/search?l=&q=%23{0}%20since%3A{1}%20until%3A{2}&src=typd'
       .format(keyword, date1, date2))
session.get(url, stream=True)  # bug fix: requests uses stream=, not streaming=

Now we have received all the requested tweets. You will probably run into the 'pagination' problem. A pagination URL looks like this:

https://twitter.com/i/search/timeline?vertical=news&q=%23data%20since%3A2015-05-01%20until%3A2016-07-31&src=typd&include_available_features=1&include_entities=1&max_position=TWEET-759522481271078912-759538448860581892-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&reset_error_state=false

You can probably plug in a random tweet id, or parse one first, or request some data from Twitter to get it. It can be done.

Use Chrome's Network tab to find all the required information :)
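The hand-built URL in the snippet above is easier to get right with `urllib.parse`, which handles the percent-encoding of `#`, `:`, and spaces for you (a sketch; the `twitter.com/search` endpoint and its `l`/`q`/`src` parameters are taken from the URL this answer captured):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_search_url(keyword, date1, date2):
    # Encode '#keyword since:date1 until:date2' as the q parameter;
    # urlencode percent-encodes '#' as %23 and ':' as %3A.
    query = '#{0} since:{1} until:{2}'.format(keyword, date1, date2)
    return 'https://twitter.com/search?' + urlencode(
        {'l': '', 'q': query, 'src': 'typd'})

url = build_search_url('data', '2015-05-01', '2016-07-31')
```

Round-tripping the result through `parse_qs` is a quick way to check the query survived the encoding intact.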

Answer 3 (score: 0)

This code works for me.

import tweepy
import pandas as pd
import os

#Twitter Access
auth = tweepy.OAuthHandler( 'xxx','xxx')
auth.set_access_token('xxx-xxx','xxx')
api = tweepy.API(auth,wait_on_rate_limit = True)

df = pd.DataFrame(columns=['text', 'source', 'url'])
msgs = []

for tweet in tweepy.Cursor(api.search, q='#bmw', rpp=100).items(10):
    msg = (tweet.text, tweet.source, tweet.source_url)
    msgs.append(msg)

# rebuild the frame from the collected rows, keeping the column names
df = pd.DataFrame(msgs, columns=['text', 'source', 'url'])