Mining 7 days of tweets for a single hashtag from the Twitter API with Python Tweepy

Date: 2016-06-24 19:53:25

Tags: python django orm celery tweepy

I'm mining 7 days of tweets from the Twitter API using Python, Tweepy, Django, Celery, and Django REST Framework.

I use Celery beat to send a request every minute and store the collected data in a PostgreSQL database through the Django ORM.
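A minimal sketch of what such a beat schedule can look like (Celery 3.x style settings entry; the schedule key name is my own, and the task names match the name= arguments used in tasks.py below):

# settings.py (sketch): run the tasks defined in tasks.py on a schedule.
# The 'task' values must match the name= arguments passed to @shared_task.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'get-tweets-every-minute': {
        'task': 'get_tweets',
        'schedule': timedelta(minutes=1),
    },
}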

To make sure the API doesn't keep returning the same 100 tweets on every call, I look up min(tweet.id) in the database and pass it as the max_id parameter before each new request.
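For reference, that lookup can also be pushed into the database instead of loading every row; a small sketch using Django's Min aggregation, equivalent to the list-comprehension version in tasks.py below:

from django.db.models import Min

# Smallest stored tweet_id -- same value as
# min([tweet.tweet_id for tweet in Tweet.objects.all()]), computed in SQL.
# Note: tweet_id is a CharField, so this is a string comparison in both cases.
max_id = Tweet.objects.aggregate(Min('tweet_id'))['tweet_id__min']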

The problem I've run into: once I have 7 days of tweets, how do I reset max_id?

models.py

from django.db import models


class Tweet(models.Model):
    tweet_id = models.CharField(
        max_length=200,
        unique=True,
        primary_key=True
    )
    tweet_date = models.DateTimeField()
    tweet_source = models.TextField()
    tweet_favorite_cnt = models.CharField(max_length=200)
    tweet_retweet_cnt = models.CharField(max_length=200)
    tweet_text = models.TextField()

    def __str__(self):
        return self.tweet_id + '  |  ' + str(self.tweet_date)

tasks.py

from datetime import datetime, timedelta

import tweepy
from celery import shared_task
from django.db import IntegrityError

from .models import Tweet  # adjust to your app's import path

# CONSUMER_KEY / CONSUMER_SECRET / ACCESS_TOKEN / ACCESS_TOKEN_SECRET
# are assumed to be defined elsewhere (e.g. in settings).
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

# Instantiate the Tweepy API client.
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)


@shared_task(name='cleanup')
def cleanup():
    """
    Check database for records older than 7 days.
    Delete them if they exist.
    """
    Tweet.objects.filter(tweet_date__lte=datetime.now() - timedelta(days=7)).delete()


@shared_task(name='get_tweets')
def get_tweets():
    """Get some tweets from the twitter api and store them to the db."""

    # Subtasks
    chain = cleanup.s()
    chain()

    # Check for the minimum tweet_id and set it as max_id.
    # This ensures the API call doesn't keep getting the same tweets.
    max_id = min([tweet.tweet_id for tweet in Tweet.objects.all()])

    # Make the call to the Twitter Search API.
    tweets = api.search(
        q='#python',
        max_id=max_id,
        count=100
    )

    # Store the collected data into lists.
    tweets_date = [tweet.created_at for tweet in tweets]
    tweets_id = [tweet.id for tweet in tweets]
    tweets_source = [tweet.source for tweet in tweets]
    tweets_favorite_cnt = [tweet.favorite_count for tweet in tweets]
    tweets_retweet_cnt = [tweet.retweet_count for tweet in tweets]
    tweets_text = [tweet.text for tweet in tweets]

    # Iterate over these lists and save the items as fields for new records in the database.
    for i, j, k, l, m, n in zip(
            tweets_id,
            tweets_date,
            tweets_source,
            tweets_favorite_cnt,
            tweets_retweet_cnt,
            tweets_text
    ):
        try:
            Tweet.objects.create(
                tweet_id=i,
                tweet_date=j,
                tweet_source=k,
                tweet_favorite_cnt=l,
                tweet_retweet_cnt=m,
                tweet_text=n,
            )
        except IntegrityError:
            pass

1 answer:

Answer 0 (score: 0)

Try this:

# Check for the minimum tweet_id and set it as max_id.
# This ensures the API call doesn't keep getting the same tweets.

date_partition = get_seven_day_partition()
# Since you're cutting off every seven days, you should know how
# to separate your weeks into seven-day sections.

max_id = min([tweet.tweet_id for tweet in Tweet.objects.all()
              if tweet.tweet_date > date_partition])

You don't go into much detail about how you're pulling these tweets, how you know when to stop at a specific day, or on what schedule this runs, so it's hard to recommend the right way to keep track of the dates.

What I can tell you is that, provided you set date_partition appropriately for your usage, this addition to the max_id assignment will correctly pick up tweets going back no more than 7 days.
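For illustration, one way to plug this in, assuming the cutoff is simply "now minus 7 days" (the same window the cleanup task uses) and using a database-side aggregate instead of the list comprehension above; this is a sketch rather than a drop-in replacement:

from datetime import datetime, timedelta

from django.db.models import Min

# Assumed cutoff: only consider tweets from the last 7 days when picking
# the next max_id, so an exhausted window effectively resets it.
date_partition = datetime.now() - timedelta(days=7)

max_id = (
    Tweet.objects
    .filter(tweet_date__gt=date_partition)
    .aggregate(Min('tweet_id'))['tweet_id__min']
)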