Tweepy: converting a list of tweets (JSON) to a DataFrame

Date: 2018-02-11 16:41:51

Tags: python json dataframe tweepy

I am trying to build a DataFrame from the list of tweets below, but none of the solutions I have found online have helped.

  1. When I try to save the search results to a JSON file, I get:

    'dict' object has no attribute '_json'

  2. Using the following code:

    def write_tweets(tweets, filename):
        ''' Function that appends tweets to a file. '''
        with open(filename, 'a') as f:
            for tweet in tweets:
                json.dump(tweet._json, f)
                f.write('\n')

    write_tweets(searched_tweets, "data.json")
    
    1. Trying to convert the results to a DataFrame also fails:

      DataSet['tweetID'] = [tweet.id for tweet in searched_tweets]

    2. My complete code is below; it returns searched_tweets, which is a list.

      import tweepy
      import pandas as pd
      import json
      
      df = pd.read_excel("dataNLP.xlsx")
      
      IDs = df["TweetID"].tolist()
      
      def load_api():
          ''' Function that loads the twitter API after authorizing
              the user. '''
          # API keys
          consumer_key = "--"
          consumer_secret = "--"
          access_token = "-----"
          access_token_secret = "-"
          #Pass our consumer key and consumer secret to Tweepy's user authentication handler
          auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
          #Pass our access token and access secret to Tweepy's user authentication handler
          auth.set_access_token(access_token, access_token_secret)
          # load the twitter API via tweepy
          return tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, parser=tweepy.parsers.JSONParser())
      
      #Connect
      api = load_api()
      
      #Creating a twitter API wrapper using tweepy
      i = 0
      # look up IDs in batches of 100, as per the Twitter API limit
      step = 100
      searched_tweets=[]
      cant_find_tweets_for_those_ids = []
      cant_find_tweets_for_those_ids_whole =[]
      while i <= len(IDs):
          for each_id in IDs[i:(i+step)]:
              try:
                  new_tweets = api.statuses_lookup(IDs[i:(i+step)])
                  print("found", len(new_tweets), "tweets")
                  searched_tweets.extend(new_tweets)
                  print("added", len(searched_tweets), "in searched_tweets")
                  i = i + step + 1
              except Exception as e:
                  cant_find_tweets_for_those_ids.append(each_id)
                  cant_find_tweets_for_those_ids_whole.extend(cant_find_tweets_for_those_ids)
      

      Sample IDs: 597576902212063232, 565586175864610817.

      The expected DataFrame should contain fields such as: id, text, user_location, hashtags, followers count, friends count, and favourite count. Can someone explain what I am doing wrong, or how I can get a usable DataFrame from the searched_tweets list of JSON objects?

      One element of the searched_tweets list:

      {'truncated': False, 'in_reply_to_user_id': 297535251, 'place': None, 'retweet_count': 0, 'created_at': 'Mon Feb 23 20:28:36 +0000 2015', 'in_reply_to_screen_name': 'OutworldDOTA2', 'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>', 'favorited': False, 'contributors': None, 'is_quote_status': False, 'geo': None, 'id': 569957017655226369, 'in_reply_to_status_id_str': '569956825057120256', 'in_reply_to_status_id': 569956825057120256, 'coordinates': None, 'in_reply_to_user_id_str': '297535251', 'id_str': '569957017655226369', 'lang': 'en', 'user': {'description': 'Founder, Online Abuse Prevention Initiative. v (gaming account: @grandma_kj)', 'default_profile': False, 'profile_sidebar_border_color': '181A1E', 'name': 'needlessly obscenity-laced', 'time_zone': 'Pacific Time (US & Canada)', 'profile_banner_url': 'link', 'screen_name': 'randileeharper', 'favourites_count': 66157, 'translator_type': 'regular', 'contributors_enabled': False, 'created_at': 'Sat Feb 23 07:27:19 +0000 2008', 'protected': False, 'notifications': False, 'profile_background_color': '1A1B1F', 'following': False, 'id_str': '13857342', 'location': 'Portland, OR', 'entities': {'description': {'urls': [{'url': 'link, 'expanded_url': 'link', 'indices': [45, 68], 'display_url': 'patreon.com/freebsdgirl'}]}, 'url': {'urls': [{'url': 'link': 'link', 'indices': [0, 23], 'display_url': 'blog.randi.io'}]}}, 'id': 13857342, 'utc_offset': -28800, 'has_extended_profile': True, 'profile_sidebar_fill_color': '252429', 'profile_image_url': 'link', 'friends_count': 787, 'verified': True, 'link': 'link', 'profile_background_image_url': 'link', 'profile_link_color': '2FC2EF', 'profile_text_color': '666666', 'is_translator': False, 'lang': 'en', 'geo_enabled': True, 'statuses_count': 123525, 'profile_image_url_link', 'default_profile_image': False, 'url': 'link', 'listed_count': 901, 'followers_count': 20638, 'follow_request_sent': False, 'profile_use_background_image': True, 'profile_background_tile': False, 'is_translation_enabled': False}, 'text': '@OutworldDOTA2 i\'m very entertained that all it takes is "155 IQ" for me to know precisely who is being discussed.', 'retweeted': False, 'entities': {'hashtags': [], 'urls': [], 'symbols': [], 'user_mentions': [{'id_str': '297535251', 'screen_name': 'OutworldDOTA2', 'name': 'Follow Your Leader', 'indices': [0, 14], 'id': 297535251}]}, 'favorite_count': 0}
      

2 answers:

Answer 0 (score: 0):

Thank you for taking the time. Both solutions produce the same error: 'list' object has no attribute '_json'. searched_tweets is a list of tweets. I would like to be able to save it in JSON format and then convert it to a DataFrame. The only thing that seemed to work is:

from pandas.io.json import json_normalize

data_nested = json_normalize(searched_tweets)
data = data_nested[["entities.hashtags","favorite_count","id","id_str","lang","retweet_count","retweeted","text","user.description","user.entities.description.urls","user.favourites_count","user.follow_request_sent","user.followers_count","user.following","user.friends_count","user.geo_enabled","user.has_extended_profile","user.id","user.id_str","user.is_translation_enabled","user.is_translator","user.lang","user.listed_count","user.location","user.name","user.notifications","user.translator_type","user.verified"]].copy()

I am not sure how to transform it and save it as a JSON file.
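If the goal is simply to persist those results, here is a minimal sketch (assuming the searched_tweets list of dictionaries and the data DataFrame built above; the file names are just placeholders):

import json
import pandas as pd

# write the raw list of tweet dictionaries as a single JSON array
with open("tweets_raw.json", "w") as f:
    json.dump(searched_tweets, f)

# or export the flattened DataFrame as line-delimited JSON
data.to_json("tweets_flat.json", orient="records", lines=True)

# and read it back into a DataFrame later
data_reloaded = pd.read_json("tweets_flat.json", orient="records", lines=True)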

Answer 1 (score: 0):

Your code:

for tweet in tweets:
    json.dump(tweet._json, f)

tweet._json returns a dictionary, hence the error message: because you load the API with parser=tweepy.parsers.JSONParser(), each tweet in your results is already a dictionary, so it has no _json attribute. Personally, working with plain dictionaries is how I prefer to do it anyway.

If you just want a list of tweets, declare a list and append each tweet dictionary to it.

tweet_list = []
for tweet in tweets:
    tweet_list.append(tweet._json)

You can then access tweet attributes by list index and dictionary key.

print(tweet_list[0].get("text"))
print(tweet_list[0].get("created_at")

If you want to save the list to a file, you can use pickle.

import pickle

def pickle_data(filename, tweet_list):
    with open(filename, "wb") as handle:
        pickle.dump(tweet_list, handle, protocol=pickle.HIGHEST_PROTOCOL)

When you want to load the file, you can load it straight back into a list of dictionaries.

def unpickle_data(filename):
    with open(filename, "rb") as handle:
        tweet_list = pickle.load(handle)
    return tweet_list
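For example (the file name here is only a placeholder):

pickle_data("tweets.pickle", tweet_list)
restored = unpickle_data("tweets.pickle")
print(len(restored), "tweets restored")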

Not sure if this is exactly what you are looking for, but I hope there is something here you can use.