Can't figure out how to output the data correctly

Time: 2018-11-07 10:24:53

Tags: python python-3.x

I'm a relative newcomer to Python, but I've somehow managed to build a scraper for Instagram. I'd now like to take it a step further and output the 5 most commonly used hashtags of an IG profile into my CSV output file.

Current output:

I've managed to isolate the 5 most used hashtags, but I get this result in the csv:

  

[('#striveforgreatness', 3), ('#jamesgang', 3), ('#thekidfromakron', 2), ('#togetherwecanchangetheworld', 1), ('#halloweenchronicles', 1)]

[screenshot: csv output]

Desired output:

What I'd ultimately like to end up with is 5 columns at the end of my .CSV that output the Xth most used value.

Something along these lines:

[screenshot: desired output]

I've been searching for a while and managed to isolate them individually, but I always end up with '('#thekidfromakron', 2)' as the output. I seem to be missing a piece of the puzzle :(.

This is what I'm working with at the moment:

import csv
import requests
from bs4 import BeautifulSoup
import json
import re
import time
from collections import Counter
ts = time.gmtime()


def get_csv_header(top_numb):
    fieldnames = ['USER', 'MEDIA COUNT', 'FOLLOWERCOUNT', 'TOTAL LIKES', 'TOTAL COMMENTS', 'ER', 'ER IN %', 'BIO', 'ALL CAPTION TEXT', 'HASHTAGS COUNTED', 'MOST COMMON HASHTAGS']
    return fieldnames


def write_csv_header(filename, headers):
    with open(filename, 'w', newline='') as f_out:
        writer = csv.DictWriter(f_out, fieldnames=headers)
        writer.writeheader()


def read_user_name(t_file):
    with open(t_file) as f:
        user_list = f.read().splitlines()
    return user_list

if __name__ == '__main__':

    # HERE YOU CAN SPECIFY YOUR USERLIST FILE NAME,
    # which contains a list of usernames. BY DEFAULT: <current working directory>/userlist.txt
    USER_FILE = 'userlist.txt'

    # HERE YOU CAN SPECIFY YOUR DATA FILE NAME (data.csv BY DEFAULT), where your final result goes
    DATA_FILE = 'users_with_er.csv'
    MAX_POST = 12  # MAX POSTS PER USER

    print('Starting the engagement calculations... Please wait until it finishes!')


    users = read_user_name(USER_FILE)
    """ Writing data to csv file """
    csv_headers = get_csv_header(MAX_POST)
    write_csv_header(DATA_FILE, csv_headers)

    for user in users:

        post_info = {'USER': user}
        url = 'https://www.instagram.com/' + user + '/'

        #for troubleshooting, un-comment the next two lines:
        #print(user)
        #print(url)

        # compute the timestamp first so it is available in the error branches below
        timestamp = time.strftime("%d-%m-%Y %H:%M:%S", ts)
        try:
            r = requests.get(url)
            if r.status_code != 200:
                print(timestamp, ' user {0} not found or page unavailable! Skipping...'.format(user))
                continue
            soup = BeautifulSoup(r.content, "html.parser")
            scripts = soup.find_all('script', type="text/javascript", text=re.compile('window._sharedData'))
            stringified_json = scripts[0].get_text().replace('window._sharedData = ', '')[:-1]

            j = json.loads(stringified_json)['entry_data']['ProfilePage'][0]
        except ValueError:
            print(timestamp, 'ValueError for username {0}...Skipping...'.format(user))
            continue
        except IndexError as error:
            # Output expected IndexErrors (e.g. the _sharedData script was not found).
            print(timestamp, error)
            continue
        if j['graphql']['user']['edge_followed_by']['count'] <=0:
            print(timestamp,'user {0} has no followers! Skipping...'.format(user))
            continue
        if j['graphql']['user']['edge_owner_to_timeline_media']['count'] <12:
            print(timestamp,'user {0} has less than 12 posts! Skipping...'.format(user))
            continue
        if j['graphql']['user']['is_private'] is True:
            print(timestamp,'user {0} has a private profile! Skipping...'.format(user))
            continue
        media_count = j['graphql']['user']['edge_owner_to_timeline_media']['count']
        accountname = j['graphql']['user']['username']
        followercount = j['graphql']['user']['edge_followed_by']['count']
        bio = j['graphql']['user']['biography']
        i = 0
        total_likes = 0
        total_comments = 0
        all_captiontext = ''
        while i <= 11:  # aggregate the 12 most recent posts
            total_likes += j['graphql']['user']['edge_owner_to_timeline_media']['edges'][i]['node']['edge_liked_by']['count']
            total_comments += j['graphql']['user']['edge_owner_to_timeline_media']['edges'][i]['node']['edge_media_to_comment']['count']
            captions = j['graphql']['user']['edge_owner_to_timeline_media']['edges'][i]['node']['edge_media_to_caption']
            if captions['edges']:  # posts without a caption would otherwise raise an IndexError
                all_captiontext += captions['edges'][0]['node']['text']
            i += 1
        engagement_rate_percentage = '{0:.4f}'.format((((total_likes + total_comments) / followercount)/12)*100) + '%'
        engagement_rate = (((total_likes + total_comments) / followercount)/12*100)

        #isolate and count hashtags
        hashtags = re.findall(r'#\w*', all_captiontext)
        hashtags_counted = Counter(hashtags)
        most_common = hashtags_counted.most_common(5)

        with open('users_with_er.csv', 'a', newline='',  encoding='utf-8') as data_out:

            print(timestamp,'Writing Data for user {0}...'.format(user))            
            post_info["USER"] = accountname
            post_info["FOLLOWERCOUNT"] = followercount
            post_info["MEDIA COUNT"] = media_count
            post_info["TOTAL LIKES"] = total_likes
            post_info["TOTAL COMMENTS"] = total_comments
            post_info["ER"] = engagement_rate
            post_info["ER IN %"] = engagement_rate_percentage
            post_info["BIO"] = bio
            post_info["ALL CAPTION TEXT"] = all_captiontext
            post_info["HASHTAGS COUNTED"] = hashtags_counted
            csv_writer = csv.DictWriter(data_out, fieldnames=csv_headers)
            csv_writer.writerow(post_info)

""" Done with the script """
print('ALL DONE !!!! ')

The code before this point simply scrapes the web page and compiles all the captions of the last 12 posts into 'all_captiontext'.

Any help solving this (probably simple) problem would be greatly appreciated, as I've been struggling with it for days (again, I'm a noob :')).

2 answers:

Answer 0 (score: 1):

Replace the line

post_info["MOST COMMON HASHTAGS"] = most_common

with:

for i, counter_tuple in enumerate(most_common):
  tag_name = counter_tuple[0].replace('#','')
  label = "Top %d" % (i + 1)
  post_info[label] = tag_name

Some code is also missing. For instance, your code does not include the csv_headers variable, which I suppose should be

csv_headers = post_info.keys()
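
Note that csv.DictWriter raises a ValueError when a row contains keys that are not listed in fieldnames, so the new "Top 1" to "Top 5" labels must also end up in the header row. A minimal sketch, reusing get_csv_header and the label format from the loop above:

# sketch: extend the existing header list with the new Top N columns
csv_headers = get_csv_header(MAX_POST) + ["Top %d" % (i + 1) for i in range(5)]
write_csv_header(DATA_FILE, csv_headers)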

It also seems like you are opening the file just to write a single row. I don't think that is intended, so what you want to do is collect the results into a list of dictionaries. An even cleaner solution would be to use a pandas DataFrame, which you can output straight into a csv file.
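
A minimal sketch of the pandas approach (rows is a hypothetical list collected inside the for loop; nothing else in the script needs to change):

import pandas as pd

rows = []  # collect one dict per user
# inside the for loop, once post_info is complete:
#     rows.append(post_info)

# after the loop, write everything in one go:
pd.DataFrame(rows).to_csv('users_with_er.csv', index=False, encoding='utf-8')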

Answer 1 (score: 0):

most_common is the output of the call to hashtags_counted.most_common; I had a look at the documentation here: https://docs.python.org/2/library/collections.html#collections.Counter.most_common

When formatted, the output looks like this: [(key, value), (key, value), ...], ordered by decreasing number of occurrences.
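
For example, with illustrative values:

from collections import Counter

tags = ['#a', '#b', '#a', '#c', '#a', '#b']
print(Counter(tags).most_common(2))  # [('#a', 3), ('#b', 2)]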

So, to get only the names and not the number of occurrences, you should replace:

post_info["MOST COMMON HASHTAGS"] = most_common

with

post_info["MOST COMMON HASHTAGS"] = [x[0] for x in most_common]

You have a list of tuples. This statement builds, on the fly, a list of the first element of each tuple, preserving the sort order.
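
With the values from the question, for example:

most_common = [('#striveforgreatness', 3), ('#jamesgang', 3), ('#thekidfromakron', 2)]
print([x[0] for x in most_common])
# ['#striveforgreatness', '#jamesgang', '#thekidfromakron']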