How do I extract the username, post, and date posted from a discussion forum?

Date: 2020-06-21 17:16:56

Tags: python web-scraping beautifulsoup python-requests data-mining

How should I approach this web-scraping project using bs4 and requests? I'm trying to extract user information from a forum site (specifically myfitnesspal: https://community.myfitnesspal.com/en/discussion/10703170/what-were-eating/p1), in particular the username, message, and date posted, and load each into its own CSV column. I have the code below so far, but I'm not sure how to proceed:

from bs4 import BeautifulSoup
import csv
import requests

# get page source and create a BS object
print('Reading page...')

page = requests.get('https://community.myfitnesspal.com/en/discussion/10703170/what-were-eating/p1')
src = page.content

soup = BeautifulSoup(src, 'html.parser')

#container = soup.select('#vanilla_discussion_index > div.container')

container = soup.select('#vanilla_discussion_index > div.container > div.row > div.content.column > div.CommentsWrap > div.DataBox.DataBox-Comments > ul')

postdata = soup.select('div.Message')

user = []
date = []
text = []

for post in postdata:
    text.append(BeautifulSoup(str(post), 'html.parser').get_text().encode('utf-8').strip())

print(text) # this stores the text of each comment/post in a list,
            # so next I'd want to store this in a csv with columns 
            # user, date posted, post with this under the post column
            # and do the same for user and date

3 Answers:

Answer 0 (score: 2)

This script grabs all the messages from the page and saves them in data.csv:

import csv
import requests
from bs4 import BeautifulSoup


url = 'https://community.myfitnesspal.com/en/discussion/10703170/what-were-eating/p1'

soup = BeautifulSoup(requests.get(url).content, 'html.parser')

all_data = []
for u, d, m in zip(soup.select('.Username'), soup.select('.DateCreated'), soup.select('.Message')):
    all_data.append([u.text, d.get_text(strip=True), m.get_text(strip=True, separator='\n')])

with open('data.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for row in all_data:
        writer.writerow(row)
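One caveat about the zip() approach (my note, not part of the original answer): if one of the three selects returns fewer elements than the others, say a deleted post has no .DateCreated span, zip() silently drops the trailing items and the columns can fall out of alignment. itertools.zip_longest keeps every row instead, padding the gaps. A minimal sketch with stand-in lists:

```python
from itertools import zip_longest

# Stand-in data: one date is missing, as if a post had no DateCreated span
users = ['alice', 'bob', 'carol']
dates = ['June 2020', 'June 2020']

rows = list(zip(users, dates))                             # zip() stops at the shortest list
rows_safe = list(zip_longest(users, dates, fillvalue=''))  # keeps all three rows
```

Here rows contains only two entries, while rows_safe keeps carol's row with an empty date, so it is easy to spot the gap in the CSV afterwards.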

A screenshot of the result in LibreOffice:


Answer 1 (score: 2)

A rule of thumb I like to follow is to be as specific as possible without picking up unnecessary information. So, for example, if I wanted to grab usernames, I would inspect the element that contains the information I need:

<a class="Username" href="...">Username</a>

Since I'm trying to collect usernames, selecting by the "Username" class is the most likely fit:

soup.select("a.Username")

This gives me a list of all the usernames found on the page, which is great. However, if we want to select the data in "packages" (per post, in your case), we need to collect each post individually.

To accomplish this, you could do something like:

comments = soup.select("div.comment")

This makes it easier to do the following:

with open('file.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['user', 'date', 'text'])
    for comment in comments:
        username = comment.select_one("a.Username")
        date = comment.select_one("span.BodyDate")
        message = comment.select_one("div.Message")
        writer.writerow([username.get_text(strip=True),
                         date.get_text(strip=True),
                         message.get_text(strip=True)])

Doing it this way also helps keep your data in order even when an element is missing.
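That said, select_one returns None when a comment lacks the element, and calling get_text on None raises AttributeError. A small guard helper (my addition, not from the answer) writes a placeholder instead, keeping the row aligned:

```python
def safe_text(tag, default='[missing]'):
    """Return the stripped text of a bs4 tag, or a placeholder when select_one found nothing."""
    return tag.get_text(strip=True) if tag is not None else default
```

Inside the loop you would then write writer.writerow([safe_text(username), safe_text(date), safe_text(message)]).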

Answer 2 (score: 1)

Here you go:

from bs4 import BeautifulSoup
import csv
import requests


page= requests.get('https://community.myfitnesspal.com/en/discussion/10703170/what-were-eating/p1')
soup = BeautifulSoup(page.content, 'html.parser')
container = soup.select('#vanilla_discussion_index > div.container > div.row > div.content.column > div.CommentsWrap > div.DataBox.DataBox-Comments > ul > li')

with open('data.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['user', 'date', 'text'])
    writer.writeheader()
    for comment in container:
        writer.writerow({
            'user': comment.find('a', {'class': 'Username'}).get_text(),
            'date': comment.find('span', {'class': 'BodyDate DateCreated'}).get_text().strip(),
            'text': comment.find('div', {'class': 'Message'}).get_text().strip()
        })
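One detail that applies to all three answers (my note): forum messages routinely contain commas and embedded newlines, which is exactly why the file should be opened with newline='' and the quoting left to the csv module. A quick stdlib round-trip with made-up data shows such a message survives intact:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
# The message field contains both a comma and a newline; csv quotes it automatically
writer.writerow(['alice', 'June 2020', 'Eggs, toast\nand coffee'])

# Reading it back yields the original three fields, newline included
row = next(csv.reader(io.StringIO(buf.getvalue())))
```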