How do I use Python to scrape data from the comments tab of an online NYT article?

Date: 2017-01-04 21:17:50

Tags: python beautifulsoup

Here is the URL of the New York Times article, which contains a comments tab: http://www.nytimes.com/2017/01/04/world/asia/china-xinhua-donald-trump-twitter.html

The page has a comments tab, and my goal is to fetch all of the comments on it using Python's BeautifulSoup library.

Below is my code, but it produces an empty result. I suspect the problem is that I am not telling the script exactly where in the page source to find the comments. Can someone correct it? Thanks!

import bs4
import requests

session = requests.Session()
url = "http://www.nytimes.com/2017/01/04/world/asia/china-xinhua-donald-trump-twitter.html"
page = session.get(url).text
soup = bs4.BeautifulSoup(page, 'html.parser')  # specify a parser explicitly
comments = soup.find_all(class_='comments-panel')
for e in comments:
    print(e.get_text())

1 Answer:

Answer 0 (score: 1)

The comments tab hides all comments and reveals them via a JavaScript event. Following @eLRuLL's suggestion, you can use selenium to open the comments tab and retrieve the comments like this (in Python 3):

import time
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox(executable_path='.../geckodriver')  # adapt the path to the geckodriver

# set the browser window size to desktop view
driver.set_window_size(2024, 1000)

url = "http://www.nytimes.com/2017/01/04/world/asia/china-xinhua-donald-trump-twitter.html"
driver.get(url)

# wait for the page to fully load
time.sleep(5)

# select the link 'SEE ALL COMMENTS' and click it
driver.find_element_by_css_selector('li.comment-count').click()

# get source code and close the browser
page  = driver.page_source
driver.close()

soup = BeautifulSoup(page, 'html.parser')

comments = soup.find_all('div', class_='comments-panel')
print(comments[0].prettify())

EDIT:

To retrieve all comments and all of their replies, you need to 1) select the 'READ MORE' and 'SEE ALL REPLIES' elements, and 2) iterate over them and click them. I have modified my code example accordingly:

import time
from bs4 import BeautifulSoup
from selenium import webdriver, common

driver = webdriver.Firefox(executable_path='.../geckodriver')  # adapt the path to the geckodriver

# set the browser window size to desktop view
driver.set_window_size(2024, 1000)

url = 'http://www.nytimes.com/2017/01/04/world/asia/china-xinhua-donald-trump-twitter.html'
driver.get(url)

# wait for the page to fully load
time.sleep(5)

# select the link 'SEE ALL COMMENTS' and click it
driver.find_element_by_css_selector('button.button.comments-button.theme-speech-bubble').click()
while True:
    try:
        driver.find_element_by_css_selector('div.comments-expand.comments-thread-expand').click()
        time.sleep(3)
    except common.exceptions.ElementNotVisibleException:
        break

# select the links SEE ALL REPLIES and click them
replies = driver.find_elements_by_css_selector('div.comments-expand.comments-subthread-expand')
for reply in replies:
    reply.click()
    time.sleep(3)

# get source code and close the browser
page  = driver.page_source
driver.close()

soup = BeautifulSoup(page, 'html.parser')

comments = soup.find_all('div', class_='comments-panel')
print(comments[0].prettify())
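Once you have the fully expanded page source in `soup`, you can pull out individual comment authors and bodies instead of printing the whole panel. The class names below (`comment`, `commenter`, `comment-body`) are assumptions for illustration; inspect the live NYT markup and adjust the selectors to match. A minimal, self-contained sketch using sample markup:

```python
from bs4 import BeautifulSoup

# Sample markup mimicking a comments panel; the real NYT class
# names may differ (these are assumptions for illustration).
sample = """
<div class="comments-panel">
  <article class="comment">
    <span class="commenter">Alice</span>
    <p class="comment-body">Great article.</p>
  </article>
  <article class="comment">
    <span class="commenter">Bob</span>
    <p class="comment-body">I disagree.</p>
  </article>
</div>
"""

soup = BeautifulSoup(sample, 'html.parser')

# select each comment inside the panel and extract author + text
for comment in soup.select('div.comments-panel article.comment'):
    author = comment.find(class_='commenter').get_text(strip=True)
    body = comment.find(class_='comment-body').get_text(strip=True)
    print('{}: {}'.format(author, body))
```

With the selenium code above, you would apply the same `select`/`find` pattern to the `soup` built from `driver.page_source` rather than to a sample string.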