I am trying to scrape comments from a website using Selenium and BeautifulSoup. The site I want to scrape is generated dynamically with JavaScript, which is mostly beyond what the tutorials I have seen cover (I am very unfamiliar with JavaScript). My best solution so far is:
import time
from bs4 import BeautifulSoup
from selenium import webdriver

chromedriver_path = 'C:/chromedriver.exe'
browser = webdriver.Chrome(executable_path=chromedriver_path)
browser.get('https://nationen.ebcomments.dk/embed/stream?asset_id=7627366')

def load_data():
    time.sleep(1)  # The site needs time to load
    # Click the "load more comments" button
    browser.execute_script("document.querySelector('#stream > div.talk-stream-tab-container.Stream__tabContainer___2trkn > div:nth-child(2) > div > div > div > div > div:nth-child(3) > button').click()")

load_data()  # I should call this a few times to load all comments, but in this example I only do it once

soup = BeautifulSoup(browser.page_source, 'html.parser')

for text in soup.findAll(class_="talk-plugin-rich-text-text"):
    print(text.get_text(), "\n")  # Print the comments
It works, but it is slow, and I am sure there is a better solution, especially if I want to scrape hundreds of articles with comments.
I think all the comments are available as JSON (I have looked in the Network tab of Chrome's developer tools, and I can see a response containing JSON with the comments; see the picture). I then tried to use selenium-requests to get the data, but I am not at all sure what I am doing, and it does not work. It says "b'POST body missing. Did you forget to use body-parser middleware?'". Maybe I could get the JSON from the comments API, but I am not sure whether that is possible?
from seleniumrequests import Chrome

chromedriver_path = 'C:/chromedriver.exe'
webdriver = Chrome(executable_path=chromedriver_path)
response = webdriver.request('POST', 'https://nationen.ebcomments.dk/api/v1/graph/ql/',
                             data={"assetId": "7627366", "assetUrl": "", "commentId": "",
                                   "excludeIgnored": "false", "hasComment": "false",
                                   "sortBy": "CREATED_AT", "sortOrder": "DESC"})
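From what I can tell, the error may simply mean that the endpoint expects a JSON body rather than form-encoded data. Below is a rough sketch of what I imagine a direct request would look like using the plain requests library; the query string is only a placeholder and would have to be copied from the GraphQL request visible in the Network tab:

import requests

url = "https://nationen.ebcomments.dk/api/v1/graph/ql/"
payload = {
    "query": "...",  # placeholder: paste the GraphQL query from the Network tab here
    "variables": {
        "assetId": "7627366",
        "sortBy": "CREATED_AT",
        "sortOrder": "DESC",
    },
}
# json= sends the body as application/json, which is what body-parser expects;
# data= would send it form-encoded, which appears to trigger the error above.
response = requests.post(url, json=payload)
print(response.json())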
Answer 0 (score: 1)
If it is only the comments you are after, the following implementation should get you there:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
link = "https://nationen.ebcomments.dk/embed/stream?asset_id=7627366"
with webdriver.Chrome() as driver:
    wait = WebDriverWait(driver, 10)
    driver.get(link)

    # Keep clicking the "load more" button until it is no longer present
    while True:
        try:
            wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".talk-load-more > button"))).click()
        except Exception:
            break

    # Collect the text of every comment on the fully expanded page
    for item in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "[data-slot-name='commentContent'] > .CommentContent__content___ZGv1q"))):
        print(item.text)
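If you later need this for hundreds of articles, the same logic can be wrapped in a loop over asset IDs; the list below is just a placeholder for whichever articles you want to scrape:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

asset_ids = ["7627366"]  # placeholder: add the IDs of the articles you want to scrape
base = "https://nationen.ebcomments.dk/embed/stream?asset_id={}"

with webdriver.Chrome() as driver:
    wait = WebDriverWait(driver, 10)
    for asset_id in asset_ids:
        driver.get(base.format(asset_id))
        # Expand all comments, then collect them, exactly as above
        while True:
            try:
                wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".talk-load-more > button"))).click()
            except Exception:
                break
        comments = [item.text for item in wait.until(EC.presence_of_all_elements_located(
            (By.CSS_SELECTOR, "[data-slot-name='commentContent'] > .CommentContent__content___ZGv1q")))]
        print(asset_id, len(comments), "comments")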