Scraping with BeautifulSoup and Selenium

Date: 2018-10-13 14:53:42

Tags: python selenium web-scraping beautifulsoup

I am currently scraping this URL: https://www.lazada.com.my/products/xiaomi-mi-a1-4gb-ram-32gb-rom-i253761547-s336359472.html?spm=a2o4k.searchlistcategory.list.64.71546883QBZiNT&search=1

I want to scrape all of the product's reviews, but I get an error. Any help is greatly appreciated :)

My code:

import requests
from selenium import webdriver
from bs4 import BeautifulSoup as soup
import time
from selenium.webdriver.chrome.options import Options


url = 'https://www.lazada.com.my/products/xiaomi-mi-a1-4gb-ram-32gb-rom-i253761547-s336359472.html?spm=a2o4k.searchlistcategory.list.64.71546883QBZiNT&search=1'

chrome_options = Options()
#chrome_options.add_argument("--headless")
browser = webdriver.Chrome('/Users/e5/fyp/chromedriver',
                           chrome_options=chrome_options)
browser.get(url)
time.sleep(0.1)


d = soup(requests.get('https://www.lazada.com.my/products/xiaomi-mi-a1-4gb-ram-32gb-rom-i253761547-s336359472.html?spm=a2o4k.searchlistcategory.list.64.71546883QBZiNT&search=1').text, 'html.parser')
results = list(map(int, filter(None, [i.text for i in d.find_all('button', {'class':'next-pagination-item'})])))
print (results)
for i in range(min(results), max(results)+1):

    browser.find_element_by_xpath('//*[@id="module_product_review"]/div/div[3]/div[2]/div/div/button[{i}]').click()
    page_soups = soup(browser.page_source, 'html.parser')
    headline = page_soups.findAll('div',attrs={"class":"item-content"})

    for item in headline:
        top = item.div
        text_headlines = top.text
        print(text_headlines)

My error:

InvalidSelectorException: Message: invalid selector: Unable to locate an element with the xpath expression //*[@id="module_product_review"]/div/div[3]/div[2]/div/div/button[{i}] because of the following error:
SyntaxError: Failed to execute 'evaluate' on 'Document': The string '//*[@id="module_product_review"]/div/div[3]/div[2]/div/div/button[{i}]' is not a valid XPath expression.
  (Session info: chrome=69.0.3497.100)
  (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Windows NT 10.0.17134 x86_64)
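
As an aside, the exception itself is caused by the literal {i} in the XPath string: the string is not an f-string, so the loop variable is never interpolated and Chrome receives button[{i}], which is not valid XPath. A minimal sketch of the corrected click loop (same XPath as above, otherwise untested) could look like this:

for i in range(min(results), max(results) + 1):
    # Interpolate the page number into the XPath before sending it to Chrome
    browser.find_element_by_xpath(
        f'//*[@id="module_product_review"]/div/div[3]/div[2]/div/div/button[{i}]'
    ).click()
    page_soups = soup(browser.page_source, 'html.parser')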

1 Answer:

Answer 0 (score: 1):

Just use their JSON API; there is no need for Selenium or BeautifulSoup.

import requests

# Request the first three pages of reviews from Lazada's review endpoint.
for page in range(1, 4):
    url = ('https://my.lazada.com.my/pdp/review/getReviewList?'
           'itemId=253761547&pageSize=5&filter=0&sort=0&pageNo=' + str(page))
    data = requests.get(url).json()
    # Each entry in 'items' is a single review.
    for item in data['model']['items']:
        buyerName = item['buyerName']
        reviewContent = item['reviewContent']
        print(buyerName, reviewContent)
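
If the total number of review pages is not known in advance, a variation of the same idea is to keep requesting pages until the API stops returning items. This is a sketch based on the response shape shown above ('model' -> 'items'); the stopping condition is an assumption, not part of the original answer:

import requests

page = 1
while True:
    url = ('https://my.lazada.com.my/pdp/review/getReviewList?'
           'itemId=253761547&pageSize=5&filter=0&sort=0&pageNo=' + str(page))
    items = requests.get(url).json()['model']['items']
    if not items:
        # No reviews returned; assume we have reached the last page
        break
    for item in items:
        print(item['buyerName'], item['reviewContent'])
    page += 1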