How to continue the loop when web scraping

Time: 2019-02-20 21:12:50

Tags: python python-3.x web-scraping beautifulsoup

I wrote a simple script to explore how to do web scraping with Python. I chose this URL: https://www.ebay.co.uk/b/Mens-Coats-Jackets/57988/bn_692010

There are 48 items on the page, and each item has details such as brand, style and so on, except for the 16th item, and my code stops at that 16th item. So my question is how to keep the loop going, or how to skip past the missing details. Here is the code:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup


my_url = 'https://www.ebay.co.uk/b/Mens-Coats-Jackets/57988/bn_692010'

#opening up connection, grabbing the page
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# html parsing
page_soup = soup(page_html, 'html.parser')

#grabs each product
containers = page_soup.findAll('div',{'class':'s-item__wrapper clearfix'})

filename = 'ebayproducts1.csv'
f = open(filename, 'w+')

headers = 'product_name, item_price, item_style, shipping_detail\n'

f.write(headers)

contain = containers[0]
container = containers[0]

for container in containers:
    product_name = container.h3.text

    item_details_container = container.findAll('div',{'class':'s-item__details clearfix'})
    item_price = item_details_container[0].div.text

    item_style = item_details_container[0].findAll('span',{'class':'s-item__detail s-item__detail--secondary'})[0].text

    shipping_detail = item_details_container[0].findAll('span',{'class':'s-item__shipping s-item__logisticsCost'})[0].text


    print('product_name: '+ product_name)

    print('item_price: ' + item_price)

    print('item_style: ' + item_style)

    print('shipping_detail: ' + shipping_detail)

    f.write("%s,%s,%s,%s\n" %( product_name, item_price, item_style, shipping_detail))

2 Answers:

Answer 0 (Score: 1)

You are right: some details are not present on every item, and you cannot simply test by position or selector in every case, for example style. You can instead test whether "Style:" appears in the container's text. Someone with more Python knowledge could probably tidy this up into something more pythonic and efficient.

import requests
from bs4 import BeautifulSoup as bs
import re
import pandas as pd

# matches the "Style:" label that only some listings include
pattern = re.compile(r'Style:')
url = 'https://www.ebay.co.uk/b/Mens-Coats-Jackets/57988/bn_692010?_pgn=1'
res = requests.get(url)
soup = bs(res.content, 'lxml')

results = []
for item in soup.select('.s-item'):
    # every lookup may return None, so guard each one before reading .text
    x = item.select_one('.s-item__title')
    title = x.text if x else None
    x = item.select_one('.s-item__price')
    price = x.text if x else None
    x = item.select_one('.s-item__shipping')
    shipping = x.text if x else None
    # style is optional; search the container's text for the "Style:" label
    x = item.find('span', text=pattern)
    style = x.text.replace('Style: ','') if x else None
    results.append([title, price, shipping, style])

df = pd.DataFrame(results)
print(df)
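
If you also want the named columns and CSV file from your own script, here is a minimal follow-up sketch; the column names are just illustrative and match the order in which the values are appended to results:

df = pd.DataFrame(results, columns=['product_name', 'item_price', 'shipping_detail', 'item_style'])
df.to_csv('ebayproducts1.csv', index=False)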

Answer 1 (Score: 0)

You are probably running into an element or tag in your containers list that is different from all the other elements you are searching through.

You can change how the containers list is built by changing the search parameters you pass to soup.findAll().

Try printing containers and work out why the 16th item in that list is different, then adjust your search accordingly.
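
For example, a minimal inspection sketch, assuming the same selector as in your script (the 16th item is index 15):

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

uClient = uReq('https://www.ebay.co.uk/b/Mens-Coats-Jackets/57988/bn_692010')
page_soup = soup(uClient.read(), 'html.parser')
uClient.close()

containers = page_soup.findAll('div', {'class': 's-item__wrapper clearfix'})
print(len(containers))

# dump the markup of the container where the loop breaks
print(containers[15].prettify())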

Alternatively, you could use a try/except, something like this:

for container in containers:
    try:
        product_name = container.h3.text
        item_details_container = container.findAll('div', {'class': 's-item__details clearfix'})
        item_price = item_details_container[0].div.text
        item_style = item_details_container[0].findAll('span', {'class': 's-item__detail s-item__detail--secondary'})[0].text
        shipping_detail = item_details_container[0].findAll('span', {'class': 's-item__shipping s-item__logisticsCost'})[0].text

        # etc ...

    except AttributeError:  # replace with the error you actually see, e.g. TypeError or IndexError
        print(f'except triggered for {container}')
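
As a follow-up on the question itself: once the exception is caught, the for loop simply moves on to the next container, so the script no longer stops at item 16. If you keep the f.write from your original script inside the loop, place it after the try/except and add a continue in the except block so incomplete items are skipped. A minimal sketch reusing the names from your script:

for container in containers:
    try:
        product_name = container.h3.text
        item_details_container = container.findAll('div', {'class': 's-item__details clearfix'})
        item_price = item_details_container[0].div.text
        item_style = item_details_container[0].findAll('span', {'class': 's-item__detail s-item__detail--secondary'})[0].text
        shipping_detail = item_details_container[0].findAll('span', {'class': 's-item__shipping s-item__logisticsCost'})[0].text
    except AttributeError:  # replace with the error you actually see
        print(f'except triggered for {container}')
        continue  # skip this container and carry on with the next one

    # only reached when all four details were found
    f.write("%s,%s,%s,%s\n" % (product_name, item_price, item_style, shipping_detail))

f.close()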