Python Beautiful Soup scraping, Newegg

Date: 2018-11-16 21:11:32

Tags: python web-scraping beautifulsoup

I am new to Python, so I wanted to try learning by building a web scraper. I am trying to scrape graphics cards from the Newegg website, but I seem to be having some trouble with errors. All I want to do is grab the data and export it into a CSV file that I can look at. However, if I comment out the failing line I just run into another error, and I can't seem to get past it. Any help is appreciated!

File "webScrape.py", line 32, in <module>
    price = price_container[0].text.strip("|")
IndexError: list index out of range

# import beautiful soup 4 and use urllib to import urlopen
import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

# url where we will grab the product data
my_url = 'https://www.newegg.com/Product/ProductList.aspxSubmit=ENE&DEPA=0&Order=BESTMATCH&Description=graphics+card&ignorear=0&N=-1&isNodeId=1'

# open connection and grab the URL page information, read it, then close it
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# parse html from the page
page_soup = soup(page_html, "html.parser")

# find each product within the item-container class
containers = page_soup.findAll("div",{"class":"item-container"})

# write a file named products.csv with the data returned
filename = "products.csv"
f = open(filename, "w")

# create headers for products
headers = "price, product_name, shipping\n"

f.write(headers)

# define containers based on location on webpage and their DOM elements
for container in containers:
    price_container = container.findAll("li", {"class": "price-current"})
    price = price_container[0].text.strip("|")

    title_container = container.findAll("a", {"class": "item-title"})
    product_name = title_container[0].text

    shipping_container = container.findAll("li", {"class": "price-ship"})
    shipping = shipping_container[0].text.strip()

    f.write(price + "," + product_name.replace(",", "|") + "," + shipping + "\n")

f.close()
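A likely cause of the IndexError above: `findAll` returns an empty list when a container has no matching element (some `item-container` divs on the page, such as ad tiles, may have no price), and indexing an empty list raises IndexError. Note also that the question's URL is missing the `?` before the query string that the answer's URL includes, so the page served may not be the expected search results at all. Below is a minimal, self-contained sketch of a guarded version of the loop, run against a small hypothetical HTML snippet standing in for the Newegg page (the snippet and the skip-on-missing behavior are assumptions, not the original author's code):

```python
from bs4 import BeautifulSoup

# Hypothetical HTML sample standing in for the Newegg page: the second
# container has no price element, which is what triggers the IndexError.
html = """
<div class="item-container">
  <a class="item-title">Card A</a>
  <li class="price-current">$199.99 |</li>
  <li class="price-ship">Free Shipping</li>
</div>
<div class="item-container">
  <a class="item-title">Ad tile with no price</a>
</div>
"""

page_soup = BeautifulSoup(html, "html.parser")
rows = []
for container in page_soup.findAll("div", {"class": "item-container"}):
    price_container = container.findAll("li", {"class": "price-current"})
    title_container = container.findAll("a", {"class": "item-title"})
    shipping_container = container.findAll("li", {"class": "price-ship"})

    # findAll returns [] when nothing matches; indexing [] raises IndexError,
    # so skip any container that is missing one of the three elements.
    if not (price_container and title_container and shipping_container):
        continue

    rows.append([title_container[0].text,
                 price_container[0].text.strip("|").strip(),
                 shipping_container[0].text.strip()])

print(rows)  # only the complete container survives
```

With the guard in place, incomplete containers are skipped instead of crashing the loop.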

1 answer:

Answer 0: (score: 0)

You can write to a dataframe, and it is easy to export that to csv. I added an additional `.list-wrap` class selector for `titles` to ensure all the lists are the same length.

from bs4 import BeautifulSoup
import requests
import re
import pandas as pd

def main():

    url = 'https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=+graphics+cards&N=-1&isNodeId=1'
    res = requests.get(url)
    soup = BeautifulSoup(res.content, "lxml")
    prices = soup.select('.price-current')
    titles = soup.select('.list-wrap .item-title')
    shipping = soup.select('.price-ship')   
    items = list(zip(titles,prices, shipping))   
    results = [[title.text.strip(), re.search(r'\$\d+\.\d+', price.text.strip()).group(0), ship.text.strip()] for title, price, ship in items]

    df = pd.DataFrame(results,columns=['title', 'current price', 'shipping cost'])
    df.to_csv(r'C:\Users\User\Desktop\Data.csv', sep=',', encoding='utf-8',index = False )

if __name__ == "__main__":
    main()
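One detail worth noting about the answer's approach: `zip` stops at the shortest input, which is why the lists fed to it need to line up (the extra `.list-wrap` selector scopes `titles` to the same items the price and shipping selectors match). A quick illustration of that truncation behavior, using made-up values:

```python
# Illustrative data only, not scraped from the page.
titles = ["Card A", "Card B", "Card C"]   # one extra, unmatched entry
prices = ["$199.99", "$249.99"]
shipping = ["Free Shipping", "$4.99"]

# zip silently truncates to the shortest iterable, so if one list has
# a stray entry, products can pair with the wrong price or shipping.
items = list(zip(titles, prices, shipping))
print(items)
# [('Card A', '$199.99', 'Free Shipping'), ('Card B', '$249.99', '$4.99')]
```

Because the truncation is silent, a mismatch produces misaligned rows rather than an error, which is why keeping the three selectors in sync matters.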