Scraping a web page's <ul> / <li> list (Python)

Date: 2018-12-28 23:56:49

Tags: python html python-3.x beautifulsoup python-requests-html

Question:

There is a website, https://au.pcpartpicker.com/products/cpu/overall-list/#page=1, which has a <ul> list where each <li> item contains a <div> with the class title. Inside that class there are 2 more <div> elements; the first one has some text, for example 3.4 GHz 6-Core (Pinnacle Ridge), and I want to strip away all text not contained in the parentheses to get Pinnacle Ridge. After scraping the list I want to move to the next page by changing #page=.
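For the text-extraction part on its own, the parenthesized family name can be captured with a regular expression. This is a minimal sketch using the sample text from the question; the variable names are illustrative:

```python
import re

# Sample title text from the question
title = "3.4 GHz 6-Core (Pinnacle Ridge)"

# Capture everything between the first '(' and its matching ')'
match = re.search(r'\(([^)]*)\)', title)
family = match.group(1) if match else None
print(family)  # Pinnacle Ridge
```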

Code:

I'm not really sure; I only have a snippet, but here it is:

from requests_html import HTMLSession

session = HTMLSession()

r = session.get('https://au.pcpartpicker.com/product/cpu/overall-list/#page=' + page)

table = r.html.find('ul')

# not sure: find each <li>, get first <div>

junk, name = div.split('(')

name = name.replace("(", "")

name = name.replace(")", "")
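The split and replace steps can be condensed into a single partition-and-strip step. A minimal sketch, using an assumed sample title:

```python
# Assumed example title text
title = "3.4 GHz 6-Core (Pinnacle Ridge)"

# partition splits on the first '(' only; rstrip drops the trailing ')'
junk, _, name = title.partition('(')
name = name.rstrip(')')
print(name)  # Pinnacle Ridge
```

Note that str.replace returns a new string rather than modifying in place, so its result always needs to be reassigned.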

Expected result:

I want to loop through each page until no more lists can be found and get the names. They don't need to be saved here, as I already have the code for that from when I created it.

Let me know if you need more information.

Thanks

1 answer:

Answer 0 (score: 1)

The site is dynamic; therefore, you have to use selenium to produce the desired results:

from bs4 import BeautifulSoup as soup
from selenium import webdriver
import time, re

d = webdriver.Chrome('/path/to/chromdriver')
d.get('https://au.pcpartpicker.com/products/cpu/overall-list/#page=1')

def cpus(_source):
    # Collect every <li> item in the category list
    result = soup(_source, 'html.parser').find('ul', {'id':'category_content'}).find_all('li')
    # Titles: text of each item's <div class="title">, dropping empty entries
    _titles = list(filter(None, [(lambda x:'' if x is None else x.text)(i.find('div', {'class':'title'})) for i in result]))
    # Family names: any parenthesized text found inside the item's <div> elements
    data = [list(filter(None, [re.findall(r'(?<=\().*?(?=\))', c.text) for c in i.find_all('div')])) for i in result]
    return _titles, [a for *_, [a] in filter(None, data)]


# `conn` is assumed to be an open sqlite3 connection with a `cpu` table
_titles, _cpus = cpus(d.page_source)
conn.executemany("INSERT INTO cpu (name, family) VALUES (?, ?)", list(zip(_titles, _cpus)))
_last_page = soup(d.page_source, 'html.parser').find_all('a', {'href':re.compile(r'#page=\d+')})[-1].text
for i in range(2, int(_last_page)+1):
    d.get(f'https://au.pcpartpicker.com/products/cpu/overall-list/#page={i}')
    time.sleep(3)
    _titles, _cpus = cpus(d.page_source)
    conn.executemany("INSERT INTO cpu (name, family) VALUES (?, ?)", list(zip(_titles, _cpus)))
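The answer uses a `conn` object that is never defined; it is presumably an open sqlite3 connection with an existing cpu table. A minimal setup under that assumption might look like this (the database filename and schema are guesses, not part of the original answer):

```python
import sqlite3

# Hypothetical database file and schema; adjust to your own
conn = sqlite3.connect('cpu.db')
conn.execute("CREATE TABLE IF NOT EXISTS cpu (name TEXT, family TEXT)")
conn.commit()
```

With something like this in place the `conn.executemany` calls above will work; remember to call `conn.commit()` after the loop so the inserted rows are actually persisted.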