How do I parse the next page with Beautiful Soup?

Time: 2016-03-04 12:55:58

Tags: html python-3.x web-scraping bs4

I am using the following code to parse the next page:

import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup

PARSER = 'lxml'  # defined elsewhere in my script; any BeautifulSoup parser name

def parseNextThemeUrl(url):
    ret = []
    ret1 = []
    html = urllib.request.urlopen(url)            # fetch the page
    html = BeautifulSoup(html, PARSER)            # parse it
    html = html.find('a', class_='pager_next')    # look for a "next page" link
    if html:
        html = urljoin(url, html.get('href'))     # resolve the relative href
        ret1 = parseNextThemeUrl(html)            # recurse into the next page

        for r in ret1:
            ret.append(r)
    else:
        ret.append(url)
    return ret

But I get the error below. How can I parse the next link, if there is one?

Traceback (most recent call last):
html = urllib.request.urlopen(url)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 162, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 456, in open
req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'

1 Answer:

Answer 0 (score: 0)

I found my own answer, as follows:

import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def parseNextThemeUrl(url):
    urls = []
    urls.append(url)                                  # keep the current page's URL
    html = urllib.request.urlopen(url)                # fetch the page
    soup = BeautifulSoup(html, 'lxml')                # parse it with lxml
    new_page = soup.find('a', class_='pager_next')    # look for a "next page" link

    if new_page:
        new_url = urljoin(url, new_page.get('href'))  # resolve the relative href
        urls1 = parseNextThemeUrl(new_url)            # recurse into the next page

        for url1 in urls1:
            urls.append(url1)
    return urls
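
For reference, here is a minimal usage sketch (the start URL below is a hypothetical placeholder). The traceback in the question shows urllib.request.urlopen() receiving a list, which is what raises the 'timeout' AttributeError, so the function should be called with a single URL string:

start_url = 'http://example.com/themes?page=1'  # hypothetical starting page
all_pages = parseNextThemeUrl(start_url)        # the start page plus every page reachable via pager_next
for page_url in all_pages:
    print(page_url)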