Web crawler fails to process multiple web pages

Posted: 2018-06-19 13:15:53

Tags: python beautifulsoup web-crawler urllib

I am trying to extract information about MTG cards from a webpage with the following program, but the crawler keeps retrieving the information of the given initial page (InitUrl) over and over and never moves on. I am starting to believe that I am not using the correct URLs, or that some limitation of urllib is escaping my attention. This is the code I have been struggling with for weeks now:

import re
from math import ceil
from urllib.request import urlopen as uReq, Request
from bs4 import BeautifulSoup as soup

InitUrl = "https://mtgsingles.gr/search?q=dragon"
NumOfCrawledPages = 0
URL_Next = ""
NumOfPages = 4   # depth of pages to be retrieved

query = InitUrl.split("?")[1]


for i in range(0, NumOfPages):
    if i == 0:
        Url = InitUrl
    else:
        Url = URL_Next

    print(Url)

    UClient = uReq(Url)  # downloading the url
    page_html = UClient.read()
    UClient.close()

    page_soup = soup(page_html, "html.parser")

    cards = page_soup.findAll("div", {"class": ["iso-item", "item-row-view"]})

    for card in cards:
        card_name = card.div.div.strong.span.contents[3].contents[0].replace("\xa0 ", "")

        if len(card.div.contents) > 3:
            cardP_T = card.div.contents[3].contents[1].text.replace("\n", "").strip()
        else:
            cardP_T = "Does not exist"

        cardType = card.contents[3].text
        print(card_name + "\n" + cardP_T + "\n" + cardType + "\n")

    try:
        URL_Next = InitUrl + "&page=" + str(i + 2)

        print("The next URL is: " + URL_Next + "\n")
    except IndexError:
        print("Crawling process completed! No more information to retrieve!")
    else:
        NumOfCrawledPages += 1
        Url = URL_Next
    finally:
        print("Moving to page : " + str(NumOfCrawledPages + 1) + "\n")

2 Answers:

Answer 0 (score: 1)

One of the reasons your code fails is that you do not use cookies. The site appears to require them to allow pagination.
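If you wanted to stay with urllib, the standard library can persist cookies through a CookieJar. This is only a rough sketch for reference and is untested against this particular site; the code below simply switches to requests, which handles cookies through a Session object:

import http.cookiejar
from urllib.request import build_opener, HTTPCookieProcessor

# keep cookies between requests, similar to what requests.Session() does
cookie_jar = http.cookiejar.CookieJar()
opener = build_opener(HTTPCookieProcessor(cookie_jar))

# the first request picks up the session cookie, later requests reuse it
opener.open("https://mtgsingles.gr/").read()
page_html = opener.open("https://mtgsingles.gr/search?q=dragon&page=2").read()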

A clean and simple way of extracting the data you are interested in would be like this:

import requests
from bs4 import BeautifulSoup

# the site actually uses this url under the hood for paging - check out Google Dev Tools
paging_url = "https://mtgsingles.gr/search?ajax=products-listing&lang=en&page={}&q=dragon"
return_list = []
# the page-scroll will only work when we support cookies
# so we fetch the page in a session
session = requests.Session()
session.get("https://mtgsingles.gr/")

All pages except the last one have a next button, so we use that knowledge to keep looping until the next button disappears. When it does - meaning the last page has been reached - the button is replaced by an "li" tag with the class "next hidden", which only exists on the last page.

Now we are ready to start looping:

page = 1 # set count for start page
keep_paging = True # use flag to end loop when last page is reached
while keep_paging:
    print("[*] Extracting data for page {}".format(page))
    r = session.get(paging_url.format(page))
    soup = BeautifulSoup(r.text, "html.parser")
    items = soup.select('.iso-item.item-row-view.clearfix')
    for item in items:
        name = item.find('div', class_='col-md-10').get_text().strip().split('\xa0')[0]
        toughness_element = item.find('div', class_='card-power-toughness')
        try:
            toughness = toughness_element.get_text().strip()
        except AttributeError:  # card has no power/toughness element
            toughness = None
        cardtype = item.find('div', class_='cardtype').get_text()
        card_dict = {
            "name": name,
            "toughness": toughness,
            "cardtype": cardtype
        }
        return_list.append(card_dict)

    if soup.select('li.next.hidden'): # this element only exists if the last page is reached
        keep_paging = False
        print("[*] Scraper is done. Quitting...")
    else:
        page += 1

# do stuff with your list of dicts - e.g. load it into pandas and save it to a spreadsheet

This will keep paging until no more pages exist - no matter how many subpages the site has.
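As a minimal sketch of the last comment in the snippet above (assuming pandas is installed; the file name is just an example):

import pandas as pd

df = pd.DataFrame(return_list)       # one row per card dict
df.to_csv("cards.csv", index=False)  # or df.to_excel("cards.xlsx") if openpyxl is installed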

My point in the comment above was simply that if you run into an exception in your code, the page count never gets incremented. That is probably not what you want, which is why I recommended that you read up a bit on how the whole try-except-else-finally construct behaves.
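Here is a minimal, self-contained illustration of that behaviour (the loop and the forced error are made up purely for demonstration):

for page in (1, 2, 3):
    try:
        if page == 2:
            raise IndexError("simulated failure")   # pretend page 2 blows up
        print("fetched page", page)
    except IndexError:
        print("error on page", page)                # runs only if the try block raised
    else:
        print("page counter would be incremented")  # runs only if the try block did NOT raise
    finally:
        print("finally always runs\n")              # runs in both cases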

Answer 1 (score: 0)

I too was bluffed by requests giving the same response and ignoring the page parameter. As a dirty workaround, I can offer you to first set page-size to a number high enough to get all the items you need (this parameter works for some reason...):

import re
from math import ceil
import requests
from bs4 import BeautifulSoup as soup

InitUrl = Url = "https://mtgsingles.gr/search"
NumOfCrawledPages = 0
URL_Next = ""
NumOfPages = 2   # depth of pages to be retrieved

query = "dragon"
cardSet = set()

for i in range(1, NumOfPages):
    page_html = requests.get(InitUrl, params={"page": i, "q": query, "page-size": 999})
    print(page_html.url)
    page_soup = soup(page_html.text, "html.parser")

    cards = page_soup.findAll("div", {"class": ["iso-item", "item-row-view"]})

    for card in cards:
        card_name = card.div.div.strong.span.contents[3].contents[0].replace("\xa0 ", "")

        if len(card.div.contents) > 3:
            cardP_T = card.div.contents[3].contents[1].text.replace("\n", "").strip()
        else:
            cardP_T = "Does not exist"

        cardType = card.contents[3].text
        cardString = card_name + "\n" + cardP_T + "\n" + cardType + "\n"
        cardSet.add(cardString)
        print(cardString)
    NumOfCrawledPages += 1
    print("Moving to page : " + str(NumOfCrawledPages + 1) + " with " + str(len(cards)) + " (cards)\n")