Crawler in Python: urlopen not working as expected

Date: 2018-05-24 22:54:56

Tags: python-3.x web-crawler urlopen

I am trying to extract some information from a web page, and I have the following code:

import re
from math import ceil
from urllib.request import urlopen as uReq, Request
from bs4 import BeautifulSoup as soup

InitUrl="https://mtgsingles.gr/search?q="
NumOfCrawledPages = 0
URL_Next = ""
NumOfPages=5

for i in range(0, NumOfPages):
    if i == 0:
        Url = InitUrl
    else:
        Url = URL_Next

    UClient = uReq(Url)  # downloading the url
    page_html = UClient.read()
    UClient.close()

    page_soup = soup(page_html, "html.parser")


    cards = page_soup.findAll("div", {"class": ["iso-item", "item-row-view"]})


    for card in cards:
        card_name = card.div.div.strong.span.contents[3].contents[0].replace("\xa0 ", "")

        if len(card.div.contents) > 3:
            cardP_T = card.div.contents[3].contents[1].text.replace("\n", "").strip()
        else:
            cardP_T = "Does not exist"

        cardType = card.contents[3].text
        print(card_name + "\n" + cardP_T + "\n" + cardType + "\n")


    try:
        URL_Next = "https://mtgsingles.gr" + page_soup.findAll("li", {"class": "next"})[0].contents[0].get("href")
    except IndexError:
        print("Crawling process completed! No more information to retrieve!")
    else:
        print("The next URL is: " + URL_Next + "\n")
        NumOfCrawledPages += 1
        Url = URL_Next
    finally:
        print("Moving to page : " + str(NumOfCrawledPages + 1) + "\n")

The code runs without errors, but the results are not what I expect. I am trying to extract some information from each page, along with the URL of the next page. In the end I want the program to run 5 times and crawl 5 pages. However, this code crawls the given initial page (InitUrl = "xyz.com") 5 times and never continues on to the extracted next-page URL. I tried to debug it by inserting some print statements to see where the problem is, and I think the problem lies in these statements:

 UClient = uReq(Url) 
 page_html = UClient.read()
 UClient.close()

For some reason, urlopen does not seem to work repeatedly inside the for loop. Why is that? Is it wrong to use urlopen inside a for statement?
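One quick way to test this suspicion (a hypothetical diagnostic, not part of the original post): fetch the same URL twice with `urlopen` and compare the bytes. If they match, `urlopen` itself works fine in repeated calls, and the problem lies elsewhere (e.g. in how the next-page URL is extracted).

```python
from urllib.request import urlopen

def same_content(url):
    """Fetch a URL twice and report whether both responses are identical."""
    with urlopen(url) as r1:
        first = r1.read()
    with urlopen(url) as r2:
        second = r2.read()
    return first == second
```

Calling `same_content("https://mtgsingles.gr/search?q=")` returning `True` would show that repeated `urlopen` calls are not the issue.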

1 Answer:

Answer 0 (score: 0)

This site loads its data via Ajax requests, so you have to send your requests to the Ajax endpoint to get the data.

Hint: choose the URL correctly, e.g.: https://mtgsingles.gr/search?ajax=products-listing&q=
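A minimal sketch of that approach: build the Ajax URL from the endpoint shown in the hint and fetch it with the same `urlopen` pattern the question already uses. The `ajax=products-listing` parameter comes from the hint above; the `User-Agent` header and the assumption that the endpoint returns an HTML fragment are guesses on my part.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SEARCH_URL = "https://mtgsingles.gr/search"

def build_ajax_url(query):
    """Build the Ajax listing URL using the endpoint from the hint."""
    return SEARCH_URL + "?" + urlencode({"ajax": "products-listing", "q": query})

def fetch_listing_html(query):
    """Fetch the Ajax fragment (network call). Parse the returned bytes
    with BeautifulSoup exactly as in the question's existing code."""
    req = Request(build_ajax_url(query), headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req) as resp:
        return resp.read()
```

The returned HTML can then be passed to `soup(page_html, "html.parser")` and the card-parsing loop from the question should work unchanged, since the listing markup is now actually present in the response.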