Recursively parse all category links and get all the products

Asked: 2017-11-29 17:00:54

Tags: python python-3.x xpath web-scraping lxml

I have been playing around with web scraping (using Python 3.6.2 for this exercise), and I feel like I am losing it a little. Given this example link, here is what I want to do:

First, as you can see, there are multiple categories on the page. Clicking each of those categories gives me further categories, then others, and so on, until I reach a products page. So I have to go x levels deep. I thought recursion would help me achieve this, but somewhere I did something wrong.
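
Roughly, this is the flow I have in mind (just a sketch — fetch, page_has_categories, category_links, product_links and collect are hypothetical placeholders, not my actual code):

    def walk(url):
        page = fetch(url)                  # hypothetical: download and parse the page
        if page_has_categories(page):      # hypothetical: is this a category page?
            for link in category_links(page):
                walk(link)                 # one level deeper
        else:
            collect(product_links(page))   # dead end: a products page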

Code:

Here I will explain how I approached the problem. First, I created a session and a simple generic function that returns an lxml.html.HtmlElement object:

from lxml import html
from requests import Session


HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/62.0.3202.94 Safari/537.36"
}
TEST_LINK = 'https://www.richelieu.com/us/en/category/custom-made-cabinet-doors-and-drawers/1000128'

session_ = Session()


def get_page(url):
    page = session_.get(url, headers=HEADERS).text
    return html.fromstring(page)
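
For example, calling it on the test link gives back the parsed root element, which can then be queried with XPath (a small usage sketch):

    root = get_page(TEST_LINK)
    print(type(root))  # <class 'lxml.html.HtmlElement'>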

Then, I figured I would also need two other functions:

  • one to get the category links
  • and another one to get the product links

To tell one from the other, I noticed that only on category pages there is always a header containing CATEGORIES, so I used that:

def read_categories(page):
    categs = []
    try:
        if 'CATEGORIES' in page.xpath('//div[@class="boxData"][2]/h2')[0].text.strip():
            for a in page.xpath('//*[@id="carouselSegment2b"]//li//a'):
                categs.append(a.attrib["href"])
            return categs
        else:
            return None
    except Exception:
        return None


def read_products(page):
    return [
        a_tag.attrib["href"]
        for a_tag in page.xpath("//ul[@id='prodResult']/li//div[@class='imgWrapper']/a")
    ]

Now, the only thing left is the recursive part, where I am sure I am doing something wrong:

def read_all_categories(page):
    cat = read_categories(page)
    if not cat:
        yield read_products(page)
    else:
        yield from read_all_categories(page)


def main():
    main_page = get_page(TEST_LINK)

    for links in read_all_categories(main_page):
        print(links)

Here is all the code together:

from lxml import html
from requests import Session


HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/62.0.3202.94 Safari/537.36"
}
TEST_LINK = 'https://www.richelieu.com/us/en/category/custom-made-cabinet-doors-and-drawers/1000128'

session_ = Session()


def get_page(url):
    page = session_.get(url, headers=HEADERS).text
    return html.fromstring(page)


def read_categories(page):
    categs = []
    try:
        if 'CATEGORIES' in page.xpath('//div[@class="boxData"][2]/h2')[0].text.strip():
            for a in page.xpath('//*[@id="carouselSegment2b"]//li//a'):
                categs.append(a.attrib["href"])
            return categs
        else:
            return None
    except Exception:
        return None


def read_products(page):
    return [
        a_tag.attrib["href"]
        for a_tag in page.xpath("//ul[@id='prodResult']/li//div[@class='imgWrapper']/a")
    ]


def read_all_categories(page):
    cat = read_categories(page)
    if not cat:
        yield read_products(page)
    else:
        yield from read_all_categories(page)


def main():
    main_page = get_page(TEST_LINK)

    for links in read_all_categories(main_page):
        print(links)


if __name__ == '__main__':
    main()

Could someone point me in the right direction regarding the recursive function?

2 Answers:

Answer 0 (score: 2):

Here is how I would solve this:

from lxml import html as html_parser
from requests import Session

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36"
}

def dig_up_products(url, session=Session()):
    html = session.get(url, headers=HEADERS).text
    page = html_parser.fromstring(html)

    # if it appears to be a categories page, recurse
    for link in page.xpath('//h2[contains(., "CATEGORIES")]/'
                           'following-sibling::div[@id="carouselSegment1b"]//li//a'):
        yield from dig_up_products(link.attrib["href"], session)

    # if it appears to be a products page, return the links
    for link in page.xpath('//ul[@id="prodResult"]/li//div[@class="imgWrapper"]/a'):
        yield link.attrib["href"]

def main():
    start = 'https://www.richelieu.com/us/en/category/custom-made-cabinet-doors-and-drawers/1000128'

    for link in dig_up_products(start):
        print(link)

if __name__ == '__main__':
    main()

There is nothing wrong with iterating over an empty XPath result, so you can simply put both cases (category page / product page) into the same function, as long as the XPath expressions are specific enough to identify each case.
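
A quick way to convince yourself of that (a minimal, self-contained sketch, not tied to the site):

    from lxml import html as html_parser

    # A snippet that contains neither a CATEGORIES heading nor product links.
    page = html_parser.fromstring("<div><p>nothing to see here</p></div>")

    # An XPath query that matches nothing returns an empty list,
    # so the corresponding loop body is simply never entered.
    for link in page.xpath('//h2[contains(., "CATEGORIES")]//a'):
        print("category:", link.attrib["href"])  # never runs on this snippet

    for link in page.xpath("//ul[@id='prodResult']//a"):
        print("product:", link.attrib["href"])   # never runs either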

Answer 1 (score: 1):

You can also do it like this to make your script slightly more concise. I used the lxml library with CSS selectors to get the job done. The script parses all the links under a category and looks for a dead end; when it hits one, it parses the title from that page and repeats the whole thing until all the links are exhausted.

from lxml.html import fromstring
import requests

def products_links(link):
    res = requests.get(link, headers={"User-Agent": "Mozilla/5.0"})
    page = fromstring(res.text)

    try:
        for item in page.cssselect(".contentHeading h1"):  # a dead end (product page) has this heading
            print(item.text)
    except Exception:
        pass

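    # otherwise, recurse into every sub-category link inside the carousel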
    for link in page.cssselect("h2:contains('CATEGORIES')+[id^='carouselSegment'] .touchcarousel-item a"):
        products_links(link.attrib["href"])

if __name__ == '__main__':

    main_page = 'https://www.richelieu.com/us/en/category/custom-made-cabinet-doors-and-drawers/1000128'
    products_links(main_page)

Partial results:

BRILLANTÉ DOORS
BRILLANTÉ DRAWER FRONTS
BRILLANTÉ CUT TO SIZE PANELS
BRILLANTÉ EDGEBANDING
LACQUERED ZENIT DOORS
ZENIT CUT-TO-SIZE PANELS
EDGEBANDING
ZENIT CUT-TO-SIZE PANELS