Scraping/crawling multiple pages

Date: 2017-02-17 14:19:17

Tags: python web-scraping scrapy scrapy-spider

So far I have figured out how to scrape a single page, or multiple pages that share the same URL with only a number changing. What I can't find is how to crawl through a site's categories and their sub-categories until I finally reach the desired content. The site I am trying to scrape is http://www.askislam.org/index.html. I am using Scrapy, but I don't know where to start. Or, if you can suggest a better option, I am just using Python and will take it from there.

Thanks

# -*- coding: utf-8 -*-
from scrapy.spiders import Spider, CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy import Selector
from scrapy.http import Request
from ask_islam.items import AskIslamItem
import re

class AskislamSpider(Spider):
    name = "askislam"
    allowed_domains = ["askislam.org"]
    start_urls = ['http://www.askislam.org/']
    # NOTE: `rules` is only processed by CrawlSpider; a plain Spider ignores it.
    rules = [Rule(LinkExtractor(allow=()), callback='parse', follow=True)]

    def parse(self, response):
        hxs = Selector(response)
        links = hxs.css('div[id="categories"] li a::attr(href)').extract()
        for link in links:
            url = 'http://www.askislam.org' + link.replace('index.html', '')
            yield Request(url, callback=self.parse_page)

    def parse_page(self, response):
        hxs = Selector(response)
        # Select the href attributes of the sub-category links; selecting the
        # raw <li> HTML here would make the .replace() below operate on markup
        # rather than on a URL.
        categories = hxs.css('div[id="categories"] li a::attr(href)').extract()
        for categoryLink in categories:
            url = 'http://www.askislam.org' + categoryLink.replace('index.html', '')
            yield Request(url, callback=self.parse_page)

EDIT

import logging

def start_requests(self):
    yield Request("http://www.askislam.org", callback=self.parse_page)

def parse_page(self, response):
    hxs = Selector(response)
    categories = hxs.css('#categories li')
    for cat in categories:
        item = AskIslamItem()
        link = cat.css('a::attr(href)').extract()[0]
        link = "http://www.askislam.org" + link

        item['catLink'] = link

        logging.info("Scraping Link: %s" % link)

        yield Request(link, callback=self.parse_page)
        # Scrapy's duplicate filter drops a second request for the same URL;
        # dont_filter=True is needed so both callbacks actually run.
        yield Request(link, callback=self.parse_categories, dont_filter=True)

def parse_categories(self, response):
    logging.info("The Cat Url")
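As an aside, building absolute URLs by string concatenation (as in `"http://www.askislam.org" + link`) breaks if an href is already absolute or if slashes double up. A minimal sketch of the safer approach using the standard library's `urljoin` (Scrapy responses also expose a `response.urljoin()` shortcut for the same thing); the example hrefs below are illustrative, not taken from the real site:

```python
from urllib.parse import urljoin

base = "http://www.askislam.org/"

# A relative href is resolved against the base URL.
print(urljoin(base, "quran/index.html"))
# -> http://www.askislam.org/quran/index.html

# An already-absolute href passes through unchanged
# instead of being double-prefixed.
print(urljoin(base, "http://www.askislam.org/hadith/"))
# -> http://www.askislam.org/hadith/
```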

1 Answer:

Answer 0 (score: 1)

Read the links to those sub-categories from the http://www.askislam.org/index.html page with an xPath or CSS selector, and then issue another Request() for each of them.

EDIT

import logging

from scrapy import Request
from scrapy.spiders import Spider

class AskislamSpider(Spider):

    name = "askislam"

    def start_requests(self):
        yield Request("http://www.askislam.org/", callback=self.parse_page)

    def parse_page(self, response):
        # Keep the SelectorList here; calling .extract() would return plain
        # strings, which have no .css() method.
        categories = response.css('#categories li')
        for cat in categories:
            link = cat.css("a::attr(href)").extract()[0]
            link = "http://www.askislam.org/" + link

            logging.info("Scraping Link: %s" % link)

            yield Request(link, callback=self.parse_page)
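The spider above recurses into each category by re-yielding requests to `parse_page`; Scrapy's scheduler deduplicates the URLs for you. Outside Scrapy, the same traversal needs an explicit visited set. A minimal breadth-first sketch over a hypothetical category tree (the dict below is a made-up illustration, not the real site structure):

```python
from collections import deque

# Hypothetical page -> sub-category links; leaf pages have no children.
site = {
    "/": ["/quran/", "/hadith/"],
    "/quran/": ["/quran/tafsir/"],
    "/hadith/": [],
    "/quran/tafsir/": [],
}

def crawl(start):
    """Breadth-first walk of the category tree, skipping already-seen pages."""
    visited, queue, order = set(), deque([start]), []
    while queue:
        page = queue.popleft()
        if page in visited:
            continue  # mirrors Scrapy's duplicate-request filter
        visited.add(page)
        order.append(page)
        queue.extend(site.get(page, []))
    return order

print(crawl("/"))  # ['/', '/quran/', '/hadith/', '/quran/tafsir/']
```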