How to follow links with Scrapy

Asked: 2011-12-01 16:04:00

Tags: python, scrapy

How do I follow links as in this example: http://snippets.scrapy.org/snippets/7/? The script stops after visiting the links on the first page.

# Imports for the Scrapy 0.x API used here
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.loader import XPathItemLoader

from myproject.items import QuestionItem  # QuestionItem is defined in the project's items module

class MySpider(BaseSpider):
    """Our ad-hoc spider"""
    name = "myspider"
    start_urls = ["http://stackoverflow.com/"]

    question_list_xpath = '//div[@id="content"]//div[contains(@class, "question-summary")]'

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

        for qxs in hxs.select(self.question_list_xpath):
            loader = XPathItemLoader(QuestionItem(), selector=qxs)
            loader.add_xpath('title', './/h3/a/text()')
            loader.add_xpath('summary', './/h3/a/@title')
            loader.add_xpath('tags', './/a[@rel="tag"]/text()')
            loader.add_xpath('user', './/div[@class="started"]/a[2]/text()')
            loader.add_xpath('posted', './/div[@class="started"]/a[1]/span/@title')
            loader.add_xpath('votes', './/div[@class="votes"]/div[1]/text()')
            loader.add_xpath('answers', './/div[contains(@class, "answered")]/div[1]/text()')
            loader.add_xpath('views', './/div[@class="views"]/div[1]/text()')

            yield loader.load_item()

I tried changing:

class MySpider(BaseSpider):

class MySpider(CrawlSpider):

and adding:

rules = (
    Rule(SgmlLinkExtractor(allow=()),
         callback='parse', follow=True),
)

but it does not crawl the whole site.

Thanks,

1 Answer:

Answer 0 (score: 0)

Yes, you need to subclass CrawlSpider, and rename your parse function (to e.g. parse_page), because CrawlSpider uses parse itself to start crawling. This was already answered.
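Why the rename matters can be sketched without Scrapy itself: CrawlSpider's built-in parse method is the dispatcher that extracts links via the rules and routes each response to the rule's named callback, so overriding parse with your own extraction code silently disables link-following. A minimal stand-in (hypothetical classes, not the real Scrapy API):

```python
class FakeCrawlSpider:
    """Stand-in mimicking how CrawlSpider.parse dispatches through rules
    (hypothetical sketch, not the real Scrapy implementation)."""
    rules = ()

    def parse(self, response):
        # The real CrawlSpider.parse both extracts new links to follow
        # and hands the response to each rule's callback.
        out = []
        for rule in self.rules:
            out.append("follow links from " + response)   # link extraction
            out.extend(getattr(self, rule["callback"])(response))
        return out


class FixedSpider(FakeCrawlSpider):
    """Keeps the inherited parse; extraction lives in a renamed callback."""
    rules = ({"callback": "parse_page"},)

    def parse_page(self, response):
        return ["item from " + response]


class BrokenSpider(FakeCrawlSpider):
    """Overrides parse directly, shadowing the dispatcher."""
    rules = ({"callback": "parse_page"},)

    def parse(self, response):
        return ["item from " + response]   # rules are never consulted


print(FixedSpider().parse("page1"))    # links are followed AND the item is yielded
print(BrokenSpider().parse("page1"))   # only the item; crawling stops
```

In the real spider, then, leave CrawlSpider's parse untouched and point the rule at the renamed method, e.g. Rule(SgmlLinkExtractor(allow=()), callback='parse_page', follow=True).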