Scrapy throws AttributeError

Date: 2017-04-14 21:15:23

Tags: python web-scraping scrapy

With the code written the way it is here, I get results from other websites, but for some reason this particular site raises an error. Since I am new to scrapy, I am not able to solve this on my own. The XPaths are fine. I am attaching what I see in the terminal along with the code:

items.py

import scrapy


class OlxItem(scrapy.Item):
    Title = scrapy.Field()
    Url = scrapy.Field()

olxsp.py

from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.linkextractors import LinkExtractor

class OlxspSpider(CrawlSpider):
    name = "olxsp"
    allowed_domains = ['olx.com.pk']
    start_urls = ['https://www.olx.com.pk/']

    rules = [
        Rule(LinkExtractor(restrict_xpaths='//div[@class="lheight16 rel homeIconHeight"]')),
        Rule(LinkExtractor(restrict_xpaths='//li[@class="fleft tcenter"]'),
             callback='parse_items', follow=True),
    ]

    def parse_items(self, response):
        page = response.xpath('//h3[@class="large lheight20 margintop10"]')
        for post in page:
            AA = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/span/text()').extract()
            CC = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/@href').extract()
            yield {'Title': AA, 'Url': CC}

settings.py

BOT_NAME = 'olx'
SPIDER_MODULES = ['olx.spiders']
NEWSPIDER_MODULE = 'olx.spiders'

ROBOTSTXT_OBEY = True

Image of the terminal output after the scrapy run finished (screenshot not reproduced here).

1 Answer:

Answer 0 (score: 1)

  1. You have ROBOTSTXT_OBEY = True, which tells scrapy to check the robots.txt file of every domain it crawls so it can determine how to access those sites politely. If that file disallows the pages your spider requests, Scrapy silently drops those requests, and the crawl can finish without scraping anything (see the settings sketch after this list).

  2. Your allowed_domains = ['www.olx.com'] lists a domain other than the one you actually crawl. If you only intend to crawl olx.com.pk, change allowed_domains to ['olx.com.pk']. If you don't actually know in advance which sites you will crawl, simply remove the allowed_domains attribute (see the spider sketch after this list).
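
A minimal settings.py sketch for point 1, assuming you have looked at the site's robots.txt and decided not to honour it for this crawl (everything except the ROBOTSTXT_OBEY line is copied from the settings in the question):

BOT_NAME = 'olx'

SPIDER_MODULES = ['olx.spiders']
NEWSPIDER_MODULE = 'olx.spiders'

# With ROBOTSTXT_OBEY = True, Scrapy fetches robots.txt first and silently
# drops every request that file disallows, so a spider can finish without
# scraping anything. Setting it to False skips that check entirely.
ROBOTSTXT_OBEY = False

If you keep ROBOTSTXT_OBEY = True instead, the run log should contain "Forbidden by robots.txt" lines for any request that was dropped, which tells you whether this setting is what is blocking the crawl.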
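
For point 2, a short sketch of how the relevant spider attributes might look if olx.com.pk really is the only site being crawled (only the attributes the answer talks about are shown; the rules and callback from the question stay as they are):

from scrapy.spiders import CrawlSpider  # scrapy.contrib.spiders is deprecated

class OlxspSpider(CrawlSpider):
    name = "olxsp"
    # Keep this list in sync with start_urls: requests to any domain not
    # listed here are filtered out by Scrapy's offsite middleware. Delete
    # the attribute entirely if you do not want that filtering.
    allowed_domains = ['olx.com.pk']
    start_urls = ['https://www.olx.com.pk/']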