Why can't I scrape this website with Scrapy?

Date: 2014-01-13 11:22:17

Tags: python scrapy web-crawler

Why am I unable to scrape this website:

http://www.itbanen.nl/vacature/zoeken/overzicht/wijzigingsdatum/query//distance/30/output/html/items_per_page/15/page/1/ignore_ids

I tried a very simple Scrapy spider to see whether I could get anything from the site at all, but no matter what I try, I get nothing back.

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector
from scrapy.http import Request

class VacaturesSpider(CrawlSpider):

    name = 'vacatures_spider'
    allowed_domains = ['www.itbanen.nl']
    start_urls = ['http://www.itbanen.nl/vacature/zoeken/overzicht/wijzigingsdatum/query//distance/30/output/html/items_per_page/15/page/1/ignore_ids']



    def parse(self, response):
        self.log('Nieuwe pagina! %s' % response.url)
        sel = Selector(response)

        # XPath meant to find the links that go to the detail pages
        test = sel.xpath('//div[@id="resultlist"]/div[@class="resultlist"]/h2/text()').extract()
        print test   # prints an empty list

        links = sel.xpath('//div[@class="container"]/h2/text()')
        print links  # also empty
        for link in links:
            link_item = link.extract()
            print link_item
            #yield Request(complete_url(link_item), callback=self.parse_category)

1 Answer:

Answer 0 (score: 2):

I used scrapy shell and experimented a bit.
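
For reference, the shell can be started against your start URL like this (sel is then created automatically by the shell):

scrapy shell "http://www.itbanen.nl/vacature/zoeken/overzicht/wijzigingsdatum/query//distance/30/output/html/items_per_page/15/page/1/ignore_ids"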

>>> a = sel.xpath('//div[@class="result-item-header"]//h2/a')
>>> a.xpath('text()').extract()
[u'Service Desk Engineer (Unified C...', u'Virtualisatie specialist',
 u'Medior beheerder ICT', ... ]
>>> a.xpath('@href').extract()
[u'http://www.itbanen.nl/vacature/topbaan/3030450/Service+Desk+Engineer+%28Unified+Communications%29', 
 u'http://www.itbanen.nl/vacature/topbaan/3025022/Virtualisatie+specialist', 
 u'http://www.itbanen.nl/vacature/3043979/Medior+beheerder+ICT/0', 
 ...]

So I'd guess your request generation should look something like:

for link in a.xpath('@href').extract():
    yield Request(link, callback=self.parse_category)
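
Putting it together, a minimal sketch of the full spider could look like the following. Here parse_category is a hypothetical detail-page callback (your real one would populate your items). Also note that the Scrapy docs advise against overriding parse on a CrawlSpider when you rely on its Rules, since CrawlSpider uses parse internally; since you define no rules, it behaves like a plain spider here.

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector
from scrapy.http import Request

class VacaturesSpider(CrawlSpider):
    name = 'vacatures_spider'
    allowed_domains = ['www.itbanen.nl']
    start_urls = ['http://www.itbanen.nl/vacature/zoeken/overzicht/wijzigingsdatum/query//distance/30/output/html/items_per_page/15/page/1/ignore_ids']

    def parse(self, response):
        sel = Selector(response)
        # every result row keeps its title link inside result-item-header
        for href in sel.xpath('//div[@class="result-item-header"]//h2/a/@href').extract():
            yield Request(href, callback=self.parse_category)

    def parse_category(self, response):
        # hypothetical detail-page callback: just log the visited URL for now
        self.log('Detail page: %s' % response.url)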