Scrapy - getting data from the filter box (Python)

Date: 2017-04-29 14:04:02

Tags: python scrapy web-crawler scrapy-spider

I am having a problem with Scrapy. I need to get all the city names from the red-circled part of the image linked below, but my code returns nothing. I have tried many alternatives without success. How can I solve this and get those city names? The links to the image and the source code are below.

import scrapy
from scrapy.spiders import CrawlSpider
#from city_crawl.items import CityCrawlItem


class details(CrawlSpider):
    name = "city_crawling"
    start_urls = ['https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0']

    def parse(self, response):
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            yield {
                'text': content.css('span.filter_label::text').extract()
            }

Image of the page source I need to parse. The red-circled part on the left is what I need to get.

3 Answers:

Answer 0 (score: 0)

Your for loop selects <a> elements whose class contains "uf", which returns nothing. If you instead select elements whose data-name contains "uf", you can change the code like this:

for content in response.xpath('//a[contains(@data-name, "uf")]'):
    yield {
        'text': content.css('span.filter_label::text').extract()
    }
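
As a side note, a quick way to iterate on selectors like this is Scrapy's interactive shell, run against a saved copy of the page (the file path below is illustrative):

scrapy shell "file:///path/to/Booking.html"
>>> response.xpath('//a[contains(@data-name, "uf")]').css('span.filter_label::text').extract()
[u'\nStockholm\n', u'\nG\xf6teborg\n', ...]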

Update:

I tested your URL and you are right, it returns nothing. The root cause is that Scrapy redirects three times and ends up on the wrong page: it finally crawls "https://www.booking.com/country/se.tr.html", which is not the page shown in the image. The log is as follows:

2017-04-30 15:18:47 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7> from <GET https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (302) to <GET https://www.booking.com/country/se.tr.html> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7>
2017-04-30 15:18:49 [scrapy] DEBUG: Crawled (200) <GET https://www.booking.com/country/se.tr.html> (referer: None)
2017-04-30 15:18:49 [scrapy] INFO: Closing spider (finished)
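
If you want to verify the redirect chain yourself, here is a minimal sketch (the spider name and logging are illustrative, not part of the original post) that uses Scrapy's dont_redirect and handle_httpstatus_list request meta keys, so the 301/302 responses reach the callback instead of being followed:

import scrapy

class RedirectCheckSpider(scrapy.Spider):
    name = "redirect_check"  # illustrative name

    def start_requests(self):
        url = 'https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7'
        # dont_redirect stops the RedirectMiddleware from following 301/302,
        # and handle_httpstatus_list lets the callback receive those responses
        yield scrapy.Request(url,
                             callback=self.parse,
                             meta={'dont_redirect': True,
                                   'handle_httpstatus_list': [301, 302]})

    def parse(self, response):
        # log the redirect status and where the site wanted to send us
        self.logger.info('Status %s for %s', response.status, response.url)
        self.logger.info('Location header: %s', response.headers.get('Location'))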

Solution:

You can try saving the HTML page to your local PC, as I did, naming it "Booking.html", and then change the code to:

import scrapy

class CitiesSpider(scrapy.Spider):
    name = "city_crawling"
    start_urls = [
        'file:///F:/algorithm%20study/python/StackOverFlow/Booking.html', # put the saved html file directory here
#        'https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0',
    ]

    def parse(self, response):
        #self.logger.info('A response from %s just arrived!', response.url)
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            #self.logger.info('TEST %s TEST', content.css('span.filter_label::text').extract()) 
            yield {
                'text': content.css('span.filter_label::text').extract()
            }

Run the crawl command in your Scrapy project: scrapy crawl city_crawling. It will start scraping the information you want; check the log and output below:

2017-04-30 15:33:31 [scrapy] DEBUG: Crawled (200) <GET file:///F:/algorithm%20study/python/StackOverFlow/Booking.html> (referer: None)
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nStockholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nG\xf6teborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nVisby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nFalkenberg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nMalm\xf6\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nLysekil\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nSimrishamn\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nLund\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nK\xf6pingsvik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nBorgholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nJ\xf6nk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nUppsala\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nF\xe4rjestaden\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nHelsingborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nRonneby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nYstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nHalmstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nKivik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nBorrby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nFj\xe4llbacka\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nKarlskrona\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nGr\xe4nna\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nL\xf6ttorp\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nNorrk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\n\xd6rebro\n']}
2017-04-30 15:33:31 [scrapy] INFO: Closing spider (finished)
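
As a side note, the extracted values still carry the surrounding newlines (u'\nStockholm\n'). Here is a small variant of the callback, assuming the same selectors, that normalizes them:

def parse(self, response):
    for content in response.xpath('//a[contains(@data-name, "uf")]'):
        # take the first matching text node and strip the '\n' padding
        name = content.css('span.filter_label::text').extract_first(default='')
        if name.strip():
            yield {'text': name.strip()}

If you want the names in a file, Scrapy's feed exports can write the items directly, e.g. scrapy crawl city_crawling -o cities.json.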

Answer 1 (score: 0)

def parse(self, response):
    for content in response.xpath('//a[contains(@class, "uf")]'):
        yield {
            'text': content.css('span.filter_label::text').extract(),
        }

You need to keep the comma at the end of "'text': content.css('span.filter_label::text').extract()".

Answer 2 (score: 0)


Check it now, it works.