Links with leading and trailing whitespace are not parsed correctly

Asked: 2014-09-23 19:12:19

Tags: python web-scraping scrapy

I am crawling a site whose links have whitespace before and after the URL:

<a href="   /c/96894   ">Test</a>

Instead of crawling this:

http://www.stores.com/c/96894/ 

it crawls this:

http://www.store.com/c/%0A%0A/c/96894%0A%0A

It also causes an infinite loop, because each crawled page yields links that contain the same malformed link nested inside them:

http://www.store.com/cp/%0A%0A/cp/96894%0A%0A/cp/96894%0A%0A

All browsers ignore any whitespace (\r, \n, \t, and spaces) before and after a URL. How can I trim the whitespace from the crawled URLs?

Here is my code:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

from wallspider.items import Website

class StoreSpider(CrawlSpider):
    name = "cpages"
    allowed_domains = ["www.store.com"]
    start_urls = ["http://www.store.com"]

    rules = (
        Rule(SgmlLinkExtractor(allow=('/c/',),
                               deny=('grid=false', 'sort=', 'stores=', r'\|\|', 'page=')),
             callback="parse_items", follow=True,
             process_links=lambda links: [link for link in links if not link.nofollow]),
        Rule(SgmlLinkExtractor(allow=(),
                               deny=('grid=false', 'sort=', 'stores=', r'\|\|', 'page='))),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//html')
        items = []

        for site in sites:
            item = Website()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['anchor'] = response.meta.get('link_text')
            item['canonical'] = site.xpath('//head/link[@rel="canonical"]/@href').extract()
            item['robots'] = site.select('//meta[@name="robots"]/@content').extract()
            items.append(item)

        return items

2 Answers:

Answer 0 (score: 1)

I used process_value=cleanurl in my LinkExtractor instances:

def cleanurl(link_text):
    return link_text.strip("\t\r\n ")
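
str.strip with an explicit character set removes any mix of those characters from both ends of the string, so every extracted URL reaches the crawler already trimmed. A quick sanity check (hypothetical values):

cleanurl("   /c/96894   ")        # -> '/c/96894'
cleanurl("\n\n/c/96894\n\n")      # -> '/c/96894'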

Here is the full code, in case anyone runs into the same problem:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

from wallspider.items import Website


class StoreSpider(CrawlSpider):
    name = "cppages"
    allowed_domains = ["www.store.com"]
    start_urls = ["http://www.store.com"]

    def cleanurl(link_text):
        # strip surrounding whitespace and stray quotes from every extracted URL
        return link_text.strip("\t\r\n '\"")

    rules = (
        Rule(SgmlLinkExtractor(allow=('/cp/',),
                               deny=('grid=false', 'sort=', 'stores=', r'\|\|', 'page='),
                               process_value=cleanurl),
             callback="parse_items", follow=True,
             process_links=lambda links: [link for link in links if not link.nofollow]),
        Rule(SgmlLinkExtractor(allow=('/cp/', '/browse/'),
                               deny=('grid=false', 'sort=', 'stores=', r'\|\|', 'page='),
                               process_value=cleanurl)),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//html')
        items = []

        for site in sites:
            item = Website()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['anchor'] = response.meta.get('link_text')
            item['canonical'] = site.xpath('//head/link[@rel="canonical"]/@href').extract()
            item['robots'] = site.select('//meta[@name="robots"]/@content').extract()
            items.append(item)

        return items
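
As an aside, scrapy.contrib and SgmlLinkExtractor were deprecated in later Scrapy releases; the replacement, scrapy.linkextractors.LinkExtractor, accepts the same process_value hook. A minimal sketch of the equivalent rule on a modern Scrapy install (the allow/deny patterns are carried over from above):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

def cleanurl(value):
    # trim whitespace and stray quotes before the URL is deduplicated and followed
    return value.strip("\t\r\n '\"")

rules = (
    Rule(LinkExtractor(allow=('/cp/',),
                       deny=('grid=false', 'sort=', 'stores=', r'\|\|', 'page='),
                       process_value=cleanurl),
         callback="parse_items", follow=True),
)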

Answer 1 (score: 0)

You can replace the spaces with an empty string:

url = response.url
item['url'] = url.replace(' ', '')  # removes literal space characters only

Or, using a regular expression:

import re

url = response.url
item['url'] = re.sub(r'\s', '', url)  # removes every whitespace character (\t, \r, \n, spaces)
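
Both snippets only clean the URL stored in the item; the crawler itself still follows the malformed links, so the infinite loop persists unless the URLs are cleaned at link-extraction time with process_value, as in the answer above. Note also that by the time the whitespace shows up in response.url it may already be percent-encoded (the %0A sequences in the question), which neither replace(' ', '') nor \s will match. A hedged sketch that handles both forms (clean_crawled_url is a hypothetical helper, not part of Scrapy):

import re

def clean_crawled_url(url):
    # drop percent-encoded tabs/newlines/carriage returns (%09, %0A, %0D) ...
    url = re.sub(r'%0[9AD]', '', url, flags=re.I)
    # ... then strip any remaining literal whitespace
    return re.sub(r'\s+', '', url)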