scrapy: Request url must be str or unicode, got Selector

Date: 2016-06-03 02:17:18

Tags: python-2.7 scrapy screen-scraping

I am writing a spider with Scrapy to scrape Pinterest user details. I am trying to fetch the details of a user and of their followers (and so on, down to the last node).

Here is the spider code:

from scrapy.spider import BaseSpider
import scrapy
from pinners.items import PinterestItem
from scrapy.http import FormRequest
from urlparse import urlparse

class Sample(BaseSpider):

    name = 'sample'
    allowed_domains = ['pinterest.com']
    start_urls = ['https://www.pinterest.com/banka/followers', ]

    def parse(self, response):
        for base_url in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
            list_a = response.urljoin(base_url.extract())
            for new_urls in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
                yield scrapy.Request(new_urls, callback=self.Next)
        yield scrapy.Request(list_a, callback=self.Next)

    def Next(self, response):
        href_base = response.xpath('//div[@class = "tabs"]/ul/li/a')
        href_board = href_base.xpath('//div[@class="BoardCount Module"]')
        href_pin = href_base.xpath('.//div[@class="Module PinCount"]')
        href_like = href_base.xpath('.//div[@class="LikeCount Module"]')
        href_followers = href_base.xpath('.//div[@class="FollowerCount Module"]')
        href_following = href_base.xpath('.//div[@class="FollowingCount Module"]')
        item = PinterestItem()
        item["Board_Count"] = href_board.xpath('.//span[@class="value"]/text()').extract()[0]
        item["Pin_Count"] = href_pin.xpath('.//span[@class="value"]/text()').extract()
        item["Like_Count"] = href_like.xpath('.//span[@class="value"]/text()').extract()
        item["Followers_Count"] = href_followers.xpath('.//span[@class="value"]/text()').extract()
        item["Following_Count"] = href_following.xpath('.//span[@class="value"]/text()').extract()
        item["User_ID"] = response.xpath('//link[@rel="canonical"]/@href').extract()[0]
        yield item

I am getting the following error:

raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__)
TypeError: Request url must be str or unicode, got Selector:

I did check the type of list_a (the extracted URL). It gives me unicode.
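That observation is consistent with the error: list_a has gone through .extract(), so it is already a unicode string, while the inner loop hands raw Selector objects straight to scrapy.Request. A minimal sketch of that kind of type check (a simplified stand-in, not Scrapy's actual source; FakeSelector is a hypothetical placeholder for a real Selector):

```python
class FakeSelector(object):
    """Hypothetical stand-in for a scrapy Selector, for illustration only."""
    pass

def require_str(url):
    # Simplified version of the validation quoted in the traceback:
    # anything that is not a string is rejected with a TypeError.
    if not isinstance(url, str):
        raise TypeError('Request url must be str or unicode, got %s:'
                        % type(url).__name__)
    return url

require_str('https://www.pinterest.com/banka/')  # passes: already a string

try:
    require_str(FakeSelector())                  # fails: still a Selector
except TypeError as exc:
    print(exc)  # Request url must be str or unicode, got FakeSelector:
```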

1 Answer:

Answer 0: (score: 4)

The error is generated by the inner for loop in the parse method:

for new_urls in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
    yield scrapy.Request(new_urls, callback=self.Next)

The new_urls variable is actually a selector; try something like this:

for base_url in response.xpath('//div[@class="Module User gridItem"]/a/@href'):
    list_a = response.urljoin(base_url.extract())        
    yield scrapy.Request(list_a, callback=self.Next)
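As a side note, response.urljoin() is what turns the relative hrefs into the absolute URL strings that Request will accept. Outside Scrapy, the same resolution can be sketched with the standard library (shown with Python 3's urllib.parse; the Python 2.7 equivalent used in the question is urlparse.urljoin, and the hrefs below are hypothetical examples of extracted values):

```python
from urllib.parse import urljoin  # Python 2.7: from urlparse import urljoin

page_url = 'https://www.pinterest.com/banka/followers'

# Hypothetical hrefs, as they might come out of base_url.extract()
hrefs = ['/banka/', '/someuser/boards/']

absolute = [urljoin(page_url, h) for h in hrefs]
print(absolute)
# ['https://www.pinterest.com/banka/', 'https://www.pinterest.com/someuser/boards/']
```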