Variables in Scrapy

Time: 2014-03-01 15:01:52

Tags: python scrapy

Can I use variables in start_urls? Please see the scripts below.

This script works fine:

from scrapy.spider import Spider
from scrapy.selector import Selector
from example.items import ExampleItem

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/search-keywords=['0750692995']",
        "http://www.example.com/search-keywords=['0205343929']",
        "http://www.example.com/search-keywords=['0874367379']",
    ]

    def parse(self, response):
        hxs = Selector(response)
        item = ExampleItem()
        item['url'] = response.url
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract()
        item['title'] = hxs.select("//span[@class='title L']/text()").extract()
        return item

But I want something like this:

from scrapy.spider import Spider
from scrapy.selector import Selector
from example.items import ExampleItem

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    pro_id = ["0750692995", "0205343929", "0874367379"]  # (I added this line)
    start_urls = [
        "http://www.example.com/search-keywords=['pro_id']",  # (and I changed this line)
    ]

    def parse(self, response):
        hxs = Selector(response)
        item = ExampleItem()
        item['url'] = response.url
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract()
        item['title'] = hxs.select("//span[@class='title L']/text()").extract()
        return item

I want this script to pull the pro_id numbers into start_urls one by one. Is there a way to do that? When I run the script, the requested URL is still the literal "http://www.example.com/search-keywords=['pro_id']", not "http://www.example.com/search-keywords=0750692995". What should the script look like? Thanks for your help.
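A side note on why the URL stays literal: Python never substitutes variable names inside string literals, so 'pro_id' inside the quoted URL is plain text. The value has to be interpolated explicitly, for example:

pro_id = "0750692995"
# explicit interpolation -- a bare string literal is never rewritten
url = "http://www.example.com/search-keywords=%s" % pro_id
# url == "http://www.example.com/search-keywords=0750692995"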

EDIT: After making the changes @paul t suggested below, the following error occurs:

2014-03-02 08:39:44+0700 [example] ERROR: Obtaining request from start requests
    Traceback (most recent call last):
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1192, in run
        self.mainLoop()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1201, in mainLoop
        self.runUntilCurrent()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\utils\reactor.py", line 41, in __call__
        return self._func(*self._a, **self._kw)
    --- <exception caught here> ---
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\core\engine.py", line 111, in _next_request

        request = next(slot.start_requests)
      File "C:\Users\S\desktop\example\example\spiders\example_spider.py", line 13, in start_requests
        yield Request(self.start_urls_base % pro_id, dont_filter=True)
    exceptions.NameError: global name 'Request' is not defined

3 Answers:

Answer 0 (score: 5):

One way to do this is to override the spider's start_requests() method:

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    pro_ids = ["0750692995", "0205343929", "0874367379"]
    start_urls_base = "http://www.example.com/search-keywords=['%s']"

    def start_requests(self):
        for pro_id in self.pro_ids:
            yield Request(self.start_urls_base % pro_id, dont_filter=True)

Answer 1 (score: 0):

First, you have to import Request:

from scrapy.http import Request

After that you can follow Paul's suggestion:

    def start_requests(self):
        for pro_id in self.pro_ids:
            yield Request(self.start_urls_base % pro_id, dont_filter=True)
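Putting the two answers together, a minimal complete sketch might look like this (the parse() body and XPath expressions are carried over unchanged from the question):

from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request  # without this import, start_requests() raises the NameError above
from example.items import ExampleItem

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    pro_ids = ["0750692995", "0205343929", "0874367379"]
    start_urls_base = "http://www.example.com/search-keywords=['%s']"

    def start_requests(self):
        # one request per product id; dont_filter=True bypasses the
        # duplicate-request filter so every generated URL is fetched
        for pro_id in self.pro_ids:
            yield Request(self.start_urls_base % pro_id, dont_filter=True)

    def parse(self, response):
        hxs = Selector(response)
        item = ExampleItem()
        item['url'] = response.url
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract()
        item['title'] = hxs.select("//span[@class='title L']/text()").extract()
        return item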

Answer 2 (score: 0):

I think you can solve it with a for loop (a list comprehension), like this:

start_urls = [
    "http://www.example.com/search-keywords=" + i for i in pro_id
]
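One caveat with this approach, assuming pro_id is the list from the question: it must be defined before start_urls, because the comprehension runs as soon as the class body is executed. A minimal sketch:

from scrapy.spider import Spider

class ExampleSpider(Spider):
    name = "example"
    allowed_domains = ["example.com"]
    # pro_id must come first -- the comprehension below is evaluated
    # immediately when the class body runs
    pro_id = ["0750692995", "0205343929", "0874367379"]
    start_urls = ["http://www.example.com/search-keywords=" + i for i in pro_id]

Note that these URLs come out as search-keywords=0750692995, without the ['...'] wrapper the original working script used.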