How to get xpath from the different URLs returned by the start_requests method

Date: 2015-01-06 08:02:39

Tags: python xpath scrapy web-crawler

Here is my Scrapy code:

import scrapy
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
import MySQLdb


class AmazonSpider(BaseSpider):
    name = "amazon"
    allowed_domains = ["amazon.com"]
    start_urls = []

    def parse(self, response):
        print self.start_urls

    def start_requests(self):
        conn = MySQLdb.connect(user='root',passwd='root',db='mydb',host='localhost')
        cursor = conn.cursor()
        cursor.execute(
            'SELECT url FROM products;'
            )
        rows = cursor.fetchall()
        for row in rows:
            yield self.make_requests_from_url(row[0])
        conn.close()

How can I get the xpath of the URLs returned by the start_requests function?

Note: the URLs belong to different domains, so they are not all the same.

1 answer:

Answer 0 (score: 1)

The yield makes start_requests a generator. Use a for loop to retrieve each result it yields.

Like this:

...
my_spider = AmazonSpider()
# start_requests() is a generator; each item it yields is a scrapy Request
for my_request in my_spider.start_requests():
    print 'we get URL: %s' % my_request.url
...
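
Note that the XPath extraction itself does not happen in start_requests: Scrapy downloads each yielded Request and passes the resulting Response to the parse callback, and that is where an XPath is applied. Below is a minimal sketch of choosing a different XPath per domain inside parse; MultiDomainSpider, the domain keys and the XPath expressions are placeholder assumptions, not code from the question:

from urlparse import urlparse  # Python 2; on Python 3 use urllib.parse

from scrapy.spider import BaseSpider


class MultiDomainSpider(BaseSpider):
    name = "multidomain"
    # start_requests() from the question would stay unchanged here

    # Hypothetical per-domain XPath expressions -- replace with your own
    xpaths_by_domain = {
        'www.amazon.com': '//span[@id="productTitle"]/text()',
        'www.example.com': '//h1/text()',
    }

    def parse(self, response):
        # Pick the XPath that matches the domain of the downloaded page
        domain = urlparse(response.url).netloc
        xpath = self.xpaths_by_domain.get(domain)
        if xpath is None:
            return
        for value in response.xpath(xpath).extract():
            print 'extracted from %s: %s' % (response.url, value)

Also note that with allowed_domains = ["amazon.com"], requests to any other domain would be dropped by Scrapy's offsite middleware, so that list has to be extended or removed when the URLs in the database span several domains.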