Adding the Scrapy request URL to the parsed item array

Posted: 2014-04-24 19:28:09

Tags: python csv web-scraping scrapy

I am using the Scrapy code below, which works and scrapes data from a website. The spider reads a text file of product IDs and builds them into URLs in the `start_urls` list. How can I add the current start URL as an additional element to my item array?

from scrapy.spider import Spider
from scrapy.selector import Selector
from site_scraper.items import SiteScraperItem

class MySpider(Spider):
    name = "product"
    allowed_domains = ["site.com"]
    url_list = open("productIDs.txt")
    base_url = "http://www.site.com/p/"
    start_urls = [base_url + url.strip() for url in url_list.readlines()]
    url_list.close()

    def parse(self, response):
        hxs = Selector(response)
        titles = hxs.xpath("//span[@itemprop='name']")
        item = SiteScraperItem()
        item["Classification"] = titles.xpath("//div[@class='productSoldMessage']/text()").extract()[1:]
        item["Price"] = titles.xpath("//span[@class='pReg']/text()").extract()
        item["Name"] = titles.xpath("//span[@itemprop='name']/text()").extract()
        # An explicit check replaces the original try/except, which discarded
        # the comparison result and caught every exception indiscriminately.
        availability = titles.xpath("//link[@itemprop='availability']/@href").extract()
        if availability and availability[0] == 'http://schema.org/InStock':
            item["Availability"] = 'In Stock'
        else:
            item["Availability"] = 'Out of Stock'
        if len(item["Name"]) == 0:
            item["OnlineStatus"] = 'Offline'
            item["Availability"] = ''
        else:
            item["OnlineStatus"] = 'Online'
        return [item]

I export this data to CSV with the command line below, and I would like the URL to appear as an additional column in my CSV file.

scrapy crawl product -o items.csv -t csv

Thanks in advance for your help!

1 Answer:

Answer 0 (score: 2):

Add a new Field to your SiteScraperItem class and set it to response.url in the parse() method.