Running a scrapy spider from a script

Asked: 2015-07-26 19:19:41

Tags: python django postgresql web-scraping scrapy

I have written a crawl spider in a scrapy project that correctly scrapes data from a URL and pipelines the responses into a postgresql table, but only when the scrapy crawl command is used. When the spider is run from a script at the project root, only the spider class's parse method seems to get called, because no table is created when the script is run with the python command. I believe the problem is that the crawl command follows a specific protocol for finding and invoking certain modules in the directory above the spiders package (e.g. the models, pipelines, and settings modules), and those modules are not invoked when the spider is run from a script.
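
For reference, the layout in question is the standard Scrapy scaffold (models.py is my own module; the rest is generated by scrapy startproject):

    ticket_city_scraper/
        scrapy.cfg
        ticket_city_scraper/
            __init__.py
            settings.py
            pipelines.py
            models.py
            items.py
            spiders/
                __init__.py
                tc_spider.py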

I followed the instructions in the docs, but they do not seem to address pipelining the data after the scrape. This raises the question of whether I should be trying to run the spider from a script at all, or whether I should somehow use the scrapy crawl command instead. The catch is that I plan to run the scrapy spider from within a django project whenever a user submits text in a form, which led me to this SO post, but the answer given there does not seem to address my problem. I also need to pass the text from the form into the spider's URL (previously I was just using raw_input to build the URL); a sketch of what I am trying to wire up follows. How should I properly run the spider? I have the code for both the script and the spider if needed. Any help/code would be appreciated, thanks.
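
To make the goal concrete, this is roughly the Django side I have in mind (the view name and form field are hypothetical, not working code):

# views.py in the django app -- names are hypothetical
from django.http import HttpResponse
from ticket_city_scraper.spiders import tc_spider

def band_search(request):
    # text the user submitted in the form; it needs to end up in the spider URL
    bandname = request.POST.get('bandname', '')
    # spiderCrawl() currently takes no arguments and the spider uses raw_input,
    # so there is no way to hand bandname over -- this is the missing piece
    tc_spider.spiderCrawl()
    return HttpResponse("started scrape for %s" % bandname)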

Script file

from ticket_city_scraper import *
from ticket_city_scraper.spiders import tc_spider 

tc_spider.spiderCrawl()

Spider file

import scrapy
import re
import json
from scrapy.crawler import CrawlerProcess
from scrapy import Request
from scrapy.contrib.spiders import CrawlSpider , Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.selector import Selector
from scrapy.contrib.loader import ItemLoader
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import Join, MapCompose
from ticket_city_scraper.items import ComparatorItem
from urlparse import urljoin

bandname = raw_input("Enter bandname\n")
tc_url = "https://www.ticketcity.com/concerts/" + bandname + "-tickets.html"  

class MySpider3(CrawlSpider):
    handle_httpstatus_list = [416]
    name = 'comparator'
    allowed_domains = ["www.ticketcity.com"]

    start_urls = [tc_url]
    tickets_list_xpath = './/div[@class = "vevent"]'
    def create_link(self, bandname):
        tc_url = "https://www.ticketcity.com/concerts/" + bandname + "-tickets.html"  
        self.start_urls = [tc_url]
        #return tc_url      

    def parse_json(self, response):
        loader = response.meta['loader']
        jsonresponse = json.loads(response.body_as_unicode())
        ticket_info = jsonresponse.get('B')
        price_list = [i.get('P') for i in ticket_info]
        if len(price_list) > 0:
            str_Price = str(price_list[0])
            ticketPrice = unicode(str_Price, "utf-8")
            loader.add_value('ticketPrice', ticketPrice)
        else:
            ticketPrice = unicode("sold out", "utf-8")
            loader.add_value('ticketPrice', ticketPrice)
        return loader.load_item()

    def parse_price(self, response):
        print "parse price function entered \n"
        loader = response.meta['loader']
        event_City = response.xpath('.//span[@itemprop="addressLocality"]/text()').extract() 
        eventCity = ''.join(event_City) 
        loader.add_value('eventCity' , eventCity)
        event_State = response.xpath('.//span[@itemprop="addressRegion"]/text()').extract() 
        eventState = ''.join(event_State) 
        loader.add_value('eventState' , eventState) 
        event_Date = response.xpath('.//span[@class="event_datetime"]/text()').extract() 
        eventDate = ''.join(event_Date)  
        loader.add_value('eventDate' , eventDate)    
        ticketsLink = loader.get_output_value("ticketsLink")
        json_id_list= re.findall(r"(\d+)[^-]*$", ticketsLink)
        json_id=  "".join(json_id_list)
        json_url = "https://www.ticketcity.com/Catalog/public/v1/events/" + json_id + "/ticketblocks?P=0,99999999&q=0&per_page=250&page=1&sort=p.asc&f.t=s&_=1436642392938"
        yield scrapy.Request(json_url, meta={'loader': loader}, callback = self.parse_json, dont_filter = True) 

    def parse(self, response):
        """
        # """
        selector = HtmlXPathSelector(response)
        # iterate over tickets
        for ticket in selector.select(self.tickets_list_xpath):
            loader = XPathItemLoader(ComparatorItem(), selector=ticket)
            # define loader
            loader.default_input_processor = MapCompose(unicode.strip)
            loader.default_output_processor = Join()
            # iterate over fields and add xpaths to the loader
            loader.add_xpath('eventName' , './/span[@class="summary listingEventName"]/text()')
            loader.add_xpath('eventLocation' , './/div[@class="divVenue location"]/text()')
            loader.add_xpath('ticketsLink' , './/a[@class="divEventDetails url"]/@href')
            #loader.add_xpath('eventDateTime' , '//div[@id="divEventDate"]/@title') #datetime type
            #loader.add_xpath('eventTime' , './/*[@class = "productionsTime"]/text()')

            print "Here is ticket link \n" + loader.get_output_value("ticketsLink")
            #sel.xpath("//span[@id='PractitionerDetails1_Label4']/text()").extract()
            ticketsURL = "https://www.ticketcity.com/" + loader.get_output_value("ticketsLink")
            ticketsURL = urljoin(response.url, ticketsURL)
            yield scrapy.Request(ticketsURL, meta={'loader': loader}, callback = self.parse_price, dont_filter = True)

def spiderCrawl():
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })
    process.crawl(MySpider3)
    process.start()

1 Answer:

Answer 0 (score: 3)

To answer your questions:

  1. Scrapy does not differentiate between the crawl command and crawling from a script; the crawl itself runs the same way either way.
  2. The only part you are missing (and the only real difference) is:

    1. The scrapy crawl command must always be executed from inside the project directory, where the scrapy.cfg file is located. If you look closely, that file records the location of the settings file, and the settings file is the central place for all of your project-specific settings: caching policy, pipelines, header settings, proxy settings, and so on. So when you use scrapy crawl, all of these settings are loaded internally.
    2. When you execute Scrapy from a script, you are only pointing it at the spider and running it, without any of the custom settings from the settings.py file, which is why your pipelines (and thus the postgresql table) never fire.
    3. To make those settings take effect, create the CrawlerProcess object with the project settings:

      from scrapy.utils.project import get_project_settings

      settings = get_project_settings()
      settings.set('USER_AGENT', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)')
      process = CrawlerProcess(settings)
      process.crawl(MySpider3)
      process.start()
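
As for getting the form text into the spider instead of using raw_input: keyword arguments passed to process.crawl() are forwarded to the spider's __init__, so the start URL can be built there. A minimal sketch, assuming the spider is changed to accept a bandname argument:

      from scrapy.crawler import CrawlerProcess
      from scrapy.utils.project import get_project_settings
      from scrapy.contrib.spiders import CrawlSpider

      class MySpider3(CrawlSpider):
          name = 'comparator'
          allowed_domains = ["www.ticketcity.com"]

          def __init__(self, bandname=None, *args, **kwargs):
              super(MySpider3, self).__init__(*args, **kwargs)
              # build the start URL from the argument instead of raw_input
              self.start_urls = [
                  "https://www.ticketcity.com/concerts/" + bandname + "-tickets.html"
              ]

      def spiderCrawl(bandname):
          process = CrawlerProcess(get_project_settings())
          # keyword arguments here reach MySpider3.__init__
          process.crawl(MySpider3, bandname=bandname)
          process.start()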