I created a web scraper with the Scrapy Framework to get concert ticket data from this website. I've been able to successfully scrape data for several selectors that are basically just HTML text, but a few other selectors aren't collecting anything. When I try to scrape the concert date from each ticket, the response returns an empty array, even though the XPath I'm using returns all the correct dates when run in the developer console. Is there something wrong with the way I define the items in my class definition? Any help would be appreciated:
from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.selector import Selector
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import Join, MapCompose

from concert_comparator.items import ComparatorItem

bandname = raw_input("Enter a bandname \n")
vs_url = "http://www.vividseats.com/concerts/" + bandname + "-tickets.html"


class MySpider(CrawlSpider):
    handle_httpstatus_list = [416]
    name = 'comparator'
    allowed_domains = ["www.vividseats.com"]
    start_urls = [vs_url]
    #rules = (Rule(LinkExtractor(allow=('-tickets/.*', )), callback='parse_item'))
    # item = ComparatorItem()
    tickets_list_xpath = './/*[@itemtype="http://schema.org/Event"]'
    item_fields = {
        'eventName': './/*[@class="productionsEvent"]/text()',
        #'ticketPrice': '//*[@class="eventTickets lastChild"]/div/div/@data-origin-price',
        'eventLocation': './/*[@class = "productionsVenue"]/span[@itemprop = "name"]/text()',
        'ticketsLink': './/a/@href',
        # returns empty set
        'eventDate': './/*[@class = "productionsDateCol productionsDateCol sorting_3"]/div[@class = "productionsDate"]/text()',
        'eventCity': './/*[@class = "productionsVenue"]/span[@itemprop = "address"]/span[@itemprop = "addressLocality"]/text()',
        'eventState': './/*[@class = "productionsVenue"]/span[@itemprop = "address"]/span[@itemprop = "addressRegion"]/text()',
        # returns empty set
        'eventTime': './/*[@class = "productionsDateCol productionsDateCol sorting_3"]/div[@class = "productionsTime"]/text()'
    }

    def parse(self, response):
        selector = HtmlXPathSelector(response)
        # iterate over tickets
        for ticket in selector.select(self.tickets_list_xpath):
            loader = XPathItemLoader(ComparatorItem(), selector=ticket)
            # define loader
            loader.default_input_processor = MapCompose(unicode.strip)
            loader.default_output_processor = Join()
            # iterate over fields and add xpaths to the loader
            for field, xpath in self.item_fields.iteritems():
                loader.add_xpath(field, xpath)
            yield loader.load_item()
Answer 0 (score: 0)
Not entirely sure why, but after some trial and error I found the correct XPaths. By matching only on the class of the tag whose text I was trying to extract, I was able to scrape those elements for every ticket on the page.
For example: eventDate: './/*[@class="productionsDate"]/text()'
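Applied to the spider above, the field mapping would look roughly like this. This is only a minimal sketch: the answer shows the simplified XPath for eventDate only, so the matching eventTime entry here (using the "productionsTime" class the original XPath already referenced) is an assumption that the same fix applies to the time cell.

item_fields = {
    'eventName': './/*[@class="productionsEvent"]/text()',
    'eventLocation': './/*[@class = "productionsVenue"]/span[@itemprop = "name"]/text()',
    'ticketsLink': './/a/@href',
    # simplified per the answer: match the innermost class directly
    'eventDate': './/*[@class="productionsDate"]/text()',
    'eventCity': './/*[@class = "productionsVenue"]/span[@itemprop = "address"]/span[@itemprop = "addressLocality"]/text()',
    'eventState': './/*[@class = "productionsVenue"]/span[@itemprop = "address"]/span[@itemprop = "addressRegion"]/text()',
    # assumed: same pattern applied to the time cell (not shown in the answer)
    'eventTime': './/*[@class="productionsTime"]/text()'
}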