I have made a spider to scrape news. Here is the code:

from xml.sax.saxutils import escape

from scrapy.contrib.spiders import XMLFeedSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

from myproject.items import NewsItem  # assumed: the project's item definition


class AbcSpider(XMLFeedSpider):
    handle_httpstatus_list = [404, 500]
    name = 'abctv'
    allowed_domains = ['abctvnepal.com.np']
    start_urls = [
        'http://www.abctvnepal.com.np',
    ]

    def parse(self, response):
        if response.status in self.handle_httpstatus_list:
            return Request(url="http://google.com", callback=self.after_404)
        hxs = HtmlXPathSelector(response)  # the XPath selector
        sites = hxs.select('//div[@class="marlr respo-left"]/div/div/h3')
        items = []
        for site in sites:
            item = NewsItem()
            item['title'] = escape(''.join(site.select('a/text()').extract())).strip()
            item['link'] = escape(''.join(site.select('a/@href').extract())).strip()
            item['description'] = escape(''.join(site.select('p/text()').extract()))
            # Follow each article link; the partially filled item travels in meta.
            items.append(Request(item['link'], meta={'item': item},
                                 callback=self.parse_detail))
        return items

    def parse_detail(self, response):
        item = response.meta['item']
        sel = HtmlXPathSelector(response)
        details = sel.select('//div[@class="entry"]/p/text()').extract()
        detail = ''.join(details)
        item['details'] = detail
        item['location'] = detail.split(",", 1)[0]
        # Second and third space-separated tokens of the text, e.g. a date.
        rest = detail.split(" ", 1)[1]
        item['published_date'] = ' '.join(rest.split(" ")[:2])
        return item

    def after_404(self, response):
        print response.url
What I want is: if the spider is not working or not crawling, I want to display a status page saying that the spider is not working. How can I do that? How can I build such a status page? Any help?
I have integrated this with Django. Can I create the status in Django and then display it? If so, how?
Answer 0 (score: 0)
I can only give the steps without providing any concrete example (better than nothing, and the links should help anyway).
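One way to realize those steps (a sketch of my own, not the asker's code): have the spider write its current state to a small JSON file from Scrapy's spider_opened/spider_closed/spider_error signals, and have a Django view read that file and render the status page. The helpers below use only the standard library; the file path, the function names record_status/read_status, and the Scrapy/Django wiring shown in the comments are all assumptions.

```python
import json
import os
import tempfile
import time

# Hypothetical path shared by the spider process and the Django app.
STATUS_FILE = os.path.join(tempfile.gettempdir(), "abctv_spider_status.json")


def record_status(state, reason=""):
    """Write the spider's state ('running', 'finished', 'error') plus a
    timestamp, so another process (e.g. a Django view) can read it."""
    status = {"state": state, "reason": reason, "updated": time.time()}
    with open(STATUS_FILE, "w") as f:
        json.dump(status, f)


def read_status(max_age=3600):
    """Return the last recorded status, or a 'not working' status if
    nothing was recorded or the record is older than max_age seconds."""
    if not os.path.exists(STATUS_FILE):
        return {"state": "not working", "reason": "no status recorded"}
    with open(STATUS_FILE) as f:
        status = json.load(f)
    if time.time() - status.get("updated", 0) > max_age:
        return {"state": "not working", "reason": "status is stale"}
    return status


# In the spider, connect the recorder to Scrapy's signals
# (old-style dispatcher API, matching the code above; assumed wiring):
#
#   from scrapy.xlib.pydispatch import dispatcher
#   from scrapy import signals
#
#   def spider_opened(spider):
#       record_status("running")
#
#   def spider_closed(spider, reason):
#       record_status("finished", reason)
#
#   dispatcher.connect(spider_opened, signals.spider_opened)
#   dispatcher.connect(spider_closed, signals.spider_closed)
#
# In Django, a view can then serve the status page (assumed view):
#
#   from django.http import HttpResponse
#
#   def spider_status(request):
#       status = read_status()
#       return HttpResponse("Spider status: %s" % status["state"])
```

If no status file exists, or the spider has not reported in for an hour, read_status falls back to "not working", which is exactly what the status page should show when the spider silently dies.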