Scrapy with scrapyd in Django never enters def parse

Date: 2018-06-01 14:15:06

Tags: python django scrapy scrapyd

I am still learning Scrapy, and I am trying to use Scrapy with scrapyd inside a Django project.

But I have noticed that the spider never enters def parse:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class NewsSpider(CrawlSpider):
    print("Start SPIDER")
    name = 'detik'
    allowed_domains = ['news.detik.com']
    start_urls = ['https://news.detik.com/indeks/all/?date=02/28/2018']

    def parse(self, response):
        print("SEARCH LINK")
        urls = response.xpath("//article/div/a/@href").extract()
        for url in urls:
            url = response.urljoin(url)
            yield scrapy.Request(url=url, callback=self.parse_detail)

    def parse_detail(self, response):
        print("SCRAPEEE")
        x = {}
        x['breadcrumbs'] = response.xpath("//div[@class='breadcrumb']/a/text()").extract()
        x['tanggal'] = response.xpath("//div[@class='date']/text()").extract_first()
        x['penulis'] = response.xpath("//div[@class='author']/text()").extract_first()
        x['judul'] = response.xpath("//h1/text()").extract_first()
        x['berita'] = response.xpath("normalize-space(//div[@class='detail_text'])").extract_first()
        x['tag'] = response.xpath("//div[@class='detail_tag']/a/text()").extract()
        x['url'] = response.request.url
        return x

The print("Start SPIDER") shows up in the log, but print("SEARCH LINK") does not.

I also get this kind of error:

  [Launcher,3804/stderr] Unhandled error in Deferred:  

Please help. PS: when I run it outside of Django, it works fine.

Thanks

1 Answer:

Answer 0 (score: 0):

It looks to me like you are missing the crawl rules in your spider.

Try adding

rules = [
    Rule(LinkExtractor(allow=".+", unique=True), callback='parse'),
]
after start_urls

to your code. I do not understand how it worked outside of Django.