Here is my code. My parse_item method is never called.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class SjsuSpider(CrawlSpider):
    name = 'sjsu'
    allowed_domains = ['sjsu.edu']
    start_urls = ['http://cs.sjsu.edu/']

    # allow=() is used to match all links
    rules = [Rule(SgmlLinkExtractor(allow=()), follow=True),
             Rule(SgmlLinkExtractor(allow=()), callback='parse_item')]

    def parse_item(self, response):
        print "some message"
        open("sjsupages", 'a').write(response.body)
Answer (score: 6)
You could narrow allowed_domains to 'cs.sjsu.edu' to keep the crawl on the CS site, but that alone is not why parse_item is skipped: Scrapy's allowed_domains also matches subdomains, so 'sjsu.edu' already covers cs.sjsu.edu.

The real problem is in your rules: when several rules match the same link, CrawlSpider applies only the first one. Your first rule matches every link and has no callback, so parse_item is never invoked. The two rules can be merged into one:
rules = [Rule(SgmlLinkExtractor(), follow=True, callback='parse_item')]
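To see why the original two-rule version fails, here is a minimal stdlib-only sketch of CrawlSpider's first-match-wins rule dispatch. This is not Scrapy's actual implementation; the Rule class and dispatch function below are simplified stand-ins for illustration only.

```python
import re

# Simplified stand-in for a CrawlSpider rule (not Scrapy's real class).
class Rule(object):
    def __init__(self, allow=r'.*', callback=None, follow=False):
        self.allow = re.compile(allow)
        self.callback = callback
        self.follow = follow

def dispatch(rules, url):
    """Return the first rule matching url, mimicking CrawlSpider:
    later rules never see a link the first matching rule claimed."""
    for rule in rules:
        if rule.allow.search(url):
            return rule
    return None

# Two rules that both match everything, as in the question:
# every link hits the first rule, which has no callback,
# so parse_item is never reached.
broken = [Rule(follow=True), Rule(callback='parse_item')]
assert dispatch(broken, 'http://cs.sjsu.edu/').callback is None

# One merged rule both follows links and fires the callback.
fixed = [Rule(follow=True, callback='parse_item')]
assert dispatch(fixed, 'http://cs.sjsu.edu/').callback == 'parse_item'
```

Under this model, merging follow=True and callback='parse_item' into a single rule is the only way for every crawled page to reach both behaviors.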