I'm currently trying to get some news from weixin.sogou.com, but I've run into a problem: no matter how I change the rules, parse_item is never called.
Here is my code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SogouCrawlSpider(CrawlSpider):
    name = 'sogou'
    allowed_domains = ['weixin.sogou.com']
    start_urls = ['http://weixin.sogou.com']

    rules = (
        Rule(
            LinkExtractor(restrict_xpaths=('//a')),
            callback="parse_item",
            follow=False),
    )

    def parse_item(self, response):
        print(response.url)
        yield response.url
The output looks like this:
$ scrapy crawl sogou
2018-03-16 23:42:44 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: hotwords_crawler)
2018-03-16 23:42:44 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'hotwords_crawler.spiders', 'SPIDER_MODULES': ['hotwords_crawler.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'hotwords_crawler'}
2018-03-16 23:42:44 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2018-03-16 23:42:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-03-16 23:42:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-03-16 23:42:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-03-16 23:42:45 [scrapy.core.engine] INFO: Spider opened
2018-03-16 23:42:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-03-16 23:42:45 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6025
2018-03-16 23:42:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://weixin.sogou.com/robots.txt> (referer: None)
2018-03-16 23:42:45 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://weixin.sogou.com>
2018-03-16 23:42:46 [scrapy.core.engine] INFO: Closing spider (finished)
2018-03-16 23:42:46 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
'downloader/request_bytes': 224,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 975,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 3, 16, 15, 42, 46, 97869),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'memusage/max': 34717696,
'memusage/startup': 34717696,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 3, 16, 15, 42, 45, 497564)}
2018-03-16 23:42:46 [scrapy.core.engine] INFO: Spider closed (finished)
I have no idea why parse_item is not being called, and I'm really confused about how to get the data.
Answer 0 (score: 0)
restrict_xpaths defines the region of the HTML document in which to look for links, not the link elements themselves, so //a is exactly what should not go there.
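A minimal sketch of what this suggests, pointing restrict_xpaths at a region that contains the links rather than at the <a> tags themselves (the //div[@class="news-box"] XPath is a hypothetical example, not sogou's real markup):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

# restrict_xpaths names a region of the page to search; the extractor
# then collects the links it finds inside that region.
rules = (
    Rule(
        LinkExtractor(restrict_xpaths=('//div[@class="news-box"]',)),  # hypothetical container
        callback='parse_item',
        follow=False,
    ),
)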
Answer 1 (score: 0)
For link extraction it is better to use LxmlLinkExtractor, the recommended link extractor, which comes with convenient filtering options. It is implemented on top of lxml's robust HTMLParser.
Use it like this:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SogouCrawlSpider(CrawlSpider):
    name = 'sogou'
    allowed_domains = ['weixin.sogou.com']
    start_urls = ['http://weixin.sogou.com']

    rules = (
        Rule(LxmlLinkExtractor(allow=()), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response.url)
        # Scrapy callbacks must yield Requests, items, or dicts, not bare
        # strings, so wrap the URL in a dict.
        yield {'url': response.url}
To control the allow / deny URL patterns and other filters, see the API documentation here.
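As a rough sketch of that filtering (both regex patterns below are made-up illustrations, not real sogou URL schemes):

from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor

# allow keeps only URLs matching one of the regexes; deny drops any
# match even if it also matches allow.
extractor = LxmlLinkExtractor(
    allow=(r'/weixin\?',),                # hypothetical: keep search-result pages
    deny=(r'/login',),                    # hypothetical: skip login pages
    allow_domains=('weixin.sogou.com',),  # stay on the target host
)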