Running the following code creates the file without errors, but nothing gets saved to the JSON file.
I turned off AutoThrottle, which has interfered with downloading data in the past, but that did not solve the problem.
Scrapy == 1.4.0
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):
        print('went to: {}'.format(response.url))
        yield {'url': response.url}


FILE_NAME = 'my_data.json'
SETTINGS = {
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': FILE_NAME,
}

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()
Edit:
The scraper is collecting data, as the log shows:
2017-11-21 11:07:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-21 11:07:55 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_URI': 'my_data.json', 'FEED_FORMAT': 'json'}
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.feedexport.FeedExporter']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-21 11:07:55 [scrapy.core.engine] INFO: Spider opened
2017-11-21 11:07:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-21 11:07:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6041
2017-11-21 11:07:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com> (referer: None)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/us> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/congress-capitol-hill> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/president-donald-trump-45> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/us-security> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/trumpmerica> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/state-cnn-politics-magazine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinion/opinion-social-issues> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinions/cnnireport> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/vr/vr-archives> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/middle-east> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://imagesource.cnn.com> from <GET http://www.cnn.com/collection>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/supreme-court-nine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://transcripts.cnn.com/TRANSCRIPTS/> from <GET http://www.cnn.com/transcripts>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/pf/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/luxury/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/data/markets/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/technology/> (referer: http://www.cnn.com)
went to: http://www.cnn.com/us
2017-11-21 11:07:56 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.cnn.com/us>
{'url': 'http://www.cnn.com/us'}
2017-11-21 11:07:56 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://www.cnn.com/us> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://www.cnn.com/email/subscription> from <GET http://www.cnn.com/newsletters>
...
We can see that the crawler is visiting URLs, extracting further URLs from the pages it fetches, receiving the responses (as shown by the "went to:" print), and then yielding the data as {'url': ...}, for example {'url': 'http://www.cnn.com/us'}.
Answer 0 (score: 0)
So your code works fine; I assume you stopped it twice or killed it, which is what left the JSON file empty. I would change two things.

First, use jsonlines instead of json as the feed format. That way, even if I kill the spider, I don't lose many items: each line is a complete JSON document on its own, so new items can simply be appended to the same file. With the json format, interrupting the program part-way leaves you with invalid JSON.
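For illustration, a jsonlines feed is just one JSON object per line, so the output file would look roughly like this (URLs taken from the log above):

{"url": "http://www.cnn.com/us"}
{"url": "http://www.cnn.com/middle-east"}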
Second, I would set CONCURRENT_ITEMS to a lower value so that items are exported more frequently (the default is 100).
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):
        print('went to: {}'.format(response.url))
        yield {'url': response.url}


FILE_NAME = 'my_data.jsonl'
SETTINGS = {
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'jsonlines',  # one JSON object per line
    'FEED_URI': FILE_NAME,
    'CONCURRENT_ITEMS': 1,       # export each item as soon as it is scraped
}

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()
After that you will find the data exported correctly.
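If you then want the scraped items back in Python, here is a minimal sketch for reading the jsonlines output; it assumes the my_data.jsonl file produced by the settings above:

import json

# each line of a jsonlines file is an independent JSON document
with open('my_data.jsonl') as f:
    items = [json.loads(line) for line in f if line.strip()]

print('collected {} urls'.format(len(items)))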