I am working on a Django project and want to feed some news items to the home page. I recently started experimenting with Scrapy, and the code below fetches the data successfully when I run it in the "scrapy shell". However, when I put the same code into a script so that the news feed is added to the template automatically, it fails with a "fetch is not defined" error:
import scrapy
from scrapy import *
fetch("https://www.google.co.in/search?q=cholera&safe=strict&source=lnms&tbm=nws&sa")
news_links = response.css(".r").extract()[0].encode('utf-8')
news_texts = response.css(".st").extract()[0].encode('utf-8')
news_images = response.css(".th").extract()[0].encode('utf-8')
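(A side note for context: fetch and response are conveniences that only exist inside the Scrapy shell, which is why a plain script raises the NameError. A minimal sketch of a standalone equivalent, assuming the requests and parsel packages are installed; the User-Agent string is illustrative:)

import requests
from parsel import Selector  # parsel is the selector library Scrapy itself uses

url = "https://www.google.co.in/search?q=cholera&safe=strict&source=lnms&tbm=nws&sa"
# fetch(...) in the shell roughly corresponds to an HTTP GET plus selector setup
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
sel = Selector(text=resp.text)
news_links = sel.css(".r").extract()   # same selectors as in the shell session
news_texts = sel.css(".st").extract()
news_images = sel.css(".th").extract()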
I tried this:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'https://www.google.co.in/search?q=cholera&safe=strict&source=lnms&tbm=nws&sa']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.css(".r").extract()
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
with the command:
scrapy crawl quotes
which does not work either. How do I turn this code into a script?
Error log:
scrapy crawl news
2017-12-22 14:24:33 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: newsfee)
2017-12-22 14:24:33 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_M}
2017-12-22 14:24:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-12-22 14:24:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-22 14:24:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-22 14:24:33 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-12-22 14:24:33 [scrapy.core.engine] INFO: Spider opened
2017-12-22 14:24:33 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pa)
2017-12-22 14:24:33 [scrapy.extensions.telnet] DEBUG: Telnet console listening 4
2017-12-22 14:24:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.)
2017-12-22 14:24:33 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden b>
2017-12-22 14:24:33 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-22 14:24:33 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
'downloader/request_bytes': 224,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 2348,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 12, 22, 8, 54, 33, 954793),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'memusage/max': 65478656,
'memusage/startup': 65478656,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 12, 22, 8, 54, 33, 295945)}
2017-12-22 14:24:33 [scrapy.core.engine] INFO: Spider closed (finished)
Answer 0 (score: 0)
2017-12-22 14:24:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.)
2017-12-22 14:24:33 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden b>
From these two log lines, the request was blocked by Google: its robots.txt forbids crawling the search pages, so Scrapy's RobotsTxtMiddleware dropped the request (that is the scrapy.exceptions.IgnoreRequest in the stats above).
I think you first need to add some more flexible and effective measures to avoid getting banned, such as the settings sketched below.
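(A sketch of what such measures could look like in the project's settings.py — these are standard Scrapy settings, but the exact values here are illustrative:)

# settings.py -- illustrative values
ROBOTSTXT_OBEY = False       # Google's robots.txt forbids /search; this is what dropped the request
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'
DOWNLOAD_DELAY = 2           # pause between requests
AUTOTHROTTLE_ENABLED = True  # let Scrapy adapt its request rate to the server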
Besides that, you can run scrapy shell <url> first to debug or test the URL you are scraping directly.
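(And for the "how do I turn this into a script" part of the question: a common pattern is to drive the spider from plain Python with scrapy.crawler.CrawlerProcess. A minimal sketch, assuming the QuotesSpider class from the question is importable — the import path here is hypothetical:)

# run_spider.py -- run with: python run_spider.py
from scrapy.crawler import CrawlerProcess
from myproject.spiders.quotes import QuotesSpider  # hypothetical import path

process = CrawlerProcess(settings={
    'ROBOTSTXT_OBEY': False,  # otherwise the request is dropped again, as in the log
    'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36',
})
process.crawl(QuotesSpider)
process.start()  # blocks until the crawl finishes

This way the scraped data can be saved or handed to the Django template code without going through the scrapy crawl command.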