I'm new to Scrapy, but I'm working with a particularly complicated site that involves forms and JavaScript. I'm trying to scrape press-release data from a UN website, but I don't think the site is being rendered correctly, because nothing gets scraped. Below are my Scrapy code and the output.
Scrapy code
import scrapy
import scrapy_splash
from scrapy_splash import SplashRequest


class OhchrSpider(scrapy.Spider):
    name = 'OHCHR'
    custom_settings = {
        'SPLASH_URL': 'http://localhost:8050',
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy_splash.SplashCookiesMiddleware': 723,
            'scrapy_splash.SplashMiddleware': 725,
            'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        },
        'SPIDER_MIDDLEWARES': {
            'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        },
        'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
    }

    def start_requests(self):
        yield SplashRequest(
            url='http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx',
            callback=self.parse,
        )

    def parse(self, response):
        data = {
            '#ctl00_PlaceHolderMain_SearchNewsID_RadDatePickerFromDate_dateInput_text': '1/1/2016',
            '#ctl00_PlaceHolderMain_SearchNewsID_RadDatePickerToDate_dateInput_text': '2/1/2016',
        }
        return scrapy_splash.SplashFormRequest.from_response(
            response,
            formdata=data,
            callback=self.parse_table,
        )

    def parse_table(self, response):
        yield {
            'title': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_lblTitle::text').extract(),
            'date': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_lblDate::text').extract(),
            'type': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_NewsType li::text').extract(),
            'country': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_CountryID li::text').extract(),
            'mandate': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_MandateID li::text').extract(),
            'subject': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_SubjectID li::text').extract(),
        }
Output
$ scrapy runspider OHCHR.py
2018-04-25 13:24:55 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: scrapybot)
2018-04-25 13:24:55 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.2 (v3.6.2:5fd33b5926, Jul 16 2017, 20:11:06) - [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.2.2, Platform Darwin-17.4.0-x86_64-i386-64bit
2018-04-25 13:24:55 [scrapy.crawler] INFO: Overridden settings: {'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'SPIDER_LOADER_WARN_ONLY': True}
2018-04-25 13:24:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2018-04-25 13:24:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-04-25 13:24:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-04-25 13:24:56 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-04-25 13:24:56 [scrapy.core.engine] INFO: Spider opened
2018-04-25 13:24:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-04-25 13:24:56 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-04-25 13:25:00 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx via http://localhost:8050/render.html> (referer: None)
2018-04-25 13:25:04 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx via http://localhost:8050/render.html> (referer: None)
2018-04-25 13:25:04 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx>
{'title': [], 'date': [], 'type': [], 'country': [], 'mandate': [], 'subject': []}
2018-04-25 13:25:04 [scrapy.core.engine] INFO: Closing spider (finished)
2018-04-25 13:25:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 124315,
'downloader/request_count': 2,
'downloader/request_method_count/POST': 2,
'downloader/response_bytes': 595937,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 4, 25, 17, 25, 4, 589322),
'item_scraped_count': 1,
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'memusage/max': 66322432,
'memusage/startup': 66322432,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 4,
'scheduler/dequeued/memory': 4,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'splash/render.html/request_count': 2,
'splash/render.html/response_count/200': 2,
'start_time': datetime.datetime(2018, 4, 25, 17, 24, 56, 257819)}
2018-04-25 13:25:04 [scrapy.core.engine] INFO: Spider closed (finished)
Any help is greatly appreciated!
Answer 0 (score: 0)
Your log shows that everything is working and Scrapy is returning one item:
2018-04-25 13:25:04 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx>
{'title': [], 'date': [], 'type': [], 'country': [], 'mandate': [], 'subject': []}
However, your selectors don't seem to find anything on the page.
To debug this, you can use Scrapy's inspect_response debugging facility:
def parse_table(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)
Then run the spider; a Python shell will open as soon as the parse_table method is reached. There you can inspect the response and see the page you are actually receiving:
view(response): opens the page in your browser.
response.xpath: lets you try out your XPath expressions.
etc.
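For instance, a hypothetical session in that shell might look like this (the CSS selector is copied from the spider above; the looser XPath check merely assumes the results grid id contains "gvNewsSearchresult", which is inferred from those selectors rather than verified):

view(response)  # opens the HTML that Splash actually returned in your browser

# Try one of the spider's own selectors; an empty list means it matches nothing
response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_lblTitle::text').extract()

# Looser sanity check (assumption: the grid id contains "gvNewsSearchresult")
response.xpath('//table[contains(@id, "gvNewsSearchresult")]').extract()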
Answer 1 (score: 0)
I think this is an issue related to scrapy_splash. I was able to get the elements by inspecting the POST request in the Chrome DevTools Network tab, copying its body, and using it in a FormRequest. You may want to change fromdate and todate.
import scrapy
from scrapy.http import FormRequest


class OhchrSpider(scrapy.Spider):
    name = 'ohchr'
    allowed_domains = ['www.ohchr.org']
    start_urls = ['http://www.ohchr.org/EN/NewsEvents/Pages/NewsSearch.aspx']

    def parse(self, response):
        data = {
            # copy all the fields from the POST body here
            'MSOTlPn_View': '0',
            'MSOTlPn_ShowSettings': 'False',
            # ......
            'ctl00$PlaceHolderMain$SearchNewsID$RadDatePickerFromDate$dateInput': '2016-01-01-00-00-00',
            'ctl00$PlaceHolderMain$SearchNewsID$RadDatePickerToDate$dateInput': '2016-02-01-00-00-00',
        }
        return FormRequest.from_response(
            response,
            formdata=data,
            callback=self.parse_table,
        )

    def parse_table(self, response):
        yield {
            'title': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_lblTitle::text').extract(),
            'date': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_lblDate::text').extract(),
            'type': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_NewsType li::text').extract(),
            'country': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_CountryID li::text').extract(),
            'mandate': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_MandateID li::text').extract(),
            'subject': response.css('#ctl00_PlaceHolderMain_SearchNewsID_gvNewsSearchresult_ctl03_SubjectID li::text').extract(),
        }
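Note that FormRequest.from_response pre-fills everything it finds in the page's form, including the ASP.NET hidden fields such as __VIEWSTATE, so formdata only has to override the fields you change. Also, the ctl03 ids above address a single row of the results grid, so at most one result comes back. As a minimal, untested sketch, assuming the grid renders as a table whose id contains "gvNewsSearchresult" and whose per-row elements keep the "lblTitle"/"lblDate" id fragments (inferred from the selectors above, not verified against the page), you could iterate over every row instead:

def parse_table(self, response):
    # Assumption: the results grid is a <table> with "gvNewsSearchresult"
    # in its id, and each row's title/date elements keep those id fragments.
    for row in response.css('table[id*="gvNewsSearchresult"] tr'):
        title = row.css('[id*="lblTitle"]::text').extract_first()
        date = row.css('[id*="lblDate"]::text').extract_first()
        if title:
            yield {'title': title, 'date': date}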