ValueError: Missing scheme in request url (Scrapy)

Date: 2019-01-30 08:01:08

Tags: python-3.x scrapy

I am trying to scrape https://www.skynewsarabia.com/ with Scrapy and I am getting this error: ValueError: Missing scheme in request url. I have tried every solution I could find on Stack Overflow, but none of them worked for me. Here is my spider:

import scrapy


class SkynewsSpider(scrapy.Spider):
    name = 'skynews'
    allowed_domains = ['www.skynewsarabia.com']
    start_urls = ['https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1']

    def parse(self, response):
        link = "https://www.skynewsarabia.com"
        # get the urls of each article
        urls = response.css("a.item-wrapper::attr(href)").extract()
        # for each article make a request to get the text of that article
        for url in urls:
            # get the info of that article using the parse_details function
            yield scrapy.Request(url=link + url, callback=self.parse_details)
        # go and get the link for the next article
        next_article = response.css("a.item-wrapper::attr(href)").extract_first()
        if next_article:
            # keep repeating the process until the bot visits all the links in the website!
            yield scrapy.Request(url=next_article, callback=self.parse)  # keep calling yourself!

Here is the full error:

2019-01-30 11:49:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-30 11:49:34 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/robots.txt> (referer: None)
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
2019-01-30 11:49:35 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
Traceback (most recent call last):
File "c:\users\hozrifai\desktop\scraping\venv\lib\site- 
      packages\scrapy\utils\defer.py", line 102, in iter_errback
      yield next(it)
     File "c:\users\hozrifai\desktop\scraping\venv\lib\site- 
    packages\scrapy\spidermiddlewares\offsite.py", line 30, in 
 process_spider_output
       for x in result:
      File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\HozRifai\Desktop\scraping\articles\articles\spiders\skynews.py", line 28, in parse
   yield scrapy.Request(url=next_article, callback=self.parse)  # keep calling yourself!
 File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
   self._set_url(url)
 File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
   raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1
2019-01-30 11:49:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1> (referer: https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1)

Thanks in advance

2 answers:

Answer 0 (score: 0)

Your next_article URL has no scheme; it is a relative path. Try:

response.urljoin(next_article)
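
A minimal sketch of the corrected parse method with that fix applied, reusing the selectors and the parse_details callback from the question:

def parse(self, response):
    # each href is site-relative, e.g. "/sport/1222754-..."
    for url in response.css("a.item-wrapper::attr(href)").extract():
        # response.urljoin() resolves the path against response.url,
        # restoring the missing https:// scheme
        yield scrapy.Request(url=response.urljoin(url), callback=self.parse_details)
    # the pagination link needs the same treatment
    next_article = response.css("a.item-wrapper::attr(href)").extract_first()
    if next_article:
        yield scrapy.Request(url=response.urljoin(next_article), callback=self.parse)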

Answer 1 (score: 0)

In your next-article retrieval:

next_article = response.css("a.item-wrapper::attr(href)").extract_first()

Are you sure you are getting a full link that starts with http/https?

As a better approach, whenever we are not sure about the URLs we receive, always use urljoin, as in:

url = response.urljoin(next_article)     # you can also use this in your above logic.
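
For reference, Scrapy's response.urljoin() is a thin wrapper around Python's urllib.parse.urljoin() with response.url as the base (HTML responses also take an HTML <base> tag into account), so it handles relative and absolute hrefs alike. A small standalone illustration; the example URLs here are made up:

from urllib.parse import urljoin

base = "https://www.skynewsarabia.com/sport/latest-news"

# a site-relative path inherits the scheme and host of the base URL
print(urljoin(base, "/sport/1222754-example"))
# -> https://www.skynewsarabia.com/sport/1222754-example

# an already-absolute URL passes through unchanged
print(urljoin(base, "https://www.skynewsarabia.com/sport/999"))
# -> https://www.skynewsarabia.com/sport/999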