Python requests.get produces an InvalidSchema error

Asked: 2018-07-22 23:43:43

Tags: python-3.x csv beautifulsoup python-requests

Another one for you.

I'm trying to scrape from a list of URLs in a CSV file. Here's my code:

from bs4 import BeautifulSoup
import requests
import csv

with open('TeamRankingsURLs.csv', newline='') as f_urls, open('TeamRankingsOutput.csv', 'w', newline='') as f_output:
    csv_urls = csv.reader(f_urls)
    csv_output = csv.writer(f_output)


    for line in csv_urls:
        page = requests.get(line[0]).text
        soup = BeautifulSoup(page, 'html.parser')
        results = soup.findAll('div', {'class' :'LineScoreCard__lineScoreColumnElement--1byQk'})

        for r in range(len(results)):
            csv_output.writerow([results[r].text])

...which gives me the following error:

Traceback (most recent call last):
  File "TeamRankingsScraper.py", line 11, in <module>
    page = requests.get(line[0]).text
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 616, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\Users\windowshopr\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\sessions.py", line 707, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15'

My CSV file is just a list of a few URLs (i.e. https://www..) in column A.

(The div class I'm scraping for doesn't exist on that page, but that's not the problem, at least I don't think so. I just need to update it once the script can read the URLs from the CSV file.)

Any suggestions? This code works on another project, but for some reason I'm running into problems with this new list of URLs. Thanks a lot!

1 Answer:

Answer 0 (score: 2)

From the traceback: requests.exceptions.InvalidSchema: No connection adapters were found for 'https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15'

Look at the stray characters at the start of the URL in that message: it should begin with https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15

So, when parsing the CSV, first use a regex to remove any stray characters that appear before http/https. That should fix your problem.

If you want to fix this particular URL as you read it from the CSV, do:

import re  # the standard-library re module is sufficient here

strin = "https://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15"

re.sub(r'.*http', 'http', strin)

This gives you a clean URL that requests can handle.
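In cases like this, the invisible stray character is often a UTF-8 byte-order mark (BOM) that spreadsheet software prepends when saving a CSV. A minimal sketch, assuming that is the cause here (the `raw` value below is a hypothetical cell, not taken from the asker's file):

```python
# Hypothetical CSV cell value with a leading BOM, as Excel-saved CSVs often produce.
raw = "\ufeffhttps://www.teamrankings.com/mlb/stat/runs-per-game?date=2018-04-15"

# Strip the BOM and any surrounding whitespace from the cell.
url = raw.lstrip("\ufeff").strip()
print(url)
```

If the BOM is indeed the culprit, opening the file with `open('TeamRankingsURLs.csv', newline='', encoding='utf-8-sig')` avoids the problem at the source, since that codec strips the BOM automatically.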

Since you asked for a complete fix applied to every URL inside the loop, you can do the following:

from bs4 import BeautifulSoup
import requests
import csv
import re  # the standard-library re module is sufficient here

with open('TeamRankingsURLs.csv', newline='') as f_urls, open('TeamRankingsOutput.csv', 'w', newline='') as f_output:
    csv_urls = csv.reader(f_urls)
    csv_output = csv.writer(f_output)

    for line in csv_urls:
        # Strip any stray characters before the scheme so requests gets a valid URL.
        url = re.sub(r'.*http', 'http', line[0])
        page = requests.get(url).text
        soup = BeautifulSoup(page, 'html.parser')
        results = soup.find_all('div', {'class': 'LineScoreCard__lineScoreColumnElement--1byQk'})

        for result in results:
            csv_output.writerow([result.text])
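If some rows are blank or contain no URL at all, the blanket `re.sub` above would still pass garbage to requests. A hedged variant (the helper name `clean_url` is mine, not from the original): search for the scheme explicitly and skip rows where none is found.

```python
import re

def clean_url(cell):
    """Return the first http(s) URL found in a CSV cell, or None if there isn't one."""
    m = re.search(r'https?://\S+', cell)
    return m.group(0) if m else None

# Usage sketch inside the loop above:
# for line in csv_urls:
#     url = clean_url(line[0])
#     if url is None:
#         continue  # skip rows with no recoverable URL
#     page = requests.get(url).text
```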