Pandas read_csv from a URL while passing request headers

Date: 2019-04-16 14:57:41

Tags: python pandas http-headers python-requests

Since pandas 0.19.2, a URL can be passed directly to the read_csv() function. See, for example, this answer:

import pandas as pd

url="https://raw.githubusercontent.com/cs109/2014_data/master/countries.csv"
c=pd.read_csv(url)

The URL I want to use is: https://moz.com/top500/domains/csv

With the code above, that URL returns an error:

urllib2.HTTPError: HTTP Error 403: Forbidden

Based on this post, I can get a valid response by passing request headers (Python 2, using urllib2):

import urllib2, cookielib  # Python 2 modules; in Python 3 these live in urllib.request / http.cookiejar

site = "https://moz.com/top500/domains/csv"
# Browser-like request headers so the server does not reject the request with 403
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}

req = urllib2.Request(site, headers=hdr)

try:
    page = urllib2.urlopen(req)
except urllib2.HTTPError, e:
    print(e.fp.read())

content = page.read()
print(content)
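
For reference, here is a minimal Python 3 sketch of the same workaround (in Python 3, urllib2 was split into urllib.request and urllib.error), using the same User-Agent header:

from urllib.request import Request, urlopen
from urllib.error import HTTPError

site = "https://moz.com/top500/domains/csv"
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'}

req = Request(site, headers=hdr)
try:
    # urlopen returns an HTTPResponse that can be used as a context manager
    with urlopen(req) as page:
        content = page.read().decode('utf-8')
    print(content)
except HTTPError as e:
    print(e.read())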

Is it possible to use the URL feature of pandas read_csv() and also pass request headers so that the request goes through?

1 Answer:

Answer 0 (score: 1)

I suggest using the requests and io libraries for this task. The following code gets the job done:

import pandas as pd
import requests
from io import StringIO

url = "https://moz.com:443/top500/domains/csv"
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0"}

# Fetch the CSV with a browser-like User-Agent, then wrap the response text
# in a file-like object so read_csv can parse it.
req = requests.get(url, headers=headers)
data = StringIO(req.text)

df = pd.read_csv(data)
print(df)

(If you want to send other custom headers, just modify the headers variable.)
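
As a side note, a minimal sketch assuming a newer pandas release (1.2 or later, where read_csv gained a storage_options argument whose key-value pairs are forwarded as request headers for HTTP(S) URLs): the headers can then be passed to read_csv directly, without requests.

import pandas as pd

url = "https://moz.com/top500/domains/csv"
# For http(s) URLs, storage_options entries are sent as request headers (pandas >= 1.2)
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0"}

df = pd.read_csv(url, storage_options=headers)
print(df.head())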

Hope this helps.