I have a raw HTTP string and I would like to represent its fields in an object. Is there a way to parse the individual headers out of an HTTP string?
'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\nAccept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.45 Safari/534.13\r\nAccept-Encoding: gzip,deflate,sdch\r\nAvail-Dictionary: GeNLY2f-\r\nAccept-Language: en-US,en;q=0.8\r\n
[...]'
Answer 0 (score: 76)
There are excellent tools in the standard library both for parsing RFC 822 headers and for parsing entire HTTP requests. Here is an example request string (note that Python treats it as one big string, even though we split it across several lines for readability) that we can feed to my examples:
request_text = (
    'GET /who/ken/trust.html HTTP/1.1\r\n'
    'Host: cm.bell-labs.com\r\n'
    'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\n'
    'Accept: text/html;q=0.9,text/plain\r\n'
    '\r\n'
)
As @TryPyPy pointed out, you can use mimetools.Message to parse the headers, though we should add that, once you are done creating it, the resulting Message object acts like a dictionary of headers:
# Ignore the request line and parse only the headers
from mimetools import Message
from StringIO import StringIO
request_line, headers_alone = request_text.split('\r\n', 1)
headers = Message(StringIO(headers_alone))
print len(headers) # -> "3"
print headers.keys() # -> ['accept-charset', 'host', 'accept']
print headers['Host'] # -> "cm.bell-labs.com"
But this, of course, ignores the request line, or makes you parse it yourself. It turns out that there is a far better solution.

The standard library will parse HTTP for you if you use its BaseHTTPRequestHandler. Though its documentation is a bit obscure (a problem with the whole suite of HTTP and URL tools in the standard library), all you have to do to make it parse a string is (a) wrap your string in a StringIO(), (b) read the raw_requestline so that it stands ready to be parsed, and (c) capture any error codes that occur during parsing instead of letting it try to write them back to the client (since we don't have one!).
So here is our specialization of the standard library class:
from BaseHTTPServer import BaseHTTPRequestHandler
from StringIO import StringIO

class HTTPRequest(BaseHTTPRequestHandler):
    def __init__(self, request_text):
        self.rfile = StringIO(request_text)
        self.raw_requestline = self.rfile.readline()
        self.error_code = self.error_message = None
        self.parse_request()

    def send_error(self, code, message):
        self.error_code = code
        self.error_message = message
Again, I wish the standard library folks would realize that HTTP parsing should be factored out in a way that does not require us to write nine lines of code just to call it properly, but what can you do? Here is how you would use this simple class:
# Using this new class is really easy!
request = HTTPRequest(request_text)
print request.error_code # None (check this first)
print request.command # "GET"
print request.path # "/who/ken/trust.html"
print request.request_version # "HTTP/1.1"
print len(request.headers) # 3
print request.headers.keys() # ['accept-charset', 'host', 'accept']
print request.headers['host'] # "cm.bell-labs.com"
If an error occurs during parsing, the error_code will no longer be None:
# Parsing can result in an error code and message
request = HTTPRequest('GET\r\nHeader: Value\r\n\r\n')
print request.error_code # 400
print request.error_message # "Bad request syntax ('GET')"
I prefer using the standard library like this, because I suspect its authors have already run into and resolved the edge cases that would trip me up if I tried re-implementing an Internet specification myself with regular expressions.
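The class above is Python 2; on Python 3 the same approach works with http.server and io.BytesIO, since the request line is read there as bytes. A minimal sketch, with a hypothetical class name:

from http.server import BaseHTTPRequestHandler
from io import BytesIO

class HTTPRequestPy3(BaseHTTPRequestHandler):
    def __init__(self, request_bytes):
        self.rfile = BytesIO(request_bytes)
        self.raw_requestline = self.rfile.readline()
        self.error_code = self.error_message = None
        self.parse_request()

    # Capture errors instead of writing them back to a (non-existent) client
    def send_error(self, code, message=None, explain=None):
        self.error_code = code
        self.error_message = message

request = HTTPRequestPy3(
    b'GET /who/ken/trust.html HTTP/1.1\r\n'
    b'Host: cm.bell-labs.com\r\n'
    b'\r\n'
)
print(request.error_code)       # None
print(request.command)          # GET
print(request.headers['Host'])  # cm.bell-labs.com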
Answer 1 (score: 12)
mimetools has been deprecated and was removed entirely from Python 3.

Here is how you would do it in Python 3:
import email
import io
import pprint
# […]
request_line, headers_alone = request_text.split('\r\n', 1)
message = email.message_from_file(io.StringIO(headers_alone))
headers = dict(message.items())
pprint.pprint(headers, width=160)
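One caveat worth noting: converting to a dict collapses repeated header names, whereas the Message object keeps every occurrence. A small example:

import email

msg = email.message_from_string('Accept: text/html\r\nAccept: text/plain\r\n\r\n')
print(dict(msg.items()))      # {'Accept': 'text/plain'} -- the later value wins
print(msg.get_all('Accept'))  # ['text/html', 'text/plain']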
Answer 2 (score: 7)
If you strip off the GET line:
import mimetools
from StringIO import StringIO
he = "Host: www.google.com\r\nConnection: keep-alive\r\nAccept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.45 Safari/534.13\r\nAccept-Encoding: gzip,deflate,sdch\r\nAvail-Dictionary: GeNLY2f-\r\nAccept-Language: en-US,en;q=0.8\r\n"
m = mimetools.Message(StringIO(he))
print m.headers
And here is how to parse the example and add the information from the first line to the object:
import mimetools
from StringIO import StringIO
he = 'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\n'
# Pop the first line for further processing
request, he = he.split('\r\n', 1)
# Get the headers
m = mimetools.Message(StringIO(he))
# Add request information
m.dict['method'], m.dict['path'], m.dict['http-version'] = request.split()
print m['method'], m['path'], m['http-version']
print m['Connection']
print m.headers
print m.dict
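Since mimetools exists only on Python 2, here is a rough Python 3 sketch of the same trick using email.message_from_string; an email Message has no .dict attribute, so the request-line parts are kept in plain variables instead:

from email import message_from_string

he = 'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\n'

# Pop the first line for further processing
request, he = he.split('\r\n', 1)

# Get the headers
m = message_from_string(he)

# Keep the request-line information in plain variables
method, path, http_version = request.split()

print(method, path, http_version)
print(m['Connection'])
print(dict(m.items()))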
Answer 3 (score: 1)
Using Python 3.7, urllib3.HTTPResponse, http.client.parse_headers, and curl (flag explanations here):
curl -i -L -X GET "http://httpbin.org/relative-redirect/3" | python -c '
import sys
from io import BytesIO
from urllib3 import HTTPResponse
from http.client import parse_headers
rawresponse = sys.stdin.read().encode("utf8")
redirects = []
while True:
    header, body = rawresponse.split(b"\r\n\r\n", 1)
    if body[:4] == b"HTTP":
        redirects.append(header)
        rawresponse = body
    else:
        break
f = BytesIO(header)
# read one line: PROTOCOL STATUSCODE MESSAGE (e.g. HTTP/1.1 200 OK)
requestline = f.readline().split(b" ")
protocol, status = requestline[:2]
headers = parse_headers(f)
resp = HTTPResponse(body, headers=headers)
resp.status = int(status)
print("headers")
print(resp.headers)
print("redirects")
print(redirects)
'
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 215 100 215 0 0 435 0 --:--:-- --:--:-- --:--:-- 435
headers
HTTPHeaderDict({'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Date': 'Thu, 20 Sep 2018 05:39:25 GMT', 'Content-Type': 'application/json', 'Content-Length': '215', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true', 'Via': '1.1 vegur'})
redirects
[b'HTTP/1.1 302 FOUND\r\nConnection: keep-alive\r\nServer: gunicorn/19.9.0\r\nDate: Thu, 20 Sep 2018 05:39:24 GMT\r\nContent-Type: text/html; charset=utf-8\r\nContent-Length: 0\r\nLocation: /relative-redirect/2\r\nAccess-Control-Allow-Origin: *\r\nAccess-Control-Allow-Credentials: true\r\nVia: 1.1 vegur',
b'HTTP/1.1 302 FOUND\r\nConnection: keep-alive\r\nServer: gunicorn/19.9.0\r\nDate: Thu, 20 Sep 2018 05:39:24 GMT\r\nContent-Type: text/html; charset=utf-8\r\nContent-Length: 0\r\nLocation: /relative-redirect/1\r\nAccess-Control-Allow-Origin: *\r\nAccess-Control-Allow-Credentials: true\r\nVia: 1.1 vegur',
b'HTTP/1.1 302 FOUND\r\nConnection: keep-alive\r\nServer: gunicorn/19.9.0\r\nDate: Thu, 20 Sep 2018 05:39:24 GMT\r\nContent-Type: text/html; charset=utf-8\r\nContent-Length: 0\r\nLocation: /get\r\nAccess-Control-Allow-Origin: *\r\nAccess-Control-Allow-Credentials: true\r\nVia: 1.1 vegur']
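For a quicker look at the same building blocks without curl, here is a minimal sketch (the response bytes below are made up for illustration):

from io import BytesIO
from http.client import parse_headers

raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: 5\r\n"
    b"\r\n"
    b"hello"
)
header_blob, body = raw.split(b"\r\n\r\n", 1)

f = BytesIO(header_blob)
status_line = f.readline().strip().split(b" ", 2)  # [b'HTTP/1.1', b'200', b'OK']
headers = parse_headers(f)                         # an http.client.HTTPMessage

print(int(status_line[1]))      # 200
print(headers["Content-Type"])  # text/html
print(body)                     # b'hello'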
Answer 4 (score: 0)
In Python 3:
from email import message_from_string

# 'socket' is assumed to be an already-connected socket object
data = socket.recv(4096)
headers = message_from_string(str(data, 'ASCII').split('\r\n', 1)[1])
print(headers['Host'])
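The same call works on any raw request string; a hedged, socket-free variant (the request string below is made up):

from email import message_from_string

raw = 'GET / HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n\r\n'
headers = message_from_string(raw.split('\r\n', 1)[1])
print(headers['Host'])    # example.com
print(headers['Accept'])  # text/html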
Answer 6 (score: 0)
Here is another way to handle headers: simpler, safer, and more object-oriented, with no manual parsing required. A short demo using the kiss_headers package (imported in the snippets below).
1. Parse them

From str, bytes, fp, dict, requests.Response, email.Message, httpx.Response, or urllib3.HTTPResponse:

from requests import get
from kiss_headers import parse_it

response = get('https://www.google.fr')
headers = parse_it(response)

headers.content_type.charset  # output: ISO-8859-1
# It's the same as
headers["content-type"]["charset"]  # output: ISO-8859-1

2. Build them

The builder code below constructs a headers object; calling str() on it yields the raw header text:
from kiss_headers import *

headers = (
    Host("developer.mozilla.org")
    + UserAgent(
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:50.0) Gecko/20100101 Firefox/50.0"
    )
    + Accept("text/html")
    + Accept("application/xhtml+xml")
    + Accept("application/xml", qualifier=0.9)
    + Accept(qualifier=0.8)
    + AcceptLanguage("en-US")
    + AcceptLanguage("en", qualifier=0.5)
    + AcceptEncoding("gzip")
    + AcceptEncoding("deflate")
    + AcceptEncoding("br")
    + Referer("https://developer.mozilla.org/testpage.html")
    + Connection(should_keep_alive=True)
    + UpgradeInsecureRequests()
    + IfModifiedSince("Mon, 18 Jul 2016 02:36:04 GMT")
    + IfNoneMatch("c561c68d0ba92bbeb8b0fff2a9199f722e3a621a")
    + CacheControl(max_age=0)
)
raw_headers = str(headers)
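As a hedged round-trip check (assuming kiss_headers is installed), the raw string produced above can be fed straight back into parse_it, mirroring the access styles from the parsing demo; exact output formatting may vary:

from kiss_headers import parse_it

reparsed = parse_it(raw_headers)    # parse_it also accepts a plain str, per the list above
print(reparsed.host)                # attribute access, as in the parsing demo
print(reparsed["accept-language"])  # subscript access works too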