Python - seeking in an HTTP response stream

Time: 2011-03-06 06:24:37

Tags: python http

Getting what I want using urllib (or urllib2) seems hopeless. Any solutions?

4 Answers:

Answer 0 (score: 23)

I am not sure how the C# implementation works, but, as internet streams are generally not seekable, my guess is that it downloads all the data to a local file or in-memory object and seeks within it from there. The Python equivalent of this would be to do as Abafei suggested and write the data to a file or StringIO and seek from there.

However, if, as your comment on Abafei's answer suggests, you want to retrieve only a particular part of the file (rather than seeking backwards and forwards through the returned data), there is another possibility. urllib2 can be used to retrieve a certain section (or "range" in HTTP parlance) of a webpage, provided that the server supports this behaviour.

The Range header

When you send a request to a server, the parameters of the request are given in various headers. One of these is the Range header, defined in section 14.35 of RFC 2616 (the specification defining HTTP/1.1). This header allows you to do things such as retrieve all data starting from the 10,000th byte, or the data between bytes 1,000 and 1,500.
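As a sketch of what those Range header values look like (the `range_header` helper is my own naming; byte positions in HTTP ranges are zero-based and inclusive):

```python
def range_header(start, end=None):
    """Build a Range header value for bytes start..end (inclusive).

    If end is None, the range runs to the end of the resource."""
    if end is None:
        return "bytes=%d-" % start
    return "bytes=%d-%d" % (start, end)

print(range_header(10000))       # everything from the 10,000th byte onwards
print(range_header(1000, 1500))  # the data between bytes 1,000 and 1,500
```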

Server support

There is no requirement for a server to support range retrieval. Some servers will return the Accept-Ranges header (section 14.5 of RFC 2616) along with a response to report whether or not they support ranges. This could be checked using a HEAD request. However, there is no particular need to do this; if the server does not support ranges, it will return the entire page and we can then extract the desired portion of the data in Python as before.
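That check can be sketched without any network access by looking at the headers alone (the `supports_ranges` helper is my own naming; it assumes the response headers are available as a plain dict, e.g. taken from a HEAD response):

```python
def supports_ranges(headers):
    """Return True if an Accept-Ranges header advertises byte-range support.

    `headers` is any mapping of header names to values; per the spec a
    server may also send 'Accept-Ranges: none' to state that ranges are
    not accepted."""
    return headers.get("Accept-Ranges", "none").lower() == "bytes"

print(supports_ranges({"Accept-Ranges": "bytes"}))  # True
print(supports_ranges({"Accept-Ranges": "none"}))   # False
print(supports_ranges({}))                          # False
```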

Checking if a range is returned

If a server returns a range, it must send the Content-Range header (section 14.16 of RFC 2616) along with the response. If this is present in the headers of the response, we know a range was returned; if it is not present, the entire page was returned.
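Parsing this header can be sketched standalone (the `parse_content_range` helper is my own naming; the value format is 'bytes start-end/total' per section 14.16, with '*' standing for an unknown total):

```python
def parse_content_range(value):
    """Split a Content-Range value like 'bytes 6000-6399/19387' into
    (range, total), where total is None if reported as '*' (unknown)."""
    byte_range, total = value.split(' ')[-1].split('/')
    return byte_range, (None if total == '*' else int(total))

print(parse_content_range("bytes 6000-6399/19387"))  # ('6000-6399', 19387)
print(parse_content_range("bytes 17387-19386/*"))    # ('17387-19386', None)
```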

Implementation with urllib2

urllib2 allows us to add headers to a request, thus allowing us to ask the server for a range rather than the entire page. The following script takes a URL, a start position, and (optionally) a length on the command line, and tries to retrieve the given section of the page.

import sys
import urllib2

# Check command line arguments.
if len(sys.argv) < 3:
    sys.stderr.write("Usage: %s url start [length]\n" % sys.argv[0])
    sys.exit(1)

# Create a request for the given URL.
request = urllib2.Request(sys.argv[1])

# Add the header to specify the range to download.
if len(sys.argv) > 3:
    start, length = map(int, sys.argv[2:])
    request.add_header("range", "bytes=%d-%d" % (start, start + length - 1))
else:
    request.add_header("range", "bytes=%s-" % sys.argv[2])

# Try to get the response. This will raise a urllib2.URLError if there is a
# problem (e.g., invalid URL).
response = urllib2.urlopen(request)

# If a content-range header is present, partial retrieval worked.
if "content-range" in response.headers:
    print "Partial retrieval successful."

    # The header contains the string 'bytes', followed by a space, then the
    # range in the format 'start-end', followed by a slash and then the total
    # size of the page (or an asterisk if the total size is unknown). Let's
    # get the range and total size from this.
    range, total = response.headers['content-range'].split(' ')[-1].split('/')

    # Print a message giving the range information.
    if total == '*':
        print "Bytes %s of an unknown total were retrieved." % range
    else:
        print "Bytes %s of a total of %s were retrieved." % (range, total)

# No header, so partial retrieval was unsuccessful.
else:
    print "Unable to use partial retrieval."

# And for good measure, let's check how much data we downloaded.
data = response.read()
print "Retrieved data size: %d bytes" % len(data)

Using this, I can retrieve the last 2,000 bytes of the Python homepage:

blair@blair-eeepc:~$ python retrieverange.py http://www.python.org/ 17387
Partial retrieval successful.
Bytes 17387-19386 of a total of 19387 were retrieved.
Retrieved data size: 2000 bytes

And 400 bytes from the middle of the homepage:

blair@blair-eeepc:~$ python retrieverange.py http://www.python.org/ 6000 400
Partial retrieval successful.
Bytes 6000-6399 of a total of 19387 were retrieved.
Retrieved data size: 400 bytes

However, the Google homepage does not support ranges:

blair@blair-eeepc:~$ python retrieverange.py http://www.google.com/ 1000 500
Unable to use partial retrieval.
Retrieved data size: 9621 bytes

In this case, it would be necessary to extract the data of interest in Python prior to any further processing.
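Extracting the section locally is then just a slice of the full response; a sketch with stand-in values for the script's variables:

```python
# When the server ignores the Range header, the whole page comes back and the
# desired section can be sliced out locally instead. `data`, `start` and
# `length` stand in for the corresponding variables in the script above.
data = b"x" * 9621           # stand-in for response.read() of a full page
start, length = 1000, 500
section = data[start:start + length]
print(len(section))          # 500
```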

Answer 1 (score: 3)

It is probably best just to write the data to a file (or even to a string, using StringIO), and to seek in that file (or string).
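A minimal sketch of this approach using io.BytesIO (the byte-oriented counterpart of StringIO), which makes the downloaded data seekable in memory:

```python
import io

# Pretend `payload` is what response.read() returned.
payload = b"0123456789abcdef"

buf = io.BytesIO(payload)   # an in-memory, seekable file-like object
buf.seek(10)                # jump to byte 10
print(buf.read(4))          # b'abcd'
buf.seek(-6, io.SEEK_END)   # seeking relative to the end also works
print(buf.read())           # b'abcdef'
```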

Answer 2 (score: 1)

Python seek on remote file using HTTP

A solution based on HTTP range support, as defined in RFC 2616.

Answer 3 (score: 1)

I did not find any existing implementations of a file-like interface with seek() to HTTP URLs, so I rolled my own simple version: https://github.com/valgur/pyhttpio. It depends on urllib.request but could easily be modified to use requests, if necessary.

Full code:

import cgi
import time
import urllib.request
from io import IOBase
from sys import stderr


class SeekableHTTPFile(IOBase):
    def __init__(self, url, name=None, repeat_time=-1, debug=False):
        """Allow a file accessible via HTTP to be used like a local file by utilities
         that use `seek()` to read arbitrary parts of the file, such as `ZipFile`.
        Seeking is done via the 'range: bytes=xx-yy' HTTP header.

        Parameters
        ----------
        url : str
            A HTTP or HTTPS URL
        name : str, optional
            The filename of the file.
            Will be filled from the Content-Disposition header if not provided.
        repeat_time : int, optional
            In case of HTTP errors wait `repeat_time` seconds before trying again.
            Negative value or `None` disables retrying and simply passes on the exception (the default).
        """
        super().__init__()
        self.url = url
        self.name = name
        self.repeat_time = repeat_time
        self.debug = debug
        self._pos = 0
        self._seekable = True
        with self._urlopen() as f:
            if self.debug:
                print(f.getheaders())
            self.content_length = int(f.getheader("Content-Length", -1))
            if self.content_length < 0:
                self._seekable = False
            if f.getheader("Accept-Ranges", "none").lower() != "bytes":
                self._seekable = False
            if name is None:
                header = f.getheader("Content-Disposition")
                if header:
                    value, params = cgi.parse_header(header)
                    self.name = params["filename"]

    def seek(self, offset, whence=0):
        if not self.seekable():
            raise OSError
        if whence == 0:
            self._pos = 0
        elif whence == 1:
            pass
        elif whence == 2:
            self._pos = self.content_length
        self._pos += offset
        return self._pos

    def seekable(self, *args, **kwargs):
        return self._seekable

    def readable(self, *args, **kwargs):
        return not self.closed

    def writable(self, *args, **kwargs):
        return False

    def read(self, amt=-1):
        if self._pos >= self.content_length:
            return b""
        if amt < 0:
            end = self.content_length - 1
        else:
            end = min(self._pos + amt - 1, self.content_length - 1)
        byte_range = (self._pos, end)
        self._pos = end + 1
        with self._urlopen(byte_range) as f:
            return f.read()

    def readall(self):
        return self.read(-1)

    def tell(self):
        return self._pos

    def __getattribute__(self, item):
        attr = object.__getattribute__(self, item)
        if not object.__getattribute__(self, "debug"):
            return attr

        if hasattr(attr, '__call__'):
            def trace(*args, **kwargs):
                a = ", ".join(map(str, args))
                if kwargs:
                    a += ", ".join(["{}={}".format(k, v) for k, v in kwargs.items()])
                print("Calling: {}({})".format(item, a))
                return attr(*args, **kwargs)

            return trace
        else:
            return attr

    def _urlopen(self, byte_range=None):
        header = {}
        if byte_range:
            header = {"range": "bytes={}-{}".format(*byte_range)}
        while True:
            try:
                r = urllib.request.Request(self.url, headers=header)
                return urllib.request.urlopen(r)
            except urllib.error.HTTPError as e:
                if self.repeat_time is None or self.repeat_time < 0:
                    raise
                print("Server responded with " + str(e), file=stderr)
                print("Sleeping for {} seconds before trying again".format(self.repeat_time), file=stderr)
                time.sleep(self.repeat_time)

A potential usage example:

from zipfile import ZipFile

url = "https://www.python.org/ftp/python/3.5.0/python-3.5.0-embed-amd64.zip"
f = SeekableHTTPFile(url, debug=True)
zf = ZipFile(f)
zf.printdir()
zf.extract("python.exe")
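ZipFile makes a good test of this class because it first seeks to the central directory at the end of the archive and then seeks back to each member it reads. The same access pattern can be demonstrated offline with an in-memory archive standing in for the remote file:

```python
import io
from zipfile import ZipFile

# Build a small zip in memory to stand in for the remote file.
raw = io.BytesIO()
with ZipFile(raw, "w") as zf:
    zf.writestr("hello.txt", "hello world")

raw.seek(0)
with ZipFile(raw) as zf:          # ZipFile seeks within the buffer
    print(zf.namelist())          # ['hello.txt']
    print(zf.read("hello.txt"))   # b'hello world'
```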

Edit: It turns out there is a mostly identical, if slightly simpler, implementation in this answer: https://stackoverflow.com/a/7852229/2997179