Only download a part of the document using python-requests

Date: 2014-05-12 06:30:53

Tags: python http python-requests

I'm writing a web scraper using python-requests.

Each page is over 1MB, but the actual data I need to extract occurs very early in the document, so I'm wasting time downloading a lot of unnecessary data.

If possible, I'd like to stop the download as soon as the data I need has appeared in the document source, to save time.

For example, I only want to extract the text in the "abc" div; the rest of the document is useless:

<html>
<head>
<title>My site</title>
</head>
<body>

<div id="abc">blah blah...</div>

<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris fermentum molestie ligula, a pharetra eros mollis ut.</p>
<p>Quisque auctor volutpat lobortis. Vestibulum pellentesque lacus sapien, quis vulputate enim mollis a. Vestibulum ultrices fermentum urna ac sodales.</p>
<p>Nunc sit amet augue at dolor fermentum ultrices. Curabitur faucibus porttitor vehicula. Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<p>Etiam sed leo at ipsum blandit dignissim ut a est.</p>

</body>
</html>

Currently I'm just doing:

r = requests.get(URL)

2 answers:

Answer 0: (score: 15)

What you want to use here is called the Range HTTP header.

See: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html (specifically the bit on Range).

Also see the API docs on Custom Headers.

Example:

from requests import get


url = "http://download.thinkbroadband.com/5MB.zip"
headers = {"Range": "bytes=0-100"}  # first 101 bytes (ranges are inclusive)

r = get(url, headers=headers)
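The Range header only helps when the server honors it. With requests you can also stream the response and stop reading as soon as the data of interest has arrived, which matches the question's goal even on servers that ignore Range. A minimal sketch (the helper name `read_until_marker` is my own, not from the answer):

```python
def read_until_marker(chunks, marker, limit=1_000_000):
    """Accumulate byte chunks until `marker` is seen (or `limit` bytes read)."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        if marker in buf or len(buf) >= limit:
            break  # Stop consuming the stream; remaining bytes are never read
    return buf

# Usage with requests (network call shown as comments): stream the page and
# stop once the closing tag of the target div has appeared, then close the
# connection so the rest of the body is never downloaded.
# r = requests.get(URL, stream=True)
# html = read_until_marker(r.iter_content(chunk_size=1024), b"</div>")
# r.close()
```

Parsing with a proper HTML parser afterwards is still advisable; the marker check is just a cheap stopping condition.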

Answer 1: (score: 6)

I came here from this question: Open first N characters of a url file with Python. However, I don't consider this a strict duplicate, since the title there doesn't explicitly say the requests module must be used. Also, for whatever reason, the server you're making requests to may not support byte ranges. In that case, I simply talk HTTP directly:

#!/usr/bin/env python

import socket

TCP_HOST = 'stackoverflow.com'  # This is the host we are going to query
TCP_PORT = 80  # This is the standard port for the HTTP protocol
MAX_LIMIT = 1024  # This is the maximum size of the info we want, in bytes

# Build the raw HTTP/1.1 request; headers end with a blank line (\r\n\r\n)
MESSAGE = \
    b"GET /questions/23602412/only-download-a-part-of-the-document-using-python-requests HTTP/1.1\r\n" \
    b"Host: stackoverflow.com\r\n" \
    b"User-Agent: Custom/0.0.1\r\n" \
    b"Accept: */*\r\n\r\n"

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # Create a socket
s.connect((TCP_HOST, TCP_PORT))  # Connect to the remote socket at the given address
s.sendall(MESSAGE)  # Let's begin the transaction

# Keep reading from the socket until the max limit is reached
data = b""
while len(data) < MAX_LIMIT:
    chunk = s.recv(MAX_LIMIT - len(data))
    if not chunk:  # The server closed the connection early
        break
    data += chunk

s.close()  # Mark the socket as closed

# Everyone likes a happy ending!
print(data.decode('utf-8', errors='replace') + "\n")
print("Length of received data:", len(data))

Sample run:

$ python sample.py
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
X-Frame-Options: SAMEORIGIN
X-Request-Guid: 3098c32c-3423-4e8a-9c7e-6dd530acdf8c
Content-Length: 73444
Accept-Ranges: bytes
Date: Fri, 05 Aug 2016 03:21:55 GMT
Via: 1.1 varnish
Connection: keep-alive
X-Served-By: cache-sin6926-SIN
X-Cache: MISS
X-Cache-Hits: 0
X-Timer: S1470367315.724674,VS0,VE246
X-DNS-Prefetch-Control: off
Set-Cookie: prov=c33383b6-3a4d-730f-02b9-0eab064b3487; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly

<!DOCTYPE html>
<html itemscope itemtype="http://schema.org/QAPage">
<head>

<title>http - Only download a part of the document using python requests - Stack Overflow</title>
    <link rel="shortcut icon" href="//cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d">
    <link rel="apple-touch-icon image_src" href="//cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a">
    <link rel="search" type="application/open

Length of received data: 1024
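Before falling back to raw sockets, it can be worth checking whether the server honors byte ranges at all: a server that does answers a ranged request with status 206 Partial Content, and typically advertises `Accept-Ranges: bytes` (as the sample run above does). A small sketch of that check (the helper name `supports_byte_ranges` is my own):

```python
def supports_byte_ranges(status_code, headers):
    """Heuristic: True if a response suggests the server honors Range.

    A 206 reply to a ranged request is definitive; otherwise fall back to
    inspecting the Accept-Ranges header (case-insensitively).
    """
    if status_code == 206:
        return True
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("accept-ranges", "").strip().lower() == "bytes"

# Usage with requests (network call shown as comments):
# r = requests.get(url, headers={"Range": "bytes=0-100"})
# if not supports_byte_ranges(r.status_code, r.headers):
#     ...  # fall back to streaming, or to the raw-socket approach above

print(supports_byte_ranges(200, {"Accept-Ranges": "bytes"}))  # True
print(supports_byte_ranges(200, {"Accept-Ranges": "none"}))   # False
```

Note that `Accept-Ranges` is only a hint; the 206 status on an actual ranged response is the reliable signal.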