How can I get the IP address from an HTTP request using the requests library?

Time: 2014-03-18 22:35:56

Tags: python python-requests pycurl httplib httplib2

I'm making HTTP requests with the requests library in Python, but I need the IP address of the server that responded to the request, and I'm trying to avoid making two calls (where the second call might end up hitting a different IP address than the one that actually answered the request).

Is this possible? Is there a Python HTTP library that lets me do this?

PS: I also need to make HTTPS requests and go through an authenticated proxy.

更新1:

Example:

import requests

proxies = {
  "http": "http://user:password@10.10.1.10:3128",
  "https": "http://user:password@10.10.1.10:1080",
}

response = requests.get("http://example.org", proxies=proxies)
response.ip # This doesn't exist; it's just what I would like to do

I'd then like to have a method or attribute on the response that tells me which IP address the request connected to. In other libraries I was able to do this by digging out the sock object and calling its getpeername() method.
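
For example, with the standard library's httplib the underlying socket is reachable directly (just an illustration of the kind of access I mean, not something requests exposes):

import httplib  # http.client in Python 3

conn = httplib.HTTPConnection('example.org', 80)
conn.connect()                   # open the TCP connection explicitly
print(conn.sock.getpeername())   # e.g. ('93.184.216.34', 80)

conn.request('GET', '/')
body = conn.getresponse().read()
conn.close()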

2 Answers:

Answer 0 (score: 30):

It turns out to be rather involved.

Here is a monkey-patch that works with requests version 1.2.3:

It wraps the _make_request method on HTTPConnectionPool so that the result of socket.getpeername() gets stored on the HTTPResponse instance.

For me, on Python 2.7.3, that instance was available at response.raw._original_response.

from requests.packages.urllib3.connectionpool import HTTPConnectionPool

def _make_request(self, conn, method, url, **kwargs):
    # Run the original _make_request, then record the remote address of the
    # socket that served the request so it can be read off the response later.
    response = self._old_make_request(conn, method, url, **kwargs)
    sock = getattr(conn, 'sock', False)
    if sock:
        setattr(response, 'peer', sock.getpeername())
    else:
        setattr(response, 'peer', None)
    return response

# Install the wrapper, keeping a reference to the original method.
HTTPConnectionPool._old_make_request = HTTPConnectionPool._make_request
HTTPConnectionPool._make_request = _make_request

import requests

r = requests.get('http://www.google.com')
print r.raw._original_response.peer

Yields:

('2a00:1450:4009:809::1017', 80, 0, 0)

Ah, except that HTTPConnectionPool._make_request is not called when a proxy is involved or the response is chunked.

So here is a new version that patches httplib.HTTPConnection.getresponse instead:

import httplib

def getresponse(self, *args, **kwargs):
    # Run the original getresponse, then stash the remote address of the
    # connection's socket on the response object.
    response = self._old_getresponse(*args, **kwargs)
    if self.sock:
        response.peer = self.sock.getpeername()
    else:
        response.peer = None
    return response


# Install the wrapper, keeping a reference to the original method.
httplib.HTTPConnection._old_getresponse = httplib.HTTPConnection.getresponse
httplib.HTTPConnection.getresponse = getresponse

import requests

def check_peer(resp):
    # Return the peer address recorded by the patched getresponse, if any.
    orig_resp = resp.raw._original_response
    if hasattr(orig_resp, 'peer'):
        return getattr(orig_resp, 'peer')

Running it:

>>> r1 = requests.get('http://www.google.com')
>>> check_peer(r1)
('2a00:1450:4009:808::101f', 80, 0, 0)
>>> r2 = requests.get('https://www.google.com')
>>> check_peer(r2)
('2a00:1450:4009:808::101f', 443, 0, 0)
>>> r3 = requests.get('http://wheezyweb.readthedocs.org/en/latest/tutorial.html#what-you-ll-build')
>>> check_peer(r3)
('162.209.99.68', 80)

I also checked this with a proxy configured; in that case the proxy's address is what gets returned.
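
For example (a sketch reusing check_peer from above, with the question's hypothetical proxy credentials):

proxies = {"http": "http://user:password@10.10.1.10:3128"}
r4 = requests.get('http://www.google.com', proxies=proxies)
print check_peer(r4)   # prints the proxy's address here, e.g. ('10.10.1.10', 3128)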


Update 2016/01/19

est provided an alternative that doesn't need the monkey-patch:

rsp = requests.get('http://google.com', stream=True)
# Grab the IP while you can, before you consume the body!
print rsp.raw._fp.fp._sock.getpeername()
# Consuming the body calls read(); after that the socket is no longer available.
print rsp.content

Update 2016/05/19

Copied here from the comments for visibility: Richard Kenneth Niescior provided the following, confirmed to work with requests 2.10.0 and Python 3.

rsp = requests.get(..., stream=True)
rsp.raw._connection.sock.getpeername()

Update 2019/02/22

Python 3, with requests version 2.19.1.

rsp = requests.get(..., stream=True)
rsp.raw._connection.sock.socket.getpeername()

Answer 1 (score: 0):

Try:

import requests

proxies = {
  "http": "http://user:password@10.10.1.10:3128",
  "https": "http://user:password@10.10.1.10:1080",
}

response = requests.get('http://jsonip.com', proxies=proxies)
ip = response.json()['ip']
print('Your public IP is:', ip)