I am currently learning the Twisted framework and trying to build an asynchronous DNS resolver using twisted.names.client.Resolver
and twisted.names.client.getHostByName.
The script should brute-force subdomains by querying authoritative name servers. 10,000-50,000 concurrent queries per second is my minimum threshold for considering a tool usable for my purposes.
My problem:
To be concrete: the script below is my first attempt, but it is not nearly as fast as I hoped.
I strongly suspect my approach is conceptually wrong. If you invoke the script at the bottom like this:
[nikolai@niko-arch subdomains]$ python2 subdomains.py -n50 nytimes.com
www ==> ``170.149.168.130``
blog ==> ``170.149.168.153``
cs ==> ``199.181.175.242``
my ==> ``170.149.172.130``
blogs ==> ``170.149.168.153``
search ==> ``170.149.168.135``
cn ==> ``61.244.110.199``
feeds ==> ``170.149.172.130``
app ==> ``54.243.156.140``
games ==> ``184.73.175.199``
mail ==> ``170.149.172.135``
up ==> ``107.20.203.136``
tv ==> ``170.149.168.135``
data ==> ``174.129.28.73``
p ==> ``75.101.137.16``
open ==> ``170.149.168.153``
ts ==> ``170.149.97.51``
education ==> ``170.149.168.130``
wap ==> ``170.149.172.163``
m ==> ``170.149.172.163``
In most cases everything works fine for 50 subdomain requests. But when I specify -n1000 (and therefore 1000 DNS requests), it takes very long (5 minutes and more) and the reactor produces all kinds of strange errors, such as twisted.internet.error.DNSLookupError and twisted.internet.defer.TimeoutError (e.g.: Failure: twisted.internet.defer.TimeoutError: [Query('blogger.l.google.com', 255, 1)]
). Often it just hangs and never finishes.
I would expect a twisted.names.error.DNSNameError for every nonexistent subdomain, or a valid A or AAAA resource record answer if the subdomain exists, but not the DNSLookupError above.
Can somebody give me a hint what I am doing wrong? Normally, epoll() should easily be able to send more than 1000 requests (years ago I did the same in C and sent 10,000 UDP datagrams within a few seconds). So which part of Twisted am I getting wrong?
Is my use of gatherResults() incorrect? I have no idea what I am doing wrong..
Thanks in advance for all answers!
# Looks promising: https://github.com/zhangyuyan/github
# https://github.com/zhangyuyan/github/blob/01dd311a1f07168459b222cb5c59ac1aa4d5d614/scan-dns-e3-1.py
import os
import argparse
import exceptions

from twisted.internet import defer, reactor
import twisted.internet.error as terr
from twisted.names import client, dns, error


def printResults(results, subdomain):
    """
    Print the ip address for the successful query.
    """
    return '%s ==> ``%s``' % (subdomain, results)


def printError(failure, subdomain):
    """
    Lookup failed for some reason, just catch the DNSNameError and DomainError.
    """
    reason = failure.trap(error.DNSNameError, error.DomainError, terr.DNSLookupError, defer.TimeoutError)  # Subdomain wasn't found
    print(failure)
    return reason


def printRes(results):
    for i in results:
        if not isinstance(i, type):  # Why the heck are Failure objects of type 'type'???
            print(i)
    reactor.stop()
    global res
    res = results


def get_args():
    parser = argparse.ArgumentParser(
        description='Brute force subdomains of a supplied target domain. Fast, using async IO.\n')
    parser.add_argument('target_domain', type=str, help='The domain name to squeeze the subdomains from')
    parser.add_argument('-r', '--default-resolver', type=str, help='Add here the ip of your preferred DNS server')
    parser.add_argument('-n', '--number-connections', default=100, type=int, help='The number of file descriptors to acquire')
    parser.add_argument('-f', '--subdomain-file', help='This file should contain the subdomains separated by newlines')
    parser.add_argument('-v', '--verbosity', action='count', help='Increase the verbosity of output', default=0)
    args = parser.parse_args()
    if args.number_connections > 1000:
        # root privs required to acquire more than 1024 fd's
        if os.geteuid() != 0:
            parser.error('You need to be root in order to use {} connections'.format(args.number_connections))
    if not args.default_resolver:
        # Parse /etc/resolv.conf
        args.default_resolver = [line.split(' ')[1].strip() for line in open('/etc/resolv.conf', 'r').readlines() if 'nameserver' in line][0]
    return args


def main(args=None):
    if not args:
        args = get_args()
    subs = [sub.strip() for sub in open('subs.txt', 'r').readlines()[:args.number_connections]]
    # use openDNS servers
    r = client.Resolver('/etc/resolv.conf', servers=[('208.67.222.222', 53), ('208.67.220.220', 53)])
    d = defer.gatherResults([r.getHostByName('%s.%s' % (subdomain, args.target_domain)).addCallbacks(printResults, printError, callbackArgs=[subdomain], errbackArgs=[subdomain]) for subdomain in subs])
    d.addCallback(printRes)
    reactor.run()


if __name__ == '__main__':
    main()
Answer (score 1):
The way you are doing this is to buffer all the subdomain requests into one giant list, then issue all the requests, then buffer the query responses into another giant list, and then print that list. Since you presumably just want to print each name resolution as it arrives, you should instead schedule timed calls that issue the requests in batches of a specified size, possibly at very short intervals.
Also, if you are interested in high-performance Python, you should use PyPy instead of CPython. Even without making the code more scalable, that change alone may give you enough of a performance boost for your goals.