Check an unlimited number of self-generated URLs for validity and save the response body if valid (HTTP 200)

Date: 2012-02-02 19:31:00

Tags: python web-crawler scrapy

I want to check the validity of an unlimited number of self-generated URLs and, if a URL is valid, save the response body to a file. The URLs look like this: https://mydomain.com/ + a random string (e.g. https://mydomain.com/ake3t), and I want to generate them from the alphabet "abcdefghijklmnopqrstuvwxyz0123456789_-", simply brute-forcing all possibilities.

I wrote a script in Python, but since I am an absolute beginner it is very slow! Because I need something very fast I tried scrapy, since I thought it was made for exactly this kind of job.

The problem now is that I cannot figure out how to generate the URLs on the fly. I cannot generate them all in advance, because there is no fixed number of them.

Could somebody show me how to achieve this, or recommend another tool or library that is better suited for this job?
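For the Scrapy part of the question, one way to avoid pre-generating everything is to make start_requests a generator, so Scrapy only pulls new requests as its scheduler needs them. A minimal, untested sketch assuming a reasonably recent Scrapy version; the spider name and the output path /tmp/urls/ are placeholders, and mydomain.com is taken from the question:

import itertools
import string

import scrapy


class BruteSpider(scrapy.Spider):
    name = "brute"          # hypothetical spider name

    def start_requests(self):
        alphabet = string.ascii_lowercase + string.digits + "_-"
        # enumerate candidate strings lazily: all of length 1, then 2, then 3, ...
        for length in itertools.count(1):
            for chars in itertools.product(alphabet, repeat=length):
                part = "".join(chars)
                yield scrapy.Request(
                    "https://mydomain.com/" + part,
                    callback=self.parse,
                    cb_kwargs={"part": part},   # requires Scrapy 1.7+; older versions can pass this via meta
                )

    def parse(self, response, part):
        # by default only successful (HTTP 200) responses reach parse(),
        # so simply save the body under the candidate string
        with open("/tmp/urls/" + part + ".html", "wb") as f:   # assumes /tmp/urls/ exists
            f.write(response.body)

Whether this is actually faster than the threaded script will still be limited by the server, as the answers below point out.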

UPDATE: This is the script I used, but I think it is slow. What worries me most is that it gets slower if I use more than one thread (specified in threadsNr); see the note on shared state after the script below.
import threading, os
import urllib.request, urllib.parse, urllib.error 

threadsNr   = 1                                    
dumpFolder    = '/tmp/urls/'               
charSet     = 'abcdefghijklmnopqrstuvwxyz0123456789_-' 
Url_pre    = 'http://vorratsraum.com/'
Url_post    = 'alwaysTheSameTail'

# class that generates the words
class wordGenerator ():

    def __init__(self, word, charSet):
        self.currentWord = word
        self.charSet = charSet

    # generate the next word, set it as currentWord, and return it
    def nextWord (self):
        self.currentWord = self._incWord(self.currentWord)
        return self.currentWord

    # generate the next word
    def _incWord(self, word):
        word = str(word)                        # convert to string

        if word == '':                          # if word is empty 
            return self.charSet[0]              # return first char from the char set
        wordLastChar = word[len(word)-1]        # get the last char
        wordLeftSide = word[0:len(word)-1]      # get word without the last char
        lastCharPos  = self.charSet.find(wordLastChar)  # get position of last char in the char set

        if (lastCharPos+1) < len(self.charSet):         # if position of last char is not at the end of the char set
            wordLastChar = self.charSet[lastCharPos+1]  # get next char from the char set

        else:                                           # it is the last char
            wordLastChar = self.charSet[0]              # reset last char to have first character from the char set
            wordLeftSide = self._incWord(wordLeftSide)  # send left site to be increased

        return wordLeftSide + wordLastChar      # return the next word


class newThread(threading.Thread):
    def run(self):
        global exitThread
        global wordsTried
        global newWord
        global hashList

        while exitThread == False:
            part = newWord.nextWord()                # generate the next word to try
            url = Url_pre + part + Url_post

            wordsTried = wordsTried + 1
            if wordsTried == 1000: # just for testing how fast it is
                exitThread = True
            print( 'trying ' + part)          # display the word
            print( 'At URL ' + url)

            try:
                req = urllib.request.Request(url)
                req.add_header('User-agent', 'Mozilla/5.0')
                resp = urllib.request.urlopen(req)
                result = resp.read()
                found(part, result)
            except urllib.error.HTTPError as err:   # HTTP errors carry a status code
                if err.code == 404:
                    print('Page not found!')
                elif err.code == 403:
                    print('Access denied!')
                else:
                    print('Something happened! Error code', err.code)
            except urllib.error.URLError as err:
                print('Some other error happened:', err.reason)
        resultFile.close()

def found(part, result):
    global exitThread
    global resultFile

    resultFile.write(part +"\n")

    if not os.path.isdir(dumpFolder + part):
        os.makedirs(dumpFolder + part)

    print('Found Part = '  + part)

wordsTried = 0                            
exitThread = False                              # flag to kill all threads
newWord = wordGenerator('', charSet)           # word generator

if not os.path.isdir(dumpFolder):
    os.makedirs(dumpFolder)

resultFile = open(dumpFolder + 'parts.txt','a')      # open file for append    

for i in range(threadsNr):
    newThread().start()
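One likely reason adding threads makes the script above slower rather than faster is that every thread shares newWord, wordsTried and resultFile without any locking, so they race on the same state (and the pure-Python word generator is serialized by the GIL anyway). A minimal sketch of how the shared generator could be guarded with a lock, reusing the names from the script above; this only addresses the correctness issue, not the performance one:

import threading

wordLock = threading.Lock()

def nextPart():
    # serialize access to the shared generator and counter so that
    # no two threads ever receive the same candidate string
    global wordsTried
    with wordLock:
        wordsTried = wordsTried + 1
        return newWord.nextWord()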

2 Answers:

Answer 0 (score: 1)

You cannot check an "unlimited number of URLs" without it being "very slow", beginner or not.

The time your scraper takes will almost certainly be dominated by the response time of the server you are hitting, not by the efficiency of your script.

What exactly are you trying to do?

Answer 1 (score: 1)

Do you want brute force or random? Below is a sequential brute-force approach with repeating characters. How fast it runs depends heavily on your server's response time. Also note that this is likely to produce a denial-of-service condition very quickly.

import itertools
import urllib2

pageChars = 5
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789_-"

#iterate over the product of the alphabet with <pageChars> elements
#this assumes repeating characters are allowed
# Beware this generates len(alphabet)**pageChars possible strings
for chars in itertools.product(alphabet,repeat=pageChars):
    pageString = ''.join(chars)

    urlString = 'https://mydomain.com/' + pageString

    try:
        url = urllib2.urlopen(urlString)

    except urllib2.HTTPError:
        print('No page at: %s' % urlString)
        continue     

    pageData = url.read()
    #do something with page data
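Note that the loop above only tries strings of exactly pageChars characters. If, as in the question, every length from 1 upward should be tried, the products for the individual lengths can be chained together; a small sketch using the same alphabet:

import itertools

def candidates(alphabet, maxLen):
    # yield all strings of length 1, then length 2, ... up to maxLen
    for length in range(1, maxLen + 1):
        for chars in itertools.product(alphabet, repeat=length):
            yield ''.join(chars)

for pageString in candidates(alphabet, pageChars):
    # build the URL and fetch it exactly as in the loop above
    urlString = 'https://mydomain.com/' + pageString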