http.client.IncompleteRead & multiprocessing.pool.MaybeEncodingError: Error sending result

Date: 2017-06-27 21:50:13

Tags: python

My program scans a large number of websites for SQLi vulnerabilities by appending a simple string query (') to the end of each URL and looking for error messages in the page source.

My program keeps getting stuck on the same website. This is the error I keep getting:

     [-] http://www.pluralsight.com/guides/microsoft-net/getting-started-with-asp-net-mvc-core-1-0-from-zero-to-hero?status=in-review'
     [-] Page not found.
     [-] http://lfg.go2dental.com/member/dental_search/searchprov.cgi?P=LFGDentalConnect&Network=L'
     [-] http://www.parlimen.gov.my/index.php?lang=en'
     [-] http://www.otakunews.com/category.php?CatID=23'
     [-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
     [-] Page not found.
     [-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
     [-] http://ensec.org/index.php?option=com_content&view=article&id=547:lord-howell-british-fracking-policy--a-change-of-direction-needed&catid=143:issue-content&Itemid=433'
     [-] URL Timed Out
     [-] http://www.videohelp.com/tools.php?listall=1'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 81, in mp_worker
    mainMethod(URLS)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 77, in mainMethod
    tryMethod(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 48, in tryMethod
    checkforMySQLError(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 23, in checkforMySQLError
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 564, in error
    result = self._call_chain(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 753, in http_error_302
    fp.read()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 462, in read
    s = self._safe_read(self.length)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 614, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)

C:\Users\Brice\Desktop\My Site Hunter>

Here is my full source code. I'll narrow it down for you in the next section.

# Start off with imports
import urllib.request
import urllib.error
import socket
import threading
import multiprocessing
import time
import ssl

# Fake a header to get less errors
headers={'User-agent' : 'Mozilla/5.0'}

# Make a class to pass to upon exception errors
class MyException(Exception):
    pass

# Checks for mySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):

    # gcontext is to bypass a no SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)

    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')

    # The if statements behind the whole thing. Checks page source for these errors, 
    # and returns any that come up positive. 
    # I'd like to do my outputting here, if possible.
    if "You have an error in your SQL syntax" in page_source_string:
        print ("\t [+] " + URL)
    elif "mysql_fetch" in page_source_string:
        print ("\t [+] " + URL)
    elif "mysql_num_rows" in page_source_string:
        print ("\t [+] " + URL)
    elif "MySQL Error" in page_source_string:
        print ("\t [+] " + URL)
    elif "MySQL_connect()" in page_source_string:
        print ("\t [+] " + URL)
    elif "UNION SELECT" in page_source_string:
        print ("\t [+] " + URL)
    else:
        print ("\t [-] " + URL)

# Attempts to connect to the URL, and passes an error on if it fails.
def tryMethod(req, URL):
    try:
        checkforMySQLError(req, URL)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            print("\t [-] Page not found.")
        if e.code == 400:
            print ("\t [+] " + URL)
    except urllib.error.URLError as e:
        print("\t [-] URL Timed Out")
    except socket.timeout as e:
        print("\t [-] URL Timed Out")
        pass
    except socket.error as e:
        print("\t [-] Error in URL")
        pass

# This is where the magic begins.
def mainMethod(URLList):

        ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
        # URL = urllib.request.urlopen(URLList, timeout=2)

        # Replace any newlines or we get an invalid URL request.
        URL = URLList.replace("\n", "")
        # URLLib doesn't like https, not sure why.
        URL = URL.replace("https://","http://")
        # Python likes to truncate urls after spaces, so I add a typical %20.
        URL = URL.replace("\s", "%20")
        # The blind sql query that makes the errors occur.
        URL = URL + "'"

        # Requests to connect to the URL and sends it to the tryMethod.
        req = urllib.request.Request(URL)
        tryMethod(req, URL)

# Multi-processing worker
def mp_worker(URLS):
    mainMethod(URLS)

# Multi-processing handler
def mp_handler(URLList):
    p = multiprocessing.Pool(25)
    p.map(mp_worker, URLList)

# The beginning of it all
if __name__=='__main__':
    URLList = open('sites.txt', 'r')
    mp_handler(URLList)
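As an aside, the multiprocessing plumbing at the bottom of the listing can be tightened up. This is only a minimal sketch (the `sites.txt` filename and 25-worker count come from the code above; `load_urls`, `scan`, and `run` are hypothetical stand-ins, with `scan` standing in for `mp_worker`): reading and stripping the lines in the parent process means each worker receives a plain string rather than a raw file line, and the `with` block shuts the pool down cleanly on exit.

```python
import multiprocessing

def load_urls(path):
    # One URL per line; drop blank lines and trailing newlines up front,
    # so workers never see the "\n" that mainMethod currently strips.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def scan(url):
    # Hypothetical stand-in for mp_worker/mainMethod from the listing above.
    return "[-] " + url

def run(path="sites.txt", workers=25):
    urls = load_urls(path)
    # The with-block tears the pool down on exit, so no handles leak.
    with multiprocessing.Pool(workers) as pool:
        for result in pool.imap_unordered(scan, urls):
            print(result)
```

`imap_unordered` also prints each result as soon as any worker finishes, rather than waiting for the whole batch the way `map` does.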

Here is the important part of the code, specifically the section where I read from the URL using urllib:

def mainMethod(URLList):

        ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
        # URL = urllib.request.urlopen(URLList, timeout=2)

        # Replace any newlines or we get an invalid URL request.
        URL = URLList.replace("\n", "")
        # URLLib doesn't like https, not sure why.
        URL = URL.replace("https://","http://")
        # Python likes to truncate urls after spaces, so I add a typical %20.
        URL = URL.replace("\s", "%20")
        # The blind sql query that makes the errors occur.
        URL = URL + "'"

        # Requests to connect to the URL and sends it to the tryMethod.
        req = urllib.request.Request(URL)
        tryMethod(req, URL)


# Checks for mySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):

    # gcontext is to bypass a no SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)

    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')

I worked around this error by making a request that reads from URLList before any changes are made to it. I've commented out the part that fixes it, but only because it led to another error that looks worse and harder to fix (which is why I've included the first error even though I had a fix for it).
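Another way to keep the original code path and still survive a truncated response would be to catch `http.client.IncompleteRead` around the `urlopen`/`read` call. A minimal sketch, with the caveat that `fetch_source` is a hypothetical helper rather than part of the code above: the exception's `partial` attribute holds whatever bytes did arrive before the server hung up, which is often enough for scanning the page for error strings.

```python
import http.client
import socket
import urllib.error
import urllib.request

def fetch_source(url, timeout=2):
    """Return the page source, a partial body if the read was cut short,
    or None if the request failed outright."""
    try:
        response = urllib.request.urlopen(url, timeout=timeout)
        return response.read().decode("cp866", errors="ignore")
    except http.client.IncompleteRead as e:
        # e.partial holds the bytes received before the connection dropped;
        # a truncated page can still contain the SQL error strings.
        return e.partial.decode("cp866", errors="ignore")
    except (urllib.error.URLError, socket.timeout, OSError):
        return None
```

Since the traceback shows the read failing inside the 302 redirect handler, the exception still propagates up to the `urlopen` caller, so catching it at this level works.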

Here is the new error, from when I remove the comment from that line of code:

         [-] http://www.davis.k12.ut.us/site/Default.aspx?PageType=1&SiteID=6497&ChannelID=6507&DirectoryType=6'
         [-] http://www.surreyschools.ca/NewsEvents/Posts/Lists/Posts/ViewPost.aspx?ID=507'
         [-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
         [-] http://www.parlimen.gov.my/index.php?lang=en'
         [-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
         [-] URL Timed Out
         [-] http://www.videohelp.com/tools.php?listall=1'
Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x0381C790>'. Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'

C:\Users\Brice\Desktop\My Site Hunter>

Honestly, the new error seems worse than the old one, which is why I've included both. Any information on how to fix this would be greatly appreciated, as I've spent the past several hours trying to fix it.

0 Answers:

No answers yet.