How do I fix the "TypeError: cannot serialize '_io.BufferedReader' object" error when trying to use multiprocessing?

Asked: 2019-02-03 20:59:42

Tags: python python-3.x windows multiprocessing pool

I am trying to switch my code from threading to multiprocessing to measure its performance and, hopefully, achieve better brute-forcing potential, since my program is meant to brute-force a password-protected .zip file. But whenever I try to run the program I get this:

BruteZIP2.py -z "Generic ZIP.zip" -f  Worm.txt
Traceback (most recent call last):
  File "C:\Users\User\Documents\Jetbrains\PyCharm\BruteZIP\BruteZIP2.py", line 40, in <module>
    main(args.zip, args.file)
  File "C:\Users\User\Documents\Jetbrains\PyCharm\BruteZIP\BruteZIP2.py", line 34, in main
    p.start()
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot serialize '_io.BufferedReader' object

I did find threads where people had the same problem as me, but none of them were answered/solved. I also tried inserting Pool above p.start(), since I assumed this was happening because I am on a Windows-based machine, but that did not help. My code is below:

  import argparse
  from multiprocessing import Process
  import zipfile

  parser = argparse.ArgumentParser(description="Unzips a password protected .zip by performing a brute-force attack using either a word list, password list or a dictionary.", usage="BruteZIP.py -z zip.zip -f file.txt")
  # Creates -z arg
  parser.add_argument("-z", "--zip", metavar="", required=True, help="Location and the name of the .zip file.")
  # Creates -f arg
  parser.add_argument("-f", "--file", metavar="", required=True, help="Location and the name of the word list/password list/dictionary.")
  args = parser.parse_args()


  def extract_zip(zip_file, password):
      try:
          zip_file.extractall(pwd=password)
          print(f"[+] Password for the .zip: {password.decode('utf-8')} \n")
      except:
          # If a password fails, it moves to the next password without notifying the user. If all passwords fail, it will print nothing in the command prompt.
          print(f"Incorrect password: {password.decode('utf-8')}")
          # pass


  def main(zip, file):
      if (zip == None) | (file == None):
          # If the args are not used, it displays how to use them to the user.
          print(parser.usage)
          exit(0)
      zip_file = zipfile.ZipFile(zip)
      # Opens the word list/password list/dictionary in "read binary" mode.
      txt_file = open(file, "rb")
      for line in txt_file:
          password = line.strip()
          p = Process(target=extract_zip, args=(zip_file, password))
          p.start()
          p.join()


  if __name__ == '__main__':
      # BruteZIP.py -z zip.zip -f file.txt.
      main(args.zip, args.file)

As I said before, I believe this is happening mainly because I am on a Windows-based machine right now. I shared my code with others who are on Linux-based machines, and they had no problems running the code above.

My main goal is to get 8 processes/pools started to maximize the number of attempts made compared to threading, but since I cannot get a fix for the TypeError: cannot serialize '_io.BufferedReader' object message, I am unsure what to do here or how I can fix it. Any help would be appreciated.

1 Answer:

Answer 0 (score: 3)

File handles don't serialize very well... but you can send the name of the zip file instead of the zip filehandle (a string can be serialized between processes). Also avoid using zip as a variable name, since it shadows the builtin. I chose zip_filename:

p = Process(target=extract_zip, args=(zip_filename, password))

Then:

def extract_zip(zip_filename, password):
    try:
        # Open the zip inside the worker; only the filename crosses the process boundary.
        zip_file = zipfile.ZipFile(zip_filename)
        zip_file.extractall(pwd=password)
        print(f"[+] Password for the .zip: {password.decode('utf-8')} \n")
    except Exception:
        print(f"Incorrect password: {password.decode('utf-8')}")
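The root cause can be demonstrated with pickle alone, independent of the zip code (a small illustration; passing the ZipFile object fails for the same reason, since it wraps an open file handle):

```python
import os
import pickle

# A plain string pickles fine, so a filename can cross the process
# boundary on Windows, where multiprocessing must pickle every argument
# to ship it to the spawned child interpreter.
assert pickle.loads(pickle.dumps("Generic ZIP.zip")) == "Generic ZIP.zip"

# An open binary file is an _io.BufferedReader and cannot be pickled;
# this is exactly what the traceback in the question complains about.
with open(os.devnull, "rb") as handle:
    try:
        pickle.dumps(handle)
    except TypeError as exc:
        # Wording varies by Python version:
        # "cannot pickle/serialize '_io.BufferedReader' object"
        print(exc)
```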

The other problem is that your code doesn't run in parallel, because of this:

      p.start()
      p.join()

p.join waits for the process to finish... so this is hardly useful: you start one process, wait for it to complete, then start the next. Instead, you have to store the process handles and join them all at the end.
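A minimal sketch of that pattern, with a trivial worker standing in for extract_zip (the worker and run_all names here are hypothetical):

```python
from multiprocessing import Process

def worker(n):
    # Hypothetical stand-in for extract_zip: do some work in the child.
    _ = n * n

def run_all(count):
    # Start every process first, keeping the handles in a list...
    procs = [Process(target=worker, args=(i,)) for i in range(count)]
    for p in procs:
        p.start()
    # ...and only then join them all, so the children overlap in time.
    for p in procs:
        p.join()
    return [p.exitcode for p in procs]  # 0 means the child exited cleanly

if __name__ == "__main__":
    print(run_all(4))
```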

That may lead to yet another problem: creating too many processes in parallel can be a burden for your machine, and beyond some point it doesn't help much. Consider using multiprocessing.Pool to limit the number of workers.

A classical example is:

with multiprocessing.Pool(5) as p:
    print(p.map(f, [1, 2, 3, 4, 5, 6, 7]))

Adapted to your example:

with multiprocessing.Pool(5) as p:
    p.starmap(extract_zip, [(zip_filename, line.strip()) for line in txt_file])

(starmap expands each tuple into 2 separate arguments to match the signature of your extract_zip method, as described in Python multiprocessing pool.map for multiple arguments.)
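The way starmap unpacks those tuples can be sketched in isolation (try_password and crack are hypothetical stand-ins for extract_zip and the surrounding loop):

```python
from multiprocessing import Pool

def try_password(zip_filename, password):
    # Stand-in for extract_zip: just echo the two unpacked arguments.
    return (zip_filename, password)

def crack(zip_filename, passwords):
    # starmap expands each (zip_filename, password) tuple into the two
    # positional parameters of try_password, and the pool caps the
    # number of concurrent worker processes at 5.
    with Pool(5) as pool:
        return pool.starmap(try_password,
                            [(zip_filename, pw) for pw in passwords])

if __name__ == "__main__":
    print(crack("Generic ZIP.zip", [b"abc", b"hunter2"]))
```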