Script works in Python 2 but not in Python 3 (hashlib)

Asked: 2013-06-11 21:55:37

Tags: utf-8 python-3.x md5 python-2.x hashlib

I have a simple script that checksums a file with any of the algorithms available in hashlib (md5, sha1, ...). I wrote and debugged it under Python 2, but when I decided to port it to Python 3 it stopped working. Interestingly, it works with small files but not with big ones. I suspected a problem with how I buffer the file, but the error message makes me think it is related to the way I call hexdigest (I think). Here is a copy of my whole script, so feel free to copy it, use it, and help me figure out what is wrong. The error I get when checksumming a 250 MB file is:

"'utf-8' codec can't decode byte 0xf3 in position 10: invalid continuation byte"

I googled it but couldn't find anything that fixes it. Also, if you see a better way to optimize the script, please let me know. My main goal is to make it work 100% in Python 3. Thanks.
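The error itself is easy to reproduce in isolation: bytes such as 0xf3 occur freely in binary files but are not valid UTF-8, so any attempt to decode them as text fails. A minimal sketch (not from the original post):

```python
# 0xf3 starts a 4-byte UTF-8 sequence, so the next byte must be a
# continuation byte in the range 0x80-0xBF; 0x00 is not, hence the error.
try:
    b"\xf3\x00".decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # ... can't decode byte 0xf3 ...: invalid continuation byte
```

This is exactly what happens when a binary file is read in text mode: Python 3 tries to decode every chunk with the default codec.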

#!/usr/local/bin/python33
import hashlib
import argparse

def hashFile(algorithm = "md5", filepaths=[], blockSize=4096):
    algorithmType = getattr(hashlib, algorithm.lower())() #Default: hashlib.md5()
    #Open file and extract data in chunks   
    for path in filepaths:
        try:
            with open(path) as f:
                while True:
                    dataChunk = f.read(blockSize)
                    if not dataChunk:
                        break
                    algorithmType.update(dataChunk.encode())
                yield algorithmType.hexdigest()
        except Exception as e:
            print (e)

def main():
    #DEFINE ARGUMENTS
    parser = argparse.ArgumentParser()
    parser.add_argument('filepaths', nargs="+", help='Specified the path of the file(s) to hash')
    parser.add_argument('-a', '--algorithm', action='store', dest='algorithm', default="md5", 
                        help='Specifies what algorithm to use ("md5", "sha1", "sha224", "sha384", "sha512")')
    arguments = parser.parse_args()
    algo = arguments.algorithm
    if algo.lower() in ("md5", "sha1", "sha224", "sha384", "sha512"):
        #Call generator function to yield hash value
        for hashValue in hashFile(algo, arguments.filepaths):
            print(hashValue)
    else:
        print("Algorithm {0} is not available in this script".format(algo))

if __name__ == "__main__":
    main()

Here is the code that works in Python 2. I'll include it in case you want to use it without having to modify the one above.

#!/usr/bin/python
import hashlib
import argparse

def hashFile(algorithm = "md5", filepaths=[], blockSize=4096):
    '''
    Hashes a file. In order to reduce the amount of memory used by the script, it hashes the file in chunks instead of
    loading the whole file into memory.
    '''
    algorithmType = hashlib.new(algorithm)  #getattr(hashlib, algorithm.lower())() #Default: hashlib.md5()
    #Open file and extract data in chunks   
    for path in filepaths:
        try:
            with open(path, mode = 'rb') as f:
                while True:
                    dataChunk = f.read(blockSize)
                    if not dataChunk:
                        break
                    algorithmType.update(dataChunk)
                yield algorithmType.hexdigest()
        except Exception as e:
            print e

def main():
    #DEFINE ARGUMENTS
    parser = argparse.ArgumentParser()
    parser.add_argument('filepaths', nargs="+", help='Specified the path of the file(s) to hash')
    parser.add_argument('-a', '--algorithm', action='store', dest='algorithm', default="md5", 
                        help='Specifies what algorithm to use ("md5", "sha1", "sha224", "sha384", "sha512")')
    arguments = parser.parse_args()
    #Call generator function to yield hash value
    algo = arguments.algorithm
    if algo.lower() in ("md5", "sha1", "sha224", "sha384", "sha512"):
        for hashValue in hashFile(algo, arguments.filepaths):
            print hashValue
    else:
        print "Algorithm {0} is not available in this script".format(algo)

if __name__ == "__main__":
    main()
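Both scripts rely on hashlib's incremental API: feeding the data to update() in chunks produces exactly the same digest as hashing everything in one call, which is what makes the block-by-block loop safe for large files. A quick self-contained check (illustrative only, not part of the original post):

```python
import hashlib

data = b"\xf3" * 10000  # arbitrary binary payload

# Digest computed in one shot
whole = hashlib.md5(data).hexdigest()

# Same payload hashed in 4096-byte chunks, as the scripts above do
h = hashlib.md5()
for i in range(0, len(data), 4096):
    h.update(data[i:i + 4096])

assert h.hexdigest() == whole
```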

1 Answer:

Answer 0 (score: 1)

I haven't tried it in Python 3, but I get the same error with a binary file in Python 2.7.5 (the only difference is that mine mentions the ascii codec). Instead of encoding the data chunks, open the file directly in binary mode:

with open(path, 'rb') as f:
    while True:
        dataChunk = f.read(blockSize)
        if not dataChunk:
            break
        algorithmType.update(dataChunk)
    yield algorithmType.hexdigest()

Besides that, instead of getattr I would use the hashlib.new method, and hashlib.algorithms_available to check whether the argument is valid.
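A sketch of what that suggestion looks like ("sha256" is used here just as an example algorithm name):

```python
import hashlib

algo = "sha256"

# hashlib.algorithms_available lists every digest name this build accepts,
# which validates the -a argument without a hard-coded tuple of names
if algo in hashlib.algorithms_available:
    h = hashlib.new(algo)  # construct by name instead of getattr(hashlib, algo)()
    h.update(b"hello world")
    print(h.hexdigest())
else:
    print("Algorithm {0} is not available".format(algo))
```

hashlib.new accepts any name known to the underlying OpenSSL build, so this also picks up algorithms that getattr on the module would miss.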