Fixing broken encodings in Python

Date: 2013-01-03 00:36:13

Tags: python encoding repair chardet

I have a small icecast2 home server with django-based playlist management. I also have a lot of mp3s with broken encodings. At first I tried to find an encoding-repair tool for Python, but couldn't find anything that suited my task (python-ftfy, nltk — they don't support unicode input).

I use beets as a swiss-army knife for parsing media tags; it is simple and, I think, good enough for most cases.

For charset detection I use chardet, but it has some problems with short strings, so I apply some coercion to the encodings it reports. I assume that if the encoding is broken, it is broken in all of the tags, so first I collect every encoding in use.

import chardet

class MostFrequentEncoding(dict):
    def from_attrs(self, obj):
        for attr in dir(obj):
            val = getattr(obj, attr)
            self.feed(val)

    def feed(self, obj):
        if obj and isinstance(obj, basestring):
            guess = chardet.detect(obj)
            encoding = guess['encoding']

            self.setdefault(encoding, {'confidence': 0.0, 'total': 0})

            self[encoding]['confidence'] += guess['confidence']
            self[encoding]['total'] += 1

    def encodings(self):
        return sorted(self, key=lambda x: self[x]['total'], reverse=True)
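The idea of the class above in miniature: every tag votes for the encoding detected in it, and the most frequent encoding wins. A standalone sketch (the `detect()` stub here is hypothetical and stands in for chardet.detect, which is not needed to show the voting itself):

```python
from collections import Counter

def detect(s):
    # hypothetical stub standing in for chardet.detect(); real detection
    # would inspect the bytes, this just makes the example self-contained
    return {'encoding': 'windows-1251' if len(s) > 4 else 'MacCyrillic',
            'confidence': 0.8}

def most_frequent_encodings(strings):
    """Tally one vote per non-empty string, most frequent encoding first."""
    totals = Counter(detect(s)['encoding'] for s in strings if s)
    return [enc for enc, _ in totals.most_common()]
```

With three tags where two look like windows-1251, that encoding comes out first, which is exactly the ordering `encodings()` produces from the `total` counters.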

Here are the coercion rules:

charset_coercing = {
    ('MacCyrillic', 'windows-1251'): {'MacCyrillic': -0.1},
}

This means that if both MacCyrillic and windows-1251 are candidates, we should prefer windows-1251.
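Applied to a confidence table, such a rule works out roughly like this (a standalone sketch; the `apply_coercion` helper and the sample confidence values are illustrative, not part of the original script):

```python
charset_coercing = {
    ('MacCyrillic', 'windows-1251'): {'MacCyrillic': -0.1},
}

def apply_coercion(confidences):
    """Penalize ambiguous encodings when the whole pair was detected together."""
    adjusted = dict(confidences)
    for subset, penalties in charset_coercing.items():
        if set(subset).issubset(adjusted):
            for enc, penalty in penalties.items():
                adjusted[enc] += penalty
    return adjusted

# Both candidates present: MacCyrillic is penalized and windows-1251 wins,
# even though MacCyrillic's raw chardet confidence was slightly higher.
apply_coercion({'MacCyrillic': 0.72, 'windows-1251': 0.69})
```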

def fix_encoding(src, possible_encodings):
    if not isinstance(src, basestring) or not src:
        return src

    guess = chardet.detect(src)
    first_encoding = guess['encoding']

    encodings = list(possible_encodings)        # copy possible encodings
    if first_encoding in encodings:             # we believe chardet, so first tested
        encodings.remove(first_encoding)        # encoding will be the one, detected by chardet
    encodings.insert(0, first_encoding)
    encodings_set = set(encodings)

    tested_encodings = { k:{'string': '', 'confidence': -1.0} for k in encodings }

    try:
        lat = src.encode('latin-1') if isinstance(src, unicode) else src # make latin string
    except UnicodeEncodeError:
        lat = src.encode('utf-8') # may be not necessary, should return src?

    while encodings:
        candidate = encodings.pop(0)
        if not candidate:
            continue

        tested_encodings.setdefault(candidate, {'string': '', 'confidence': -1.0})

        try:
            fixed_string = lat.decode(candidate)
        except UnicodeDecodeError:
            continue

        # try to detect charset again
        fixed_confidence = chardet.detect(fixed_string)['confidence']
        # it seems, that new confidence is usually higher, if the previous detection was right

        tested_encodings[candidate]['string'] = fixed_string
        tested_encodings[candidate]['confidence'] = fixed_confidence

    # perform charset coercing
    for subset, coercing_encodings in charset_coercing.items():
        if set(subset).issubset(encodings_set):
            for enc, penalty in coercing_encodings.items():
                tested_encodings[enc]['confidence'] += penalty


    result = tested_encodings.get(first_encoding)
    if result['confidence'] >= 0.99: # if confidence value for first detection is high, use it
        return result['string']

    max_confidence_charset = max(tested_encodings, key=lambda x: tested_encodings[x]['confidence'])
    return tested_encodings[max_confidence_charset]['string']
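The core trick inside the loop — re-encoding the mis-decoded text back to bytes via latin-1 and decoding it with a candidate charset — can be shown in isolation (Python 3 notation, stdlib only; `cp1251` is the stdlib name covering windows-1251):

```python
def redecode(mojibake, candidate):
    """Re-interpret text that was wrongly decoded as latin-1."""
    try:
        return mojibake.encode('latin-1').decode(candidate)
    except (UnicodeEncodeError, UnicodeDecodeError):
        return None  # candidate does not fit; the caller tries the next one

# 'Кино' stored as windows-1251 bytes but read as latin-1 becomes mojibake:
broken = 'Кино'.encode('cp1251').decode('latin-1')   # 'Êèíî'
fixed = redecode(broken, 'cp1251')                   # 'Кино'
```

This round-trip is lossless because latin-1 maps every byte 0x00–0xFF to a codepoint, so the original byte sequence is recovered exactly before the second decode attempt.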

Parsing the media file:

import imghdr

from beets.mediafile import MediaFile

def extract_tags(media_file):
    try:
        mf = MediaFile(media_file)
    except Exception:   # unreadable or unsupported file
        return {}

    mfe = MostFrequentEncoding()
    mfe.from_attrs(mf)

    encodings = mfe.encodings()
    tags = {}

    for attr in sorted(dir(mf)):
        val = getattr(mf, attr)
        if not val or callable(val) or \
        attr in ['__dict__', '__doc__', '__module__', '__weakref__', 'mgfile', 'art']:
            continue

        fixed = fix_encoding(val, encodings)
        tags[attr] = remove_extra_spaces(fixed) if isinstance(fixed, basestring) else fixed

    if mf.art:
        tags['art'] = { 'data': mf.art, 'mime': imghdr.what(None, h=mf.art) }

    return tags

Usage example:

f = '/media/Media/Music/Jason Becker/Jason Becker - Perpetual Burn/02__1.mp3'
pprint(extract_tags(f))

Here is the complete script. It can also display an ascii cover of the album during parsing.

It seems to work, but is there any maintained swiss-knife encoding-repair library for Python?

0 Answers:

There are no answers