Using process_wiki.py to convert the XML-format Wikipedia dump to plain text

Date: 2019-05-10 14:13:32

Tags: deep-learning gensim

My code has a problem: I cannot use process_wiki.py to convert the XML-format Wikipedia dump to text format. I downloaded the code from GG. This is the error I get:

Traceback (most recent call last):
File "process_wiki.py", line 30, in <module>
output.write(b' '.join(text).decode('utf-8') + '\n')
TypeError: sequence item 0: expected a bytes-like object, str found


from __future__ import print_function
 
import logging
import os.path
import six
import sys
 
from gensim.corpora import WikiCorpus
 
if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
 
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))
 
    # check and process input arguments
    if len(sys.argv) != 3:
        print("Usage: python process_wiki.py enwiki.xxx.xml.bz2 wiki.en.text")
        sys.exit(1)
    inp, outp = sys.argv[1:3]
    space = " "
    i = 0
 
    output = open(outp, 'w', encoding='utf8')
    wiki = WikiCorpus(inp, lemmatize=False, dictionary={})
    for text in wiki.get_texts():
        text = [x.encode('utf-8') for x in text]
        if six.PY3:
            output.write(b' '.join(text).decode('utf-8') + '\n')
        #   ###another method###
        #    output.write(
        #            space.join(map(lambda x:x.decode("utf-8"), text)) + '\n')
        else:
            output.write(space.join(text) + "\n")
        i = i + 1
        if (i % 10000 == 0):
            logger.info("Saved " + str(i) + " articles")
 
    output.close()
    logger.info("Finished Saved " + str(i) + " articles")

I tried to fix the error, but I could not get the script to run.
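The traceback points at the likely cause: in Python 3, recent versions of gensim's WikiCorpus.get_texts() yield lists of str (unicode), not bytes, so joining the tokens with the bytes separator b' ' raises exactly this TypeError. A minimal sketch of the str-vs-bytes behavior (the token list is just illustrative sample data):

```python
# In Python 3, joining str tokens with a bytes separator raises the
# TypeError from the traceback; joining with a str separator works.
tokens = ["anarchism", "is", "a", "political", "philosophy"]  # sample tokens

try:
    b" ".join(tokens)  # what the downloaded script effectively does
except TypeError as err:
    print(err)  # sequence item 0: expected a bytes-like object, str found

line = " ".join(tokens) + "\n"  # keep everything as str instead
print(line, end="")
```

Applied to the script above, this would mean deleting the x.encode('utf-8') line and writing output.write(' '.join(text) + '\n') in the Python 3 branch, though I have not verified this against the exact gensim version used here.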

0 Answers:

No answers yet