Reading and POS tagging sentences from a text file with NLTK and Python

Date: 2011-04-06 23:45:25

Tags: python nlp text-files nltk

Does anyone know whether there is an existing module or a simple way to read and write part-of-speech-tagged sentences to and from text files? I'm using Python and the Natural Language Toolkit (NLTK). For example, this code:

import nltk

sentences = "Call me Ishmael. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world."

tagged = nltk.sent_tokenize(sentences.strip())
tagged = [nltk.word_tokenize(sent) for sent in tagged]
tagged = [nltk.pos_tag(sent) for sent in tagged]

print tagged

returns this nested list:

[[('Call', 'NNP'), ('me', 'PRP'), ('Ishmael', 'NNP'), ('.', '.')], [('Some', 'DT'), ('years', 'NNS'), ('ago', 'RB'), ('-', ':'), ('never', 'RB'), ('mind', 'VBP'), ('how', 'WRB'), ('long', 'JJ'), ('precisely', 'RB'), ('-', ':'), ('having', 'VBG'), ('little', 'RB'), ('or', 'CC'), ('no', 'DT'), ('money', 'NN'), ('in', 'IN'), ('my', 'PRP$'), ('purse', 'NN'), (',', ','), ('and', 'CC'), ('nothing', 'NN'), ('particular', 'JJ'), ('to', 'TO'), ('interest', 'NN'), ('me', 'PRP'), ('on', 'IN'), ('shore', 'NN'), (',', ','), ('I', 'PRP'), ('thought', 'VBD'), ('I', 'PRP'), ('would', 'MD'), ('sail', 'VB'), ('about', 'IN'), ('a', 'DT'), ('little', 'RB'), ('and', 'CC'), ('see', 'VB'), ('the', 'DT'), ('watery', 'NN'), ('part', 'NN'), ('of', 'IN'), ('the', 'DT'), ('world', 'NN'), ('.', '.')]]

I know I could easily dump this into a pickle, but I really want to export it as part of a larger text file. I'd like to be able to export the list to a text file, then come back to it later, parse it, and recover the original list structure. Are there any built-in functions in NLTK for doing this? I looked, but couldn't find any...

Example output:

<headline>Article headline</headline>
<body>Call me Ishmael...</body>
<pos_tags>[[('Call', 'NNP'), ('me', 'PRP'), ('Ishmael', 'NNP')...</pos_tags>

2 answers:

Answer 0 (score: 3)

NLTK has a standard file format for tagged text. It looks like this:


Call/NNP me/PRP Ishmael/NNP ./.

You should use this format, since it lets you read your files with NLTK's TaggedCorpusReader and other similar classes, and get the full range of corpus-reader functions. Confusingly, there is no high-level function in NLTK for writing a tagged corpus in this format, but that's probably because it's pretty trivial:

for sent in tagged:
    print " ".join(word+"/"+tag for word, tag in sent)
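
The loop above only prints. A minimal sketch of the full round trip, writing to a file and parsing it back by hand, could look like the following. The filename melville.txt is just an example, and the rsplit-based parsing is an assumption about the format, not NLTK API:

```python
# Sketch: write tagged sentences in the word/tag format and parse them
# back without a corpus reader.  `tagged` is a small stand-in for the
# list produced by pos_tag above.
tagged = [[('Call', 'NNP'), ('me', 'PRP'), ('Ishmael', 'NNP'), ('.', '.')]]

with open("melville.txt", "w") as f:
    for sent in tagged:
        f.write(" ".join(word + "/" + tag for word, tag in sent) + "\n")

# One sentence per line; split each token on its *last* "/" so that
# tokens like "./.", whose word part contains no slash here, and any
# word containing a slash still round-trip correctly.
restored = []
with open("melville.txt") as f:
    for line in f:
        restored.append([tuple(tok.rsplit("/", 1)) for tok in line.split()])
```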

(NLTK does provide nltk.tag.tuple2str(), but it only handles a single word, so it's no simpler than just typing word+"/"+tag.)

If you save your tagged text in this format in one or more files fileN.txt, you can read it back with nltk.corpus.reader.TaggedCorpusReader:

mycorpus = nltk.corpus.reader.TaggedCorpusReader("path/to/corpus", "file.*\.txt")
print mycorpus.fileids()
print mycorpus.sents()[0]
for sent in mycorpus.tagged_sents():
    <etc>

Note that the sents() method gives you the untagged text, albeit with slightly odd spacing. There's no need to include both the tagged and untagged versions in the file, as in your example.

TaggedCorpusReader doesn't support file headers (for the headline etc.), but if you really need that, you can derive your own class that reads the file metadata and then handles the rest like TaggedCorpusReader.
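
As a rough sketch of that idea (the header format and the helper name split_header are assumptions based on the question's example, not anything in NLTK), stripping such a header before handing the rest to a corpus reader might look like:

```python
import re

def split_header(text):
    """Return (headline, remainder); headline is None if the text does
    not start with a <headline>...</headline> line."""
    m = re.match(r"<headline>(.*?)</headline>\n?", text, re.DOTALL)
    if m:
        return m.group(1), text[m.end():]
    return None, text

# The remainder is plain word/tag text that a TaggedCorpusReader-style
# parser could then consume.
headline, rest = split_header(
    "<headline>Article headline</headline>\n"
    "Call/NNP me/PRP Ishmael/NNP ./.\n")
```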

Answer 1 (score: 1)

It seems like using pickle.dumps and inserting its output into your text file, perhaps with a tag wrapper for automatic loading, would satisfy your requirements.
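
For instance (a sketch only; the <pos_tags_pickle> tag name is made up here), the pickled data could be base64-encoded so it stays printable inside a text file:

```python
import base64
import pickle

tagged = [[('Call', 'NNP'), ('me', 'PRP'), ('Ishmael', 'NNP'), ('.', '.')]]

# base64 keeps the pickle printable, so it can sit inside a text file
blob = base64.b64encode(pickle.dumps(tagged)).decode("ascii")
wrapped = "<pos_tags_pickle>" + blob + "</pos_tags_pickle>"

# Restoring: strip the wrapper tags, then decode and unpickle
inner = wrapped[len("<pos_tags_pickle>"):-len("</pos_tags_pickle>")]
restored = pickle.loads(base64.b64decode(inner))
```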

Could you be more specific about what you want the text output to look like? Are you aiming for something that is more human-readable?

Edit: adding some code

from xml.dom.minidom import Document, parseString
import nltk

sentences = "Call me Ishmael. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world."

tagged = nltk.sent_tokenize(sentences.strip())
tagged = [nltk.word_tokenize(sent) for sent in tagged]
tagged = [nltk.pos_tag(sent) for sent in tagged]

# Write to xml string
doc = Document()

base = doc.createElement("Document")
doc.appendChild(base)

headline = doc.createElement("headline")
htext = doc.createTextNode("Article Headline")
headline.appendChild(htext)
base.appendChild(headline)

body = doc.createElement("body")
btext = doc.createTextNode(sentences)
body.appendChild(btext)
base.appendChild(body)

pos_tags = doc.createElement("pos_tags")
tagtext = doc.createTextNode(repr(tagged))
pos_tags.appendChild(tagtext)
base.appendChild(pos_tags)

xmlstring = doc.toxml()

# Read back tagged

doc2 = parseString(xmlstring)
el = doc2.getElementsByTagName("pos_tags")[0]
text = el.firstChild.nodeValue
tagged2 = eval(text)  # NB: eval() will execute arbitrary code in the string

print "Equal? ", tagged == tagged2
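
A note on the eval() call above: since the dumped list contains only literals (tuples, lists, strings), the standard library's ast.literal_eval can restore it without the risk of executing arbitrary code. A minimal sketch:

```python
import ast

text = "[[('Call', 'NNP'), ('me', 'PRP'), ('Ishmael', 'NNP'), ('.', '.')]]"
# literal_eval only accepts Python literal syntax, so untrusted input
# cannot run code the way it could with eval()
tagged2 = ast.literal_eval(text)
```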