How do I remove the list special characters ("()", "'", ",") from the output of Python bigrams/trigrams?

Date: 2018-08-30 14:22:48

Tags: python nltk special-characters

I have written code that computes bigram/trigram frequencies from a text input using NLTK. The problem I am facing is that, because the output comes back as Python lists, it contains the list-specific characters ("()", "'", ","). I plan to export the results to a csv file, so I would like to strip these special characters in the code itself. How should I edit my code to do that?

Input code:

import nltk
from nltk import word_tokenize, pos_tag
from nltk.collocations import *
from itertools import *
from nltk.util import ngrams
from nltk.corpus import stopwords

corpus = '''The pure amnesia of her face,
newborn. I looked so far into her that, for a while, looked so far into her that, for a while  looked so far into her that, for a while looked so far into her that, for a while the visual 
held no memory. Little by little, I returned to myself, waking to nurse the visual held no  memory. Little by little, I returned to myself, waking to nurse
'''
s_corpus = corpus.lower()

stop_words = set(stopwords.words('english'))

tokens = nltk.word_tokenize(s_corpus)
tokens = [word for word in tokens if word not in stop_words]

c_tokens = [''.join(e for e in string if e.isalnum()) for string in tokens]
c_tokens = [x for x in c_tokens if x]

bgs_2 = nltk.bigrams(c_tokens)
bgs_3 = nltk.trigrams(c_tokens)

fdist = nltk.FreqDist(bgs_3)

tmp = list()
for k,v in fdist.items():
    tmp.append((v,k))
tmp = sorted(tmp, reverse=True)

for kk, vv in tmp:
    print(vv, kk)

Current output:

('looked', 'far', 'looked') 3
('far', 'looked', 'far') 3
('visual', 'held', 'memory') 2
('returned', 'waking', 'nurse') 2

Expected output:

looked far looked, 3
far looked far, 3
visual held memory, 2
returned waking nurse, 2

Thanks for your help.

2 Answers:

Answer 0 (score: 2)

A better question to ask is: what actually are the ("()", "'", ",") in the ngrams output?

>>> from nltk import ngrams
>>> from nltk import word_tokenize

# Split a sentence into a list of "words"
>>> word_tokenize("This is a foo bar sentence")
['This', 'is', 'a', 'foo', 'bar', 'sentence']
>>> type(word_tokenize("This is a foo bar sentence"))
<class 'list'>

# Extract bigrams.
>>> list(ngrams(word_tokenize("This is a foo bar sentence"), 2))
[('This', 'is'), ('is', 'a'), ('a', 'foo'), ('foo', 'bar'), ('bar', 'sentence')]

# Okay, so the output is a list, no surprise.
>>> type(list(ngrams(word_tokenize("This is a foo bar sentence"), 2)))
<class 'list'>

But what type is ('This', 'is')?

>>> list(ngrams(word_tokenize("This is a foo bar sentence"), 2))[0]
('This', 'is')
>>> first_thing_in_output = list(ngrams(word_tokenize("This is a foo bar sentence"), 2))[0]
>>> type(first_thing_in_output)
<class 'tuple'>

Ah, it's a tuple; see https://realpython.com/python-lists-tuples/

What happens when you print a tuple?

>>> print(first_thing_in_output)
('This', 'is')

What happens if you cast it to str()?

>>> print(str(first_thing_in_output))
('This', 'is')

But I want the output This is rather than ('This', 'is'), so I will use the str.join() function; see https://www.geeksforgeeks.org/join-function-python/

>>> print(' '.join((first_thing_in_output)))
This is

At this point it is a good idea to really walk through a tutorial of basic Python types to understand what is going on. Additionally, it helps to understand how "container" types work, e.g. https://github.com/usaarhat/pywarmups/blob/master/session2.md


There are quite a few issues with the code in the original post.

I suppose the goals of the code are to:

  • tokenize the text and remove stopwords
  • extract ngrams (without the stopwords)
  • print out their string forms and their counts

The tricky part is that stopwords.words('english') does not contain punctuation, so you end up with strange ngrams that contain punctuation:

from nltk import word_tokenize
from nltk.util import ngrams
from nltk.corpus import stopwords

text = '''The pure amnesia of her face,
newborn. I looked so far into her that, for a while, looked so far into her that, for a while  looked so far into her that, for a while looked so far into her that, for a while the visual 
held no memory. Little by little, I returned to myself, waking to nurse the visual held no  memory. Little by little, I returned to myself, waking to nurse
'''

stoplist = set(stopwords.words('english'))

tokens = [token for token in word_tokenize(text) if token not in stoplist]

list(ngrams(tokens, 2))

[输出]:

[('The', 'pure'),
 ('pure', 'amnesia'),
 ('amnesia', 'face'),
 ('face', ','),
 (',', 'newborn'),
 ('newborn', '.'),
 ('.', 'I'),
 ('I', 'looked'),
 ('looked', 'far'),
 ('far', ','),
 (',', ','), ...]

Perhaps you would like to extend the stoplist with the punctuation symbols, e.g.

from string import punctuation
from nltk import word_tokenize
from nltk.util import ngrams
from nltk.corpus import stopwords

text = '''The pure amnesia of her face,
newborn. I looked so far into her that, for a while, looked so far into her that, for a while  looked so far into her that, for a while looked so far into her that, for a while the visual 
held no memory. Little by little, I returned to myself, waking to nurse the visual held no  memory. Little by little, I returned to myself, waking to nurse
'''

stoplist = set(stopwords.words('english') + list(punctuation))

tokens = [token for token in word_tokenize(text) if token not in stoplist]

list(ngrams(tokens, 2))

[输出]:

[('The', 'pure'),
 ('pure', 'amnesia'),
 ('amnesia', 'face'),
 ('face', 'newborn'),
 ('newborn', 'I'),
 ('I', 'looked'),
 ('looked', 'far'),
 ('far', 'looked'),
 ('looked', 'far'), ...]

Then you realize that a token like I should be a stopword but still shows up in the ngram list. That is because the list from stopwords.words('english') is lowercased, e.g.

>>> stopwords.words('english')

[输出]:

['i',
 'me',
 'my',
 'myself',
 'we',
 'our',
 'ours',
 'ourselves',
 'you',
 "you're", ...]

So when you check whether a token is in the stoplist, you should lowercase the token as well. (Avoid lowercasing the sentence before word_tokenize, because word_tokenize may take hints from capitalization.) Thus:

from string import punctuation
from nltk import word_tokenize
from nltk.util import ngrams
from nltk.corpus import stopwords

text = '''The pure amnesia of her face,
newborn. I looked so far into her that, for a while, looked so far into her that, for a while  looked so far into her that, for a while looked so far into her that, for a while the visual 
held no memory. Little by little, I returned to myself, waking to nurse the visual held no  memory. Little by little, I returned to myself, waking to nurse
'''

stoplist = set(stopwords.words('english') + list(punctuation))

tokens = [token for token in word_tokenize(text) if token.lower() not in stoplist]

list(ngrams(tokens, 2))

[输出]:

[('pure', 'amnesia'),
 ('amnesia', 'face'),
 ('face', 'newborn'),
 ('newborn', 'looked'),
 ('looked', 'far'),
 ('far', 'looked'),
 ('looked', 'far'),
 ('far', 'looked'),
 ('looked', 'far'),
 ('far', 'looked'), ...]

Now the ngrams look like they achieve the goals of:

  • tokenizing the text and removing stopwords
  • extracting ngrams (without the stopwords)

Then, for the last part where you want to output the ngrams to a file in sorted order, you can use FreqDist.most_common(), which lists the entries in descending order, e.g.

from string import punctuation
from nltk import word_tokenize
from nltk.util import ngrams
from nltk.corpus import stopwords
from nltk import FreqDist

text = '''The pure amnesia of her face,
newborn. I looked so far into her that, for a while, looked so far into her that, for a while  looked so far into her that, for a while looked so far into her that, for a while the visual 
held no memory. Little by little, I returned to myself, waking to nurse the visual held no  memory. Little by little, I returned to myself, waking to nurse
'''

stoplist = set(stopwords.words('english') + list(punctuation))

tokens = [token for token in word_tokenize(text) if token.lower() not in stoplist]

FreqDist(ngrams(tokens, 2)).most_common()

[输出]:

[(('looked', 'far'), 4),
 (('far', 'looked'), 3),
 (('visual', 'held'), 2),
 (('held', 'memory'), 2),
 (('memory', 'Little'), 2),
 (('Little', 'little'), 2),
 (('little', 'returned'), 2),
 (('returned', 'waking'), 2),
 (('waking', 'nurse'), 2),
 (('pure', 'amnesia'), 1),
 (('amnesia', 'face'), 1),
 (('face', 'newborn'), 1),
 (('newborn', 'looked'), 1),
 (('far', 'visual'), 1),
 (('nurse', 'visual'), 1)]

(See also: Difference between Python's collections.Counter and nltk.probability.FreqDist)

Finally, to print the results to a file, you should really use a context manager; see http://eigenhombre.com/introduction-to-context-managers-in-python.html

with open('bigrams-list.tsv', 'w') as fout:
    for bg, count in FreqDist(ngrams(tokens, 2)).most_common():
        print('\t'.join([' '.join(bg), str(count)]), end='\n', file=fout)

[bigrams-list.tsv]:

looked far  4
far looked  3
visual held 2
held memory 2
memory Little   2
Little little   2
little returned 2
returned waking 2
waking nurse    2
pure amnesia    1
amnesia face    1
face newborn    1
newborn looked  1
far visual  1
nurse visual    1
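If you want a comma-separated .csv rather than a tab-separated file, the stdlib csv module handles the separators and quoting for you. Here is a minimal, stdlib-only sketch of the same export in the OP's trigram format; the hardcoded token list and the filename trigrams-list.csv are hypothetical stand-ins for the cleaned tokens produced above:

```python
import csv
from collections import Counter

# Hypothetical stand-in for tokens that have already been tokenized,
# filtered against the stoplist, and stripped of punctuation.
tokens = ['looked', 'far', 'looked', 'far', 'looked', 'far',
          'visual', 'held', 'memory', 'returned', 'waking', 'nurse']

# Trigrams via zip; nltk is not needed for this step.
trigrams = zip(tokens, tokens[1:], tokens[2:])
counts = Counter(trigrams).most_common()  # descending by count

# csv.writer inserts the commas and escapes fields if necessary,
# unlike manual string concatenation.
with open('trigrams-list.csv', 'w', newline='') as fout:
    writer = csv.writer(fout)
    for tg, count in counts:
        writer.writerow([' '.join(tg), count])
```

Each row then comes out as e.g. looked far looked,2 with no tuple syntax left over.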

Food for thought

Now that you see this strange bigram Little little, does it make sense?

It is a by-product of removing by from

Little by little

So now, depending on the final task for which you are extracting the ngrams, you may not really want to remove stopwords from the list at all.

Answer 1 (score: 0)

So, to simply "fix" your output, use this to print your data:

# Note: each entry in tmp is (count, ngram), so unpack in that order.
for count, ngram in tmp:
    print("%s, %d" % (" ".join(ngram), count))

If you want to parse it into a csv, though, you should collect your output in a different format.

Currently you are building a list whose entries mix ngram tuples and numbers. Try collecting the data as a list of lists instead, one inner list per row of values. That way you can write it directly to a csv file.

Take a look here: Create a .csv file with values from a Python list
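A minimal sketch of that reshaping step, using a small hypothetical sample of the OP's tmp structure (each entry a (count, ngram) pair, already sorted descending) and a hypothetical output filename:

```python
import csv

# Hypothetical sample of the OP's tmp: (count, ngram-tuple) pairs.
tmp = [(3, ('looked', 'far', 'looked')),
       (3, ('far', 'looked', 'far')),
       (2, ('visual', 'held', 'memory')),
       (2, ('returned', 'waking', 'nurse'))]

# Reshape into one flat row per ngram: [ngram-as-string, count].
rows = [[' '.join(ngram), count] for count, ngram in tmp]

# Each inner list maps directly onto one csv row.
with open('trigram-freqs.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)
```

This produces exactly the expected output from the question, e.g. looked far looked,3 on the first line.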