Lemmatizing a txt file and replacing only the lemmatized words

Date: 2018-03-17 20:45:29

Tags: python nltk lemmatization

I can't figure out how to lemmatize the words in a txt file. I've got as far as listing the words, but I'm not sure how to lemmatize them after that.

Here's what I have:

import nltk, re
nltk.download('wordnet')
from nltk.stem.wordnet import WordNetLemmatizer

def lemfile():
    f = open('1865-Lincoln.txt', 'r')
    text = f.read().lower()
    f.close()
    text = re.sub(r"[^a-z ']+", " ", text)
    words = text.split()
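For reference, on a short sample string (an illustrative input, not taken from the file) the cleanup step above yields a plain word list:

```python
import re

text = "Fondly do we hope -- fervently do we pray!"
# keep only lowercase letters, spaces, and apostrophes; everything else becomes a space
cleaned = re.sub(r"[^a-z ']+", " ", text.lower())
words = cleaned.split()
# words == ['fondly', 'do', 'we', 'hope', 'fervently', 'do', 'we', 'pray']
```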

3 Answers:

Answer 0 (score: 1)

Initialize a WordNetLemmatizer object and lemmatize each word in each line. You can use the fileinput module to do in-place file I/O.

# https://stackoverflow.com/a/5463419/4909087
import fileinput
from nltk.stem.wordnet import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
for line in fileinput.input('1865-Lincoln.txt', inplace=True, backup='.bak'):
    line = ' '.join(
        [lemmatizer.lemmatize(w) for w in line.rstrip().split()]
    )
    # overwrites current `line` in file
    print(line)

While in use, fileinput.input redirects stdout to the opened file, so each print() writes the line back into it.
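The redirection itself has nothing to do with NLTK; a minimal sketch with a throwaway file (the file name and uppercase transform here are made up for the demo) shows each printed line replacing the original:

```python
import fileinput
import os
import tempfile

# create a small demo file (a stand-in for 1865-Lincoln.txt)
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('Four score\nand seven years\n')

# while the loop runs, print() goes into the file, not the terminal
for line in fileinput.input(path, inplace=True, backup='.bak'):
    print(line.rstrip().upper())

with open(path) as f:
    rewritten = f.read()
# rewritten == 'FOUR SCORE\nAND SEVEN YEARS\n'
```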

Answer 1 (score: 0)

You can also try pywsd's wrapper around NLTK's WordNetLemmatizer, specifically https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L129

Install:

pip install -U nltk
python -m nltk.downloader popular
pip install -U pywsd

Code:

>>> from pywsd.utils import lemmatize_sentence
>>> lemmatize_sentence('These are foo bar sentences.')
['these', 'be', 'foo', 'bar', 'sentence', '.']
>>> lemmatize_sentence('These are foo bar sentences running.')
['these', 'be', 'foo', 'bar', 'sentence', 'run', '.']

Specifically for your question:

from __future__ import print_function
from pywsd.utils import lemmatize_sentence

with open('file.txt') as fin, open('outputfile.txt', 'w') as fout:
    for line in fin:
        print(' '.join(lemmatize_sentence(line.strip())), file=fout)

Answer 2 (score: 0)

Lemmatizing a txt file and replacing only the lemmatized words can be done like this --

import nltk
from nltk.corpus import stopwords
from pywsd.utils import lemmatize_sentence

with open('/home/rahul/Desktop/align.txt', 'r') as f:
    f1 = f.read()

f2 = f1.split()
en_stops = set(stopwords.words('english'))
hu_stops = set(stopwords.words('hungarian'))

all_words = f2
# if lemmatization of a single string is required, uncomment the line below
# all_words = 'this is coming rahul schooling met happiness making'.split()

for line in all_words:
    new_data = ' '.join(lemmatize_sentence(line))
    print(new_data)

PS - Adjust the path as per your needs. Hope this helps!
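To address the "replacing only lemmatized words" part of the title without pulling NLTK into the demo, the substitution step can be sketched with a hypothetical lemma map (a real run would fill it with something like `{w: lemmatizer.lemmatize(w) for w in words}`):

```python
# hypothetical lemma map standing in for WordNetLemmatizer output
lemmas = {'sentences': 'sentence', 'running': 'run'}

def replace_lemmatized(line, lemmas):
    # swap in the lemma where one is known, keep the original word otherwise
    return ' '.join(lemmas.get(w, w) for w in line.split())

result = replace_lemmatized('these sentences keep running', lemmas)
# result == 'these sentence keep run'
```

Words without an entry in the map pass through unchanged, which is what keeps the rest of the text intact.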