Reading a text file and splitting it into single words in Python

Asked: 2013-06-04 15:50:32

Tags: python string split

So I have this text file made up of numbers and words, for example like 09807754 18 n 03 aristocrat 0 blue_blood 0 patrician, and I want to split it so that each word or number comes up on a new line.

A whitespace separator would be ideal, since I would like words with dashes to stay connected.

This is what I have so far:

f = open('words.txt', 'r')

for word in f:

    print(word)

Not sure how to go on from here; I would like this to be the output:

09807754
18
n
3
aristocrat
...

6 Answers:

Answer 0: (score: 109)

If you do not have quotes around your data and you just want one word at a time (ignoring the meaning of spaces vs line breaks in the file):

with open('words.txt','r') as f:
    for line in f:
        for word in line.split():
            print(word)

If you want a nested list of the words in each line of the file (for example, to create a matrix of rows and columns from a file):

with open("words.txt") as f:
    [line.split() for line in f]

Or, if you want to flatten the file into a single flat list of words in the file, you can do something like this:

with open('words.txt') as f:
    [word for line in f for word in line.split()]
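As a quick check of the no-argument split() behaviour these snippets rely on, here is the question's sample line run through it; the underscore word stays intact because split() only breaks on runs of whitespace:

```python
# str.split() with no arguments splits on any run of whitespace,
# so underscore (or dash) words from the sample line stay whole.
line = "09807754 18 n 03 aristocrat 0 blue_blood 0 patrician"
words = line.split()
print(words)
# → ['09807754', '18', 'n', '03', 'aristocrat', '0', 'blue_blood', '0', 'patrician']
```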

If you want a regex solution:

import re
with open("words.txt") as f:
    for line in f:
        for word in re.findall(r'\w+', line):
            print(word)  # word by word

Or, if you want it to be a line-by-line generator with a regex:

with open("words.txt") as f:
    words = (word for line in f for word in re.findall(r'\w+', line))
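To see the generator form in action without a real file on disk, io.StringIO can stand in for the open file handle (a self-contained sketch, not part of the original answer; the sample text is the question's data):

```python
import io
import re

# io.StringIO stands in for the open file; the generator yields
# words lazily, one regex match at a time.
f = io.StringIO("09807754 18 n 03\naristocrat 0 blue_blood 0 patrician\n")
words = (word for line in f for word in re.findall(r'\w+', line))
print(next(words))   # → '09807754'
print(list(words))   # the remaining words
```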

Answer 1: (score: 17)

f = open('words.txt')
for word in f.read().split():
    print(word)

Answer 2: (score: 10)

As a supplement, if you are reading a very large file and do not want to read all of its content into memory at once, you might consider using a buffer and yielding each word back:

def read_words(inputfile):
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(10240)
            if not buf:
                break

            # make sure we end on a space (word boundary)
            while not str.isspace(buf[-1]):
                ch = f.read(1)
                if not ch:
                    break
                buf += ch

            words = buf.split()
            for word in words:
                yield word

if __name__ == "__main__":
    for word in read_words('./very_large_file.txt'):
        process(word)
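A self-contained smoke test of the buffered approach (the function is restated here so the snippet runs on its own; the file contents and the deliberately tiny buffer size are made up to force the word-boundary handling):

```python
import os
import tempfile

def read_words(inputfile, bufsize=10240):
    # Same buffering idea as above: read a chunk, then extend it
    # character by character until it ends on whitespace, so no word
    # is cut in half at a chunk boundary.
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(bufsize)
            if not buf:
                break
            while not buf[-1].isspace():
                ch = f.read(1)
                if not ch:
                    break  # end of file
                buf += ch
            for word in buf.split():
                yield word

# Write a small file, then stream it back with a tiny buffer.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write("alpha beta-gamma\ndelta")
    path = tmp.name

print(list(read_words(path, bufsize=4)))  # → ['alpha', 'beta-gamma', 'delta']
os.remove(path)
```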

Answer 3: (score: 3)

What you can do is use nltk to tokenize the words and then store all of the words in a list; this is what I did. If you do not know nltk: it stands for Natural Language Toolkit and is used to process natural language. Here are some resources if you want to get started: [http://www.nltk.org/book/]

import nltk 
from nltk.tokenize import word_tokenize 
file = open("abc.txt",newline='')
result = file.read()
words = word_tokenize(result)
for i in words:
    print(i)

The output will be:

09807754
18
n
03
aristocrat
0
blue_blood
0
patrician

Answer 4: (score: 0)

Here is my completely functional approach, which avoids having to read and split lines. It makes use of the itertools module:

Note: for Python 3, replace itertools.imap with map

import itertools

def readwords(mfile):
    byte_stream = itertools.groupby(
        itertools.takewhile(lambda c: bool(c),
            itertools.imap(mfile.read,
                itertools.repeat(1))), str.isspace)

    return ("".join(group) for pred, group in byte_stream if not pred)

Sample usage:

>>> import sys
>>> for w in readwords(sys.stdin):
...     print (w)
... 
I really love this new method of reading words in python
I
really
love
this
new
method
of
reading
words
in
python

It's soo very Functional!
It's
soo
very
Functional!
>>>

I guess in your case, this would be the way to use the function:

with open('words.txt', 'r') as f:
    for word in readwords(f):
        print(word)
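The same grouping trick can also be exercised without stdin by feeding it an in-memory stream (a self-contained sketch in Python 3 spelling, with map in place of itertools.imap; io.StringIO is used here only for the demo):

```python
import io
import itertools

def readwords(mfile):
    # Read one character at a time until EOF, then group consecutive
    # characters by whether they are whitespace; the non-whitespace
    # groups joined back together are the words.
    byte_stream = itertools.groupby(
        itertools.takewhile(bool, map(mfile.read, itertools.repeat(1))),
        str.isspace)
    return ("".join(group) for pred, group in byte_stream if not pred)

print(list(readwords(io.StringIO("I really love this"))))
# → ['I', 'really', 'love', 'this']
```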

Answer 5: (score: 0)

with open(filename) as file:
    words = file.read().split()

This gives a list of all the words in the file.

import re
with open(filename) as file:
    words = re.findall(r"([a-zA-Z\-]+)", file.read())
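Note that this character class keeps hyphenated words together but silently drops digits, which may or may not be what you want for data like the question's. A quick illustration (the input string here is made up to resemble the question's data):

```python
import re

# [a-zA-Z\-]+ matches runs of letters and hyphens only, so the
# numeric tokens such as the leading IDs disappear from the result.
text = "09807754 blue-blood aristocrat 0 patrician"
print(re.findall(r"([a-zA-Z\-]+)", text))
# → ['blue-blood', 'aristocrat', 'patrician']
```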