How to calculate average word length and sentence length from a text file in Python 2.7

Asked: 2015-02-04 23:44:19

Tags: python python-2.7

I've been stuck on this for the past two weeks, and I was wondering if you could help.

I'm trying to calculate the average word length and average sentence length from a text file. I can't seem to get my head around it. I've just started using functions that are called from a main file.

My main file looks like this:

import Consonants
import Vowels
import Sentences
import Questions
import Words

""" Vowels """


text = Vowels.fileToString("test.txt")    
x = Vowels.countVowels(text)

print str(x) + " Vowels"

""" Consonats """

text = Consonants.fileToString("test.txt")    
x = Consonants.countConsonants(text)


print str(x) + " Consonants"

""" Sentences """


text = Sentences.fileToString("test.txt")    
x = Sentences.countSentences(text)
print str(x) + " Sentences"


""" Questions """

text = Questions.fileToString("test.txt")    
x = Questions.countQuestions(text)

print str(x) + " Questions"

""" Words """
text = Words.fileToString("test.txt")    
x = Words.countWords(text)

print str(x) + " Words"

One of my function files looks like this:

def fileToString(filename):
    myFile = open(filename, "r")
    myText = ""
    for ch in myFile:
        myText = myText + ch
    return myText

def countWords(text):
    vcount = 0
    spaces = [' ']
    for letter in text:
        if (letter in spaces):
            vcount = vcount + 1
    return vcount

I was wondering how I can calculate the word lengths with a function that I import? I tried using some other threads but they didn't work for me.

4 Answers:

Answer 0 (score: 1)

I'll try to give you an algorithm:

  • Read the file, split() the text, loop over the words with a for loop and enumerate(), and check how they end with endswith(). Like:

for ind, word in enumerate(readlines.split()):
    if word.endswith("?"):
        .....
    if word.endswith("!"):
        .....

Then put them into a dict, and use a while loop over the index (ind) values:

obj = "Hey there! how are you? I hope you are ok."
dict1 = {}
for ind,word in enumerate(obj.split()):
    dict1[ind]=word

x = 0
while x<len(dict1):
    if "?" in dict1[x]:
        print (list(dict1.values())[:x+1])
    x += 1

Output:

>>> 
['Hey', 'there!', 'how', 'are', 'you?']
>>> 

You see, I've actually cut off the words up to the point where the ? is reached. So I now have one sentence in a list (you can change it to check for ! as well). I can get the length of each element, and the rest is simple math: find the sum of the elements' lengths, then divide it by the length of that list. In theory, that gives you the average.
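The averaging step described above can be sketched like this, using the word list from the sample output (the variable names here are my own, not from the answer's code):

```python
# Average word length for one sentence, following the "sum of element
# lengths divided by list length" idea described above.
sentence = ['Hey', 'there!', 'how', 'are', 'you?']

total = sum(len(word) for word in sentence)   # 3 + 6 + 3 + 3 + 4 = 19
average = float(total) / len(sentence)        # float() avoids integer division on Python 2.7

print(average)
```

Note that the punctuation is still attached to the words here, so "there!" counts as 6 characters.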

Remember, this is just the algorithm. You will have to adapt this code to fit your data; the key points are enumerate(), endswith(), and dict.

Answer 1 (score: 0)

Honestly, when you're matching things like words and sentences, you're better off not relying on str.split alone; learn and use regular expressions so you catch all the corner cases.

#test.txt
Here is some text. It is written on more than one line, and will have several sentences.

Some sentences will have their OWN line!

It will also have a question. Is this the question? I think it is.

#!/usr/bin/python

# Without this, the sums below would use integer division on Python 2.7
from __future__ import division
import re

with open('test.txt') as infile:
    data = infile.read()

sentence_pat = re.compile(r"""
    \b                # sentences will start with a word boundary
    ([^.!?]+[.!?]+)   # continue with one or more non-sentence-ending
                      #    characters, followed by one or more sentence-
                      #    ending characters.""", re.X)

word_pat = re.compile(r"""
    (\S+)             # Words are just groups of non-whitespace together
    """, re.X)

sentences = sentence_pat.findall(data)
words = word_pat.findall(data)

average_sentence_length = sum([len(sentence) for sentence in sentences])/len(sentences)
average_word_length = sum([len(word) for word in words])/len(words)

Sample output:

>>> sentences
['Here is some text.',
 'It is written on more than one line, and will have several sentences.',
 'Some sentences will have their OWN line!',
 'It will also have a question.',
 'Is this the question?',
 'I think it is.']

>>> words
['Here',
 'is',
 'some',
 'text.',
 'It',
 'is',
 ... ,
 'I',
 'think',
 'it',
 'is.']

>>> average_sentence_length
31.833333333333332

>>> average_word_length
4.184210526315789
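The averages above measure sentence length in characters. If what you actually want is words per sentence, the same sentence list works; here is a sketch with the list hard-coded from the sample output above (`word_counts` and `avg_words_per_sentence` are my own names):

```python
# Average number of words per sentence, using the sentence list that the
# regex above produced (hard-coded here for illustration).
sentences = [
    'Here is some text.',
    'It is written on more than one line, and will have several sentences.',
    'Some sentences will have their OWN line!',
    'It will also have a question.',
    'Is this the question?',
    'I think it is.',
]

word_counts = [len(s.split()) for s in sentences]   # words in each sentence
avg_words_per_sentence = float(sum(word_counts)) / len(sentences)

print(avg_words_per_sentence)
```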

Answer 2 (score: 0)

To answer this part of the question:

  "I was wondering how I can calculate the word lengths with a function that I import?"

def avg_word_len(filename):
    word_lengths = []
    for line in open(filename).readlines():
        word_lengths.extend([len(word) for word in line.split()])
    # float() so Python 2.7 doesn't truncate the average with integer division
    return sum(word_lengths) / float(len(word_lengths))

Note: this doesn't account for things like . and ! at the ends of words, etc.
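One minimal way to handle that caveat is to strip punctuation before measuring, e.g. with `string.punctuation` from the standard library. This is my own sketch, not part of the answer, and it assumes punctuation should not count toward word length:

```python
import string

def avg_clean_word_len(text):
    # Strip leading/trailing punctuation so "text." counts as 4 letters, not 5.
    words = [word.strip(string.punctuation) for word in text.split()]
    words = [w for w in words if w]   # drop tokens that were pure punctuation
    return sum(len(w) for w in words) / float(len(words))

print(avg_clean_word_len("Hello, world! How are you?"))
```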

Answer 3 (score: 0)

This doesn't apply if you want to build the script yourself, but I would use NLTK. It has some really nice tools for working with very long texts.

This page gives a cheat sheet for nltk. You should be able to import the text, get the sentences as a large list of lists, and get lists of n-grams (words of length n). Then you can calculate the averages.
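Even without pulling in NLTK, the n-gram idea from that cheat sheet can be sketched in a few lines of plain Python (the helper name `ngrams` is mine, not NLTK's API):

```python
def ngrams(words, n):
    # Slide a window of size n across the word list, yielding one tuple per position.
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

words = "the quick brown fox".split()
print(ngrams(words, 2))
# bigrams: ('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')
```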