I am currently writing a Python program to count anglicisms in German text. I want to know how often they occur over the whole text. For this I made a list of all the anglicisms in German, which looks like this:
abchecken
abchillen
abdancen
abdimmen
abfall-container
abflug-terminal
the list goes on ...
Then I take the intersection between this list and the text to be analyzed, but that only gives me a list of all the words that appear in both, e.g.: Anglicisms : 4:{'abdancen', 'abchecken', 'terminal'}
What I would really like is to output how many times these words occur (preferably sorted by frequency), e.g.:
Anglicisms: abdancen(5), abchecken(2), terminal(1)
This is the code I have so far:
# counters to zero
lines, blanklines, sentences, words = 0, 0, 0, 0
print('-' * 50)
while True:
    try:
        # def text file
        filename = input("Please enter filename: ")
        textf = open(filename, 'r')
        break
    except IOError:
        print('Cannot open file "%s"' % filename)
# reads one line at a time
for line in textf:
    print(line)  # test
    lines += 1
    if line.startswith('\n'):
        blanklines += 1
    else:
        # a sentence ends with . or ! or ?
        # count these characters
        sentences += line.count('.') + line.count('!') + line.count('?')
        # create a list of words
        # use None to split at any whitespace regardless of length
        tempwords = line.split(None)
        print(tempwords)
        # total words
        words += len(tempwords)
# anglicisms
words1 = set(open(filename).read().split())
words2 = set(open("anglicisms.txt").read().split())
duplicates = words1.intersection(words2)
textf.close()
print('-' * 50)
print("Lines : ", lines)
print("Blank lines : ", blanklines)
print("Sentences : ", sentences)
print("Words : ", words)
print("Anglicisms : %d:%s" % (len(duplicates), duplicates))
The second problem I have is that it does not count anglicisms hidden inside other words. For example, if "big" is in the list and "Bigfoot" appears in the text, that occurrence is ignored. How can I fix this?
Kind regards from Switzerland!
Answer 0 (score: 1)
I would do something like this:
from collections import Counter

anglicisms = open("anglicisms.txt").read().split()
matches = []
for line in textf:
    matches.extend([word for word in line.split() if word in anglicisms])
anglicismsInText = Counter(matches)
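If the goal is exactly the frequency-sorted output from the question, `Counter.most_common()` gives it directly. A minimal self-contained sketch; the sample word list and text lines here are made up for illustration and stand in for the file contents:

```python
from collections import Counter

# hypothetical data standing in for anglicisms.txt and the input file
anglicisms = ["abdancen", "abchecken", "terminal"]
text_lines = [
    "abdancen abchecken abdancen",
    "terminal abdancen abchecken abdancen abdancen",
]

matches = []
for line in text_lines:
    matches.extend(word for word in line.split() if word in anglicisms)

counts = Counter(matches)
# most_common() yields (word, count) pairs sorted by descending frequency
formatted = ", ".join("%s(%d)" % (w, n) for w, n in counts.most_common())
print("Anglicisms:", formatted)  # Anglicisms: abdancen(5), abchecken(2), terminal(1)
```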
The second problem I find a bit harder. Taking your example, "big" is an anglicism and "Bigfoot" should match, but what about a word that merely contains "big" in the middle, or one that ends in "big"? Should it match every time the anglicism is found anywhere inside a string? Only at the start? At the end? Once you know that, you should build a regular expression that matches it.
Edit: to match strings that begin with an anglicism:
def derivatesFromAnglicism(word):
    return any(word.startswith(a) for a in anglicisms)

matches.extend([word for word in line.split() if derivatesFromAnglicism(word)])
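One caveat with the prefix check: `str.startswith` is case-sensitive, so "Bigfoot" would not match a lowercase list entry "big". A small sketch with made-up data, adding lowercasing on both sides (my assumption, not part of the original answer):

```python
from collections import Counter

# hypothetical anglicism list for illustration
anglicisms = ["big", "chillen"]

def derivates_from_anglicism(word):
    # lowercase both sides so "Bigfoot" matches the list entry "big"
    return any(word.lower().startswith(a.lower()) for a in anglicisms)

line = "Bigfoot war chillen beim big Event"
matches = [word for word in line.split() if derivates_from_anglicism(word)]
print(Counter(matches))
```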
Answer 1 (score: 0)
This solves your first problem:
anglicisms = ["a", "b", "c"]
words = ["b", "b", "b", "a", "a", "b", "c", "a", "b", "c", "c", "c", "c"]
# in Python 3, map() returns an iterator, so wrap it in list() before sorting
results = list(map(lambda angli: (angli, words.count(angli)), anglicisms))
results.sort(key=lambda p: -p[1])
The result looks like this:
[('b', 5), ('c', 5), ('a', 3)]
For your second problem, I think the right approach is to use regular expressions.
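As a sketch of that regex route (the pattern choices here are my assumptions, e.g. that the anglicism may appear anywhere inside a word): `re.escape` keeps list entries with special characters safe, the alternation joins the whole list into one pattern, and omitting `\b` word boundaries makes "big" also match inside "Bigfoot".

```python
import re
from collections import Counter

# hypothetical anglicism list and text for illustration
anglicisms = ["big", "terminal"]
text = "Bigfoot sah am Abflug-Terminal einen big Auftritt"

# one alternation pattern; no \b boundaries, so substrings inside words match too
pattern = re.compile("|".join(re.escape(a) for a in anglicisms), re.IGNORECASE)
counts = Counter(m.group(0).lower() for m in pattern.finditer(text))
print(counts)
```

Adding `\b` around each alternative would switch this back to whole-word matching.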