I just finished a program that reads the words from two books and graphs their counts, with the x-axis being a word's count in one book and the y-axis its count in the second book. It works, but it's remarkably slow, and I was hoping to get some tips on how to optimize it. I think my main concern is building a dictionary of the words the books share and a dictionary of the words that appear in one book but not the other. This part adds a lot of runtime to the program, and I'd like to find a pythonic way to improve it. Here is the code:
import re # regular expressions
import io
import collections
from matplotlib import pyplot as plt
# xs=[x1,x2,...,xn]
# Number of occurrences of the word in book 1
# use
# ys=[y1,y2,...,yn]
# Number of occurrences of the word in book 2
# plt.plot(xs,ys)
# save as svg or pdf files
word_pattern = re.compile(r'\w+')
# 'with' ensures the file is closed even if there are failures
with io.open("swannsway.txt") as f:
    text = f.read()  # read as a single large string
book1 = word_pattern.findall(text) # pull out words
book1 = [w.lower() for w in book1 if len(w)>=3]
with io.open("moby_dick.txt") as f:
text = f.read() # read as a single large string
book2 = word_pattern.findall(text) # pull out words
book2 = [w.lower() for w in book2 if len(w)>=3]
#Convert these into relative percentages/total book length
wordcount_book1 = {}
for word in book1:
    if word in wordcount_book1:
        wordcount_book1[word] += 1
    else:
        wordcount_book1[word] = 1
'''
for word in wordcount_book1:
    wordcount_book1[word] /= len(wordcount_book1)
for word in wordcount_book2:
    wordcount_book2[word] /= len(wordcount_book2)
'''
wordcount_book2 = {}
for word in book2:
    if word in wordcount_book2:
        wordcount_book2[word] += 1
    else:
        wordcount_book2[word] = 1
common_words = {}
for i in wordcount_book1:
    for j in wordcount_book2:
        if i == j:
            common_words[i] = [wordcount_book1[i], wordcount_book2[j]]
            break
book_singles = {}
for i in wordcount_book1:
    if i not in common_words:
        book_singles[i] = [wordcount_book1[i], 0]
for i in wordcount_book2:
    if i not in common_words:
        book_singles[i] = [0, wordcount_book2[i]]
wordcount_book1 = collections.Counter(book1)
wordcount_book2 = collections.Counter(book2)
# how many words of different lengths?
word_length_book1 = collections.Counter([len(word) for word in book1])
word_length_book2 = collections.Counter([len(word) for word in book2])
print(wordcount_book1)
#plt.plot(list(word_length_book1.keys()),list(word_length_book1.values()), list(word_length_book2.keys()), list(word_length_book2.values()), 'bo')
for i in range(len(common_words)):
    plt.plot(list(common_words.values())[i][0], list(common_words.values())[i][1], 'bo', alpha=0.2)
for i in range(len(book_singles)):
    plt.plot(list(book_singles.values())[i][0], list(book_singles.values())[i][1], 'ro', alpha=0.2)
plt.ylabel('Swannsway')
plt.xlabel('Moby Dick')
plt.show()
#key:value
Answer 0 (score: 1)
Most of your code has only minor inefficiencies, which I've tried to address. Your biggest delay was plotting book_singles, which I believe I've fixed. Details: I changed this:
word_pattern = re.compile(r'\w+')
to:
word_pattern = re.compile(r'[a-zA-Z]{3,}')
since the words you want to keep are at least three letters long and contain no digits or underscores (both of which \w would match)! By building the minimum length into the pattern, we eliminate the need for this loop:
book1 = [w.lower() for w in book1 if len(w)>=3]
and the matching one for book2. Here:
book1 = word_pattern.findall(text) # pull out words
book1 = [w.lower() for w in book1 if len(w)>=3]
I moved the .lower() so we do it only once, rather than for every single word:
book1 = word_pattern.findall(text.lower()) # pull out words
book1 = [w for w in book1 if len(w) >= 3]
Since it's likely implemented in C, this should be a win. Then this:
wordcount_book1 = {}
for word in book1:
    if word in wordcount_book1:
        wordcount_book1[word] += 1
    else:
        wordcount_book1[word] = 1
I've switched to using a defaultdict, since you've already imported collections:
wordcount_book1 = collections.defaultdict(int)
for word in book1:
    wordcount_book1[word] += 1
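As an aside, collections.Counter (which your own code already uses further down) collapses this pattern to one line, and since Counter is a dict subclass the rest of the code works unchanged:

wordcount_book1 = collections.Counter(book1)  # same counts as the loop above
wordcount_book2 = collections.Counter(book2)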
For these loops:
common_words = {}
for i in wordcount_book1:
    for j in wordcount_book2:
        if i == j:
            common_words[i] = [wordcount_book1[i], wordcount_book2[j]]
            break
book_singles = {}
for i in wordcount_book1:
    if i not in common_words:
        book_singles[i] = [wordcount_book1[i], 0]
for i in wordcount_book2:
    if i not in common_words:
        book_singles[i] = [0, wordcount_book2[i]]
I rewrote the first loop, which was a disaster (it's quadratic: for every word in book 1 it scans every word in book 2, when a simple dict membership test would do), and then had it do double duty, since it was already doing most of the second loop's work:
common_words = {}
book_singles = {}
for i in wordcount_book1:
    if i in wordcount_book2:
        common_words[i] = [wordcount_book1[i], wordcount_book2[i]]
    else:
        book_singles[i] = [wordcount_book1[i], 0]
for i in wordcount_book2:
    if i not in common_words:
        book_singles[i] = [0, wordcount_book2[i]]
Finally, the plotting loops are terribly inefficient, both in the way they rebuild list(common_words.values()) and list(book_singles.values()) on every iteration, and in the way they plot one point at a time:
for i in range(len(common_words)):
    plt.plot(list(common_words.values())[i][0], list(common_words.values())[i][1], 'bo', alpha=0.2)
for i in range(len(book_singles)):
    plt.plot(list(book_singles.values())[i][0], list(book_singles.values())[i][1], 'ro', alpha=0.2)
I changed them to:
counts1, counts2 = zip(*common_words.values())
plt.plot(counts1, counts2, 'bo', alpha=0.2)
counts1, counts2 = zip(*book_singles.values())
plt.plot(counts1, counts2, 'ro', alpha=0.2)
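The zip(*...) idiom transposes the [count1, count2] pairs into two parallel tuples, so each plt.plot call draws all the points in one go. A tiny illustration:

pairs = [[3, 1], [5, 2], [7, 0]]   # [count in book 1, count in book 2]
xs, ys = zip(*pairs)               # xs == (3, 5, 7), ys == (1, 2, 0)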
The full reworked code, which leaves out the things you computed but didn't use in your example:
import re # regular expressions
import collections
from matplotlib import pyplot as plt
# xs=[x1,x2,...,xn]
# Number of occurrences of the word in book 1
# use
# ys=[y1,y2,...,yn]
# Number of occurrences of the word in book 2
# plt.plot(xs,ys)
# save as svg or pdf files
word_pattern = re.compile(r'[a-zA-Z]{3,}')
# with ensures closing of file even if there are failures
with open("swannsway.txt") as f:
text = f.read() # read as a single large string
book1 = word_pattern.findall(text.lower()) # pull out words
with open("moby_dick.txt") as f:
text = f.read() # read as a single large string
book2 = word_pattern.findall(text.lower()) # pull out words
# Convert these into relative percentages/total book length
wordcount_book1 = collections.defaultdict(int)
for word in book1:
    wordcount_book1[word] += 1
wordcount_book2 = collections.defaultdict(int)
for word in book2:
    wordcount_book2[word] += 1
common_words = {}
book_singles = {}
for i in wordcount_book1:
    if i in wordcount_book2:
        common_words[i] = [wordcount_book1[i], wordcount_book2[i]]
    else:
        book_singles[i] = [wordcount_book1[i], 0]
for i in wordcount_book2:
    if i not in common_words:
        book_singles[i] = [0, wordcount_book2[i]]
counts1, counts2 = zip(*common_words.values())
plt.plot(counts1, counts2, 'bo', alpha=0.2)
counts1, counts2 = zip(*book_singles.values())
plt.plot(counts1, counts2, 'ro', alpha=0.2)
plt.xlabel('Moby Dick')
plt.ylabel('Swannsway')
plt.show()
<强>输出强>
You may want to remove stop words to cut down the high-count outliers and bring out the more interesting data.
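A minimal sketch of that idea, using a small hand-picked stop list (a library such as NLTK would supply a fuller one; the words below are just illustrative):

STOP_WORDS = {'the', 'and', 'that', 'with', 'was', 'his', 'her', 'had', 'for', 'not', 'but'}
book1 = [w for w in book1 if w not in STOP_WORDS]
book2 = [w for w in book2 if w not in STOP_WORDS]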
Answer 1 (score: 0)
Here are some tips for optimizing your code.
Counting the occurrences of words. Use the Counter class from the collections module (see this post):
from collections import Counter
wordcount_book1 = Counter(book1)
wordcount_book2 = Counter(book2)
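A Counter also makes the results easy to inspect; for instance, most_common gives the top entries:

print(wordcount_book1.most_common(10))  # the ten most frequent words and their counts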
Finding the common and unique words. Use the set class. The union gives all the words, the intersection gives the common words, and the differences give the words unique to each book.
book1_words = set(wordcount_book1.keys())
book2_words = set(wordcount_book2.keys())
all_words = book1_words | book2_words
common_words = book1_words & book2_words
book_singles = [book1_words - common_words, book2_words - common_words]
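If you then need the [count-in-book1, count-in-book2] pairs that the original code builds, these sets combine naturally with the Counters above. As a sketch (the names common_pairs and single_pairs are mine; note that a Counter returns 0 for a missing key, which handles the singles):

common_pairs = {w: [wordcount_book1[w], wordcount_book2[w]] for w in common_words}
single_pairs = {w: [wordcount_book1[w], wordcount_book2[w]]
                for w in book_singles[0] | book_singles[1]}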
Counting word lengths. First compute the length of every word, then multiply it by the number of times the word occurs in each book:
word_length = {w: len(w) for w in all_words}
word_length_book1 = {w: word_length[w]*wordcount_book1[w] for w in book1_words}
word_length_book2 = {w: word_length[w]*wordcount_book2[w] for w in book2_words}
These plots could probably be done without loops, but unfortunately I don't understand exactly what you want to plot.
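If the goal is the same scatter plot as in the question (one point per word: count in book 1 on one axis, count in book 2 on the other), a loop-free sketch built on the sets above (and the plt import from the question) might look like this:

xs = [wordcount_book1[w] for w in common_words]
ys = [wordcount_book2[w] for w in common_words]
plt.plot(xs, ys, 'bo', alpha=0.2)  # blue: words that appear in both books
plt.show()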