Wordcloud of bigrams using Python

Time: 2018-03-28 14:39:45

Tags: python word-cloud

I am using the wordcloud package in Python to generate a word cloud directly from a text file. Here is the code I reused from Stack Overflow:

import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS


def random_color_func(word=None, font_size=None, position=None, orientation=None, font_path=None, random_state=None):
    # Fixed hue and saturation; random lightness for each word.
    h = int(360.0 * 45.0 / 255.0)
    s = int(100.0 * 255.0 / 255.0)
    l = int(100.0 * float(random_state.randint(60, 120)) / 255.0)

    return "hsl({}, {}%, {}%)".format(h, s, l)

file_content = open("xyz.txt").read()

wordcloud = WordCloud(font_path = r'C:\Windows\Fonts\Verdana.ttf',
                            stopwords = STOPWORDS,
                            background_color = 'white',
                            width = 1200,
                            height = 1000,
                            color_func = random_color_func
                            ).generate(file_content)

plt.imshow(wordcloud,interpolation="bilinear")
plt.axis('off')
plt.show()

It gives me a word cloud of single words. Is there any parameter in the WordCloud() function to pass n-grams without transforming the text file first?

I want a word cloud of bigrams, or of words joined by an underscore in the display. Like: machine_learning (machine and learning would otherwise be 2 separate words).

3 Answers:

Answer 0 (score: 1):

A bigram word cloud can be generated easily by lowering the value of the collocation_threshold parameter in WordCloud.

Edit your wordcloud code to:

wordcloud = WordCloud(font_path = r'C:\Windows\Fonts\Verdana.ttf',
                            stopwords = STOPWORDS,
                            background_color = 'white',
                            width = 1200,
                            height = 1000,
                            color_func = random_color_func,
                            collocation_threshold = 3  # added to the question code; try changing this value between 1 and 50
                            ).generate(file_content)

For more information:

collocation_threshold: int, default=30. Bigrams must have a Dunning likelihood collocation score greater than this parameter to be counted as bigrams. The default of 30 is arbitrary.
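
As a quick check of the effect (the sample text below is made up for illustration, and wordcloud 1.8 or newer is assumed, where collocation_threshold exists), a lower threshold lets frequent word pairs survive as bigrams:

from wordcloud import WordCloud

text = "machine learning " * 20 + "deep learning " * 5  # toy text, for illustration only

# With a low threshold, frequent pairs are kept as collocations (bigrams).
wc = WordCloud(collocation_threshold=3).generate(text)
print([w for w in wc.words_ if ' ' in w])  # bigrams show up as e.g. 'machine learning'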

You can also find the source code of wordcloud.WordCloud here: https://amueller.github.io/word_cloud/_modules/wordcloud/wordcloud.html

Answer 1 (score: 0):

You should use vectorizer = CountVectorizer(ngram_range=(2, 2)) to get the frequencies, and then use the .generate_from_frequencies method in wordcloud, as in the sketch below.
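
In code, that two-step suggestion might look like this minimal sketch (the sample text is a placeholder; scikit-learn and matplotlib are assumed to be installed):

import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from wordcloud import WordCloud

text = 'machine learning is fun and machine learning is powerful'  # placeholder text

# Step 1: count bigram frequencies with scikit-learn.
vectorizer = CountVectorizer(ngram_range=(2, 2))
counts = vectorizer.fit_transform([text])
freqs = {word: counts[0, idx] for word, idx in vectorizer.vocabulary_.items()}

# Step 2: build the cloud from the frequency dict instead of raw text.
wc = WordCloud(background_color='white').generate_from_frequencies(freqs)
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()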

Answer 2 (score: 0):

Thanks for the answer, Diego. This is just a continuation of Diego's answer with the Python code.

import re
from string import digits

import matplotlib.pyplot as plt
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from wordcloud import WordCloud, STOPWORDS

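# NLTK data needed: nltk.download('punkt') for word_tokenize and
# nltk.download('wordnet') for the lemmatizer.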
WNL = nltk.WordNetLemmatizer()
text = 'your input text goes here'
# Lowercase and tokenize
text = text.lower()
# Remove single quote early since it causes problems with the tokenizer.
text = text.replace("'", "")
# Remove numbers from text
remove_digits = str.maketrans('', '', digits)
text = text.translate(remove_digits)
tokens = nltk.word_tokenize(text)
text1 = nltk.Text(tokens)

# Remove extra chars and remove stop words.
text_content = [''.join(re.split("[ .,;:!?‘’``''@#$%^_&*()<>{}~\n\t\\\-]", word)) for word in text1]

#set the stopwords list
stopwords_wc = set(STOPWORDS)
customised_words = ['xxx', 'yyy']  # remove any particular words that do not contribute much meaning

new_stopwords = stopwords_wc.union(customised_words)
text_content = [word for word in text_content if word not in new_stopwords]

# After the punctuation above is removed it still leaves empty entries in the list.
text_content = [s for s in text_content if len(s) != 0]

# Best to get the lemmas of each word to reduce the number of similar words
text_content = [WNL.lemmatize(t) for t in text_content]

bigrams_list = list(nltk.bigrams(text_content))
print(bigrams_list)
dictionary2 = [' '.join(tup) for tup in bigrams_list]
print(dictionary2)

# Using CountVectorizer to view the frequency of bigrams
vectorizer = CountVectorizer(ngram_range=(2, 2))
bag_of_words = vectorizer.fit_transform(dictionary2)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in vectorizer.vocabulary_.items()]
words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
print(words_freq[:100])

# Generating the word cloud and saving it as a jpg image
words_dict = dict(words_freq)
WC_height = 1000
WC_width = 1500
WC_max_words = 200
wordCloud = WordCloud(max_words=WC_max_words, height=WC_height, width=WC_width, stopwords=new_stopwords)
wordCloud.generate_from_frequencies(words_dict)
plt.title('Most frequently occurring bigrams connected by same colour and font size')
plt.imshow(wordCloud, interpolation='bilinear')
plt.axis("off")
plt.show()
wordCloud.to_file('wordcloud_bigram.jpg')
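
As a footnote on the underscore display the question asks about: generate_from_frequencies draws the dictionary keys verbatim, so joining each bigram with '_' instead of a space renders tokens like machine_learning. A small sketch reusing words_freq and wordCloud from the code above:

# Join each bigram with '_' so it renders as a single token, e.g. machine_learning.
words_dict_underscored = {word.replace(' ', '_'): freq for word, freq in words_freq}
wordCloud.generate_from_frequencies(words_dict_underscored)
wordCloud.to_file('wordcloud_bigram_underscored.jpg')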