My problem is as follows. I have a long list of URLs, for example:
www.foo.com/davidbobmike1joe
www.foo.com/mikejoe2bobkarl
www.foo.com/joemikebob
www.foo.com/bobjoe
I need to compare all the entries (URLs) in that list against each other, extract the keywords in the subdomains of those URLs (in this case: david, joe, bob, mike, karl), and sort them by frequency. I have been reading about several libraries, such as nltk. The problem, however, is that there are no spaces with which to tokenise each word separately. Any suggestions on how to get this done?
Answer 0 (score: 1)
You can extract the names, passing in the list [david, bob, etc.], using the code from:
Is there an easy way generate a probable list of words from an unspaced sentence in python?
Then use collections.Counter to get the frequencies.
from Bio import trie
import string
from collections import Counter


def get_trie(words):
    """Build a trie keyed by the known names."""
    tr = trie.trie()
    for word in words:
        tr[word] = len(word)
    return tr


def get_trie_word(tr, s):
    """Return the longest name that prefixes s, plus the rest of s."""
    for end in reversed(range(len(s))):
        word = s[:end + 1]
        if tr.has_key(word):
            return word, s[end + 1:]
    return None, s


def get_trie_words(s):
    names = ['david', 'bob', 'karl', 'joe', 'mike']
    tr = get_trie(names)
    while s:
        word, s = get_trie_word(tr, s)
        if word is None:
            s = s[1:]  # no name starts here; skip a character so we don't loop forever
        else:
            yield word


def main(urls):
    d = Counter()
    for url in urls:
        # drop digits and anything else that is not a lowercase letter
        url = "".join(a for a in url if a in string.lowercase)
        for word in get_trie_words(url):
            d[word] += 1
    return d


if __name__ == '__main__':
    urls = [
        "davidbobmike1joe",
        "mikejoe2bobkarl",
        "joemikebob",
        "bobjoe",
    ]
    print main(urls)
Counter({'bob': 4, 'joe': 4, 'mike': 3, 'karl': 1, 'david': 1})
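If Biopython's Bio.trie is unavailable, the same greedy longest-prefix matching can be sketched in Python 3 with a plain set standing in for the trie. The names and URLs are the examples from the question; `split_names` and `count_names` are hypothetical helper names introduced here:

```python
from collections import Counter
import string

# Known keywords; a set suffices since we only test whole prefixes.
NAMES = {"david", "bob", "karl", "joe", "mike"}

def split_names(s, names=NAMES):
    """Repeatedly strip the longest known name from the front of s."""
    while s:
        for end in range(len(s), 0, -1):
            if s[:end] in names:
                yield s[:end]
                s = s[end:]
                break
        else:
            s = s[1:]  # no name starts here; skip one character

def count_names(urls):
    counts = Counter()
    for url in urls:
        # drop digits and anything else that is not a lowercase letter
        cleaned = "".join(c for c in url if c in string.ascii_lowercase)
        counts.update(split_names(cleaned))
    return counts

urls = ["davidbobmike1joe", "mikejoe2bobkarl", "joemikebob", "bobjoe"]
print(count_names(urls))
# Counter({'bob': 4, 'joe': 4, 'mike': 3, 'david': 1, 'karl': 1})
```

This produces the same counts as the trie version for the question's inputs.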
Answer 1 (score: 1)
If you refuse to use a dictionary, the algorithm will require a lot of computation. Beyond that, it is impossible to distinguish a keyword that occurs only once (e.g. "karl") from a junk sequence (e.g. "e2bo"). My solution is a best effort and will only work if your list of URLs contains keywords multiple times.
I assume a word is a sequence of at least 3 characters that occurs frequently. That prevents the letter "o" from being the most popular word.
The basic idea is as follows.
sentences = ["davidbobmike1joe", "mikejoe2bobkarl", "joemikebob", "bobjoe",
             "bobbyisawesome", "david", "bobbyjoe"]
dict = {}


def countWords(n):
    """Count all character sequences/words of length n occurring in all given sentences."""
    for sentence in sentences:
        countWordsSentence(sentence, n)


def countWordsSentence(sentence, n):
    """Count all character sequences/words of length n occurring in a sentence."""
    for i in range(0, len(sentence) - n + 1):
        word = sentence[i:i + n]
        if word not in dict:
            dict[word] = 1
        else:
            dict[word] = dict[word] + 1


def cropDictionary():
    """Remove all words that occur only once."""
    for key in dict.keys():
        if dict[key] == 1:
            dict.pop(key)


def removePartials(word):
    """Remove all partial occurrences of a given word from the dictionary."""
    for i in range(3, len(word)):
        for j in range(0, len(word) - i + 1):
            for key in dict.keys():
                if key == word[j:j + i] and dict[key] == dict[word]:
                    dict.pop(key)


def removeAllPartials():
    """Remove all partial words in the dictionary."""
    for word in dict.keys():
        if word in dict:  # word may already have been removed as a partial
            removePartials(word)


for i in range(3, max(map(lambda x: len(x), sentences))):
    countWords(i)

cropDictionary()
removeAllPartials()
print dict
{'mike': 3, 'bobby': 2, 'david': 2, 'joe': 5, 'bob': 6}
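Under the same assumptions (a word is at least 3 characters and occurs more than once), the dictionary-free approach above can be sketched more compactly in Python 3, with collections.Counter replacing the hand-rolled dict. `frequent_words` is a hypothetical name introduced for this sketch:

```python
from collections import Counter

def frequent_words(sentences, min_len=3):
    """Best-effort keyword extraction without a word list."""
    counts = Counter()
    # Count every substring of length >= min_len in every sentence.
    for s in sentences:
        for n in range(min_len, len(s) + 1):
            for i in range(len(s) - n + 1):
                counts[s[i:i + n]] += 1
    # Keep only sequences that occur more than once.
    counts = Counter({w: c for w, c in counts.items() if c > 1})
    # Remove partials: a shorter substring of a longer word with the
    # same count is just a fragment of that word.
    for word in sorted(counts, key=len, reverse=True):
        if word not in counts:
            continue  # already removed as a fragment
        for w in list(counts):
            if w != word and w in word and counts[w] == counts[word]:
                del counts[w]
    return counts

sentences = ["davidbobmike1joe", "mikejoe2bobkarl", "joemikebob",
             "bobjoe", "bobbyisawesome", "david", "bobbyjoe"]
print(frequent_words(sentences))
```

For the sentences above this reproduces the counts shown in the answer's output.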