Converting a list of words into a list of the frequencies with which those words appear

Asked: 2012-01-23 15:13:29

Tags: wolfram-mathematica

I'm doing a great deal of work with various lists of words.

Consider the following example:

docText={"settlement", "new", "beginnings", "wildwood", "settlement", "book",
"excerpt", "agnes", "leffler", "perry", "my", "mother", "junetta", 
"hally", "leffler", "brought", "my", "brother", "frank", "and", "me", 
"to", "edmonton", "from", "monmouth", "illinois", "mrs", "matilda", 
"groff", "accompanied", "us", "her", "husband", "joseph", "groff", 
"my", "father", "george", "leffler", "and", "my", "uncle", "andrew", 
"henderson", "were", "already", "in", "edmonton", "they", "came", 
"in", "1910", "we", "arrived", "july", "1", "1911", "the", "sun", 
"was", "shining", "when", "we", "arrived", "however", "it", "had", 
"been", "raining", "for", "days", "and", "it", "was", "very", 
"muddy", "especially", "around", "the", "cn", "train"}

searchWords={"the","for","my","and","me","and","we"}

In practice each of these lists is much longer (say, 250 words in the searchWords list and about 12,000 words in docText).

Now, I can determine the frequency of a given word by doing the following:

docFrequency=Sort[Tally[docText],#1[[2]]>#2[[2]]&];    
Flatten[Cases[docFrequency,{"settlement",_}]][[2]]
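For the sample docText above, this returns 2, since "settlement" occurs twice. As an aside (a sketch of mine, not part of the original question), the count of a single word can also be read off directly with Count:

Count[docText, "settlement"]   (* 2 *)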

But where I'm getting hung up is in generating a particular list - specifically, the problem of converting a list of words into a list of the frequencies with which those words appear. I've tried doing this with a Do loop but have hit a wall.

What I'd like is to run docText against searchWords and replace each element of docText with the absolute frequency of its appearance. That is, since "settlement" appears twice, it would be replaced by a 2 in the list, and since "my" appears 4 times, it would become a 4. The list would be 2, 1, 1, 1, 2 and so on.

I suspect the answer lies somewhere with If[] and Map[].
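For the first transformation alone (every word replaced by its overall count, ignoring searchWords), a minimal sketch would be to turn Tally's output into replacement rules; the name freqList is mine, not from the original question:

freqList = docText /. (Rule @@@ Tally[docText])
(* begins {2, 1, 1, 1, 2, ...} for the sample docText *)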

This all probably sounds odd, but I'm trying to pre-process a body of material for term-frequency information...


Added for clarity (I hope):

Here is a better example.

searchWords={"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "A", "about", 
"above", "across", "after", "again", "against", "all", "almost", 
"alone", "along", "already", "also", "although", "always", "among", 
"an", "and", "another", "any", "anyone", "anything", "anywhere", 
"are", "around", "as", "at", "b", "B", "back", "be", "became", 
"because", "become", "becomes", "been", "before", "behind", "being", 
"between", "both", "but", "by", "c", "C", "can", "cannot", "could", 
"d", "D", "do", "done", "down", "during", "e", "E", "each", "either", 
"enough", "even", "ever", "every", "everyone", "everything", 
"everywhere", "f", "F", "few", "find", "first", "for", "four", 
"from", "full", "further", "g", "G", "get", "give", "go", "h", "H", 
"had", "has", "have", "he", "her", "here", "herself", "him", 
"himself", "his", "how", "however", "i", "I", "if", "in", "interest", 
"into", "is", "it", "its", "itself", "j", "J", "k", "K", "keep", "l", 
"L", "last", "least", "less", "m", "M", "made", "many", "may", "me", 
"might", "more", "most", "mostly", "much", "must", "my", "myself", 
"n", "N", "never", "next", "no", "nobody", "noone", "not", "nothing", 
"now", "nowhere", "o", "O", "of", "off", "often", "on", "once", 
"one", "only", "or", "other", "others", "our", "out", "over", "p", 
"P", "part", "per", "perhaps", "put", "q", "Q", "r", "R", "rather", 
"s", "S", "same", "see", "seem", "seemed", "seeming", "seems", 
"several", "she", "should", "show", "side", "since", "so", "some", 
"someone", "something", "somewhere", "still", "such", "t", "T", 
"take", "than", "that", "the", "their", "them", "then", "there", 
"therefore", "these", "they", "this", "those", "though", "three", 
"through", "thus", "to", "together", "too", "toward", "two", "u", 
"U", "under", "until", "up", "upon", "us", "v", "V", "very", "w", 
"W", "was", "we", "well", "were", "what", "when", "where", "whether", 
"which", "while", "who", "whole", "whose", "why", "will", "with", 
"within", "without", "would", "x", "X", "y", "Y", "yet", "you", 
"your", "yours", "z", "Z"}

These are the stop words automatically generated by WordData[]. I want to compare these words against docText. Since "settlement" is not part of searchWords, it would show up as a 0, but since "my" is part of searchWords, it would show up with its count (so I can tell how many times a given word appears).
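A quick membership check (my own illustration, not from the original post) makes the intent concrete:

MemberQ[searchWords, "my"]           (* True  -> gets its count *)
MemberQ[searchWords, "settlement"]   (* False -> gets a 0       *)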

I really appreciate the help, everyone - I'm looking forward to taking some formal courses, because I keep running into the limits of my ability to even explain what I'm trying to do!

3 Answers:

Answer 0 (score: 7):

We can replace everything in docText that does not appear in searchWords with 0, like this:

preprocessedDocText = 
   Replace[docText, 
     Dispatch@Append[Thread[searchWords -> searchWords], _ -> 0], {1}]
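A quick look at the intermediate result (an illustrative check of my own, not from the original answer): every word outside searchWords is now a 0, while the search words themselves are left untouched.

Take[preprocessedDocText, 5]   (* {0, 0, 0, 0, 0} -- none of the first five words are in searchWords *)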

Then we can replace the remaining words with their frequencies:

replaceTable = Dispatch[Rule @@@ Tally[docText]];

preprocessedDocText /. replaceTable

Dispatch pre-processes the list of rules (->) and significantly speeds up replacement when the rules are used repeatedly.

I haven't benchmarked this on large data, but Dispatch should give a good speedup.
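To gauge the effect on your own data, a rough timing comparison along these lines (my sketch, not from the original answer) should show the difference:

rules = Rule @@@ Tally[docText];
dispatched = Dispatch[rules];                 (* build the dispatch table once *)
First@AbsoluteTiming[docText /. rules]        (* plain rule list *)
First@AbsoluteTiming[docText /. dispatched]   (* pre-built Dispatch *)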

Answer 1 (score: 4):

@Szabolcs gave a good solution, and I'd probably go the same route myself. Here is a slightly different solution, just for fun:

ClearAll[getFreqs];
getFreqs[docText_, searchWords_] :=
  Module[{dwords, dfreqs, inSearchWords, lset},
    SetAttributes[{lset, inSearchWords}, Listable];
    lset[args__] := Set[args];                    (* listable assignment *)
    {dwords, dfreqs} = Transpose@Tally[docText];  (* unique words and their counts *)
    lset[inSearchWords[searchWords], True];       (* mark every search word *)
    inSearchWords[_] = False;                     (* everything else is False *)
    dfreqs*Boole[inSearchWords[dwords]]]          (* zero out counts of non-search words *)

This shows how the Listable attribute can be used to replace loops and even Map-ping (a small illustration follows after the output below). We have:

In[120]:= getFreqs[docText,searchWords]
Out[120]= {0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,3,1,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,2,
1,0,0,2,0,0,1,0,2,0,2,0,1,1,2,1,1,0,1,0,1,0,0,1,0,0}
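As a toy illustration of the Listable trick above (my own example, not from the original answer), a Listable function threads itself over list arguments without any explicit Map:

ClearAll[square];
SetAttributes[square, Listable];
square[x_] := x^2;
square[{1, 2, 3}]   (* {1, 4, 9} *)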

Answer 2 (score: 2):

I was going to approach this differently from Szabolcs, but I ended up with something fairly similar.

Nevertheless, I think it's cleaner. On some data it's faster, and on other data it's slower.

docText /. 
  Dispatch[FilterRules[Rule @@@ Tally@docText, searchWords] ~Join~ {_String -> 0}]
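The key step is FilterRules, which keeps only the rules whose left-hand sides match the supplied words; everything else then falls through to the catch-all _String -> 0. A tiny illustration (my own, not from the original answer):

FilterRules[{"a" -> 1, "b" -> 2, "c" -> 3}, {"a", "c"}]   (* {"a" -> 1, "c" -> 3} *)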