I'm trying to convert the Python implementation of the Viterbi algorithm in this Stack Overflow answer to Ruby. The full script can be found at the bottom of this post.
Unfortunately, I know very little Python, so the translation is proving harder than I'd like. I have made some progress, though. Right now, the only line that completely melts my brain is this one:
prob_k, k = max((probs[j] * word_prob(text[j:i]), j) for j in range(max(0, i - max_word_length), i))
Can someone explain what it is doing?
Here is the full Python script:
import re
from itertools import groupby

# text will be a compound word such as 'wickedweather'.
def viterbi_segment(text):
    probs, lasts = [1.0], [0]
    # Iterate over the letters in the compound,
    # eg. [w, ickedweather], [wi, ckedweather], and so on.
    for i in range(1, len(text) + 1):
        # I've no idea what this line is doing and I can't figure out how to split it up?
        prob_k, k = max((probs[j] * word_prob(text[j:i]), j) for j in range(max(0, i - max_word_length), i))
        # Append values to arrays.
        probs.append(prob_k)
        lasts.append(k)
    words = []
    i = len(text)
    while 0 < i:
        words.append(text[lasts[i]:i])
        i = lasts[i]
    words.reverse()
    return words, probs[-1]

# Calc the probability of a word based on occurrences in the dictionary.
def word_prob(word):
    # dictionary.get(key) will return the value for the specified key.
    # In this case, that's the number of occurrences of the word in the
    # dictionary. The second argument is a default value to return if
    # the word is not found.
    return dictionary.get(word, 0) / total

# This ensures we only deal with full words rather than each
# individual letter. Normalize the words, basically.
def words(text):
    return re.findall('[a-z]+', text.lower())

# This gets us a hash where the keys are words and the values are the
# number of occurrences in the dictionary.
# /usr/share/dict/words is a file of newline-delimited words.
dictionary = dict((w, len(list(ws)))
                  for w, ws in groupby(sorted(words(open('/usr/share/dict/words').read()))))

# Assign the length of the longest word in the dictionary.
max_word_length = max(map(len, dictionary))

# Assign the total number of words in the dictionary. It's a float
# because we're going to divide by it later on.
total = float(sum(dictionary.values()))

# Run the algorithm over a file of newline-delimited compound words.
compounds = words(open('compounds.txt').read())
for comp in compounds:
    print comp, ": ", viterbi_segment(comp)
Answer 0 (score: 1)
You're looking at a list comprehension (strictly speaking, a generator expression fed to max, but it expands the same way).
The expanded version looks like this:
all_probs = []
for j in range(max(0, i - max_word_length), i):
    all_probs.append((probs[j] * word_prob(text[j:i]), j))
prob_k, k = max(all_probs)
I hope this helps explain it. If not, feel free to edit your question and point at the statement you don't understand.
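A minimal, standalone sketch of the key trick may help: max compares the (probability, index) tuples element-wise, so it returns both the best probability and where it came from in one step. The numbers below are made up for illustration, not real word probabilities:

```python
# Hypothetical scores for candidate split points j = 0, 1, 2
# (made-up numbers, not computed by word_prob).
candidates = [(0.02, 0), (0.35, 1), (0.07, 2)]

# max compares tuples element-wise, so it picks the pair with the
# highest first element; tuple unpacking then splits it into two names.
prob_k, k = max(candidates)

print(prob_k, k)  # -> 0.35 1
```

That is exactly what the one-liner does, with the candidates produced lazily by the generator expression instead of a prebuilt list.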
Answer 1 (score: 0)
Here's a working Ruby implementation, in case anyone else finds it useful. I translated the list comprehension discussed above into what I considered an appropriately idiomatic level of (un)readable Ruby.
def viterbi(text)
  probabilities = [1.0]
  lasts = [0]

  # Iterate over the letters in the compound,
  # eg. [h ellodarkness], [he llodarkness], ...
  (1..text.length).each do |i|
    prob_k, k = ([0, i - maximum_word_length].max...i)
                .map { |j| [probabilities[j] * word_probability(text[j...i]), j] }
                .max_by(&:first)
    probabilities << prob_k
    lasts << k
  end

  words = []
  i = text.length
  while i.positive?
    words << text[lasts[i]...i]
    i = lasts[i]
  end
  words.reverse!

  [words, probabilities.last]
end

def word_probability(word)
  word_counts[word].to_f / word_counts_sum
end

def word_counts_sum
  @word_counts_sum ||= word_counts.values.sum.to_f
end

def maximum_word_length
  @maximum_word_length ||= word_counts.keys.map(&:length).max
end

def word_counts
  return @word_counts if @word_counts

  @word_counts = { "hello" => 12, "darkness" => 6, "friend" => 79, "my" => 1, "old" => 5 }
  @word_counts.default = 0
  @word_counts
end

puts "Best split is %s with probability %.6f" % viterbi("hellodarknessmyoldfriend")
=> Best split is ["hello", "darkness", "my", "old", "friend"] with probability 0.000002
The main annoyance was the differing range semantics in Python and Ruby (half-open vs. closed intervals). The algorithm is quite fast.
It may pay off to work with log-probabilities rather than raw probabilities, since repeated multiplication can cause underflow and/or accumulated floating-point error for longer words.
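As a sketch of that idea, here is a hypothetical log-space variant of the Python version, self-contained with the same toy word counts as the Ruby example (the names log_word_prob and viterbi_segment_log are my own, not from the original script):

```python
import math

# Toy word counts standing in for the real /usr/share/dict/words dictionary.
counts = {"hello": 12, "darkness": 6, "friend": 79, "my": 1, "old": 5}
total = float(sum(counts.values()))
max_word_length = max(map(len, counts))

def log_word_prob(word):
    # Return log P(word); -inf for unseen words, so they never win the max.
    c = counts.get(word, 0)
    return math.log(c / total) if c else float("-inf")

def viterbi_segment_log(text):
    # Same recurrence as before, but summing logs instead of multiplying
    # probabilities, which avoids underflow on long inputs.
    logprobs, lasts = [0.0], [0]
    for i in range(1, len(text) + 1):
        logprob_k, k = max(
            (logprobs[j] + log_word_prob(text[j:i]), j)
            for j in range(max(0, i - max_word_length), i)
        )
        logprobs.append(logprob_k)
        lasts.append(k)
    words = []
    i = len(text)
    while i > 0:
        words.append(text[lasts[i]:i])
        i = lasts[i]
    words.reverse()
    return words, logprobs[-1]

words, logp = viterbi_segment_log("hellodarknessmyoldfriend")
print(words)  # -> ['hello', 'darkness', 'my', 'old', 'friend']
```

math.exp(logp) recovers the same probability the multiplicative version computes, but the intermediate values stay in a numerically safe range.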