I apologize in advance for the code dump. I've trimmed it down as much as I could without losing the context of my question (in bold below).
I have a struct

```rust
use std::rand;
use std::rand::Rng;
use std::rand::distributions::{Weighted, WeightedChoice, Sample, IndependentSample};

struct MarkovChain {
    state: uint,
    weights: Vec<Vec<uint>>,
}
```
modeling a Markov chain. There are some sanity checks on the dimensions of the `weights` matrix that I enforce in `MarkovChain::new`:
```rust
impl MarkovChain {
    fn new(weights: Vec<Vec<uint>>, initial_state: uint) -> MarkovChain {
        let states = weights.len();
        assert!(states > 0);
        assert!(initial_state < states);
        assert!(weights.iter().all(|row| row.len() == states));
        MarkovChain {
            state: initial_state,
            weights: weights,
        }
    }
}
```
Now I implement `Sample`:
```rust
impl Sample<uint> for MarkovChain {
    fn sample<R: Rng>(&mut self, rng: &mut R) -> uint {
        // I'd like to put the following part in MarkovChain::new
        // instead, but I can't figure out how to store the
        // WeightedChoice inside the MarkovChain struct.
        //BEGIN
        let mut row = self.weights[self.state]
            .iter()
            .enumerate()
            .map(|(i, &wt)| Weighted { item: i, weight: wt })
            .collect::<Vec<Weighted<uint>>>();
        let wc = WeightedChoice::new(row.as_mut_slice());
        //END
        self.state = wc.ind_sample(rng);
        self.state
    }
}
```
The problem here is that `row` and `wc` have to be rebuilt on every call to `sample`. Given that the typical use case involves calling `sample` many times, this is a problem.
**I would like to move the computation of `row` and `wc` (for each state) into `MarkovChain::new` instead, but I can't seem to figure out how to store the `WeightedChoice` inside the `MarkovChain` struct. How can I do this?**
I can't tell whether this is genuinely hard or whether I'm suffering brain damage from years of garbage-collected languages.
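To make the goal concrete, here is a sketch of the kind of precomputation I have in mind. It is not my real code: it is written against current Rust (`usize` instead of `uint`), the names `PrecomputedChain` and `step` are placeholders, and it replaces `WeightedChoice` with hand-rolled cumulative-weight tables searched via `partition_point`, taking the uniform draw as an argument so it stays independent of any RNG API:

```rust
// Sketch only: precompute per-state cumulative weight tables once,
// in the constructor, instead of rebuilding them on every sample.
struct PrecomputedChain {
    state: usize,
    // cumulative[s][i] = weights[s][0] + ... + weights[s][i]
    cumulative: Vec<Vec<u64>>,
}

impl PrecomputedChain {
    fn new(weights: Vec<Vec<u64>>, initial_state: usize) -> PrecomputedChain {
        let cumulative: Vec<Vec<u64>> = weights
            .iter()
            .map(|row| {
                row.iter()
                    .scan(0u64, |acc, &w| {
                        *acc += w;
                        Some(*acc)
                    })
                    .collect()
            })
            .collect();
        PrecomputedChain { state: initial_state, cumulative }
    }

    // `draw` must be uniform in [0, row total); the caller supplies it.
    fn step(&mut self, draw: u64) -> usize {
        let row = &self.cumulative[self.state];
        // First index whose cumulative weight exceeds `draw`.
        self.state = row.partition_point(|&c| c <= draw);
        self.state
    }
}
```

With the `900/75/25` row from the example below, a draw of 950 falls past the first cumulative bound (900) but within the second (975), so `step(950)` returns state 1.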
Here is an example usage of the Markov chain. I'd like to keep the interface unchanged if possible:
```rust
fn main() {
    // Create the 3-state Markov chain illustrated at
    // https://en.wikipedia.org/w/index.php?title=Markov_chain&oldid=626307401#Example
    let mut mc = MarkovChain::new(vec![vec![900, 75, 25],
                                       vec![150, 800, 50],
                                       vec![250, 250, 500]], 0);
    // Expect around 62.5% 0s, 31.25% 1s, and 6.25% 2s after many iterations.
    let rng = &mut rand::task_rng();
    let mut stats = vec![0u, 0, 0];
    for _ in range(0u, 10000) {
        *stats.get_mut(mc.sample(rng)) += 1;
    }
    println!("Expect approximately [6250, 3125, 625]:");
    println!("{}", stats);
}
```
Here is the whole thing in the Rust playpen.