Continuous RBM: poor performance only for negative-valued input data?

Date: 2013-06-30 23:23:01

Tags: matlab rbm

I am trying to port this Python implementation of a continuous RBM to Matlab: http://imonad.com/rbm/restricted-boltzmann-machine/

I generated two-dimensional training data shaped like a (noisy) circle and trained an RBM with 2 visible and 8 hidden units. To test the implementation I fed uniformly distributed random data into the RBM and plotted the reconstructed data (the same procedure used in the link above).

Now the confusing part: with training data in the range (0,1)x(0,1) I get very satisfying results, but with training data in the range (-0.5,0.5)x(-0.5,0.5) or (-1,0)x(-1,0) the RBM only reconstructs the data in the upper-right part of the circle. I don't understand what causes this. Is it simply a bug in my implementation that I am not seeing?
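For reference, the training data is generated roughly like this (a sketch with illustrative values; the radius, noise level, and offsets are placeholders, not exactly what I used):

% Noisy circle in (0,1)x(0,1); shift it to test the other ranges.
nDat   = 500;                          % number of training points (illustrative)
phi    = 2*pi*rand(nDat, 1);           % random angles
radius = 0.25 + 0.02*randn(nDat, 1);   % noisy radius
dat    = [0.5 + radius.*cos(phi), 0.5 + radius.*sin(phi)];
% dat = dat - 0.5;   % gives the (-0.5,0.5)x(-0.5,0.5) case
% dat = dat - 1.0;   % gives the (-1,0)x(-1,0) case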

Some plots; the blue dots are the training data, the red dots the reconstructions: http://i41.tinypic.com/se6fr8.png http://i43.tinypic.com/iw43np.png http://i39.tinypic.com/2cy0gur.png

Here is my implementation of the RBM. Training:

maxepoch = 300;
ksteps = 10;
sigma = 0.2;        % cd standard deviation
learnW = 0.5;       % learning rate W
learnA  = 0.5;      % learning rate A
nVis = 2;           % number of visible units
nHid = 8;           % number of hidden units
nDat = size(dat, 1);% number of training data points
cost = 0.00001;     % cost
moment = 0.9;      % momentum
W = randn(nVis+1, nHid+1) / 10; % weights
dW = randn(nVis+1, nHid+1) / 1000; % change of weights
sVis = zeros(1, nVis+1);    % state of visible neurons
sVis(1, end) = 1.0;         % bias
sVis0 = zeros(1, nVis+1);   % initial state of visible neurons
sVis0(1, end) = 1.0;        % bias
sHid = zeros(1, nHid+1);    % state of hidden neurons
sHid(1, end) = 1.0;         % bias
aVis  = 0.1*ones(1, nVis+1);% A visible
aHid  = ones(1, nHid+1);    % A hidden
err = zeros(1, maxepoch);
e = zeros(1, maxepoch);
for epoch = 1:maxepoch
    wPos = zeros(nVis+1, nHid+1);
    wNeg = zeros(nVis+1, nHid+1);
    aPos = zeros(1, nHid+1);
    aNeg = zeros(1, nHid+1);
    for point = 1:nDat
        sVis(1:nVis) = dat(point, :);
        sVis0(1:nVis) = sVis(1:nVis); % initial sVis
        % positive phase
        activHid;
        wPos = wPos + sVis' * sHid;
        aPos = aPos + sHid .* sHid;
        % negative phase
        activVis;
        activHid;
        for k = 1:ksteps
            activVis;
            activHid;
        end
        tmp = sVis' * sHid;
        wNeg = wNeg + tmp;
        aNeg = aNeg + sHid .* sHid;
        delta = sVis0(1:nVis) - sVis(1:nVis);
        err(epoch) = err(epoch) + sum(delta .* delta);
        e(epoch) = e(epoch) - sum(sum(W' * tmp));
    end
    dW = dW*moment + learnW * ((wPos - wNeg) / numel(dat)) - cost * W;
    W = W + dW;
    aHid = aHid + learnA * (aPos - aNeg) / (numel(dat) * (aHid .* aHid));
    % error
    err(epoch) = err(epoch) / (nVis * numel(dat));
    e(epoch) = e(epoch) / numel(dat);
    disp(['epoch: ' num2str(epoch) ' err: ' num2str(err(epoch)) ...
    ' ksteps: ' num2str(ksteps)]);
end
save(['rbm_' filename '.mat'], 'W', 'err', 'aVis', 'aHid');
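
The training script assumes that dat (the nDat-by-2 training points), datRange (the output interval used by sigFun below) and filename are already defined in the workspace, for example:

% Illustrative workspace setup expected by the training script:
datRange = [0 1];       % interval that sigFun maps activations onto
filename = 'circle';    % tag used in the name of the saved .mat file
% dat: nDat-by-2 matrix of training points, e.g. the noisy circle above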

activHid.m:

sHid = (sVis * W) + randn(1, nHid+1);   % net input to hidden units plus unit-variance Gaussian noise
sHid = sigFun(aHid .* sHid, datRange);  % squash into datRange, scaled per unit by aHid
sHid(end) = 1.; % bias

activVis.m:

sVis = (W * sHid')' + randn(1, nVis+1); % net input to visible units plus unit-variance Gaussian noise
sVis = sigFun(aVis .* sVis, datRange);  % squash into datRange, scaled per unit by aVis
sVis(end) = 1.; % bias

sigFun.m:

function [sig] = sigFun(X, datRange)
    % Logistic sigmoid rescaled to the interval [datRange(1), datRange(2)]
    a = ones(size(X)) * datRange(1);
    b = ones(size(X)) * (datRange(2) - datRange(1));
    c = ones(size(X)) + exp(-X);
    sig = a + (b ./ c);
end
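
sigFun is just a logistic sigmoid rescaled to the interval given by datRange, so its output always lies strictly between datRange(1) and datRange(2). For example:

% Quick sanity check of sigFun:
sigFun(0,   [0 1])    % 0.5, the midpoint of the range
sigFun(-10, [0 1])    % ~0, close to the lower bound
sigFun(10,  [-1 1])   % ~1, close to the upper bound of a shifted range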

Reconstruction:

nSamples = 2000;
ksteps = 10;
nVis = 2;
nHid = 8;
sVis = zeros(1, nVis+1);    % state of visible neurons
sVis(1, end) = 1.0;         % bias
sHid = zeros(1, nHid+1);    % state of hidden neurons
sHid(1, end) = 1.0;         % bias
input = rand(nSamples, 2);
output = zeros(nSamples, 2);
for sample = 1:nSamples
    sVis(1:nVis) = input(sample, :);
    for k = 1:ksteps
        activHid;
        activVis;
    end
    output(sample, :) = sVis(1:nVis);
end

2 Answers:

Answer 0 (score: 1)

RBMs were originally designed to work only with binary data, but they also work with data between 0 and 1; that is part of the algorithm. Further reading
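
In practice this means that data outside [0, 1] has to be rescaled into that interval before training, and the reconstructions mapped back afterwards. A minimal sketch, assuming dat holds the raw training points and output the reconstructions from the question's script:

% Rescale arbitrary-range training data into [0, 1] (per dimension):
lo = min(dat);  hi = max(dat);
datScaled = bsxfun(@rdivide, bsxfun(@minus, dat, lo), hi - lo);
% ... train the RBM on datScaled instead of dat ...
% Undo the scaling on the reconstructions:
% outputOrig = bsxfun(@plus, bsxfun(@times, output, hi - lo), lo);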

Answer 1 (score: 1)

Since the test inputs are drawn from the [0 1] range in both x and y, that is why the reconstructions end up where they do. Changing the input to input = (rand(nSamples, 2)*2) - 1; samples the inputs from the [-1 1] range instead, so the red dots will be spread more evenly around the circle.
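
In the reconstruction script from the question this is a one-line change (sketch; the rest of the sampling loop stays the same):

nSamples = 2000;
input = (rand(nSamples, 2) * 2) - 1;   % test points drawn uniformly from [-1, 1] x [-1, 1]
% ... run the Gibbs sampling loop over 'input' exactly as before ...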