I've been working on getting a hierarchical model of some psychophysical behavioral data up and running in pymc3. I'm impressed with things overall, but after trying to get to grips with Theano and pymc3 I have a model that mostly works, yet has a couple of problems.
The code is built to fit a parameterized version of a Weibull to seven sets of data. Each trial is modeled as a binary Bernoulli outcome, while the thresholds (the y_est output) are used to fit a Gaussian function of height, width, and elevation (a, c, and d on a typical Gaussian).
Using the parameterized Weibull seems to work nicely, and it is now hierarchical for the slope of the Weibull while the thresholds are fit separately for each chunk of data. However, the output I get from k and y_est leads me to believe they may not be the right size, and unlike the probability distributions it doesn't look like I can specify shape (unless there's a Theano way to do this that I haven't found, although from what I've seen specifying shape in Theano is tricky).
Ultimately, I'd like to use y_est to estimate the height or width of the Gaussian, but right now the output is an incredible mess, which I think stems from a size problem with y_est and k. Any help would be fantastic. The code below should simulate some data, followed by the model. The model does a nice job fitting each individual threshold and getting the slopes, but falls apart when dealing with the rest.
Thanks for having a look - I'm pretty impressed with pymc3 so far!
EDIT: Okay, so here is what y_est.tag.test_value.shape outputs look like:
y_est.tag.test_value.shape
(101, 7)
k.tag.test_value.shape
(7,)
I think this is where I'm running into trouble, although it may just be a poor construction on my part. k has the correct shape (one k value per unique_xval). y_est, however, covers the entire dataset (101 x 7) rather than a single estimate per difficulty level (one y_est per unique_xval). Is there a way to specify that y_est takes a specific subset of df_y_vals to control this?
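(For reference, a tiny numpy sketch, using stand-in arrays rather than the real k and df_y_vals, of why y_est comes out as (101, 7): broadcasting a (7,)-shaped tensor against a (101, 7) array yields a (101, 7) result, i.e. one estimate per trial per condition rather than one per condition.)
import numpy as np
k_demo = np.ones(7)             # stands in for k, shape (7,)
y_demo = np.ones((101, 7))      # stands in for df_y_vals, shape (101, 7)
print((k_demo * y_demo).shape)  # -> (101, 7) via broadcasting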
#Import necessary modules and define our weibull function
import numpy as np
import pylab as pl
from scipy.stats import bernoulli
#x stimulus intensity
#g chance (0.5 for 2AFC)
# m slope
# t threshold
# a performance level defining threshold
def weib(x, g, a, m, t):
    k = -np.log(((1 - a) / (1 - g)) ** (1 / t))
    return 1 - (1 - g) * np.exp(-(k * x / t) ** m)
#Output values from weibull function
xit=101
xvals=np.linspace(0.05,1,xit)
out_weib=weib(xvals, 0.5, 0.8, 3, 0.6)
#Okay, fitting the perfect output of a Weibull should be easy, contaminate with some noise
#Slope of 3, threshold of 0.6
#How about 5% noise!
noise=0.05*np.random.randn(np.size(out_weib))
out=out_weib+noise
#Let's make this more like a typical experiment -
#i.e. no percent correct, just one or zero
#Randomly pick based on the probability at each point whether they got the trial right or wrong
trial=np.zeros_like(out)
for i in np.arange(out.size):
    p = out_weib[i]
    trial[i] = bernoulli.rvs(p)
#Iterate for 7 sets of data, similar slope (from a normal dist), different thresh (output from gaussian)
#Gauss parameters=
true_gauss_height = 0.3
true_gauss_width = 0.01
true_gauss_elevation = 0.2
#What thresholds will we get then? 7 discrete points along that gaussian, from 0 to 180 degree mask
x_points=[0, 30, 60, 90, 120, 150, 180]
x_points=np.asarray(x_points)
gauss_points=true_gauss_height*np.exp(- ((x_points**2)/2*true_gauss_width**2))+true_gauss_elevation
import pymc as pm2
import pymc3 as pm
import pandas as pd
slopes=pm2.rnormal(3, 3, size=7)
out_weib=np.zeros([xvals.size,x_points.size])
for i in np.arange(x_points.size):
    out_weib[:, i] = weib(xvals, 0.5, 0.8, slopes[i], gauss_points[i])
#Let's make this more like a typical experiment - i.e. no percent correct, just one or zero
#Randomly pick based on the probability at each point whether they got the trial right or wrong
trials=np.zeros_like(out_weib)
for i in np.arange(len(trials)):
    for ii in np.arange(gauss_points.size):
        p = out_weib[i, ii]
        trials[i, ii] = bernoulli.rvs(p)
#Let's make that data into a DataFrame for pymc3
y_vals=np.tile(xvals, [7, 1])
df_correct = pd.DataFrame(trials, columns=x_points)
df_y_vals = pd.DataFrame(y_vals.T, columns=x_points)
unique_xvals=x_points
import theano as th
with pm.Model() as hierarchical_model:
    # Hyperpriors for group node
    mu_slope = pm.Normal('mu_slope', mu=3, sd=1)
    sigma_slope = pm.Uniform('sigma_slope', lower=0.1, upper=2)
    #Priors for the overall gaussian function - 3 params, the height of the gaussian,
    #width, and elevation
    gauss_width = pm.HalfNormal('gauss_width', sd=1)
    gauss_elevation = pm.HalfNormal('gauss_elevation', sd=1)
    slope = pm.Normal('slope', mu=mu_slope, sd=sigma_slope, shape=unique_xvals.size)
    thresh = pm.Uniform('thresh', upper=1, lower=0.1, shape=unique_xvals.size)
    k = -th.tensor.log(((1 - 0.8) / (1 - 0.5)) ** (1 / thresh))
    y_est = 1 - (1 - 0.5) * th.tensor.exp(-(k * df_y_vals / thresh) ** slope)
    #We want our model to predict either height or width...height would be easier.
    #Our Gaussian function has y values estimated by y_est as the 82% thresholds
    #and x vals based on where each of those psychometric functions was taken.
    #height_est = pm.Deterministic('height_est', (y_est/(th.tensor.exp((-unique_xvals**2)/2*gauss_width)))+gauss_elevation)
    height_est = pm.Deterministic('height_est', (y_est - gauss_elevation) * th.tensor.exp((unique_xvals**2) / 2 * gauss_width**2))
    #Define likelihood as Bernoulli for each binary trial
    likelihood = pm.Bernoulli('likelihood', p=y_est, shape=unique_xvals.size, observed=df_correct)
    #Find start
    start = pm.find_MAP()
    step = pm.NUTS(state=start)
    #Do MCMC
    trace = pm.sample(5000, step, njobs=1, progressbar=True)  # draw 5000 posterior samples using NUTS sampling
Answer 0 (score: 0)
When you say "Is there a way to specify that y_est takes a specific subset of df_y_vals to control this," I'm not sure what you're trying to do. Can you describe, for each value of y_est, which values of df_y_vals it should use? What is the shape of df_y_vals? What should the shape of y_est be? (7,)?
I suspect what you want is to index into df_y_vals using numpy advanced indexing, which works the same way in PyMC as in numpy. It's hard to say exactly without more information.
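To illustrate the idea, here is a minimal sketch of one way advanced indexing could be used with the data above. The names x_obs, correct_obs, cond_idx, and indexed_model are hypothetical (they do not appear in your code), and it assumes the 101 x 7 design is flattened to one row per trial:
import numpy as np
import theano.tensor as tt
import pymc3 as pm
# Flatten the 101 x 7 design into one row per trial, condition-major
x_obs = df_y_vals.values.T.flatten()         # stimulus intensity for every trial
correct_obs = df_correct.values.T.flatten()  # 0/1 outcome for every trial
cond_idx = np.repeat(np.arange(unique_xvals.size), xvals.size)  # condition index per trial
with pm.Model() as indexed_model:
    mu_slope = pm.Normal('mu_slope', mu=3, sd=1)
    sigma_slope = pm.Uniform('sigma_slope', lower=0.1, upper=2)
    slope = pm.Normal('slope', mu=mu_slope, sd=sigma_slope, shape=unique_xvals.size)
    thresh = pm.Uniform('thresh', lower=0.1, upper=1, shape=unique_xvals.size)
    k = -tt.log(((1 - 0.8) / (1 - 0.5)) ** (1 / thresh))
    # Advanced indexing: k[cond_idx], thresh[cond_idx], and slope[cond_idx] each have
    # one entry per trial, so y_est is a flat vector aligned with correct_obs
    y_est = 1 - (1 - 0.5) * tt.exp(-(k[cond_idx] * x_obs / thresh[cond_idx]) ** slope[cond_idx])
    likelihood = pm.Bernoulli('likelihood', p=y_est, observed=correct_obs)
This keeps one slope and one threshold per condition while the likelihood sees every trial individually, so nothing is broadcast to (101, 7).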