Hi Nele,
I will let others (maybe @Alexander Fengler ***@***.***> or @Krishn Bera
***@***.***>) weigh in on the error message, but I will note here that it
is likely better not to rely on WAIC or LOO for comparing the angle model
vs. the DDM. Since the models are nested, you can simply fit the angle
model and do inference on the theta parameter: if it differs significantly
from zero, the angle model is likely the better one. A formal way to do
this model comparison is the Savage-Dickey density ratio test: specify a
prior for theta that overlaps with zero, then compare the density of the
posterior at the null hypothesis (theta = 0) to the density of the prior at
zero. The ratio of those two values gives you the Bayes factor, which is
usually what the information criteria are trying to approximate anyway, but
this way it does not rely on trial-wise likelihood estimates, only on the
posterior.
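A minimal sketch of this density ratio, assuming a Normal(0, prior_sd) prior on theta; the function name and the fake posterior draws are illustrative only, and with HSSM you would pass the flattened theta draws from your trace instead:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def savage_dickey_bf10(posterior_draws, prior_sd, null_value=0.0):
    """BF10 = prior density at the null / posterior density at the null.

    Values > 1 favor the alternative (theta != null_value). The posterior
    density at the null is approximated with a Gaussian KDE over the draws.
    """
    posterior_density = gaussian_kde(posterior_draws)(null_value)[0]
    prior_density = norm.pdf(null_value, loc=0.0, scale=prior_sd)
    return prior_density / posterior_density

# Illustrative use with fake draws; with HSSM you would pass something like
# idata.posterior["theta"].values.ravel() instead.
rng = np.random.default_rng(0)
draws = rng.normal(0.5, 0.2, size=4000)  # posterior mass well away from zero
bf10 = savage_dickey_bf10(draws, prior_sd=1.0)
```

Because the posterior here sits clearly away from zero, bf10 comes out greater than 1, i.e. evidence in favor of the (nonzero-theta) angle model.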
We will add Savage-Dickey to the model-comparison tutorials, but in the
meantime let us know if this makes sense or if you need more clarification.
Michael
Michael J Frank, PhD | Edgar L. Marston Professor
Director, Carney Center for Computational Brain Science
<https://www.brown.edu/carney/ccbs>
Laboratory of Neural Computation and Cognition <https://www.lnccbrown.com/>
Brown University
website <http://ski.clps.brown.edu>
…On Wed, Jan 14, 2026 at 4:36 PM Nele-JB ***@***.***> wrote:
Hi everyone,
I am using HSSM to estimate and compare hierarchical DDMs and angle models.
When using arviz.compare() to perform leave-one-out cross-validation (LOO),
I receive this warning for each model:
'Estimated shape parameter of Pareto distribution is greater than 0.70 for
one or more samples. [...] This is more likely to happen with a non-robust
model and highly influential observations.'
The hierarchical models I estimate indeed have a rather complex fixed and
random effects structure, but converge very well (in terms of ESS, R-hat,
etc.). Highly influential observations (in the form of drastic RT outliers)
are not present in the data.
I have already increased the sample size during importance sampling (a
recommended way to reduce k, per the official Stan forum), but this doesn't
change anything. A related warning occurs when computing WAIC.
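For reference, the per-observation Pareto k estimates can be pulled out directly, which helps distinguish "a few influential trials" from "k is high everywhere" (the latter is common for hierarchical models with per-participant parameters). A sketch using a synthetic InferenceData as a stand-in for a fitted HSSM trace:

```python
import numpy as np
import arviz as az

# Synthetic stand-in for a fitted trace: 4 chains x 100 draws, 50 trials.
# With HSSM you would use the InferenceData returned by sampling, which
# must contain a log_likelihood group for az.loo to work.
rng = np.random.default_rng(0)
idata = az.from_dict(
    posterior={"v": rng.normal(size=(4, 100))},
    log_likelihood={"rt": rng.normal(loc=-1.0, size=(4, 100, 50))},
)

# pointwise=True exposes the per-observation Pareto shape estimates,
# so you can see whether high k is driven by a handful of trials.
loo_res = az.loo(idata, pointwise=True)
k_values = np.asarray(loo_res.pareto_k)
n_bad = int((k_values > 0.7).sum())  # trials with unreliable PSIS weights
```

az.plot_khat(loo_res) visualizes the same values per observation; if most k values exceed 0.7 rather than a handful, the problem is the flexibility of the model (each group-level parameter is informed by few observations) rather than outliers.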
I would be glad for advice on how to treat this warning. Does anyone have
experience with, or advice on, obtaining more reliable fit indices here? Or
is this warning somewhat 'normal' behavior for more complex hierarchical
SSMs, given that some of the hierarchical models at
https://lnccbrown.github.io/HSSM/tutorials/scientific_workflow_hssm/#quantitative-model-comparison
receive similar warnings?
Thanks very much in advance for your response.
Nele