Very interesting question; here's my take on it.

It's all about encoding information and then turning the Bayesian crank. It seems too good to be true, but both of these are harder than they seem.

I start by asking: what information is being used when we worry about multiple comparisons? I can think of some. The first is "data dredging": test "everything" until you get enough passes/fails (I would think almost every stats-trained person has been exposed to this problem). You also have the less sinister, but essentially identical, "I have so many tests to run - surely they can't all be correct."

After thinking about this, one thing I notice is that you don't tend to hear much about specific hypotheses or specific comparisons. It's all about the "collection", and this points my thinking towards exchangeability: the hypotheses being compared are "similar" to each other in some way. And how do you encode exchangeability into a Bayesian analysis? Hyper-priors, mixed models, random effects, etc. (A minimal partial-pooling sketch is given below.)

But exchangeability only gets you part of the way there. Is everything exchangeable? Or do you have "sparsity", such as only a few non-zero regression coefficients within a large pool of candidates? Mixed models and normally distributed random effects don't work here. They get "stuck" between squashing noise and leaving signals untouched (e.g. in your example, keep the locationB and locationC "true" parameters equal, set the locationA "true" parameter arbitrarily large or small, and watch the standard linear mixed model fail). This case can be handled with "spike and slab" priors or "horseshoe" priors; the second sketch below shows the difference.

So it's really about describing what sort of hypothesis you are talking about and getting as many known features as possible reflected in the prior and likelihood. Andrew Gelman's approach is just a way to handle a broad class of multiple comparisons implicitly, much as least squares and normal distributions tend to work well in most cases (but not all). In terms of how it does this, you could think of a person reasoning as follows: "group A and group B might have the same mean - I looked at the data, and the means are 'close'."

There is also a sequential-testing angle to the question. The problem stems from the frequentist's reversal of the flow of time and information, which forces frequentists to consider what could have happened instead of what did happen. In a group sequential setting, an early comparison of A vs. B must be penalized for a later comparison that has not yet been made, and a later comparison must be penalized for an earlier one even if the earlier comparison did not alter the course of the study.

In contrast, Bayesian assessments anchor everything to the prior distribution, which calibrates the evidence. For example, the prior distribution for the A-B difference calibrates all future assessments of A-B and does not have to consider C-D. With sequential testing there is great confusion about how to adjust point estimates when an experiment is terminated early under frequentist inference. In the Bayesian world, the prior "pulls back" on any point estimate, and the updated posterior distribution applies to inference at any time, with no complex sample-space considerations (the third sketch below walks through an early-stopping example).
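To make the exchangeability point concrete, here is a minimal partial-pooling sketch: a normal-normal hierarchical model with the between-group variance estimated by method of moments. All the numbers (group count, sample sizes, variances) are hypothetical, chosen only to show how noisy group means get pulled toward the grand mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 8 exchangeable group means drawn from a common population
# (hypothetical numbers, purely for illustration).
n_groups, n_per_group = 8, 20
tau_true, sigma = 1.0, 2.0                   # between- and within-group sd
theta = rng.normal(0.0, tau_true, n_groups)  # "true" group effects
y = theta[:, None] + rng.normal(0.0, sigma, (n_groups, n_per_group))

ybar = y.mean(axis=1)            # raw per-group means
se2 = sigma**2 / n_per_group     # sampling variance of each group mean

# Empirical-Bayes estimate of the between-group variance tau^2
# (method of moments, floored at zero).
tau2_hat = max(ybar.var(ddof=1) - se2, 0.0)

# Partial pooling: each mean is pulled toward the grand mean by a factor
# reflecting how noisy it is relative to the between-group spread.
shrink = tau2_hat / (tau2_hat + se2)
theta_hat = ybar.mean() + shrink * (ybar - ybar.mean())

print("raw means:   ", np.round(ybar, 2))
print("pooled means:", np.round(theta_hat, 2))
print("shrinkage factor: %.2f" % shrink)
```

Every group mean is treated symmetrically here, which is exactly what exchangeability buys you: no comparison-specific penalty, just a shared prior doing the correction implicitly.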
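The "stuck in between" behaviour can be seen from the closed-form posterior means for a single observation per effect. In the sketch below, a plain normal prior (which is what a standard random effect amounts to) shrinks every estimate by the same fraction, while a spike-and-slab prior squashes small effects toward zero and passes a large one, like the hypothetical locationA case above, through nearly unshrunk. The prior settings (inclusion probability 0.1, slab variance 25) are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy.stats import norm

# One observation per effect: y ~ N(theta, 1).  Compare posterior means
# of theta under (a) a single normal prior and (b) a spike-and-slab prior.

def post_mean_normal(y, v):
    """theta ~ N(0, v): linear shrinkage, same fraction for every y."""
    return v / (1.0 + v) * y

def post_mean_spike_slab(y, p=0.1, v=25.0):
    """theta = 0 with prob 1-p, else theta ~ N(0, v) (the slab)."""
    m_spike = norm.pdf(y, 0.0, 1.0)               # marginal if theta == 0
    m_slab = norm.pdf(y, 0.0, np.sqrt(1.0 + v))   # marginal under the slab
    p_slab = p * m_slab / (p * m_slab + (1 - p) * m_spike)
    return p_slab * v / (1.0 + v) * y

for y in [0.5, 2.0, 6.0]:
    print(f"y = {y:4.1f}  normal prior -> {post_mean_normal(y, 1.0):5.2f}   "
          f"spike-and-slab -> {post_mean_spike_slab(y):5.2f}")
```

Running this, the spike-and-slab estimate for y = 0.5 is nearly zero while the y = 6.0 estimate stays close to 6; the normal prior instead shrinks both by the same 50%, over-shrinking the signal and under-shrinking the noise.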
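Finally, a sketch of the sequential-testing point, assuming a simple conjugate normal model with known noise scale: the posterior for the A-B difference is updated after every observation, is a valid summary at every look, and the skeptical prior pulls the point estimate back toward zero. The effect size, prior scale, and stopping threshold are all made-up illustration values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Sequential A-B comparison: observe paired differences d_i ~ N(delta, sigma^2)
# and look at the data after every observation.  The posterior is the same
# whether or not we planned to stop early; no penalty for repeated looks.
delta_true, sigma = 0.4, 1.0
mean, var = 0.0, 0.5**2        # skeptical prior on the A-B difference

for i in range(1, 201):
    d = rng.normal(delta_true, sigma)
    # Conjugate normal update with known sigma.
    precision = 1.0 / var + 1.0 / sigma**2
    mean = (mean / var + d / sigma**2) / precision
    var = 1.0 / precision
    p_positive = 1.0 - norm.cdf(0.0, mean, np.sqrt(var))
    if p_positive > 0.975:     # arbitrary monitoring threshold
        print(f"stopped at n = {i}: posterior mean {mean:.3f}, "
              f"P(delta > 0 | data) = {p_positive:.3f}")
        break
```

Note that the reported posterior mean at the stopping time is already "pulled back" by the prior; there is no separate adjustment step for having monitored continuously, and a prior on a C-D difference would play no role here.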