Subgroup analysis #18
Comments
+1 and +1 to addressing it later.
-> tern.mmrm does this for mmrm: https://github.com/insightsengineering/tern.mmrm
Interesting. Looking at the README, I see:
Internally at my company, for the Bayesian case, so far we fit a single model to all the data with subgroup-specific fixed effects, then show a bunch of posterior summaries of contrasts. We don't use a Bayesian equivalent of openpharma/mmrm#164, and I am not sure if such a method exists, but maybe it would be good to eventually find out.
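For illustration, here is a minimal sketch of that single-model-plus-contrasts workflow using plain brms, with emmeans (which supports brms fits) for the posterior contrast summaries. The dataset and column names (`analysis_data`, `change`, `baseline`, `treatment`, `subgroup`, `visit`, `patient`) are assumptions, not the actual internal code:

```r
library(brms)
library(emmeans)

# One model for all the data, with subgroup-specific fixed effects and an
# unstructured residual covariance over visits within each patient.
# Variable names are hypothetical.
fit <- brm(
  change ~ baseline + treatment * subgroup * visit +
    unstr(time = visit, gr = patient),
  data = analysis_data,
  family = gaussian()
)

# Posterior summaries of treatment contrasts within each subgroup level
# and visit, computed from the posterior draws.
marginals <- emmeans(fit, ~ treatment | subgroup + visit)
contrast(marginals, method = "revpairwise")
```

Because emmeans operates on the posterior draws of a brmsfit, the contrast summaries here are posterior quantities rather than frequentist estimates.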
I plan to start implementing subgroup models soon in order to support internal work. Statistically, it seems clear enough how to add a subgroup factor to the model and then estimate meaningful subgroup-specific marginal means. From my perspective, the new part is figuring out how to measure the impact of the subgroup as a whole (as opposed to any individual subgroup level).

In the frequentist case, this sort of thing is a straightforward model comparison problem: run one model with the subgroup, run a nested model without the subgroup, and compare them with an F test. For Bayesian models, this could easily lead us to Bayes factors. I'm not really a fan of Bayes factors because they are difficult to interpret. And as far as the computation goes, bridge sampling is as good as it gets (https://cran.r-project.org/web/packages/bridgesampling/vignettes/bridgesampling_tutorial.pdf), and I have seen it diverge and produce inconsistent results a lot when computing marginal likelihoods for MMRM-like models. If we really want something like a Bayes factor, the posterior odds might be more feasible using mixture importance sampling: https://arxiv.org/abs/2209.09190. At the very least, this would require 4 model fits instead of 2, and 2 of those runs would need to have an extra

Alternatively, we could straightforwardly list and compare the DIC and/or WAIC of the subgroup model vs the reduced model. These quantities would have serious problems if used directly for model averaging, but maybe the metrics themselves are enough.

Visually, we could compute the overall marginals of each model (by treatment and visit, averaging over subgroup for both models) and then compare them using

That covers the standard techniques I am aware of. So far, I am in favor of DIC/WAIC and visualization. Am I missing anything? Ideally, for a package like
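To make the WAIC option concrete, here is a hedged sketch that fits the subgroup model and the reduced model with plain brms and compares them via loo_compare(). The formulas and column names are assumptions, not the package's generated code:

```r
library(brms)

# Hypothetical full model with the subgroup terms.
fit_subgroup <- brm(
  change ~ baseline + treatment * subgroup * visit +
    unstr(time = visit, gr = patient),
  data = analysis_data, family = gaussian()
)

# Hypothetical reduced model without the subgroup terms.
fit_reduced <- brm(
  change ~ baseline + treatment * visit +
    unstr(time = visit, gr = patient),
  data = analysis_data, family = gaussian()
)

# Attach WAIC to each fit and compare.
fit_subgroup <- add_criterion(fit_subgroup, criterion = "waic")
fit_reduced <- add_criterion(fit_reduced, criterion = "waic")
loo_compare(fit_subgroup, fit_reduced, criterion = "waic")
```

loo_compare() reports the difference in expected log predictive density along with its standard error, which is easier to calibrate than a raw WAIC difference.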
Current branch for subgroup: https://github.com/openpharma/brms.mmrm/tree/18. A tentative roadmap is below, and it will depend on team consensus. For each function below, either the function itself needs to handle the subgroup, or
I think we have everything in place for subgroup so far, unless there are ideas for model comparison techniques other than
It would be great to discuss in tomorrow's meeting what else we might want as far as the subgroup analysis. The functionality is in https://openpharma.github.io/brms.mmrm/articles/subgroup.html, and the validation of the subgroup model is part of the "complex scenario" in https://openpharma.github.io/brms.mmrm/articles/sbc.html.
Wrapped up in #79.
Suggested by @chstock. I picture a model with a subgroup main effect, a subgroup/treatment interaction, and a subgroup/treatment/time interaction (sketched at the end of this comment), with the following summaries:
We would also want a test of the treatment/subgroup interaction as a whole, cf. openpharma/mmrm#164.
I think this issue deserves to be addressed eventually, but the base case and informative priors are more important.
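As a rough picture of the model described above, here is a sketch of a brms formula with the subgroup terms plus the usual treatment-by-visit MMRM structure. Variable names are assumptions, and this is not necessarily the formula the package would generate:

```r
library(brms)

# Fixed effects: baseline covariate, the usual treatment-by-visit terms,
# plus subgroup, subgroup-by-treatment, and subgroup-by-treatment-by-visit
# interactions. unstr() gives an unstructured covariance over visits
# within each patient, and the sigma formula allows visit-specific
# residual standard deviations. All variable names are hypothetical.
formula_subgroup <- bf(
  change ~ baseline + treatment + visit + treatment:visit +
    subgroup + subgroup:treatment + subgroup:treatment:visit +
    unstr(time = visit, gr = patient),
  sigma ~ 0 + visit
)
```

The whole-interaction test in the spirit of openpharma/mmrm#164 could then be approximated by comparing this model against one without the subgroup terms, as discussed above.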