Rob, Gerasimos, and Alex,
In sussing out the CI stuff, I discovered that the Bayesian shrinkage we're getting with extreme score estimates may be too large to be tolerated, especially in the context of the precision added by the SD = 10 prior. The concern is that a clinician might administer a 10-item short form in the acute setting, obtain 10 incorrect responses, and get a T-score (SEM) of 26.6 (5.01), then administer the full test a month later, on which the person also gets zero correct, leading to an estimate (SEM) of 17.7 (4.01). If they apply the correct math (which they probably won't, unless we do it for them), that will suggest a one-tailed 92% probability that the patient has gotten worse. Arguably that might be an appropriate conclusion under normal circumstances, but Gerasimos convinced me this is something to be concerned about.

As I write this, I'm thinking that a quick Monte Carlo simulation with a constant extreme low generating theta, comparing score estimates for the CAT-10 and the full test, might be in order.

In any case, if this shrinkage is too much to bear, options for addressing it include EAP with a uniform prior or ML estimation with fences: two dummy items at the extremes of the ability range that are always administered and scored correct (for the low one) and incorrect (for the high one), to put some bounds on the ML procedure. Apparently fences produce less error than EAP (https://journals.sagepub.com/doi/full/10.1177/0146621616631317?casa_token=iSbh34Qp4woAAAAA%3A6VadRw5h1_Nhg3hA_vxQEjK1LrrAjAwCV5e3jHCgT_lHjPmkVG5g4mzAkQLJI6HCiCKVJgYHo61GXw). catR will implement this easily, and I'm currently playing around with it.
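
To make the shrinkage concrete, here is a minimal catR sketch of the comparison I have in mind: the all-incorrect case on a 10-item short form versus a full-length bank, scored with EAP under a standard normal prior on theta (i.e., the SD = 10 prior on the T metric) and with ML plus two fence items. Everything in it is an illustrative assumption rather than our operational setup: the 2PL bank is simulated with genDichoMatrix(), and the bank sizes (50 items, with the first 10 standing in for the short form), the fence difficulties (b = ±4), and the T = 50 + 10·theta conversion are all placeholders.

```r
library(catR)

## Hypothetical 2PL bank: 50 simulated items; first 10 stand in for the short form.
set.seed(1)
bank  <- as.matrix(genDichoMatrix(items = 50, model = "2PL"))  # columns a, b, c, d
short <- bank[1:10, ]

toT <- function(th) 50 + 10 * th    # theta (z) metric -> T-score metric (assumed conversion)

## The worrying case: every administered item answered incorrectly.
x_short <- rep(0, nrow(short))
x_full  <- rep(0, nrow(bank))

## EAP with the N(0, 1) prior on theta (the SD = 10 prior on the T metric).
eap <- function(it, x) {
  est <- thetaEst(it, x, method = "EAP", priorDist = "norm", priorPar = c(0, 1))
  c(est = est, sem = semTheta(est, it, x, method = "EAP"))
}

## ML with fences: two dummy items at the extremes of the ability range, always
## "administered" and scored correct (low fence) / incorrect (high fence).
fences <- rbind(c(1, -4, 0, 1),   # low fence,  b = -4, scored correct
                c(1,  4, 0, 1))   # high fence, b = +4, scored incorrect
ml_fence <- function(it, x) {
  it2 <- rbind(it, fences)        # append the fence items to the administered set
  x2  <- c(x, 1, 0)               # low fence correct, high fence incorrect
  est <- thetaEst(it2, x2, method = "ML")
  c(est = est, sem = semTheta(est, it2, x2, method = "ML"))
}

out <- rbind("EAP, 10-item"       = eap(short, x_short),
             "EAP, full bank"     = eap(bank,  x_full),
             "ML+fences, 10-item" = ml_fence(short, x_short),
             "ML+fences, full"    = ml_fence(bank,  x_full))
round(cbind(T = toT(out[, "est"]), SEM = 10 * out[, "sem"]), 1)
```

The Monte Carlo version I mentioned would just wrap the same comparison in replications of genPattern() at a fixed extreme low generating theta, rather than hard-coding the all-zero pattern.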