Make MCMC calibration more modular for various calibration options #151

@btobers

Description

MCMC calibration is currently hard-coded to expect that the 0th element in the observation and model prediction lists is the glacier-wide mass balance, and the 1st element is the 1D elevation change (if option_calib_elev_change_1d):

    def log_likelihood(self, m):
        log_likelihood = 0
        for i, pred in enumerate(self.preds):
            # --- Check for invalid predictions early ---
            if torch.all(pred == float('-inf')):
                # Invalid model output -> assign -inf likelihood
                return torch.tensor([-float('inf')])

            if i == 0:
                # --- Base case: mass balance likelihood ---
                log_likelihood += log_normal_density(
                    self.obs[i][0],  # observed values
                    mu=pred,  # predicted values
                    sigma=self.obs[i][1],  # observation uncertainty
                )

            elif i == 1 and len(m) > 3:
                # --- Extended case: apply density scaling to get binned elevation change ---
                # Create density field, separate values for ablation/accumulation zones
                rho = np.ones_like(self.bin_z)
                rho[self.abl_mask] = m[3]  # rhoabl
                rho[~self.abl_mask] = m[4]  # rhoacc
                rho = torch.tensor(rho)
                self.preds[i] = pred = (
                    self.preds[i] * (pygem_prms['constants']['density_ice'] / rho[:, np.newaxis])
                )  # scale prediction by modeled density (convert m ice to m thickness change)

                log_likelihood += log_normal_density(
                    self.obs[i][0],  # observations
                    mu=pred,  # scaled predictions
                    sigma=self.obs[i][1],  # uncertainty
                )
        return log_likelihood

This leaves little flexibility for calibrating against additional datasets or toggling individual calibration targets on and off. We should instead pass the observations and predictions to pygem.mcmc.mbPosterior as dictionaries with paired keys, so each dataset's likelihood term is evaluated by key rather than by positional index, making the sampling framework more robust.
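A minimal sketch of what the dictionary-based version could look like. The key names (`'mb'`, `'dh_1d'`), the `mbPosterior` constructor signature, and the `log_normal_density` helper shown here are illustrative assumptions, not the current pygem API:

```python
import torch


def log_normal_density(x, mu, sigma):
    # Gaussian log-density, summed over all observations
    # (hypothetical helper standing in for pygem's version)
    return torch.sum(
        -0.5 * ((x - mu) / sigma) ** 2
        - torch.log(sigma)
        - 0.5 * torch.log(torch.tensor(2 * torch.pi))
    )


class mbPosterior:
    def __init__(self, obs, preds):
        # obs and preds are dicts with paired keys, e.g.
        #   obs   = {'mb': (values, sigma), 'dh_1d': (values, sigma)}
        #   preds = {'mb': tensor, 'dh_1d': tensor}
        assert obs.keys() == preds.keys(), 'observation/prediction keys must match'
        self.obs = obs
        self.preds = preds

    def log_likelihood(self, m):
        log_likelihood = torch.tensor(0.0)
        for key, pred in self.preds.items():
            # --- Check for invalid predictions early ---
            if torch.all(pred == float('-inf')):
                return torch.tensor([-float('inf')])
            vals, sigma = self.obs[key]
            # Each dataset contributes one likelihood term, looked up by key
            # rather than positional index; dataset-specific transforms (e.g.
            # the density scaling above) could be dispatched on the key here.
            log_likelihood = log_likelihood + log_normal_density(vals, mu=pred, sigma=sigma)
        return log_likelihood
```

With this structure, enabling or disabling a calibration dataset is just adding or dropping a key from both dicts, and new observation types need no changes to the loop itself.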
