From 9686489fa042060cf7c69c48223ac43cbf579f07 Mon Sep 17 00:00:00 2001 From: David Augustin Date: Thu, 4 Jul 2024 07:38:01 +0100 Subject: [PATCH 1/7] Update README.md --- README.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/README.md b/README.md index 8e3c9ccc..1eef5f7d 100644 --- a/README.md +++ b/README.md @@ -8,8 +8,7 @@ ## About -**Chi** is an open source Python package hosted on GitHub, -which can be used to model dose response dynamics. +**Chi** is an open source Python package for pharmacokinetic and pharmacodynamic (PKPD) modelling. All features of the software are described in detail in the [full API documentation](https://chi.readthedocs.io/en/latest/). From 22e8510b4231bc6822da25e9a517a507632bcf68 Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 08:40:00 +0200 Subject: [PATCH 2/7] doc tweak --- .../fitting_models_to_data.rst | 58 +++++++++---------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/docs/source/getting_started/fitting_models_to_data.rst b/docs/source/getting_started/fitting_models_to_data.rst index 00e49373..85466518 100644 --- a/docs/source/getting_started/fitting_models_to_data.rst +++ b/docs/source/getting_started/fitting_models_to_data.rst @@ -27,7 +27,7 @@ dynamics and to optimise dosing regimens to target a desired treatment response. However, at this point, the simulated treatment responses have little to do with real treatment responses. To describe *real* treatment -responses that we may observe in clinical practice, we need to somehow connect +responses, i.e. treatment responses that we may observe in clinical practice, we need to somehow connect our model to reality. The most common approach to relate models to real treatment responses is to @@ -75,13 +75,13 @@ for a given model structure. 
Estimating model parameters from data: Background ************************************************* -Before we can try to find parameter values that describe the observed -treatment response most closely, we first need to agree on what we mean by -"*most closely*" for the relationship between the mechanistic model output and the measurements. -An intuitive way to define this notion of closeness is to use the distance +Before we can try to find better parameter values that describe the observed +treatment response, we first need to agree on what we mean by +"*better*" for the relationship between the mechanistic model output and the measurements. +An intuitive notion of "better" is "closer", quantified by the distance between the measurements and the model output, i.e. the difference between the measured values and the -simulated values. Then the model parameters that most closely +simulated values. Then the model parameters that best describe the measurements would be those parameter values that make the mechanistic model output perfectly match the measurements, resulting in distances of 0 ng/mL between the model output and the measurements at all measured time points. @@ -90,26 +90,25 @@ However, as outlined in Sections 1.3 and 1.4 of the noisy, and will therefore not perfectly represent the treatment response dynamics. Consequently, if we were to match the model outputs to measurements perfectly, we would end up with an inaccurate description of the treatment response -as our model would be paying too much attention to the measurement noise. +that is corrupted by measurement noise. -One way to improve our notion of closeness is to incorporate the measurement -process into our computational model of the treatment response, thereby -explicitly stating that we do not expect the mechanistic model output to match -the measurements perfectly. 
In Chi, this can be done +One way to overcome this limitation is to change our notion of "better" and incorporate the measurement +process into our computational model of the treatment response. This makes explicit +that we do not expect the mechanistic model output to match +the measurements perfectly. In Chi, the measurement process can be captured using :class:`chi.ErrorModel` s. Error models promote the single value output of mechanistic model simulations to a distribution of values. This distribution characterises a range of values around the mechanistic model output where measurements may be expected. -For simulation, this distribution can be used to sample measurement values and +We can use this measurement distribution in two ways: 1. for simulation; and 2. for +parameter estimation. For simulation, the distribution +can be used to sample measurement values and imitate the measurement process of real treatment responses, see Section 1.3 in the :doc:`quick_overview` for an example. For parameter estimation, the distribution can be used to quantify the likelihood with which the observed measurements would have been generated by our model, -see Section 1.4 in the :doc:`quick_overview`. To account for measurement noise -during the parameter estimation, we therefore -choose to quantify the closeness between the model output an the measurements -using likelihoods. +see Section 1.4 in the :doc:`quick_overview`. Formally, we denote the measurement distribution by :math:`p(y | \psi, t, r)`, where :math:`y` denotes the measurement value, :math:`\psi` denotes the model parameters, @@ -122,7 +121,7 @@ of the measurement distribution evaluated at the measurement, :math:`p(y_1 | \psi, t_1, r^*)`. Note that this likelihood depends on the choice of model parameters, :math:`\psi`. The model parameters with the maximum likelihood are -the parameter values that most closely describe the measurements. +the parameter values that "best" describe the measurements. .. 
note:: The measurement distribution, :math:`p(y | \psi, t, r)`, is defined @@ -144,7 +143,7 @@ the parameter values that most closely describe the measurements. we extend the definition of the model parameters to include :math:`\sigma`, :math:`\psi = (a_0, k_a, k_e, v, \sigma)`. - We can see that the model output + We can see that the mechanistic model output defines the mean or Expectation Value of the measurement distribution. 2. If we choose a :class:`chi.LogNormalErrorModel` to describe the difference @@ -154,7 +153,7 @@ the parameter values that most closely describe the measurements. .. math:: p(y | \psi, t, r) = \frac{1}{\sqrt{2\pi \sigma ^2}}\frac{1}{y}\mathrm{e}^{-\big(\log y - \log c(\psi, t, r) + \sigma / 2\big)^2 / 2\sigma ^2}. - One can show that also for this distribution the model output defines the mean + One can show that also for this distribution the mechanistic model output defines the mean or Expectation Value of the measurement distribution. The main difference between the two distributions is the shape. The @@ -306,14 +305,14 @@ likelihood-prior product over the full parameter space, :math:`p(\mathcal{D}) = \int \mathrm{d} \psi \, p(\mathcal{D}, \psi ) = \int \mathrm{d} \psi \, p(\mathcal{D}| \psi )\, p(\psi)`. This renders the value of the constant shift for all intents and purposes unknown. -The unknown shift makes it impossible to make statements about the absolute probability -of parameter values. However, it does allow for relative comparisons of -probabilities -- a fact exploited by MCMC algorithms to circumvent the limitation +The unknown shift makes it very difficult to make statements about the absolute probability +of parameter values from the :class:`chi.LogPosterior` alone. However, the unknown shift does allow for relative comparisons of +probabilities as the shift is the same for all parameter values -- a fact exploited by MCMC algorithms to circumvent the limitation of the partially known log-posterior. 
MCMC algorithms use the relative comparison of parameter probabilities to generate random samples from the posterior distribution, opening a gateway to reconstruct the distribution. The more random samples are generated, the closer the histogram over the samples will -approximate the posterior distribution. In fact, one can show that the histogram +approximate the original posterior distribution. In fact, one can show that the histogram will converge to the posterior distribution as the number of samples approaches infinity. This makes it possible for MCMC algorithms to reconstruct any posterior distribution from a :class:`chi.LogPosterior`. @@ -397,12 +396,12 @@ we can see in the second row of the figure that the marginal posterior distribut substantially differs from the marginal prior distribution. This is because the drug concentration measurements contain important information about the elimination rate, rendering rates above 1.5 1/day or below 0.25 1/day as extremely unlikely for the -model of the treatment response. This in in stark contrast to the relatively wide +model of the treatment response. This is in stark contrast to the relatively wide range of model parameters that we deemed feasible prior to the inference (see black line). However, the measurements are not conclusive enough to reduce the distribution of feasible elimination rates to a single value. Similarly, for the volume of distribution (row 3) and the error scale parameter -(row 4), the measurements lead to substaintial updates relative to the +(row 4), the measurements lead to substantial updates relative to the prior distribution. In comparison, the measurements appear less informative about the absorption rate (see row 1), given that the marginal posterior distribution of @@ -437,9 +436,10 @@ Let us begin this section by revisiting the right column in the figure above. Th shows the samples from the three MCMC algorithm runs at each iteration. 
For early iterations of the algorithm, the samples from the MCMC runs look quite distinct -- each run appears to sample -from a different area of the parameter space. In contrast, -the MCMC runs seem to converge and sample from the same area of the parameter space -at later iterations. Intuitively, +from a different area of the parameter space. In contrast, at later iterations +the MCMC runs are harder to distinguish and sample from the same area of the parameter space. + +Intuitively, it does not really make sense for the samples from the MCMC runs to look different -- after all, we use the same MCMC algorithm to sample from the same posterior distribution. The histogram over the samples *should* therefore be identical within the limits of @@ -476,7 +476,7 @@ and is the particular choice of the *second* half important? The answer comes back to a common limitation of all MCMC algorithm which we can see in the right column of the figure presented earlier: MCMC algorithms generate samples from the posterior distribution conditional on the latest generated sample. -For some MCMC algorithms, this conditioning has little influences on sequential samples +For some MCMC algorithms, this conditioning has little influence on sequential samples because the internal sampling strategy is advanced enough to sufficiently decorrelate subsequent samples. But for many MCMC algorithms the conditioned sample substantially influences the sampled value. 
That From bde22919934f36ed9f07131bce261b044cf71fae Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 08:43:27 +0200 Subject: [PATCH 3/7] Update os version testing to use Py3.11 --- .github/workflows/unit-test-os-versions.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/unit-test-os-versions.yml b/.github/workflows/unit-test-os-versions.yml index 0191c807..5f6573a1 100644 --- a/.github/workflows/unit-test-os-versions.yml +++ b/.github/workflows/unit-test-os-versions.yml @@ -19,10 +19,10 @@ jobs: steps: - uses: actions/checkout@v1 - - name: Set up Python 3.8 + - name: Set up Python 3.11 uses: actions/setup-python@v1 with: - python-version: 3.8 + python-version: 3.11 architecture: x64 - name: install sundials (ubuntu) @@ -30,7 +30,7 @@ jobs: run: | sudo apt-get update sudo apt-get install libsundials-dev - + - name: install sundials (macos) if: ${{ matrix.os == 'macos-latest' }} run: | From 3f36dd93f4842e45712f729219359d0c1b324455 Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 09:05:24 +0200 Subject: [PATCH 4/7] Fix scipy - arviz clash --- setup.py | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/setup.py b/setup.py index eacc7577..8565e145 100644 --- a/setup.py +++ b/setup.py @@ -9,7 +9,7 @@ setup( # Module name name='chi-drm', - version='1.0.0', + version='1.0.1', description='Package to model dose response dynamics', long_description=readme, long_description_content_type="text/markdown", @@ -36,8 +36,9 @@ 'pandas>=0.24', 'pints>=0.4', 'plotly>=4.8.1', + 'scipy<=1.12', # 07/2024 - ArviZ seems to not yet keep up with SciPy 'tqdm>=4.46.1', - 'xarray>=0.19' + 'xarray>=0.19', ], extras_require={ 'docs': [ @@ -49,4 +50,4 @@ 'jupyter==1.0.0', ] }, -) +) \ No newline at end of file From 11e8f31d19a78774757966d975c57ac97b4cbe52 Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 09:10:03 +0200 Subject: [PATCH 5/7] flake 8 --- setup.py | 2 +- 1 file changed, 1 insertion(+), 1 
deletion(-) diff --git a/setup.py b/setup.py index 8565e145..ec56d640 100644 --- a/setup.py +++ b/setup.py @@ -50,4 +50,4 @@ 'jupyter==1.0.0', ] }, -) \ No newline at end of file +) From 2d34c816d904d1ac4e6b3e3b1f303c305d11f5a7 Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 09:36:06 +0200 Subject: [PATCH 6/7] Fix workflow --- .github/workflows/unit-test-os-versions.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/unit-test-os-versions.yml b/.github/workflows/unit-test-os-versions.yml index 5f6573a1..fb61a778 100644 --- a/.github/workflows/unit-test-os-versions.yml +++ b/.github/workflows/unit-test-os-versions.yml @@ -23,7 +23,6 @@ jobs: uses: actions/setup-python@v1 with: python-version: 3.11 - architecture: x64 - name: install sundials (ubuntu) if: ${{ matrix.os == 'ubuntu-latest' }} From ef679820101880118a5db2b5ea8eb7f8a2b0cd3f Mon Sep 17 00:00:00 2001 From: DavAug Date: Thu, 4 Jul 2024 11:27:29 +0200 Subject: [PATCH 7/7] Fix workflows 2 --- .github/workflows/unit-test-os-versions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/unit-test-os-versions.yml b/.github/workflows/unit-test-os-versions.yml index fb61a778..83a9be94 100644 --- a/.github/workflows/unit-test-os-versions.yml +++ b/.github/workflows/unit-test-os-versions.yml @@ -20,7 +20,7 @@ - uses: actions/checkout@v1 - name: Set up Python 3.11 - uses: actions/setup-python@v1 + uses: actions/setup-python@v5 with: python-version: 3.11