Commit

Optimal Control extension (#247)
* implemented first method for noisy systems

* example folder

* added comments and description

* start readme

* Update README.md

* added contributing file

* Update README.md

* Update README.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update README.md

* Update README.md

* update with code formatter

* - edited docstring of __init__

* added warning and info print for unreasonable parameter choices

* revised code

* - docstrings according to Sphinx docstring format
- updated test cases according to changed default parameters
- applied automated code formatting to all changed files and test-files

* Update CONTRIBUTING.md

* - code formatting with --line-length=120

* - code formatting with --line-length=120

* adjusted shapes of arrays, no network yet

* implemented control matrix and precision matrix

* starting testing

* first functioning version without delay

* functional tested version of network OC for fhn

* cleaned up

* stochastic method

* - fix dimensions and slices according to network-compatible dimensionalities (i.e. NxVxT)

* - noisy network example

* check cmat diagonal entries

* - added some description for noisy network example and made the setting fully equivalent to deterministic model example (prec_mat & control_mat)

* adjusted test setting, updated example files for noise case

* adjusted test setting, updated example files for noise case

* added tests in test_fhn_noisy and test_fhn

* speed up solve_adjoint method

* - removed unused matplotlib import in test_fhn.py
- Cmat entries set to interval [0,1] in test_fhn.py and deterministic example

* adjust parameters in test_fhn

* - baseclass for OC
- inherit from OC-baseclass for OC of fhn, passes all tests for fhn-OC
- updated docstrings
- use "isinstance(x, type(None))" for None-checks

* - skeleton for OC for Hopf model
- Jacobian for Hopf model
- code formatting

* - optimal control for Hopf model with quick visual evaluation of the performance in the example files

* - fix overwriting attributes of the neurolib-model that is passed to OC -> 'self.model' is deepcopy of the passed 'model'
- restructured: jit-functions into the file of derived OC class definition
- updated comments & documentation
- identified "compute_gradient" as model dependent in its current implementation
- Dxdot is currently unused for Hopf & FHN
- single test case for Hopf, rely on tests for FHN for shared methods for the moment
- updated dt in example to avoid numerical inaccuracies

* - add check for passed model in model specific OC
- place plotting functions of examples in oc-utility-folder
- provide a single example (using the FHN), since completely analogous for Hopf

* - add test_twonode_oc for deterministic hopf

* - remove fhn example, is replaced by combined example

* refactoring stochastic OC

* refactored stochastic control

* fixed bugs

* implemented wc model

* oc wc implemented and partially tested

* - improved performance of cost functions and derivatives (derivative_precision_cost x7, precision_cost x2 with better readability), all tests passed

* functional version wc optimal control

* remove file

* - remove explicit passing of parameter "N" to precision_cost function
- update of docstrings

* - simplified function, passes tests

* - fix: adjoint_state[t=0] was never computed

* - added function 'update_control_with_limit' to limit the absolute value of the control strength at all points in time. Passes tests.
- added new attribute 'maximum_control_strength' to the 'OC' class to set the absolute maximum value for the control strength
- test cases for the new function 'update_control_with_limit'

* - fix: added missing 'maximum_control_strength' as parameter in the derived oc-classes

* - fix: added scaling by time step of precision- and energy cost, added parameter in function calls
- adjusted corresponding test cases

* corrected missing background

* - interval in which the precision cost is evaluated is now a positional argument, to avoid repeated checks of the specification and to allow "negative index" notation in OC-object initialization
- added 'convert_interval' to allow for the different ways of interval specification
- updated function calls accordingly
- changed test structure: separate tests for the interval-specification

* - added: apply maximal-absolute-control-strength constraint even before first optimization
- added: new test cases; passes all tests
- minor fixes

* - additional assertions for interval specification
- checks for repeated calls of ".optimize()"
- new test cases
- removed unused variable and old comments

* merged cost_functions_numba branch

* merged cost_functions_numba branch

* - adjusted to naming 'Duh' for Jacobian
- jit the compute_gradient-functions
- added documentation & docstrings

* removed unnecessary line of code

* remove background from oc.py

* - adaptive step size computation:
     * adaptive step-size reduction or -increment, reduces relevance of hyperparameter for initial step size
     * memorize last step, heavily increases efficiency in many cases
- added very generic test case for step-size computation based on derived OcWc-class
- "convert_interval" not further numba-compiled
- improved documentation, added type-description in model specific OC classes
- added todo-hints for automated control-adjustments
- set default of "precision_cost_interval" in OcWc to the same values as for the other models

* - add counter=0 in case of zero gradient
- remove the distinction between (non-)noisy steps within the noise-free step size computation, which is not required at this point
- add test case for zero-step

* - combine the step-size computation for the noisy and the noise-free setting in the adaptive step-size algorithm

* - shared "factor_down" for noisy and deterministic case
- "factor_down**2" in loop for finding numerically stable regime for faster exit of loop
- call the new step-size function for noisy case and delete unused method
-> passes all tests in test-oc and test-fhn-noisy
- adjust parameters in test case to range [0,1] in Cmat

* - revision of documentation
- changed variable names for clarity

* - revision of docstrings for clean documentation build
- fix typos and add comments
- rename variables and functions for clarity

* - fix mistake in equation

* implemented network delays for fhn, hopf, wc/ added tests for all three models

* update notebooks, fixed typo in oc.py

* removed commented debug code

* removed file accidentally imported from other branch

* Revert "removed file accidentially imported from other branch"

This reverts commit 0543c8a.

* - add comments
- variable naming for clarity
- remove unused code and comments
- use fixed seed in test-cases for reproducibility in all cases

* corrected small mistake in delay test

* delay test cases

* cleaned branch, remove files from experiments

* - restructure project

* - move example file

* - adjust imports in example to new project structure

* corrected failing test

* corrected failing test

* - move the calculation of Jacobian matrices to neurolib core modules and adjust imports in the OC submodule

* - create subfolders for oc-tests and -examples and adjust naming

* fixed issue in step size reduction

* fixed issue in step size reduction

* - restore documentation-updates:
   + improved docstrings and comments
   + better aligned to PEP
   + clear variable naming

* fixed bug in time delay

* delete check zero in cmat/ run model once at initialization/ revise example notebooks/ fix bugs in step_size and delay

* revise test files

* correct typo in jacobian

* correct typo in jacobian

* cost functional restructure

* update test wc

* update test wc

* update test wc

* update test wc

* changed chdir in control examples

* revision

* added test for weights dictionary

* Update README.md

* Update README.md

* Update README.md

* Delete CONTRIBUTING.md

* Delete README.md

* start implementation oc aln

* implemented bug fixes: aln delayed exc feedback, wc noise, wc&fhn&hopf copy params

* time series variables

* fixed bug in aln implementation

* time-dependent variables

* fixed mistake

* Update README.md

* Update README.md

* time-dependent variables and init

* notebook description

* implementation

* aln implementation

* added gradient as self.gradient

* aln imp

* implementation without params.Dmat_ndt

* functioning implementation except for sigmas, adaptation

* functioning implementation except for sigmas, adaptation

* sigma implementation, IA missing

* apparently works with adaptation

* minimal version of adaptation control

* finish aln implementation including adaptation current

* finish aln implementation including adaptation current

* refactoring

* merge neurolib updates

* network implementation

* finalize(?) aln

* remove files

* refactor implementation and finish example notebook

* finalize tests

* replace Vmean with analytical function

* finish test cases

* finalize aln implementation

* added Dxdoth in hopf and fhn

* adjust fhn, hopf, wc model_params as in aln

* pass N, V, T for iterations

* refactor compute_gradient function

* update comments

* fix bug in network input aln

* revise example notebooks

* initialize. for implementation, wait for aln

* merge aln into OCdev

* cleaned up repository

* clean up

* Delete workflow.txt

* move zero_ste_encountered in deterministic computation

* implemented cost interval and added test

* checkout files from master

* fixed solve_adjoint

* differentiate static and time-dependent inputs

* update example

* revert example change

* update wc for baseline and dynamical inputs

* revise test wc

* revise test wc

* revised cost functional tests

* revised remaining test files

* moved get_xs, get_xs_delay and update_input to oc.py

* refactoring tests

* state vars as dict

* remove test notebook

* update aln model with correct input_vars

* merging

* finish test revision

* indexing in time_integration file via dictionary in all models

* remove comments

* add adjust shape method for inputs, refactoring

* moved example files to main folder

* refactor aln oc test

* refactor test cases

* remove unnecessary parameter

* solve_adjoint revision, WC notebook

* refactor tests

* Update timeIntegration.py

* Update plot_oc.py

* shorten example runtime

* Delete examples/example-5.6-oc-aln-model-noisy.ipynb

* Delete examples/example-5.2-oc-phenomenological-model-noisy.ipynb

* Update plot_oc.py

* Update cost_functions.py

* added test for solve_adjoint

* Update test_oc.py

* Update oc.py

* Update oc.py

* fix bug in WC jacobian

* Rename example-5.3-oc-wc-model-deterministic.ipynb to example-5.2-oc-wc-model-deterministic.ipynb

* Rename example-5.4-oc-wc-model-noisy.ipynb to example-5.3-oc-wc-model-noisy.ipynb

* Rename example-5.5-oc-aln-model-deterministic.ipynb to example-5.4-oc-aln-model-deterministic.ipynb

* Update README.md

* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Caglar Cakan <[email protected]>

* move plot_oc to neurolib.utils

* restore plot_oc.py in correct form

---------

Co-authored-by: Martin <[email protected]>
Co-authored-by: Caglar Cakan <[email protected]>
3 people committed Dec 22, 2023
1 parent 6886ced commit 762456a
Showing 30 changed files with 9,065 additions and 6 deletions.
34 changes: 33 additions & 1 deletion README.md
@@ -97,6 +97,10 @@ neurolib/ # Main module
├── optimize/                   # Optimization submodule
    ├── evolution/              # Evolutionary optimization
    └── exploration/            # Parameter exploration
├── control/optimal_control/    # Optimal control submodule
    ├── oc.py                   # Optimal control base class
    ├── cost_functions.py       # cost functions for OC
    ├── /.../                   # Implemented OC models
├── data/                       # Empirical datasets (structural, functional)
├── utils/                      # Utility belt
    ├── atlases.py              # Atlases (Region names, coordinates)
@@ -122,6 +126,7 @@ Example [IPython Notebooks](examples/) on how to use the library can be found in
- [Example 0.6](https://mybinder.org/v2/gh/neurolib-dev/neurolib/master?filepath=examples%2Fexample-0.6-custom-model.ipynb) - Minimal example of how to implement your own model in `neurolib`
- [Example 1.2](https://mybinder.org/v2/gh/neurolib-dev/neurolib/master?filepath=examples%2Fexample-1.2-brain-network-exploration.ipynb) - Parameter exploration of a brain network and fitting to BOLD data
- [Example 2.0](https://mybinder.org/v2/gh/neurolib-dev/neurolib/master?filepath=examples%2Fexample-2-evolutionary-optimization-minimal.ipynb) - A simple example of the evolutionary optimization framework
- [Example 5.2](https://mybinder.org/v2/gh/neurolib-dev/neurolib/master?filepath=examples%2Fexample-5.2-oc-wc-model-deterministic.ipynb) - Example of optimal control of the noise-free Wilson-Cowan model

A basic overview of the functionality of `neurolib` is also given in the following.

@@ -284,6 +289,31 @@ This will give us a summary of the last generation and plots a distribution of
<img src="https://github.com/neurolib-dev/neurolib/raw/master/resources/evolution_animated.gif">
</p>

### Optimal control
The optimal control module enables you to compute efficient stimulation for your neural model. If you know what your output should look like, this module computes the optimal input. Detailed example notebooks can be found in the [example folder](https://github.com/neurolib-dev/neurolib/tree/master/examples) (examples 5.1, 5.2, 5.3, 5.4). In optimal control computations, you trade off precision with respect to a target against control strength. You can determine how much each contribution affects the result by setting the weights accordingly.
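
In terms of the cost functions added by this PR (see `cost_functions.py` below), the minimized functional is, schematically,

$$
F(x, u) = w_p \int_{t_0}^{t_1} \frac{1}{2} \sum_{n,v} M_{nv} \left( x_{nv}^{\mathrm{target}}(t) - x_{nv}(t) \right)^2 \mathrm{d}t \;+\; w_2 \int \frac{1}{2} \sum_{n,v} u_{nv}(t)^2 \, \mathrm{d}t,
$$

where $x$ is the state, $u$ the control, $M$ the binary cost matrix selecting the measured nodes $n$ and channels $v$, $[t_0, t_1)$ the precision-cost interval, and $w_p$, $w_2$ the weights that set the trade-off.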

To compute an optimal control signal, you need to create a model (e.g., an FHN model) and define a target state (e.g., a sine curve with period 2).
```python
import numpy as np

from neurolib.models.fhn import FHNModel
model = FHNModel()

duration = 10.
model.params["duration"] = duration
dt = model.params["dt"]

period = 2.
target = np.sin(2. * np.pi * np.arange(0, duration+dt, dt) / period)
```
You can then create a controlled model and run the iterative optimization that searches for the most efficient control input. The optimal control signal and the resulting controlled model activity can be read from the controlled model afterwards.
```python
from neurolib.control.optimal_control import oc_fhn

model_controlled = oc_fhn.OcFhn(model, target)
model_controlled.optimize(500) # run 500 iterations
optimal_control = model_controlled.control
optimal_state = model_controlled.get_xs()
```
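The relative weighting of the two cost contributions can be changed before optimizing. A minimal sketch, assuming the controlled model stores its cost weights in a dictionary attribute `weights` with the keys `"w_p"` (precision) and `"w_2"` (L2 control strength) consumed by the cost functions:
```python
# Assumption: the OC object exposes its cost weights as a dict.
model_controlled.weights["w_p"] = 1.0   # weight of the precision (accuracy) cost
model_controlled.weights["w_2"] = 1e-2  # weight of the L2 control-strength cost
model_controlled.optimize(500)          # re-run the optimization with the new weights
```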

For a comprehensive study on optimal control of the Wilson-Cowan model based on the neurolib optimal control module, see Salfenmoser, L. & Obermayer, K. Optimal control of a Wilson–Cowan model of neural population dynamics. Chaos 33, 043135 (2023). https://doi.org/10.1063/5.0144682.

## More information

### Built With
@@ -315,9 +345,11 @@ Cakan, C., Jajcay, N. & Obermayer, K. neurolib: A Simulation Framework for Whole

Caglar Cakan ([email protected])
Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
Bernstein Center for Computational Neuroscience Berlin, Germany

### Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) with the project number 327654276 (SFB 1315) and the Research Training Group GRK1589/2.

The optimal control module was developed by Lena Salfenmoser and Martin Krück supported by the DFG project 163436311 (SFB 910).

<!--end-include-in-documentation-->
708 changes: 708 additions & 0 deletions examples/example-5.1-oc-phenomenological-model-deterministic.ipynb

662 changes: 662 additions & 0 deletions examples/example-5.2-oc-wc-model-deterministic.ipynb

567 changes: 567 additions & 0 deletions examples/example-5.3-oc-wc-model-noisy.ipynb

859 changes: 859 additions & 0 deletions examples/example-5.4-oc-aln-model-deterministic.ipynb

Empty file added neurolib/control/__init__.py
Empty file.
Empty file added neurolib/control/optimal_control/__init__.py
Empty file.
210 changes: 210 additions & 0 deletions neurolib/control/optimal_control/cost_functions.py
@@ -0,0 +1,210 @@
import numpy as np
import numba


@numba.njit
def accuracy_cost(x, target_timeseries, weights, cost_matrix, dt, interval=(0, None)):
    """Total cost related to the accuracy, weighted sum of contributions.
    :param x: State of dynamical system.
    :type x: np.ndarray
    :param target_timeseries: Target state.
    :type target_timeseries: np.ndarray
    :param weights: Dictionary of weights.
    :type weights: dict
    :param cost_matrix: N x V binary matrix that defines the nodes and channels to take into account.
    :type cost_matrix: np.ndarray
    :param dt: Time step.
    :type dt: float
    :param interval: (t_start, t_end). Indices of start (inclusive) and end (exclusive) of the slice in the time
        dimension. Only positive 'int' index-notation allowed (i.e. no negative indices or 'None').
    :type interval: tuple, optional
    :return: Accuracy cost.
    :rtype: float
    """

    cost_timeseries = np.zeros(target_timeseries.shape)

    # the cost time series is the weighted sum of contributing cost functionals
    if weights["w_p"] != 0.0:
        cost_timeseries += weights["w_p"] * precision_cost(x, target_timeseries, cost_matrix, interval)

    cost = 0.0
    # integrate over nodes, channels, and time
    if weights["w_p"] != 0.0:
        for n in range(x.shape[0]):
            for v in range(x.shape[1]):
                for t in range(interval[0], interval[1]):
                    cost += cost_timeseries[n, v, t] * dt

    return cost


@numba.njit
def derivative_accuracy_cost(x, target_timeseries, weights, cost_matrix, interval=(0, None)):
    """Derivative of the 'accuracy_cost' wrt. the state 'x'.
    :param x: State of dynamical system.
    :type x: np.ndarray
    :param target_timeseries: Target state.
    :type target_timeseries: np.ndarray
    :param weights: Dictionary of weights.
    :type weights: dict
    :param cost_matrix: N x V binary matrix that defines the nodes and channels to take into account.
    :type cost_matrix: np.ndarray
    :param interval: (t_start, t_end). Indices of start (inclusive) and end (exclusive) of the slice in the time
        dimension. Only positive 'int' index-notation allowed (i.e. no negative indices or 'None').
    :type interval: tuple, optional
    :return: Accuracy cost derivative.
    :rtype: np.ndarray
    """

    der = np.zeros(target_timeseries.shape)

    if weights["w_p"] != 0.0:
        der += weights["w_p"] * derivative_precision_cost(x, target_timeseries, cost_matrix, interval)

    return der


@numba.njit
def precision_cost(x_sim, x_target, cost_matrix, interval=(0, None)):
    """Summed squared difference between target and simulation within the specified time interval, weighted by
    the cost matrix. Penalizes deviation from the target.
    :param x_sim: N x V x T array that contains the simulated time series.
    :type x_sim: np.ndarray
    :param x_target: N x V x T array that contains the target time series.
    :type x_target: np.ndarray
    :param cost_matrix: N x V binary matrix that defines nodes and channels of precision measurement.
    :type cost_matrix: np.ndarray
    :param interval: (t_start, t_end). Indices of start (inclusive) and end (exclusive) of the slice in the time
        dimension. Only positive 'int' index-notation allowed (i.e. no negative indices or 'None').
    :type interval: tuple
    :return: N x V x T array of precision cost contributions.
    :rtype: np.ndarray
    """

    cost = np.zeros(x_target.shape)

    # compute the cost for each node, channel, and time point
    for n in range(x_target.shape[0]):
        for v in range(x_target.shape[1]):
            for t in range(interval[0], interval[1]):
                cost[n, v, t] = 0.5 * cost_matrix[n, v] * (x_target[n, v, t] - x_sim[n, v, t]) ** 2

    return cost


@numba.njit
def derivative_precision_cost(x_sim, x_target, cost_matrix, interval):
    """Derivative of 'precision_cost' wrt. 'x_sim'.
    :param x_sim: N x V x T array that contains the simulated time series.
    :type x_sim: np.ndarray
    :param x_target: N x V x T array that contains the target time series.
    :type x_target: np.ndarray
    :param cost_matrix: N x V binary matrix that defines nodes and channels of precision measurement.
    :type cost_matrix: np.ndarray
    :param interval: (t_start, t_end). Indices of start (inclusive) and end (exclusive) of the slice in the time
        dimension. Only positive 'int' index-notation allowed (i.e. no negative indices or 'None').
    :type interval: tuple
    :return: N x V x T array of precision cost gradients.
    :rtype: np.ndarray
    """

    derivative = np.zeros(x_target.shape)

    # compute the derivative for each node, channel, and time point
    for n in range(x_target.shape[0]):
        for v in range(x_target.shape[1]):
            for t in range(interval[0], interval[1]):
                derivative[n, v, t] = -cost_matrix[n, v] * (x_target[n, v, t] - x_sim[n, v, t])

    return derivative


@numba.njit
def control_strength_cost(u, weights, dt):
    """Total cost related to the control strength, weighted sum of contributions.
    :param u: Control-dimensions x T array. Control signals.
    :type u: np.ndarray
    :param weights: Dictionary of weights.
    :type weights: dict
    :param dt: Time step.
    :type dt: float
    :return: Control strength cost of the control.
    :rtype: float
    """

    cost_timeseries = np.zeros(u.shape)

    # the cost time series is the weighted sum of contributing cost functionals
    if weights["w_2"] != 0.0:
        cost_timeseries += weights["w_2"] * L2_cost(u)

    cost = 0.0
    # integrate over nodes, channels, and time
    if weights["w_2"] != 0.0:
        for n in range(u.shape[0]):
            for v in range(u.shape[1]):
                for t in range(u.shape[2]):
                    cost += cost_timeseries[n, v, t] * dt

    return cost


@numba.njit
def derivative_control_strength_cost(u, weights):
    """Derivative of the 'control_strength_cost' wrt. the control 'u'.
    :param u: Control-dimensions x T array. Control signals.
    :type u: np.ndarray
    :param weights: Dictionary of weights.
    :type weights: dict
    :return: Control-dimensions x T array of control strength cost gradients.
    :rtype: np.ndarray
    """

    der = np.zeros(u.shape)

    if weights["w_2"] != 0.0:
        der += weights["w_2"] * derivative_L2_cost(u)

    return der


@numba.njit
def L2_cost(u):
    """'Energy' or 'L2' cost. Penalizes control strength.
    :param u: Control-dimensions x T array. Control signals.
    :type u: np.ndarray
    :return: Control-dimensions x T array with the elementwise L2 cost of the control.
    :rtype: np.ndarray
    """

    return 0.5 * u**2.0


@numba.njit
def derivative_L2_cost(u):
    """Derivative of the 'L2_cost' wrt. the control 'u'.
    :param u: Control-dimensions x T array. Control signals.
    :type u: np.ndarray
    :return: Control-dimensions x T array of L2-cost gradients.
    :rtype: np.ndarray
    """
    return u
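
For illustration only (not part of this commit), a minimal sketch of how these cost functions can be called on toy data, assuming the module path from the file tree above:

```python
# Hypothetical usage sketch: exercise the cost functions on toy
# N x V x T data to illustrate shapes and call signatures.
import numpy as np

from neurolib.control.optimal_control.cost_functions import (
    L2_cost,
    derivative_precision_cost,
    precision_cost,
)

N, V, T = 2, 2, 100                  # nodes, variables (channels), time steps
x_target = np.zeros((N, V, T))       # target: rest state
x_sim = np.random.rand(N, V, T)      # simulated activity
cost_matrix = np.ones((N, V))        # measure all nodes and channels
interval = (0, T)                    # full time range (end exclusive)

cost = precision_cost(x_sim, x_target, cost_matrix, interval)             # N x V x T array
grad = derivative_precision_cost(x_sim, x_target, cost_matrix, interval)  # N x V x T array
energy = L2_cost(np.random.rand(N, V, T))                                 # elementwise 0.5 * u**2
print(cost.shape, grad.shape, energy.shape)
```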