
[WIP] Resting state integration #1123


@vigneswaran-chandrasekaran (Member) commented Oct 16, 2019

Calculation of resting-state of the model, using the resting_state function of NeuronGroup class.

Usage syntax:

group = NeuronGroup(...)
equilibrium_point = group.resting_state(initial_guess)

Simple example:

tau = 10*ms
eqs = '''dv/dt = (1-v)/tau : 1'''
group = NeuronGroup(1, eqs, method='exact')
result = group.resting_state({'v':  0})    # giving initial guess
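The root-finding that this amounts to can be sketched directly with scipy for the same model (a standalone illustration, not using Brian itself; the RHS (1-v)/tau has its zero at v = 1 regardless of tau):

```python
import numpy as np
from scipy.optimize import root

# Standalone sketch: the resting state of dv/dt = (1 - v)/tau is the zero of the RHS.
# tau > 0 only scales the RHS, so it does not move the root.
def rhs(x):
    v = x[0]
    return [1.0 - v]

result = root(rhs, x0=np.array([0.0]))  # initial guess v = 0
print(result.x[0])  # converges to the resting state v = 1
```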

Fixes #1064

@mstimberg (Member)

We discussed this a bit outside of GitHub, and Vigneswaran made me aware of a few potential issues. We'd like the resting_state method to work without much help from the user, in particular without the "initial guess" for the resting state. Unfortunately, it turns out that this is not easy: if the initial guess is very bad, we might get bad results. In itself, this isn't a big problem as long as we can tell the user that we did not succeed in finding a good resting state. However, we cannot easily detect this situation. E.g., in an example that I published as a gist, we get good results with a crude initial guess for v, m, n, h in a HH model:

result = root(wrapper, x0=np.array([float(-70*mV), 0, 0, 0]))

This results in -63.4846041*mV for v, and values of 0.013356, 0.03483014, and 0.99641202 for m, n, h, which is exactly what we get if we let our model run for a bit. Note that our initial guess for h was far off, but it still converged to the correct solution. It also converges to the same solution if our guess for v is completely off (e.g. 0mV), but h is close (e.g. 1). However, if we set everything to 0 (the default in Vigneswaran's code if the user does not specify a guess), then we get a quite unsatisfying solution:
v=-59.99022873mV, m=0.02691427, n=0.06052376, h=0.99128428

The problem is that from scipy's point of view, it converged to a solution here as well, and it is actually a better solution than the previous one in the sense that |f(X)| is closer to 0 (with dX/dt=f(X)). However, this fixed point is not stable and therefore not what we want.

Should we try to somehow figure out the difference between stable and unstable fixed points? Can we do something to find good initial guesses? In the HH model, if we have a decent guess for v, we can calculate corresponding values for m, n, h – should we do some kind of dependency analysis and suggest/require the user to provide a guess for v, but not necessarily for the gating variables?
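The sensitivity to the initial guess can be reproduced on a much smaller toy system than the HH model (an illustration, not taken from the gist): dv/dt = v*(1 - v) has an unstable fixed point at v = 0 and a stable one at v = 1, and root happily returns either, depending on the starting point:

```python
import numpy as np
from scipy.optimize import root

# Toy system dv/dt = v*(1 - v): v = 0 is an unstable fixed point, v = 1 a stable one.
def f(x):
    v = x[0]
    return [v * (1.0 - v)]

sol_unstable = root(f, x0=np.array([0.0]))  # guess sits on the unstable point
sol_stable = root(f, x0=np.array([0.9]))    # guess close to the stable point
print(sol_unstable.x[0], sol_stable.x[0])   # both are equally valid roots for scipy
```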

@thesamovar (Member)

Can we just run it for a second of simulated time?

@vigneswaran-chandrasekaran (Member, Author)

That would be efficient and also simple, but I'm afraid that the values obtained after simulating for a specific duration might still be unreliable and don't guarantee convergence to a stable state. Could we instead try a heuristic approach: select a "reasonable" set of initial values, let each of them converge to a resting state individually, and check whether they all converged to a single solution? If not, we can return a failure message. Would this be useful?
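The multi-start consensus idea could be sketched like this (a hypothetical helper on the toy system dv/dt = v*(1 - v); the function name and parameters are made up for illustration):

```python
import numpy as np
from scipy.optimize import root

def f(x):
    v = x[0]
    return [v * (1.0 - v)]  # toy system with fixed points at v = 0 and v = 1

def consensus_resting_state(fun, guesses, atol=1e-6):
    # Run the root finder from several initial guesses; accept the result only
    # if all runs converged to the same solution, otherwise signal failure.
    solutions = [root(fun, x0=np.array([g])).x[0] for g in guesses]
    if np.allclose(solutions, solutions[0], atol=atol):
        return solutions[0]
    return None  # no consensus: report failure to the user

print(consensus_resting_state(f, [0.7, 0.9, 1.2]))  # all agree on v = 1
print(consensus_resting_state(f, [0.1, 0.9]))       # disagree (v = 0 vs v = 1): None
```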

@mstimberg (Member)

Can we just run it for a second of simulated time?

@thesamovar do you mean this as an alternative to this PR, i.e. why bother and not simply run to see where it ends up? Or do you mean for testing whether a resting state is stable?

The main idea behind this PR (and #1064) is to have something that quickly calculates the resting state even for complex models where running things for 1s would take a long time (and how do we know whether we need to run it for 100ms or 10s?).

@mstimberg (Member)

@vigneswaran-chandrasekaran I'm afraid that the PR is no longer up-to-date with the master branch, we recently merged a big change to use pytest instead of nose for testing (you can find some more info in the documentation). Can you merge latest master into this branch and fix the conflict in test_neurongroup? Two more remarks concerning the test:

  • while your asserts using the floating point epsilon definitely make sense, it'd be easier/more readable to use the assert_allclose function from brian2.tests.utils instead.
  • we are not very good about this in other places of the test suite, but I would prefer if you split up your test function into two or three functions where each of them tests one single feature

I'm also a bit confused – why is the simple dv/dt = -v/tau model at the end of your test unsupported?

@vigneswaran-chandrasekaran (Member, Author)

@mstimberg , Sure I'll make the required changes.

I'm also a bit confused – why is the simple dv/dt = -v/tau model at the end of your test unsupported?

The reason may sound naive: when a model is characterized by resetting or is driven by events, predicting the resting state becomes complex, hence I thought raising a NotImplementedError for now would be better.

@mstimberg (Member)

The reason may sound naive: when a model is characterized by resetting or is driven by events, predicting the resting state becomes complex, hence I thought raising a NotImplementedError for now would be better.

I think excluding all models with a threshold/reset is too restrictive. Generally, the resting values are below the threshold and we can find them by simply ignoring threshold/reset. If we want to be extra careful, we could do the following check: after determining the resting state values, we evaluate the threshold condition with these values. If the threshold condition is fulfilled, we'd raise a warning or an error, because our calculated values would then not be correct. This would only be necessary when the model also defines a reset (e.g. a HH model typically defines a threshold but no reset).
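The proposed check could be sketched like this (purely illustrative: the resting potential is the HH value quoted earlier in this thread, and the threshold condition "v > -50*mV" is a made-up example):

```python
# Sketch of the safety check: after root finding, evaluate the (hypothetical)
# threshold condition "v > -50*mV" at the computed resting state.
v_rest = -63.48e-3  # volts; the HH resting potential found above
threshold_crossed = v_rest > -50e-3

if threshold_crossed:
    # If the model also defines a reset, the computed values would be incorrect
    raise RuntimeError('Resting state fulfills the threshold condition')
```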

I saw that you did a change to sync your branch with the master branch, but I don't think it worked correctly. You'll have to use an explicit git merge.

@vigneswaran-chandrasekaran vigneswaran-chandrasekaran changed the base branch from master to automatic_benchmarks March 7, 2020 12:04
@vigneswaran-chandrasekaran vigneswaran-chandrasekaran changed the base branch from automatic_benchmarks to master March 7, 2020 12:04
@vigneswaran-chandrasekaran (Member, Author)

Hi 👋, I made the necessary changes. Apologies for the mess created while solving the merge conflict 🤦‍♂️ and for the unexpected commits showing up that were already made in the target branch (finally solved as mentioned here).

I wasn't able to find out why AppVeyor (and Travis's Docs_only = yes build) is reporting that scipy is not available, although it is mentioned in the /brian2/dev/conda-recipe/meta.yaml file.

Also, regarding the incorrect convergence for poor initial values: I tried to plot the nullclines and quiver plots of the system and, as @mstimberg mentioned, scipy.optimize focuses on finding the null points without considering the stability of the point. Knowing that, would using the Jacobian parameter of root() help us to fix the instability (as it would determine the solution's stability from the signs of the eigenvalues)? Or would a simple stability check, verifying that the derivative is positive to the left of the solution and negative to the right, be enough to raise a warning/exception about the solution to the user? (Although mathematically both are equivalent.) Fortunately, I was able to detect the instability of the obtained solution in the HH model using the latter approach.

@mstimberg (Member)

I wasn't able to find out why AppVeyor (and Travis's Docs_only = yes build) is reporting that scipy is not available, although it is mentioned in the /brian2/dev/conda-recipe/meta.yaml file.

For convenience, the conda package installs a number of additional packages that are not strictly required to run Brian. This includes scipy, which is not required in setup.py and also not installed by default on AppVeyor (nor on Travis if we only want to build the documentation). To deal with this, imports of scipy always look like this in the code:

try:
    import scipy
    scipy_available = True
except ImportError:
    scipy_available = False

In your PR, the neurongroups module imports scipy.optimize unconditionally, therefore importing brian2 fails if scipy is not present. The quick fix would be to simply require scipy as a dependency (I don't think I'd mind) and add it to the list of packages that get installed in appveyor.yml and .travis.yml. Alternatively, you could use the same technique as in other places, i.e. try to import scipy and raise a NotImplementedError in the function that requires it if it is not available.
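Applied to a function that needs scipy, the pattern would look like this (a standalone sketch, not the actual Brian code; resting_state here is reduced to a toy 1-D root finding):

```python
try:
    import scipy.optimize
    scipy_available = True
except ImportError:
    scipy_available = False

import numpy as np

def resting_state(initial_guess):
    # Raise at call time, not at import time, when scipy is missing
    if not scipy_available:
        raise NotImplementedError('resting_state requires scipy')
    result = scipy.optimize.root(lambda x: [1.0 - x[0]],
                                 x0=np.array([initial_guess]))
    return result.x[0]
```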

@vigneswaran-chandrasekaran (Member, Author)

Thanks for the clear explanation of how packages are installed by Travis and AppVeyor 😊
I'm not sure about pulling scipy into the dependencies list, as we haven't used scipy extensively anywhere. So I followed the current practice of raising a NotImplementedError. Thanks for the example :)

@mstimberg (Member)

Great regarding the scipy import change; we might change it in the long run, but then we'd change it in the other places as well, so let's not worry about it for now. (Minor nitpick: it's more "pythonic" to write if not scipy_available instead of if scipy_available == False.)

I've looked a bit into what you said about the stability and I think you are right that we should look at the Jacobian. I'm not sure what you mean by "would using the Jacobian parameter of root() help us to fix the instability" – are you referring to the jac parameter of the function? I do not think we necessarily need this (but see my comment further below), since it is for providing a function to root. What we need instead is the value of the Jacobian at the equilibrium point in the end.

The root function actually gives us an estimate of the Jacobian (in the fjac attribute of the result object), but I found that this estimate is quite inaccurate, and we cannot use it to decide about stability. However, we can calculate the Jacobian ourselves with a similar approach to what we use to evaluate the RHS of our differential equations, using sympy to calculate the actual Jacobian. I've updated my earlier gist with a proof of concept. With its evaluate_jacobian function, we can get the Jacobian matrix evaluated at the equilibrium point and, by looking at its eigenvalues (calculated by numpy), determine its stability. In the little example I used earlier this works well: the "good" equilibrium point is identified as stable, the "bad" one as unstable. So this might be quite interesting to integrate with your current code.
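The gist's idea, reduced to the smallest possible sketch (a toy system, not the gist's evaluate_jacobian): build the Jacobian with sympy and classify a fixed point by the sign of the real parts of the eigenvalues:

```python
import numpy as np
import sympy as sp

# Toy system dv/dt = v*(1 - v); sympy computes the Jacobian d/dv [v - v^2] = 1 - 2v.
v = sp.symbols('v')
rhs = sp.Matrix([v * (1 - v)])
jacobian = rhs.jacobian([v])
jac_func = sp.lambdify(v, jacobian, 'numpy')

def is_stable(fixed_point):
    # A fixed point is stable if all eigenvalues of the Jacobian
    # evaluated there have negative real part.
    J = np.atleast_2d(np.asarray(jac_func(fixed_point), dtype=float))
    return bool(np.all(np.linalg.eigvals(J).real < 0))

print(is_stable(0.0))  # False: the unstable fixed point
print(is_stable(1.0))  # True: the stable fixed point
```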

If we want to go further, we could hand over a function to scipy.optimize.root that calculates the Jacobian for every point that it evaluates. Or probably we'd do it in the same function that we already hand over to root (by setting jac=True). This has the potential to make the function find the zero faster/more accurately.
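Handing the analytic Jacobian to root via jac=True would look like this on the same toy system (an illustration; with jac=True, the function returns the pair (f, J)):

```python
import numpy as np
from scipy.optimize import root

def fun_and_jac(x):
    v = x[0]
    f = np.array([v * (1.0 - v)])     # RHS of dv/dt = v*(1 - v)
    J = np.array([[1.0 - 2.0 * v]])   # its analytic Jacobian
    return f, J

# jac=True tells root that fun returns both the function value and the Jacobian
sol = root(fun_and_jac, x0=np.array([0.9]), jac=True)
print(sol.x[0])  # converges to the stable fixed point v = 1
```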

The only necessary change: in evaluate_rhs we are currently preparing a NeuronGroup with the relevant properties from scratch for every evaluated point. We could do the same thing for the Jacobian, but the calculations there are a bit more involved and sympy is a bit slow in general. A better approach would be to split things into two phases: one where we create the NeuronGroup with all the subexpressions for the RHS and the J_... (for the Jacobian) variables, and one where we just evaluate the model given the values that the root function hands over (i.e. only the set_states and get_states part).

This is admittedly all a bit rough and I quickly wrote the code in the gist without commenting it much, but I hope you get the general idea? Let me know if something is unclear.

@vigneswaran-chandrasekaran (Member, Author)

Yes, I understand the overall idea and the gist looks clear ☺️ I'll start working on this.

(Minor nitpick: It's more "pythonic" to write if not scipy_available instead of if scipy_available == False).

Sorry for that and I'll make the change 😊

Successfully merging this pull request may close these issues.

Automatic calculation of resting state