[WIP] Resting state integration #1123
Conversation
We discussed this a bit outside of GitHub, and Vigneswaran made me aware of a few potential issues. For the HH model, calling `result = root(wrapper, x0=np.array([float(-70*mV), 0, 0, 0]))` (where `wrapper` evaluates the state derivatives f(X), with dX/dt = f(X)) results in -63.4846041*mV for v, and values of 0.013356, 0.03483014 and 0.99641202 for m, n and h, which is exactly what we get if we let our model run for a bit. Note that our initial guess for h was far off, but it still converged to the correct solution. It also converges to the same solution if our guess for v is completely off (e.g. 0*mV) but h is close (e.g. 1). However, if we set everything to 0 (the default in Vigneswaran's code if the user does not specify a guess), then we get a quite unsatisfying solution.

The problem is that, from scipy's point of view, it converged to a solution here as well, and it is actually a better solution than the previous one in the sense that |f(X)| is closer to 0. However, this fixed point is not stable and is therefore not what we want. Should we try to somehow figure out the difference between stable and unstable fixed points? Can we do something to find good initial guesses? In the HH model, if we have a decent guess for v, we can calculate the corresponding values for m, n and h – should we do some kind of dependency analysis and suggest/require the user to provide a guess for v, but not necessarily for the gating variables?
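To make the guess-dependence concrete without reproducing the full HH wrapper (which is not shown above), here is a minimal sketch with a toy one-dimensional bistable system; the function and the chosen guesses are illustrative only:

```python
import numpy as np
from scipy.optimize import root

# Toy system dx/dt = f(x) with fixed points at x = 0 and x = 1 (stable)
# and x = 0.3 (unstable). root() only solves f(x) = 0 and cannot tell
# stable and unstable fixed points apart.
def f(x):
    return x * (1.0 - x) * (x - 0.3)

for guess in (0.05, 0.9, 0.31):
    sol = root(f, x0=np.array([guess]))
    print(f"guess {guess:4.2f} -> fixed point {sol.x[0]:.6f}")
# The first two guesses land on the stable fixed points 0 and 1; the last
# one converges to the unstable fixed point 0.3, analogous to the all-zero
# guess in the HH example above.
```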
Can we just run it for a second of simulated time?
That would be efficient and simple, but I'm afraid that the values obtained after simulating for a specific duration might still be unreliable and don't guarantee convergence to a stable state. Could we instead try a heuristic approach: select a set of "reasonable" initial values, let each of them converge to a resting state individually, and check that they all arrive at a single solution? If not, we can return a failure message. Would this be useful?
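A rough sketch of that multi-start idea (the helper name `find_resting_state` is a placeholder for whatever wraps the `root` call, not the PR's actual API):

```python
import numpy as np

def multi_start_resting_state(find_resting_state, guesses, tol=1e-6):
    """Run the root finder from several initial guesses and only accept the
    result if all runs agree on a single solution."""
    solutions = [np.asarray(find_resting_state(guess)) for guess in guesses]
    reference = solutions[0]
    if all(np.allclose(sol, reference, atol=tol) for sol in solutions):
        return reference
    raise RuntimeError('Initial guesses converged to different fixed points; '
                       'please provide an explicit initial guess.')
```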
@thesamovar, do you mean this as an alternative to this PR, i.e. why bother with a root-finding approach instead of simply running the simulation to see where it ends up? Or do you mean it as a way of testing whether a resting state is stable? The main idea behind this PR (and #1064) is to have something that quickly calculates the resting state even for complex models where running things for 1 s would take a long time (and how would we know whether we need to run for 100 ms or 10 s?).
@vigneswaran-chandrasekaran I'm afraid that the PR is no longer up-to-date with the master branch.
I'm also a bit confused – why is the simple case of a model with a threshold/reset not supported?
@mstimberg, sure, I'll make the required changes.

The reason may sound naive: when a model is characterized by resets or driven by events, predicting the resting state becomes more complex, hence I thought of raising a NotImplementedError for such models.
I think excluding all models with a threshold/reset is too restrictive. Generally, the resting values are below the threshold, and we can find them by simply ignoring the threshold/reset. If we want to be extra careful, we could do the following check: after determining the resting-state values, we evaluate the threshold condition with these values. If the threshold condition is fulfilled, we'd raise a warning or an error, because our calculated values would then not be correct. This would only be necessary when the model also defines a reset (e.g. an HH model typically defines a threshold but no reset).

I saw that you did a change to sync your branch with the master branch, but I don't think it worked correctly. You'll have to use an explicit merge.
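A sketch of how such a check could look, assuming the resting-state values come as a dictionary and the threshold is a string condition (this is not the PR's actual code, and the unit handling is simplified):

```python
from brian2 import mV  # so that conditions like 'v > -50*mV' can be evaluated

def check_resting_state(resting_values, threshold=None, reset=None):
    """Raise an error if the calculated resting state already fulfils the
    threshold condition; only relevant if the model also defines a reset."""
    if threshold is None or reset is None:
        return resting_values  # e.g. an HH model with a threshold but no reset
    namespace = dict(resting_values)
    namespace['mV'] = mV
    if eval(threshold, {'__builtins__': {}}, namespace):
        raise ValueError('The calculated values fulfil the threshold condition '
                         'and would immediately trigger a reset, so they are '
                         'not a valid resting state.')
    return resting_values

# e.g.: check_resting_state({'v': -63.5*mV}, threshold='v > -50*mV', reset='v = -70*mV')
```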
Hi 👋, I made the necessary changes, and apologies for the mess created by resolving the merge conflict 🤦♂️ and the unexpected commits showing up that were already made in the target branch (finally solved as mentioned here). I wasn't able to find out why Appveyor (and some of the Travis builds) fail. Also, regarding the incorrect convergence for poor initial values: I tried to plot the nullclines and quiver plots of the system and, as @mstimberg mentioned, the fixed point that the all-zero initial values converge to is not stable.
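For reference, a minimal nullcline/quiver sketch of the kind described, using the two-dimensional FitzHugh-Nagumo model as a stand-in (the HH system itself is four-dimensional, so it cannot be visualised directly like this):

```python
import numpy as np
import matplotlib.pyplot as plt

# FitzHugh-Nagumo as a 2-D stand-in: dv/dt and dw/dt define the flow field.
def dv(v, w, I=0.0):
    return v - v**3 / 3 - w + I

def dw(v, w, a=0.7, b=0.8, tau=12.5):
    return (v + a - b * w) / tau

V, W = np.meshgrid(np.linspace(-2.5, 2.5, 25), np.linspace(-1.5, 1.5, 25))
plt.quiver(V, W, dv(V, W), dw(V, W), angles='xy')     # flow (quiver plot)
plt.contour(V, W, dv(V, W), levels=[0], colors='C0')  # v-nullcline
plt.contour(V, W, dw(V, W), levels=[0], colors='C1')  # w-nullcline
plt.xlabel('v')
plt.ylabel('w')
plt.show()  # fixed points are the nullcline intersections
```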
For convenience, the conda package installs a number of additional packages which are not strictly required to run Brian. This includes scipy, which is not a required dependency – see, for example, how it is only imported where needed in brian2/brian2/codegen/generators/numpy_generator.py (lines 281 to 285 in ed70408).
In your PR, the scipy import should therefore also be optional, so that builds without scipy installed do not fail.
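A minimal sketch of keeping scipy optional by importing it only when the feature is used (the function name and structure are illustrative, not the PR's code):

```python
def resting_state(group, initial_guess=None):
    """Only import scipy when the resting-state calculation is actually used,
    so that Brian itself does not gain a hard dependency on it."""
    try:
        from scipy.optimize import root
    except ImportError:
        raise ImportError('Calculating the resting state requires the scipy package.')
    # ... build the wrapper around the state equations and call root() here ...
```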
Thanks for the clear explanation about how packages are installed by Travis and Appveyor 😊
Great. I've looked a bit into what you said about the stability, and I think you are right that we should look at the Jacobian. I'm not sure what you mean by "will the usage of the Jacobian parameter of root() help us to fix the instability" – are you referring to the jac argument of scipy.optimize.root? Providing the Jacobian there only helps the solver converge; it does not tell us whether the fixed point it finds is stable. For that, we have to look at the eigenvalues of the Jacobian evaluated at the fixed point: if all of them have negative real parts, the fixed point is stable.

If we want to go further, we could hand over a function to root that also provides the Jacobian, so that it does not have to be approximated numerically. The only necessary change for this would be a small one in the existing code.

This is admittedly all a bit rough and I quickly wrote the code in the gist without commenting it much, but I hope you get the general idea? Let me know if something is unclear.
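A sketch of the stability criterion mentioned above, approximating the Jacobian with central finite differences instead of handing an analytical Jacobian to root() (the actual code in the gist may do this differently):

```python
import numpy as np

def is_stable_fixed_point(f, x_star, eps=1e-6):
    """Return True if the fixed point x_star of dX/dt = f(X) is (linearly)
    stable, i.e. all eigenvalues of the Jacobian at x_star have negative
    real parts. The Jacobian is approximated by central finite differences."""
    x_star = np.asarray(x_star, dtype=float)
    n = len(x_star)
    jac = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        jac[:, j] = (np.asarray(f(x_star + dx), dtype=float)
                     - np.asarray(f(x_star - dx), dtype=float)) / (2 * eps)
    return bool(np.all(np.linalg.eigvals(jac).real < 0))
```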
Yes, I understand the overall idea and the gist looks clear.

Sorry for that – I'll make the change 😊
Calculation of the resting state of the model, using the `resting_state` function of the `NeuronGroup` class.

Usage syntax:
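The usage-syntax block did not survive the copy; a hypothetical sketch of what the call could look like (the exact signature, in particular the dictionary-of-guesses argument, is an assumption):

```python
# hypothetical signature -- the actual method added by the PR may differ
resting_values = group.resting_state({'v': -70*mV, 'h': 1})
```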
Simple example:
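The example code was also lost in the copy; a minimal sketch using a model whose resting state is known analytically (the `resting_state` call and its argument format are assumptions):

```python
from brian2 import *

# dv/dt = (E_rest - v)/tau has its only (stable) fixed point at v = E_rest.
tau = 10*ms
E_rest = -70*mV
group = NeuronGroup(1, 'dv/dt = (E_rest - v)/tau : volt')

# `resting_state` is the method introduced by this PR; the call below,
# including the dictionary-of-guesses argument, is a hypothetical sketch.
print(group.resting_state({'v': 0*mV}))  # expected: approximately -70 mV for v
```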
Fixes #1064