Speedup possible? #56

Open
lungd opened this issue Apr 21, 2017 · 2 comments
Comments

@lungd
Contributor

lungd commented Apr 21, 2017

I ran the following command to profile the program, which is called from a c302 script (openworm/CelegansNeuroML):

java -Xmx400M -agentlib:hprof=file=hprof_times.txt,cpu=times  -Djava.awt.headless=true -jar  "/usr/local/lib/python2.7/dist-packages/pyNeuroML-0.2.10-py2.7.egg/pyneuroml/lib/jNeuroML-0.8.0-jar-with-dependencies.jar"  "LEMS_c302_C2_AVB_VB_DB_VD_DD.xml"  -neuron -run -nogui

A small excerpt of the output:
...
CPU TIME (ms) BEGIN (total = 4181427) Fri Apr 21 17:12:36 2017
rank self accum count trace method
1 33.05% 33.05% 43 305405 java.lang.Object.wait
2 32.87% 65.92% 161205 305413 java.lang.ref.ReferenceQueue.remove
3 2.48% 68.40% 1888856 319750 java.io.DataOutputStream.writeUTF
4 1.74% 70.14% 74793210 319746 java.lang.String.charAt
5 0.89% 71.03% 238653 302248 java.io.UnixFileSystem.normalize
6 0.79% 71.82% 8651067 319701 java.util.zip.Inflater.inflate
7 0.67% 72.49% 28638360 302247 java.lang.String.charAt
8 0.47% 72.96% 8651067 319699 java.util.zip.Inflater.ensureOpen
9 0.44% 73.40% 3777712 319734 java.util.zip.InflaterInputStream.read
10 0.33% 73.73% 73670 319775 com.sun.xml.bind.v2.bytecode.ClassTailor.tailor
...

As you can see, wait() and remove() take about 66% of the time.
Is it possible to speed up this program so that the c302 simulations run faster?

@pgleeson
Member

@lungd Bear in mind that what jNeuroML is doing there is spawning an external process (Neuron), running it and waiting until it completes. All of the simulation of the worm takes place in Neuron, and that should account for the bulk of the total run time.

For some small networks it may be quicker to run the simulation in jNeuroML itself. It is inherently slower (and normally requires a smaller dt, e.g. 0.01 ms), but you avoid the overhead of writing the Neuron py/hoc/mod files, compiling the mod files and launching an external process.
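
For comparison, both modes use the same jar and LEMS file; the only difference is whether the -neuron flag is passed (a sketch based on the profiled command above, with the jar path shortened for readability):

# Generate the Neuron py/hoc/mod files, compile the mod files and run in NEURON (as profiled above)
java -Djava.awt.headless=true -jar jNeuroML-0.8.0-jar-with-dependencies.jar LEMS_c302_C2_AVB_VB_DB_VD_DD.xml -neuron -run -nogui

# Run the same LEMS file directly in the jNeuroML interpreter, with no external process
java -Djava.awt.headless=true -jar jNeuroML-0.8.0-jar-with-dependencies.jar LEMS_c302_C2_AVB_VB_DB_VD_DD.xml -nogui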

One way or another you should experiment with adjusting the dt value, as this is the biggest factor in determining the run time of simulations. If you are running many simulations, you might get away with 0.1 ms in Neuron, but always check the behaviour against runs at 0.01 ms in nrn (or 0.001 ms in jnml natively) to make sure the overall behaviour is captured.
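
For concreteness, dt corresponds to the step attribute of the Simulation element in the LEMS file, so it can be adjusted there directly (a minimal sketch; the id, length and target values are placeholders rather than the actual contents of LEMS_c302_C2_AVB_VB_DB_VD_DD.xml):

<!-- step is the simulation dt; 0.1ms may be enough for quick Neuron runs,
     but verify against step="0.01ms" (or 0.001ms when running natively in jnml) -->
<Simulation id="sim1" length="1000ms" step="0.1ms" target="c302_network">
    <!-- Display / OutputFile elements omitted -->
</Simulation>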

@lungd
Contributor Author

lungd commented Jul 1, 2017

NeuronWriter creates a file whose content contains snippets with the following structure:

f = open(...)
num_points = len(py_v_time)

for i in range(num_points):
    f.write(...)
f.close()

A possible improvement would be to build the string first and call write() only once, e.g.:

num_points = len(py_v_time)
file_string = ''.join(... for i in range(num_points))
with open(...) as f:
    f.write(file_string)
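
To make the idea concrete, here is a self-contained sketch with made-up data (py_v_time, py_v_v and the output file name are hypothetical stand-ins for what the generated Neuron script actually uses):

# Hypothetical data standing in for what the generated script records
py_v_time = [i * 0.05 for i in range(100000)]    # time points (ms)
py_v_v = [-65.0] * len(py_v_time)                # one recorded value per time point

num_points = len(py_v_time)

# Build the whole file contents in memory, then write it with a single call,
# instead of calling f.write() once per time point
file_string = ''.join('%s\t%s\n' % (py_v_time[i], py_v_v[i]) for i in range(num_points))

with open('example_output.dat', 'w') as f:    # hypothetical output file name
    f.write(file_string)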
