Fix issue with sorting when simulating more than 2 billion neurons #3707
base: master
Conversation
The change under review replaces the return type:

```diff
  template < typename T >
- inline int
+ inline uint64_t
```
Why did you choose uint64_t? Given that get_node_id() returns size_t, wouldn't that be a better choice?
get_node_id() actually returns a uint64_t; that's why I chose this type: https://github.com/nest/nest-simulator/blob/master/nestkernel/source.h#L56
But we might just change the type used in Source to store the node-id to size_t. It is identical to uint64_t on most platforms anyway, and there is no apparent reason why we would need exactly 64 bits there. We just need the same number of bits we use anywhere else for node-ids, which, as you already stated, is size_t.
I noticed that I had looked at the wrong get_node_id(), namely the one in SourceTable:
nest-simulator/nestkernel/source_table.cpp, lines 155 to 163 in cd8c487:

```cpp
size_t
nest::SourceTable::get_node_id( const size_t tid, const synindex syn_id, const size_t lcid ) const
{
  if ( not kernel().connection_manager.get_keep_source_table() )
  {
    throw KernelException( "Cannot use SourceTable::get_node_id when get_keep_source_table is false" );
  }
  return sources_[ tid ][ syn_id ][ lcid ].get_node_id();
}
```
It actually takes the value from Source and returns it. I think the best way to sort this out is indeed to change the type in Source to size_t.
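A minimal sketch of what that change might look like (hypothetical; the real class in nestkernel/source.h carries additional state and methods not shown here):

```cpp
#include <cstddef>

// Hypothetical, reduced version of nest::Source for illustration only.
class Source
{
private:
  size_t node_id_; // previously uint64_t; size_t matches node-id usage elsewhere

public:
  explicit Source( const size_t node_id )
    : node_id_( node_id )
  {
  }

  size_t // previously returned uint64_t
  get_node_id() const
  {
    return node_id_;
  }
};
```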
Is it not better to use …?

I had been wondering if that was possible, but I also sometimes feel that with these new C++ constructs, one can make code more difficult to read. A plain …
When using Boost's `integer_sort` method, we were sorting the Sources by converting them to `int`s, which on most platforms are signed 32-bit values with a maximum of 2,147,483,647 (2^31 − 1). For global node ids larger than that, the sorting fails.