
Modernize random number generation for NEST 3 #1440

Closed
heplesser opened this issue Feb 21, 2020 · 23 comments · Fixed by #1549
Labels: I: Behavior changes · I: User Interface · S: High · T: Enhancement

heplesser commented Feb 21, 2020

This issue is a follow-up of #245, #349, #508, #1296, #1381, #1405.

This exposition is based on discussions with NEST developers, especially @jougs and @hakonsbm.

Background

The current NEST random number architecture dates back to 2002 and is thoroughly outdated today. The arrival of carefully standardized RNGs with the C++11 standard, the essentially complete transition to 64-bit CPUs, the availability of dedicated cryptographic RNG hardware (AES), and the scaling from hundreds to hundreds of thousands of parallel processes create an entirely different landscape for random number generation.

Key weaknesses include

  1. a difficult user interface for seeding RNGs, requiring users to supply multiple seeds;
  2. a similar difficulty in selecting any RNG other than the default;
  3. use of the Knuth lagged Fibonacci generator as the default generator;
  4. the difficulty of adding modern generator libraries such as Random123.

General considerations

NEST

Use of random numbers

Random numbers are consumed in NEST

  1. during node creation to randomize node parameters;
  2. during connection creation
    1. to select connections to be created,
    2. to parameterize connections;
  3. during simulation
    1. to generate stochastic input (randomized spike trains, noisy currents),
    2. to support stochastic spike generation.

In all cases except certain sub-cases of 2.i, randomization is tied to a node, which in turn is assigned to a specific virtual process (VP), either because the node itself consumes the random numbers (cases 1, 3) or because the node is the target of a connection (case 2). NEST's design guarantees that nodes assigned to different VPs can be updated independently of each other, and the same holds for connections if they are created or parameterized from a target-node perspective. Therefore, for all these purposes, one random number stream per VP is required and sufficient to ensure that different VPs can operate independently.

The only exceptions are certain sub-cases of 2.i in which all VPs need to draw identical random number streams and discard all numbers that are irrelevant to them. This pertains to the fixed_outdegree and fixed_total_number (initial step) connection patterns. For these purposes, one global random number stream is required.

Quantity of random numbers required

NEST therefore requires in total N_VP+1 random number streams. With a view to future exascale computers, we need to prepare for N_VP=O(10^7) random streams. We can estimate the number of random numbers required per stream as follows:

  • Based on current practice, we assume N = 1000 neurons per stream; this results in a total of O(10^11) neurons, on the scale of the human brain and thus a sensible upper limit.
  • Each neuron receives excitatory and inhibitory Poisson background input, i.e., 2 Poisson distributed random numbers per time step per neuron, each of which typically requires multiple uniform random numbers; we assume for simplicity 2 uniform per Poisson number, i.e., 4 plain random numbers per time step per neuron.
  • A time step is 0.1 ms and a typical simulation duration 100 s simulated time for 10^6 time steps.
  • In total, simulation will consume ~1000 x 4 x 10^6 = 4 x 10^9 random numbers per stream (consumption during network construction will usually be negligible).
  • To allow for a margin of error, we assume O(10^11) random numbers consumed per stream during a single simulation.

Replicability guarantee

A NEST simulation compiled with the same compiler and library, on the same computer hardware and initialised with a fixed random seed, shall generate exactly the same output on each execution.

Exchangeable random number generator

The user shall in an easy way, in particular without recompilation, be able to perform the same simulation with different random number generators. This allows the user to test whether effects observed in simulations may be artefacts of a specific RNG type.

Parallel random number streams

Collision risk

L'Ecuyer et al (2017) provide a recent account of random number generation in highly parallel settings. The main approach to providing independent random number streams in parallel simulations is to use a high-quality random number generator with long period and seed it with a different seed for each stream. They then argue that if we have s streams of length l and a generator with period r, the probability that two streams seeded with different seeds will overlap is given by

p_overlap ~ s^2 l / r .

We have

s = N_VP = 10^7 ~ 2^23
l = 10^11 ~ 2^37

so assuming an rng period of r = 2^128 we arrive at

p_overlap ~ (2^23)^2 x 2^37 / 2^128 = 2^-45 ~ 3 x 10^-14

while for a period of r = 2^256 we obtain

p_overlap ~ 2^-173 ~ 10^-52 .

The probability of stream collisions is thus negligibly small, provided we use random number generators with a period of at least 2^128.

L'Ecuyer (2012, Ch 3.6) points out that certain classes of random number generators will show too much regularity in their output if more than r^(1/3) numbers are used (this relation is based on exercises in Knuth, TAOCP vol 2; see L'Ecuyer and Simard (2001)). While it is not certain that this analysis also applies to (short) subsequences of the period of an RNG, one might still re-do the analysis above, using l^3 ~ 2^111 instead of l when computing p_overlap. We then obtain

Period r | p_overlap
2^128 ~ 10^38 | >> 1
2^256 ~ 10^77 | 2^-99 ~ 10^-30
2^512 ~ 10^154 | 2^-355 ~ 10^-107

Thus, for generators with periods of at least 2^256, we have negligible collision probability also under this tightened criterion.

Available seeds

Traditional seed() functions typically accept a 32-bit integer as input and initialize the RNG state from it. This allows for only 4 x 10^9 different seeds, so if a simulation requires 10^7 streams, we can perform at most 400 simulations with distinct seeds. This is not acceptable; generators with reliable, more capable seeding mechanisms are required.

Available values

Many existing RNGs return 32-bit integers, which are converted into [0, 1)-uniform random numbers by simple division. This allows at most 2^32 = 4 x 10^9 different values even for 53-bit-mantissa doubles (see also boostorg/random#61). Given that our streams consume 10^11 numbers, we would expect every possible value to occur many times over during a simulation, which is not ideal. Keep in mind, though, that while we will see the same value many times, it will occur within a different sequence of numbers every time, provided the underlying generator has a sufficiently long period.

Random number and deviate generator libraries

The following random number and deviate generator libraries have been considered primarily:

GSL and Boost

The GSL random number facilities work strictly with 32-bit random numbers and seeds. This strongly limits their utility in view of the requirements listed above.

The Boost Library Random module uses only 32 bits in generating uniform-[0, 1)-distributed numbers, which form the basis of all random deviate generation. Many deviate generators also appear to rely heavily on constant or pre-computed tables (e.g., for the normal distribution), which seems cache-inefficient and may even cause slow memory access if the static data is local to a single CPU. Therefore, the Boost Random module also seems of limited value.

C++ Standard Library

The random number and distribution components of the C++ Standard Library are defined in great detail in §29.6 of the C++ Standard; all references to the standard in the following are to the C++17 Working Draft N4659, which is openly available.

Currently, three implementations of the C++ standard library are available:

  • GNU's libstdc++ (random, bits/random.h, bits/random.tcc)
  • LLVM's libc++ (random)
  • Microsoft's STL (random)

Where the standard leaves aspects to the implementation, checking these three implementations provides comprehensive answers for all practical purposes.

Advantages

  • Available out of the box with all C++11 compiler installations
  • Clean C++ interface
  • Random number generators are precisely defined, including specific requirements on numbers generated for given seeds.
  • Well-defined, standardised seed-sequence concept for generating complete state data for generators with large state parameterised by a sequence of 32-bit integers; based on seeding scheme of MT19937 (§29.6.7.1)
  • Well-defined, standardised way to convert raw random numbers into uniform-[0, 1)-distributed random numbers in which all mantissa bits are randomised, i.e., 53 bits for double (generate_canonical(), §29.6.7.2). This is independent of whether the underlying generator is a 32- or 64-bit generator.
  • Even though it does not appear to be required explicitly by the standard, all *_distribution classes in all three implementations inspected consume random numbers only via generate_canonical(), i.e., using 53 bits of randomness.
  • All generators support serialisation and de-serialisation via operator<<() and operator>>().

Disadvantages

  • The details of the numbers drawn from random distributions are left to the implementation (§29.6.8.1-3), as long as the numbers are distributed correctly. Different implementations (libstdc++, libc++) indeed generate different sequences of random numbers for some distributions.
  • Fewer RNG types available than in GSL; not particularly problematic, since most of the GSL generators are of more historic interest and do not meet our period requirements.

Other generators

A number of other random number generator libraries are available. They usually provide a small number of raw random number generators and would require adaptation to be usable by random distribution generators. Many of these libraries do not seem to follow modern open source code development best practices (e.g. code in open repositories with systematic bug tracking, code review and automated testing).

Recommendations

Principles

  1. NEST 3 shall use the C++ standard library random number and distribution generators.
  2. NEST 3 shall in addition include Random123 generators and possibly others.
  3. Random number generators are wrapped in an abstract base class to support flexible exchange of the random number generator.
  4. Random distribution generators will be wrapped in a typedef, so that one can in principle replace a distribution generator at compile time.
  5. The user specifies only a single 32-bit user_seed.
  6. To seed RNGs for different streams, use seed_seq(user_seed, f(stream_no)).

Comments

  • Ad 1:
  • Ad 2:
    • Random123 generators should be included to provide a completely different type of generators
    • The Random123 license is compatible with inclusion, and code can be shipped with NEST.
    • We should add -march flags as a CMake option to activate hardware support for AES-based generators.
    • Need to figure out how to include AES-based generators only where available (-march).
    • The xoshiroXXX with XXX >= 256 also seem interesting, but require wrapping.
  • Ad 3:
    • The base class needs to implement constexpr result_type min() and max(). Since result_type can differ for different generators (32 or 64 bit), we need to define these methods with return type uint_fast64_t so they can hold all possible values, and protect against generators with even larger return type by static_assert(). Integer promotion rules make this approach safe in light of the generate_canonical() implementations in the three STL variants.
  • Ad 4:
    • Distribution generators need not be exchangeable at runtime, therefore, no base class is necessary.
    • The fact that different STL implementations may generate different random number sequences is unfortunate, but the advantages for the STL (seeding, 53-bit randomness) weigh stronger. Since either GCC or Clang should be available on any system, validations between systems will be possible.
  • Ad 5, 6:
    • All seeds for individual streams are generated from the single 32-bit user_seed provided.
    • The user_seed is never used "as is".
    • stream_no is identical to thread number for per-thread streams, and num_threads+1 for the global stream.
    • f(stream_no) is still to be determined. It should not return zero and must be guaranteed to map 10^7 different stream_no values to distinct output values.
    • The C++ std::seed_seq may have weaknesses. But using this architecture, we could later replace the standard-defined seeding rule by a different one.
heplesser added the labels T: Enhancement, S: High, I: User Interface, and I: Behavior changes, added this to the NEST 3.0 milestone, and self-assigned the issue on Feb 21, 2020.
peteroupc commented Feb 22, 2020

An important issue is statistical independence of random number streams, which usually comes into play when generating multiple streams of random numbers. Many strategies that claim to produce such streams don't in fact produce independent random number sequences (one example is PCG's use of additive constants as stream numbers). It's hard to determine whether a given strategy (for a given PRNG) will produce independent streams without testing it.

Many existing RNGs return 32-bit integers, which are converted into [0, 1)-uniform random numbers by simple division. This allows at most 2^32=4 x 10^9 different values even for 53-bit-mantissa doubles (see also boostorg/random#61).

This is another issue on its own. Typically double numbers (64-bit binary floating point values) are generated by dividing or multiplying a random integer by a constant. But there are other approaches as well. Perhaps the most sophisticated is the Rademacher Floating-Point Library (by @christoph-conrads), whose algorithm generates uniform random binary floating-point numbers in an arbitrary range, such that any representable floating-point number has the appropriate chance of occurring. However, the algorithm is far from trivial, and the author has yet to write up how the algorithm works.

Finally, the random number engines currently available in C++ leave a lot to be desired. For example, default_random_engine has unspecified quality; mt19937, ranlux24, and ranlux48 have a bigger-than-necessary state and are nontrivial to seed; and minstd_rand0 and minstd_rand admit too few seeds. Although discard_block_engine and shuffle_order_engine can be used to improve a random engine's quality, it's hard to tell whether the quality is improved this way without testing.

See also my criteria for high-quality random generators.

peteroupc commented

Well-defined, standardised way to convert raw random numbers into uniform-[0, 1)-distributed random numbers in which all mantissa bits are randomised, i.e., 53 bits for double(generate_canonical(), §29.6.7.2). This is independent of whether the underlying generator is a 32 or 64 bit generator.

According to the paper "A new specification for std::generate_canonical", there are flaws in the current definition of generate_canonical(), rendering it far from well-defined. See also this question.

heplesser commented

@peteroupc Thank you for your input! Concerning the weakness of the Mersenne Twisters and similar generators: These weaknesses have been known for over a decade, but I have not yet seen any publication indicating that this weakness affects simulation experiments. Ferrenberg and Landau showed in 1992 that some "good old generators" distorted the results of physics simulations (Ferrenberg and Landau, Phys Rev Lett 69:3382, 1992), but I have not come across any comparable critique of MT generators.

peteroupc commented Mar 2, 2020

@peteroupc Thank you for your input! Concerning the weakness of the Mersenne Twisters and similar generators: These weaknesses have been known for over a decade, but I have not yet seen any publication indicating that this weakness affects simulation experiments. Ferrenberg and Landau showed in 1992 that some "good old generators" distorted the results of physics simulations (Ferrenberg and Landau, Phys Rev Lett 69:3382, 1992), but I have not come across any comparable critique of MT generators.

I know of one: "It Is High Time We Let Go of the Mersenne Twister", S. Vigna.

heplesser commented

@peteroupc Thanks for the pointer to the problem that std::generate_canonical may return 1. I think the paper you refer to is actually an argument for using generate_canonical, since the C++ standard experts are clearly aware of the problem and working on improving the standard, so we can expect an improved solution at some point. Until then, we can simply wrap generate_canonical in our base class to re-draw if std::generate_canonical() returns 1 (as described in the paper, this only affects the run-time guarantee of the algorithm).

Vigna's paper is interesting, especially the example of Sec 2.1. Unfortunately, he does not provide any estimates about at which matrix size one would expect to see problems for the full-sized MT19937.

In NEST, we have at least since 2002 tried to follow Knuth's advice (TAOCP, Ch 3.6)

The most prudent policy . . . to follow is to run each Monte Carlo program at least twice using quite different sources of random numbers, before taking the answers of the program seriously; this will not only give an indication of the stability of the results, it will also guard against the danger of trusting in a generator with hidden deficiencies. (Every random number generator will fail in at least one application.)

by making it easy for the user to change the RNG used. We will continue to do so and strongly encourage users to re-run simulations using generators from different families.

peteroupc commented

Vigna's paper is interesting, especially the example of Sec 2.1. Unfortunately, he does not provide any estimates about at which matrix size one would expect to see problems for the full-sized MT19937.

@vigna, do you have a comment?

peteroupc commented

Many of the generators recommended by @peteroupc have rather short periods for our purposes.

  • You appear to say that 128 bits are "too short" for your purposes. What about 192 bits? Is it the 128-bit and shorter PRNGs you are referring to?
  • There are PRNGs that give each seed its own independent random number sequence, indexed by a counter. Depending on the counter size, this property can reduce the risk of overlap even further.
  • I list the following as an example of a high-quality PRNG: "A high-quality PRNG that outputs hash codes of a C-bit counter and an S-bit seed". Many hash functions, cryptographic and non-cryptographic (such as MurmurHash3, xxHash, MD5, etc.), can serve as the basis for a counter-based PRNG with excellent randomness quality. And hash functions can theoretically take inputs of arbitrary size, thus allowing for periods of arbitrary size. (One possible example is a MurmurHash3 based PRNG that hashes a 64-bit seed and a 192-bit counter.) Random123's PRNGs use particular hash functions, but any other hash function can substitute for them as long as the resulting PRNG provides adequate randomness.

heplesser commented

See also https://bashtage.github.io/randomgen/index.html on a modern generator suite for NumPy.

heplesser commented

  • You appear to say that 128 bits are "too short" for your purposes. What about 192 bits? Is it the 128-bit and shorter PRNGs you are referring to?

The overlap analysis including the r^(1/3) rule of L'Ecuyer indicates that periods of 2^128 are too short to avoid overlaps. With a period of 2^256, the probability of overlap is ~10^-30, which seems acceptable. I haven't seen (but also haven't searched for) generators with period 2^192.

  • There are PRNGs that give each seed its own independent random number sequence, indexed by a counter. Depending on the counter size, this property can reduce the risk of overlap even further.

Indeed, that's why I propose to include Random123 which provides such generators.

  • I list the following as an example of a high-quality PRNG: "A high-quality PRNG that outputs hash codes of a C-bit counter and an S-bit seed". Many hash functions, cryptographic and non-cryptographic (such as MurmurHash3, xxHash, MD5, etc.), can serve as the basis for a counter-based PRNG with excellent randomness quality. And hash functions can theoretically take inputs of arbitrary size, thus allowing for periods of arbitrary size. (One possible example is a MurmurHash3 based PRNG that hashes a 64-bit seed and a 192-bit counter.) Random123's PRNGs use particular hash functions, but any other hash function can substitute for them as long as the resulting PRNG provides adequate randomness.

Random number generators are a small, if crucial, component of NEST. We do not want to engage in RNG development ourselves, but build on well established software, preferably with a wide user community (increases risk of weaknesses coming to light), active developer community (bugfixes, performance improvements, adaptation to evolving standards) and good development practices (code repo, issue tracker, code review, CI testing). I would not want to invest time in devising new schemes ourselves (we need to focus our resources where we are specialists).

vigna commented Mar 2, 2020

@heplesser there's no need to guess because I just published an exact computation of the overlap probability: http://vigna.di.unimi.it/ftp/papers/overlap.pdf

If you have period P and n processors using L outputs each, the probability of overlap is bounded by n²L/P, and for n²L much smaller than P the bound is quite precise (the paper contains exact upper and lower bounds).

vigna commented Mar 2, 2020

20000⨉20000. You just need it to be larger than the number of state bits. So, quite large. It takes more time to count the odd coefficients (probably a day or two per sample—an educated guess from the fact that the algorithm is cubic). If you want I can try in my spare time.

jhnnsnk commented Mar 2, 2020

I would like to put up a specific use case from a project by @sdasbach for discussion: It became necessary to assess the influence of different sources of randomization on the network dynamics. In particular, we independently randomized node parameters, the selection of connections to be created, and the parametrization of connections. What would be the preferred way to do that in the new framework?

heplesser commented

@heplesser there's no need to guess because I just published an exact computation of the overlap probability: http://vigna.di.unimi.it/ftp/papers/overlap.pdf

If you have period P and n processors using L outputs each, the probability of overlap is bounded by n²L/P, and for n²L much smaller than P the bound is quite precise (the paper contains exact upper and lower bounds).

@vigna Thanks!

heplesser commented

20000⨉20000. You just need it to be larger than the number of state bits. So, quite large. It takes more time to count the odd coefficients (probably a day or two per sample—an educated guess from the fact that the algorithm is cubic). If you want I can try in my spare time.

@vigna I am slightly confused by this post. Was this meant for a different discussion?

heplesser commented

@jhnnsnk In NEST3 (as in NEST2) you will be able to re-seed or even exchange the RNGs at any time. So you could perform each step, then select a different RNG type (or keep the type but choose new seeds). In order to use different sources of randomness for which connections to create and which parameters to use for them, you would have to create the connections first and then set the parameters on them.

But if the RNGs used are any good, you should not need to do this—a single stream of numbers per VP should suffice. I am very curious about what you and @sdasbach found (offline if you do not want to spill the news before a possible publication).

vigna commented Mar 2, 2020

@heplesser I was answering to

Vigna's paper is interesting, especially the example of Sec 2.1. Unfortunately, he does not provide any estimates about at which matrix size one would expect to see problems for the full-sized MT19937.

heplesser commented

... and then libc++ breaks it all ...

@hakonsbm points out that Clang will not accept generators wrapped by a base class, which is required to allow for flexible exchange of RNG types at runtime. The reason is subtle, entirely standard-conformant, most likely performance enhancing, but a deal-breaker for us.

The standard defines in §29.6.1.3, Table 103, that the min() and max() methods of a generator must have compile-time complexity, and the class templates in §29.6.3 declare these methods as static constexpr.

Of the three C++ Library implementations, libc++ exploits this fully in generate_canonical(), by accessing these methods as class methods:

const size_t __logR = __log2<uint64_t, _URNG::max() - _URNG::min() + uint64_t(1)>::value;

This obviously precludes any idea of passing generators via base-class pointers.

Excluding Clang/libc++ is not an option, and we cannot rule out that the other C++ library implementations might change to static method access, too. I have a vague idea of how to work around this, but it is so complex I would not want to consider it.

The only way out would be to select the RNG type at compile time, but this would make it far too difficult for users to adhere to the good practice of cross-checking results with different RNG types.

So it seems that we cannot use the C++ Library random generators and distributions after all.

heplesser commented

@hakonsbm @jougs I have an idea how we can solve the "no baseclassing" problem in a reasonably elegant way, more later.

jhnnsnk commented Mar 3, 2020

@jhnnsnk In NEST3 (as in NEST2) you will be able to re-seed or even exchange the RNGs at any time. So you could perform each step, then select a different RNG type (or keep the type but choose new seeds). In order to use different sources of randomness for which connections to create and which parameters to use for them, you would have to create the connections first and then set the parameters on them.

@heplesser Thanks for your reply. In our previous work with NEST 2 we used the option to re-seed before each step, but we also had to use Python random number generators in addition, which should no longer be necessary.

hakonsbm commented Mar 3, 2020

@heplesser @jougs By creating a wrapper class for the distributions, I have found a way to call the distribution stored in the distribution wrapper with the generator stored in the RNG wrapper, making all the compilers happy. The RNGs are still exchangeable at runtime, and (most of) the interface and flexibility is kept the same.

heplesser commented

@hakonsbm Sounds great!

christoph-conrads commented

Perhaps the most sophisticated is the Rademacher Floating-Point Library (by @christoph-conrads), whose algorithm generates uniform random binary floating-point numbers in an arbitrary range, such that any representable floating-point number has the appropriate chance of occurring.

Rademacher FPL author here. Thank you for mentioning my library.

The three relevant properties of a uniform random floating-point number generator for the range [a,b) are

  1. uniform distribution of values,
  2. the values are drawn from [a,b),
  3. all floating-point values in range are drawn.

For the interval [0,1), all generators that compute floats as x/2^b possess properties 1 and 2 if b is not larger than the number of significand bits. Otherwise the generator violates all three properties unless round-toward-zero is employed.

For intervals [a,b) other than [0,1), values are (to the best of my knowledge) always computed as (b-a) * x + a, where x is a random value in [0,1). This works with real numbers but not with floating-point numbers. Consider, for example, that there are 2^30 single-precision floats in [0,1) but the target interval may contain 2^30+1 floats; the computed distribution cannot be uniform in this case. There may also be rounding errors when computing b-a, (b-a) * x, or (b-a) * x + a.

If you need to draw from all floating-point values in [0,1) or if you need to draw from a uniform distribution over an interval that is not [0,1), then you must use my library.

hakonsbm commented Mar 9, 2020

The most recent implementation is now on my branch. Compiling with Clang, there are currently 11 tests failing (down from 17):

SLI tests

  • unittests/test_gif_{cond,psc}_{exp,exp_multisynapse}.sli
    • They all use reference values, but because there is a random component in the models, assertion fails when we get different values with different std libraries.
  • unittests/test_mip_corrdet.sli
    • Same problem with reference values.
  • unittests/test_poisson_ps_intervals.sli
    • Resulting coefficient of variation is not sufficiently close to 1 (difference should be less than 0.001, we get 0.005585544). Update: Increased the rate and simulation time to make statistics more stable.
  • unittests/test_ppd_sup_generator.sli
    • Resulting coefficient of variation is not sufficiently close to 1 (should be 1.0 ± 0.2, we get 1.21687). Update: Adjusting the dead-time avoids the check of CV**2 failing due to bad luck.
  • regressiontests/issue-77.sli
  • mpitests/test_parallel_conn_and_rand.sli
    • The script is not invariant to the number of processes when using the normal random distribution Parameter for the weight. Other random Parameters do not make the test fail. This is very strange. I will try to figure out what goes wrong here. Update: The normal random distribution Parameter stored the C++ distribution object, which was then used by all threads. Because distributions are not stateless, this caused different results with different numbers of threads. The Parameter now has one distribution object per thread.

Python tests

  • test_connect_all_patterns.py
    • The test runs a few selected tests again with multiple processes. Running mpirun -n 2 test_connect_fixed_total_number.py fails. Update: Increased the number of iterations in the statistics test for more accurate results.
  • test_growth_curves.py
    • Tests use reference values, see above.
  • test_connect_layers.py
    • Tests use reference values, see above. Update: Using a Kolmogorov-Smirnov test instead of reference values to check the results.
  • test_spatial_kernels.py
    • Test of gamma distribution fails.
