Modify engine.py to evaluate solutions in parallel, by registering a parallel map function (i.e. add `toolbox.register("map", some_mapping_function)`).
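For reference, a minimal sketch of what the hook looks like with the standard library's multiprocessing (the pool setup below is illustrative, not motifgp's actual code):

```python
import multiprocessing

from deap import base

toolbox = base.Toolbox()

if __name__ == "__main__":
    # DEAP calls toolbox.map when evaluating fitnesses inside its
    # algorithms, so swapping in pool.map runs evaluations across
    # worker processes.
    pool = multiprocessing.Pool()
    toolbox.register("map", pool.map)
```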
Possibilities include Python multiprocessing, Scoop, or pathos. The multiprocessing module is standard Python, while Scoop appears to integrate well with DEAP and seems to extend beyond multiprocessing (e.g. support for clusters), which is nice. The issue with these modules is that they don't automatically serialize instance methods and certain modules required by the function mapped to toolbox.evaluate, which makes them difficult to use with motifgp without significant modifications. Pathos appears to handle serialization by default, so it might be a good starting point.
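If pathos is the way to go, the registration should look roughly like this (untested sketch; pathos pools serialize with dill rather than pickle, which is what lets bound methods survive the trip to worker processes):

```python
from pathos.multiprocessing import ProcessingPool

from deap import base

toolbox = base.Toolbox()

# ProcessingPool.map has the same shape as the builtin map, so it
# can be dropped in as toolbox.map without touching the evaluate code.
pool = ProcessingPool()
toolbox.register("map", pool.map)
```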
An issue with this multiprocessing is that it might break memoization, since the dict backing the cache is not shared between processes, so it probably won't be updated or visible across evaluations. This might be fixable with a workaround using shared memory. If not, it might be better to add cache-less parallelism, or to reconsider Python multiprocessing or Scoop.
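One possible shared-memory workaround (untested sketch; `evaluate_motif` below is a placeholder for whatever toolbox.evaluate actually wraps) is a Manager-backed dict, which proxies a single cache to all workers at the cost of IPC on every lookup:

```python
import multiprocessing
from functools import partial

def evaluate_motif(individual):
    # Placeholder for the real fitness function.
    return (sum(individual),)

def evaluate_cached(individual, cache):
    # The Manager dict is a proxy object, so reads and writes cross
    # process boundaries -- correct, but slower than a local dict.
    key = tuple(individual)
    if key not in cache:
        cache[key] = evaluate_motif(individual)
    return cache[key]

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    shared_cache = manager.dict()  # one cache, visible to all workers
    pool = multiprocessing.Pool()
    evaluate = partial(evaluate_cached, cache=shared_cache)
    print(pool.map(evaluate, [[1, 2], [3, 4], [1, 2]]))
```

Whether the proxy overhead eats the savings from memoization would need benchmarking; if it does, cache-less parallelism is the simpler fallback.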