• composition.py, autodiffcomposition.py and relevant subordinate methods:
- implement synch and track parameter dictionaries that are passed to relevant methods
- add/rename attributes:
- PytorchCompositionWrapper:
- retained_outputs
- retained_targets
- retained_losses
- _nodes_to_execute_after_gradient_calc
- PytorchMechanismWrapper:
- value -> output
- input
- add methods:
- synch_with_psyneulink(): centralize copying of params and values to PNL using methods below (see sketch after this list)
- copy_node_variables_to_psyneulink(): centralize updating of node (mech & comp) variables in PNL
- copy_node_values_to_psyneulink(): centralize updating of node (mech & comp) values in PNL
- copy_results_to_psyneulink(): centralize updating of autodiffcomposition.results
- retain_in_psyneulink(): centralize tracking of pytorch results in PNL using methods below
- retain_torch_outputs: keeps record of outputs and copies to AutodiffComposition.pytorch_outputs at end of call to learn()
- retain_torch_targets: keeps record of targets and copies to AutodiffComposition.pytorch_targets at end of call to learn()
- retain_torch_losses: keeps record of losses and copies to AutodiffComposition.pytorch_losses at end of call to learn()
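A minimal sketch of the two centralized entry points listed above. The copy_*/retain_* helper names come from this changelog; the option keys, signatures, and free-function form are illustrative assumptions, not PsyNeuLink's actual API:

```python
# Hypothetical sketch of the centralized synch/retain entry points above.
# Option keys and signatures are assumptions made for illustration.

def synch_with_psyneulink(wrapper, synch_options, context=None):
    """Single entry point for copying params and values back to PNL."""
    if synch_options.get('node_variables'):
        wrapper.copy_node_variables_to_psyneulink(context)
    if synch_options.get('node_values'):
        wrapper.copy_node_values_to_psyneulink(context)
    if synch_options.get('results'):
        wrapper.copy_results_to_psyneulink(context)

def retain_in_psyneulink(wrapper, outputs=None, targets=None, losses=None):
    """Single entry point for accumulating PyTorch results, copied to PNL
    (e.g., AutodiffComposition.pytorch_outputs) at the end of learn()."""
    if outputs is not None:
        wrapper.retained_outputs.append(outputs)
    if targets is not None:
        wrapper.retained_targets.append(targets)
    if losses is not None:
        wrapper.retained_losses.append(losses)
```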
• compositionrunner.py, autodiffcomposition.py, pytorchwrappers.py:
- move loss tracking from parameter on autodiff to attribute on its pytorch_rep
- batch_inputs(): add calls to synch_with_psyneulink() and retain_in_psyneulink()
- batch_function_inputs(): needs calls to synch_with_psyneulink() and retain_in_psyneulink()
• composition.py:
- run(): add _update_results() as a helper method that can be overridden (e.g., by autodiffcomposition) for less frequent updating
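A minimal sketch of the hook this describes, with an assumed signature (not Composition's actual API): run() delegates result recording to _update_results(), which a subclass can override to update less frequently:

```python
# Sketch of the _update_results() hook; signature and internals are
# assumptions made for illustration.

class Composition:
    def __init__(self):
        self.results = []

    def _execute_trial(self, trial_input):
        return trial_input  # placeholder for actual trial execution

    def _update_results(self, trial_output):
        self.results.append(trial_output)  # default: record every trial

    def run(self, inputs):
        for trial_input in inputs:
            trial_output = self._execute_trial(trial_input)
            self._update_results(trial_output)  # overridable hook
        return self.results

class AutodiffComposition(Composition):
    def __init__(self):
        super().__init__()
        self.retained = []

    def _update_results(self, trial_output):
        # Overridden: retain the trial output on the pytorch side instead of
        # updating PNL results every trial; copied back at end of learn().
        self.retained.append(trial_output)
```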
• autodiffcomposition.py
- restrict calls to copy_weights_to_psyneulink based on copy_parameters_to_psyneulink_after arg/attribute
- implement handling of optimizations_per_minibatch and copy_parameters_to_psyneulink as attributes and args to learn
- autodiff_training(): fix bug in call to pytorch_rep.forward()
- implement synch and track Parameters
- _manage_synch_and_retain_args()
- run(): support specification of synch and retain args when called directly
- autodiff._update_learning_parameters -> do_optimization():
- calculates loss for current trial
- calls autodiff_backward() to calculate gradients and update parameters
- updates tracked_loss over trials
- autodiff_backward() -> new method called from do_optimization() that calculates and updates the gradients (see sketch after this list)
- self.loss -> self.loss_function
- _update_results(): overridden to call pytorch_rep.retain_for_psyneulink(RUN:trial_output)
- learn():
- move tracked_loss for each minibatch from parameter on autodiff to attribute on its pytorch_rep
(since that is already context-dependent, and avoids calls to pnl.parameters._set on every call to forward())
- synch_with_pnl_options:
implement as dict to consolidate the synch_projection_matrices_with_torch, synch_node_variables_with_torch and synch_node_values_with_torch options passed to learning methods (see sketch below)
- retain_in_pnl_options:
implement as dict to consolidate the retain_torch_outputs_in_results, retain_torch_targets and retain_torch_losses options passed to learning methods
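A rough sketch of the control flow and consolidated option dicts described above. All names, dict values, and signatures are assumptions made for illustration; optimizer and loss_function stand for ordinary torch objects (e.g., torch.optim.SGD, torch.nn.MSELoss):

```python
# Consolidated option dicts; keys mirror the individual args they replace,
# values here are placeholders.
synch_with_pnl_options = {
    'synch_projection_matrices_with_torch': True,
    'synch_node_variables_with_torch': False,
    'synch_node_values_with_torch': True,
}
retain_in_pnl_options = {
    'retain_torch_outputs_in_results': True,
    'retain_torch_targets': True,
    'retain_torch_losses': True,
}

def autodiff_backward(loss, optimizer):
    """New method called from do_optimization(): calculate gradients and
    update parameters."""
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def do_optimization(pytorch_rep, optimizer, loss_function, output, target):
    """Calculate the loss for the current trial, run the backward pass, and
    track loss as an attribute on pytorch_rep (not a PNL Parameter), avoiding
    pnl.parameters._set on every call to forward()."""
    loss = loss_function(output, target)
    autodiff_backward(loss, optimizer)
    pytorch_rep.tracked_loss = getattr(pytorch_rep, 'tracked_loss', 0.0) + loss.item()
```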
• pytorchwrappers.py
- subclass PytorchCompositionWrapper from torch.jit.ScriptModule (see sketch below)
- retain_for_psyneulink(): implemented
- stores outputs, targets, and losses from Pytorch execution for copying to PsyNeuLink at end of learn().
- PytorchMechanismWrapper:
- .value -> .output
- add .input
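A skeletal sketch of the wrapper changes above; the base class and attribute names come from this changelog, while the initializations are illustrative:

```python
import torch

class PytorchCompositionWrapper(torch.jit.ScriptModule):
    """Now subclassed from torch.jit.ScriptModule per this changelog."""
    def __init__(self):
        super().__init__()
        self.retained_outputs = []  # copied to PNL at end of learn()
        self.retained_targets = []
        self.retained_losses = []

class PytorchMechanismWrapper:
    def __init__(self):
        self.input = None    # newly added
        self.output = None   # renamed from .value
```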
• pytorchEMcompositionwrapper.py
- store_memory():
- implement single call to linalg over memory
- only execute storage_node after last optimization_rep
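An illustrative sketch of the vectorization described above, assuming a 2-D memory tensor and a lowest-norm replacement rule (both assumptions, not necessarily EMComposition's actual storage rule):

```python
import torch

def store_memory(memory, new_entry):
    """memory: (n_entries, entry_len); new_entry: (entry_len,).
    Replaces a per-entry loop with a single torch.linalg call over all of
    memory at once. Per the item above, this would run only after the
    last optimization_rep."""
    norms = torch.linalg.vector_norm(memory, dim=1)  # one call over memory
    weakest = torch.argmin(norms)                    # entry to overwrite
    memory[weakest] = new_entry
    return memory

memory = store_memory(torch.rand(10, 4), torch.ones(4))
```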
• keywords.py
- implement LearningScale keywords class
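A hedged sketch of what a LearningScale keywords class might look like; the member set is inferred from the scales mentioned elsewhere in this changelog (optimization step, trial, minibatch, epoch, run) and may not match the actual class:

```python
from enum import Enum, auto

class LearningScale(Enum):
    OPTIMIZATION_STEP = auto()
    TRIAL = auto()
    MINIBATCH = auto()
    EPOCH = auto()
    RUN = auto()
```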
• AutoAssociativeProjection:
- make dependent on MaskedMappingProjection in prep for allowing LCAMechanism to modulate auto/hetero parameters (see sketch below)
- fix Literals import
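For background, a small sketch of the auto/hetero parameterization this refactor targets: `auto` weights the diagonal (self-connections) and `hetero` the off-diagonal entries, which is what an LCAMechanism would modulate. This is illustrative only, not PNL's implementation:

```python
import numpy as np

def auto_hetero_matrix(size, auto, hetero):
    """auto -> diagonal (self-connections); hetero -> off-diagonal."""
    eye = np.eye(size)
    return auto * eye + hetero * (np.ones((size, size)) - eye)

print(auto_hetero_matrix(3, auto=1.0, hetero=-0.5))
# [[ 1.  -0.5 -0.5]
#  [-0.5  1.  -0.5]
#  [-0.5 -0.5  1. ]]
```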
• Factorize scripts into:
- ScriptControl.py
- TestParams.py
- [MODEL].py
---------
Co-authored-by: jdcpni <pniintel55>