Use v2 of emergent, etable, empi, and Cogent Core #28

Open · wants to merge 32 commits into base: main

Commits (32):
0071b83
update to v2 module url
kkoreilly Mar 9, 2024
f4e1ff7
remove old makefile; add core.toml config file
kkoreilly Mar 9, 2024
69227be
update to v2 versions of all emer repos
kkoreilly Mar 9, 2024
6881301
update to new struct field doc paradigm using goki/go-tools
kkoreilly Mar 9, 2024
bf5102a
update to new mat32; rename LayerStru to LayerBase
kkoreilly Mar 9, 2024
4774a22
successfully implement emer.Layer and emer.Prjn interfaces
kkoreilly Mar 9, 2024
b982f90
renamed NetworkStru to NetworkBase; got all of base leabra code building
kkoreilly Mar 9, 2024
8f0e9f4
get core agate, deep, and glong code building
kkoreilly Mar 9, 2024
4e2989c
got all core leabra library code building
kkoreilly Mar 9, 2024
5db1813
start using new ki and gi core packages
kkoreilly Mar 9, 2024
46e78a0
get rid of old kit usage
kkoreilly Mar 9, 2024
caad3f2
get more things building
kkoreilly Mar 9, 2024
7bbf6b6
add enumgen and gtigen; update more to new enums structure
kkoreilly Mar 9, 2024
01e223f
get all enums building with new structure
kkoreilly Mar 9, 2024
815e585
fix ra25 params
kkoreilly Mar 9, 2024
ea32d1e
got ra25 example building and running with v2
kkoreilly Mar 9, 2024
4db101b
MaxParallelData must be at least 1; now ra25 doesn't immediately crash
kkoreilly Mar 10, 2024
142e7f0
must run step functions in separate goroutine
kkoreilly Mar 10, 2024
597a8a2
got ra25 test item working
kkoreilly Mar 10, 2024
96a2ede
remove old version file
kkoreilly Mar 10, 2024
3c2202d
clean up import statements
kkoreilly Mar 10, 2024
c76f9e4
resolve various v2 issues
kkoreilly Mar 10, 2024
9d05975
start updating to new core and emer naming changes
kkoreilly Apr 6, 2024
bf06b3e
remove old ci file
kkoreilly Apr 6, 2024
dcd0b16
update ci to use new core tool
kkoreilly Apr 14, 2024
8822e26
initial renaming of some core packages
kkoreilly Apr 14, 2024
5feda3c
more renaming of core packages
kkoreilly Apr 14, 2024
8c288bc
further renaming of core packages
kkoreilly Apr 14, 2024
418c942
further renaming of core packages, especially of tree and imagex
kkoreilly Apr 14, 2024
dc4f5a3
delete old gtigen files
kkoreilly Apr 15, 2024
2893449
use new RunDialog helper function
kkoreilly Apr 27, 2024
a8621b4
fix ly hyphen grammar mistake and other spelling mistakes
kkoreilly Apr 27, 2024
57 changes: 0 additions & 57 deletions .github/workflows/ci.yml

This file was deleted.

34 changes: 34 additions & 0 deletions .github/workflows/go.yml
@@ -0,0 +1,34 @@
+name: Go
+
+on:
+  push:
+    branches: [ "main" ]
+  pull_request:
+    branches: [ "main" ]
+
+jobs:
+
+  build:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v3
+
+    - name: Set up Go
+      uses: actions/setup-go@v4
+      with:
+        go-version: '1.22'
+
+    - name: Set up Core
+      run: go install cogentcore.org/core/cmd/core@main && core setup
+
+    - name: Build
+      run: go build -v ./...
+
+    - name: Test
+      run: go test -v ./... -coverprofile cover.out
+
+    - name: Update coverage report
+      uses: ncruces/go-coverage-report@v0
+      with:
+        coverage-file: cover.out
+      if: github.event_name == 'push'
39 changes: 0 additions & 39 deletions .travis.yml

This file was deleted.

69 changes: 0 additions & 69 deletions Makefile

This file was deleted.

14 changes: 7 additions & 7 deletions README.md
@@ -5,11 +5,11 @@
[![CI](https://github.com/emer/leabra/actions/workflows/ci.yml/badge.svg)](https://github.com/emer/leabra/actions/workflows/ci.yml)
[![Codecov](https://codecov.io/gh/emer/leabra/branch/master/graph/badge.svg?token=Hw5cInAxY3)](https://codecov.io/gh/emer/leabra)

-This is the Go implementation of the Leabra algorithm for biologically-based models of cognition, based on the [Go emergent](https://github.com/emer/emergent) framework (with optional Python interface).
+This is the Go implementation of the Leabra algorithm for biologically based models of cognition, based on the [Go emergent](https://github.com/emer/emergent) framework (with optional Python interface).

See [Wiki Install](https://github.com/emer/emergent/wiki/Install) for installation instructions, and the [Wiki Rationale](https://github.com/emer/emergent/wiki/Rationale) and [History](https://github.com/emer/emergent/wiki/History) pages for a more detailed rationale for the new version of emergent, and a history of emergent (and its predecessors).

-See the [ra25 example](https://github.com/emer/leabra/blob/master/examples/ra25/README.md) for a complete working example (intended to be a good starting point for creating your own models), and any of the 26 models in the [Comp Cog Neuro sims](https://github.com/CompCogNeuro/sims) repository which also provide good starting points. See the [etable wiki](https://github.com/emer/etable/wiki) for docs and example code for the widely-used etable data table structure, and the `family_trees` example in the CCN textbook sims which has good examples of many standard network representation analysis techniques (PCA, cluster plots, RSA).
+See the [ra25 example](https://github.com/emer/leabra/blob/master/examples/ra25/README.md) for a complete working example (intended to be a good starting point for creating your own models), and any of the 26 models in the [Comp Cog Neuro sims](https://github.com/CompCogNeuro/sims) repository which also provide good starting points. See the [etable wiki](https://github.com/emer/etable/wiki) for docs and example code for the widely used etable data table structure, and the `family_trees` example in the CCN textbook sims which has good examples of many standard network representation analysis techniques (PCA, cluster plots, RSA).

See [python README](https://github.com/emer/leabra/blob/master/python/README.md) and [Python Wiki](https://github.com/emer/emergent/wiki/Python) for info on using Python to run models.

@@ -31,9 +31,9 @@ See [python README](https://github.com/emer/leabra/blob/master/python/README.md)

* There are 3 main levels of structure: `Network`, `Layer` and `Prjn` (projection). The network calls methods on its Layers, and Layers iterate over both `Neuron` data structures (which have only a minimal set of methods) and the `Prjn`s, to implement the relevant computations. The `Prjn` fully manages everything about a projection of connectivity between two layers, including the full list of `Synapse` elements in the connection. There is no "ConGroup" or "ConState" level as was used in C++, which greatly simplifies many things. The Layer also has a set of `Pool` elements, one for each level at which inhibition is computed (there is always one for the Layer, and then optionally one for each Sub-Pool of units (*Pool* is the new simpler term for "Unit Group" from C++ emergent)).
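To make that containment concrete, here is a rough, illustrative sketch (types and fields are highly simplified stand-ins, not the actual leabra data structures):

```go
package sketch

// Illustrative stand-ins for the containment described above; the real
// leabra types have many more fields and methods.
type Synapse struct{ Wt float32 }

// Prjn fully owns the synapses connecting two layers.
type Prjn struct {
	Syns []Synapse
}

type Neuron struct{ Act float32 }

// Pool holds inhibition state, one per inhibition level (whole layer, sub-pools).
type Pool struct{ Inhib float32 }

type Layer struct {
	Neurons  []Neuron
	Pools    []Pool  // Pools[0] is always the whole-layer pool
	RcvPrjns []*Prjn // projections coming into this layer
}

type Network struct {
	Layers []*Layer
}

// Cycle shows the calling pattern: the network calls its layers, and each
// layer iterates over its neurons and its receiving projections.
func (nt *Network) Cycle() {
	for _, ly := range nt.Layers {
		for i := range ly.Neurons {
			ly.Neurons[i].Act *= 0.99 // stand-in for per-neuron computation
		}
		for _, pj := range ly.RcvPrjns {
			for i := range pj.Syns {
				pj.Syns[i].Wt += 0.0 // stand-in for per-synapse computation
			}
		}
	}
}
```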

-* The `NetworkStru` and `LayerStru` structs manage all the core structural aspects of things (data structures etc), and then the algorithm-specific versions (e.g., `leabra.Network`) use Go's anonymous embedding (akin to inheritance in C++) to transparently get all that functionality, while then directly implementing the algorithm code. Almost every step of computation has an associated method in `leabra.Layer`, so look first in [layer.go](https://github.com/emer/leabra/blob/master/leabra/layer.go) to see how something is implemented.
+* The `NetworkBase` and `LayerBase` structs manage all the core structural aspects of things (data structures etc), and then the algorithm-specific versions (e.g., `leabra.Network`) use Go's anonymous embedding (akin to inheritance in C++) to transparently get all that functionality, while then directly implementing the algorithm code. Almost every step of computation has an associated method in `leabra.Layer`, so look first in [layer.go](https://github.com/emer/leabra/blob/master/leabra/layer.go) to see how something is implemented.
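A minimal sketch of the embedding pattern being described (the names are illustrative, not the real `LayerBase` fields):

```go
package sketch

// LayerBase stands in for the structural base type: core bookkeeping only.
type LayerBase struct {
	Nm string // layer name, geometry, indexes, etc. would live here
}

// Name is a base method that every embedding type gets for free.
func (lb *LayerBase) Name() string { return lb.Nm }

// Layer is the algorithm-specific type: anonymous embedding of LayerBase
// promotes all of its fields and methods (akin to inheritance in C++),
// and the algorithm code is implemented directly on Layer.
type Layer struct {
	LayerBase
	Gi float32 // an algorithm-specific parameter, for illustration
}

// ActFmG stands in for one algorithm-specific computation step.
func (ly *Layer) ActFmG() float32 { return ly.Gi * 0.5 }
```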

-* Each structural element directly has all the parameters controlling its behavior -- e.g., the `Layer` contains an `ActParams` field (named `Act`), etc, instead of using a separate `Spec` structure as in C++ emergent. The Spec-like ability to share parameter settings across multiple layers etc is instead achieved through a **styling**-based paradigm -- you apply parameter "styles" to relevant layers instead of assigning different specs to them. This paradigm should be less confusing and less likely to result in accidental or poorly-understood parameter applications. We adopt the CSS (cascading-style-sheets) standard where parameters can be specified in terms of the Name of an object (e.g., `#Hidden`), the *Class* of an object (e.g., `.TopDown` -- where the class name TopDown is manually assigned to relevant elements), and the *Type* of an object (e.g., `Layer` applies to all layers). Multiple space-separated classes can be assigned to any given element, enabling a powerful combinatorial styling strategy to be used.
+* Each structural element directly has all the parameters controlling its behavior -- e.g., the `Layer` contains an `ActParams` field (named `Act`), etc, instead of using a separate `Spec` structure as in C++ emergent. The Spec-like ability to share parameter settings across multiple layers etc is instead achieved through a **styling**-based paradigm -- you apply parameter "styles" to relevant layers instead of assigning different specs to them. This paradigm should be less confusing and less likely to result in accidental or poorly understood parameter applications. We adopt the CSS (cascading-style-sheets) standard where parameters can be specified in terms of the Name of an object (e.g., `#Hidden`), the *Class* of an object (e.g., `.TopDown` -- where the class name TopDown is manually assigned to relevant elements), and the *Type* of an object (e.g., `Layer` applies to all layers). Multiple space-separated classes can be assigned to any given element, enabling a powerful combinatorial styling strategy to be used.
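As a hedged illustration of this styling paradigm, here is what a parameter set can look like using the v1-style `params` API from emergent (the selector strings, field paths, and values are illustrative, and the exact structure differs somewhat in the v2 packages this PR moves to):

```go
package sketch

import "github.com/emer/emergent/params"

// ParamSets applies parameter "styles" by CSS-like selector: a Type
// selector ("Layer"), a Class selector (".TopDown"), and a Name selector
// ("#Hidden"). More specific selectors override more general ones.
var ParamSets = params.Sets{
	{Name: "Base", Desc: "illustrative base parameters", Sheets: params.Sheets{
		"Network": &params.Sheet{
			{Sel: "Layer", Desc: "applies to every layer",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "1.8",
				}},
			{Sel: ".TopDown", Desc: "only layers given the TopDown class",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "2.2",
				}},
			{Sel: "#Hidden", Desc: "only the layer named Hidden",
				Params: params.Params{
					"Layer.Act.Gbar.L": "0.2",
				}},
		},
	}},
}
```

In a simulation these styles are then applied to the network (e.g., via something like `ApplyParams`), which is what replaces assigning per-object Specs in C++ emergent.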

* Go uses `interfaces` to represent abstract collections of functionality (i.e., sets of methods). The `emer` package provides a set of interfaces for each structural level (e.g., `emer.Layer` etc) -- any given specific layer must implement all of these methods, and the structural containers (e.g., the list of layers in a network) are lists of these interfaces. An interface is implicitly a *pointer* to an actual concrete object that implements the interface. Thus, we typically need to convert this interface into the pointer to the actual concrete type, as in:
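For example, a minimal sketch of such a conversion (the `LayerByName` method and `Act` field follow the API as described in this README; treat the exact names as assumptions rather than the repository's definitive interface):

```go
package sketch

import "github.com/emer/leabra/v2/leabra"

// hiddenAct looks a layer up through the generic emer.Layer interface and
// type-asserts it back to the concrete *leabra.Layer, so that the
// algorithm-specific Act parameters become accessible.
func hiddenAct(net *leabra.Network) (*leabra.ActParams, bool) {
	li := net.LayerByName("Hidden") // returns the emer.Layer interface
	ly, ok := li.(*leabra.Layer)    // convert back to the concrete pointer type
	if !ok {
		return nil, false // a specialized layer type implements the interface here
	}
	return &ly.Act, true
}
```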

@@ -62,7 +62,7 @@ There are several changes from the original C++ emergent implementation for how

# The Leabra Algorithm

-Leabra stands for *Local, Error-driven and Associative, Biologically Realistic Algorithm*, and it implements a balance between error-driven (backpropagation) and associative (Hebbian) learning on top of a biologically-based point-neuron activation function with inhibitory competition dynamics (either via inhibitory interneurons or an approximation thereof), which produce k-Winners-Take-All (kWTA) sparse distributed representations. Extensive documentation is available from the online textbook: [Computational Cognitive Neuroscience](https://CompCogNeuro.org) which serves as a second edition to the original book: *Computational Explorations in Cognitive Neuroscience: Understanding
+Leabra stands for *Local, Error-driven and Associative, Biologically Realistic Algorithm*, and it implements a balance between error-driven (backpropagation) and associative (Hebbian) learning on top of a biologically based point-neuron activation function with inhibitory competition dynamics (either via inhibitory interneurons or an approximation thereof), which produce k-Winners-Take-All (kWTA) sparse distributed representations. Extensive documentation is available from the online textbook: [Computational Cognitive Neuroscience](https://CompCogNeuro.org) which serves as a second edition to the original book: *Computational Explorations in Cognitive Neuroscience: Understanding
the Mind by Simulating the Brain*, O'Reilly and Munakata, 2000,
Cambridge, MA: MIT Press. [Computational Explorations..](http://psych.colorado.edu/~oreilly/comp_ex_cog_neuro.html)

@@ -111,7 +111,7 @@ This repository contains specialized additions to the core algorithm described h

## Timing

-Leabra is organized around the following timing, based on an internally-generated alpha-frequency (10 Hz, 100 msec periods) cycle of expectation followed by outcome, supported by neocortical circuitry in the deep layers and the thalamus, as hypothesized in the [DeepLeabra](#deepleabra) extension to standard Leabra:
+Leabra is organized around the following timing, based on an internally generated alpha-frequency (10 Hz, 100 msec periods) cycle of expectation followed by outcome, supported by neocortical circuitry in the deep layers and the thalamus, as hypothesized in the [DeepLeabra](#deepleabra) extension to standard Leabra:

* A **Trial** lasts 100 msec (10 Hz, alpha frequency), and comprises one sequence of expectation -- outcome learning, organized into 4 quarters.
+ Biologically, the deep neocortical layers (layers 5, 6) and the thalamus have a natural oscillatory rhythm at the alpha frequency. Specific dynamics in these layers organize the cycle of expectation vs. outcome within the alpha cycle.
@@ -233,7 +233,7 @@ Learning is based on running-averages of activation variables, parameterized in
+ `AvgLLrn = ((Max - Min) / (Gain - Min)) * (AvgL - Min)`
+ learning strength factor for how much to learn based on AvgL floating threshold -- this is dynamically modulated by strength of AvgL itself, and this turns out to be critical -- the amount of this learning increases as units are more consistently active all the time (i.e., "hog" units). Params on `AvgLParams`, Min = 0.0001, Max = 0.5. Note that this depends on having a clear max to AvgL, which is an advantage of the exponential running-average form above.
+ `AvgLLrn *= MAX(1 - layCosDiffAvg, ModMin)`
-+ also modulate by time-averaged cosine (normalized dot product) between minus and plus phase activation states in given receiving layer (layCosDiffAvg), (time constant 100) -- if error signals are small in a given layer, then Hebbian learning should also be relatively weak so that it doesn't overpower it -- and conversely, layers with higher levels of error signals can handle (and benefit from) more Hebbian learning. The MAX(ModMin) (ModMin = .01) factor ensures that there is a minimum level of .01 Hebbian (multiplying the previously-computed factor above). The .01 * .05 factors give an upper-level value of .0005 to use for a fixed constant AvgLLrn value -- just slightly less than this (.0004) seems to work best if not using these adaptive factors.
++ also modulate by time-averaged cosine (normalized dot product) between minus and plus phase activation states in given receiving layer (layCosDiffAvg), (time constant 100) -- if error signals are small in a given layer, then Hebbian learning should also be relatively weak so that it doesn't overpower it -- and conversely, layers with higher levels of error signals can handle (and benefit from) more Hebbian learning. The MAX(ModMin) (ModMin = .01) factor ensures that there is a minimum level of .01 Hebbian (multiplying the previously computed factor above). The .01 * .05 factors give an upper-level value of .0005 to use for a fixed constant AvgLLrn value -- just slightly less than this (.0004) seems to work best if not using these adaptive factors.
+ `AvgSLrn = (1-LrnM) * AvgS + LrnM * AvgM`
+ mix in some of the medium-term factor into the short-term factor -- this is important for ensuring that when neuron turns off in the plus phase (short term), that enough trace of earlier minus-phase activation remains to drive it into the LTD weight decrease region -- LrnM = .1 default.
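A minimal Go sketch of the learning-factor computations just listed (parameter and variable names follow the README text; the actual `AvgLParams` fields may be named differently):

```go
package sketch

// AvgLParams holds the floating-threshold parameters named in the text.
type AvgLParams struct {
	Min    float32 // minimum AvgLLrn (0.0001)
	Max    float32 // maximum AvgLLrn (0.5)
	Gain   float32 // gain on the long-term average AvgL
	ModMin float32 // minimum error-driven modulation factor (0.01)
	LrnM   float32 // proportion of AvgM mixed into AvgSLrn (0.1)
}

// LrnFromAvgL computes AvgLLrn from AvgL and then modulates it by the
// layer's time-averaged minus/plus-phase cosine difference (layCosDiffAvg),
// exactly as in the two AvgLLrn equations above.
func (al *AvgLParams) LrnFromAvgL(avgL, layCosDiffAvg float32) float32 {
	lrn := ((al.Max - al.Min) / (al.Gain - al.Min)) * (avgL - al.Min)
	mod := 1 - layCosDiffAvg
	if mod < al.ModMin {
		mod = al.ModMin
	}
	return lrn * mod
}

// AvgSLrn mixes some of the medium-term average into the short-term one:
// AvgSLrn = (1-LrnM)*AvgS + LrnM*AvgM.
func (al *AvgLParams) AvgSLrn(avgS, avgM float32) float32 {
	return (1-al.LrnM)*avgS + al.LrnM*avgM
}
```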

19 changes: 8 additions & 11 deletions agate/maint.go
@@ -5,19 +5,18 @@
package agate

import (
"github.com/emer/leabra/glong"
"github.com/emer/leabra/interinhib"
"github.com/emer/leabra/leabra"
"github.com/goki/ki/kit"
"github.com/goki/mat32"
"cogentcore.org/core/math32"

Check failure on line 8 in agate/maint.go

View workflow job for this annotation

GitHub Actions / build

github.com/alecthomas/chroma/[email protected]: missing go.sum entry for go.mod file; to add it:
"github.com/emer/leabra/v2/glong"
"github.com/emer/leabra/v2/interinhib"
"github.com/emer/leabra/v2/leabra"
)

// PulseClearParams are parameters for the synchronous pulse of activation /
// inhibition that clears NMDA maintenance.
type PulseClearParams struct {

// GABAB value activated by the inhibitory pulse
-	GABAB float32 `desc:"GABAB value activated by the inhibitory pulse"`
+	GABAB float32
}

func (pc *PulseClearParams) Defaults() {
@@ -31,14 +31,12 @@
glong.Layer

// parameters for the synchronous pulse of activation / inhibition that clears NMDA maintenance.
-	PulseClear PulseClearParams `desc:"parameters for the synchronous pulse of activation / inhibition that clears NMDA maintenance."`
+	PulseClear PulseClearParams

// inhibition from output layer
-	InterInhib interinhib.InterInhib `desc:"inhibition from output layer"`
+	InterInhib interinhib.InterInhib
}

-var KiT_MaintLayer = kit.Types.AddType(&MaintLayer{}, leabra.LayerProps)

func (ly *MaintLayer) Defaults() {
ly.Layer.Defaults()
ly.NMDA.Gbar = 0.02
@@ -54,7 +54,7 @@
func (ly *MaintLayer) InhibFmGeAct(ltime *leabra.Time) {
lpl := &ly.Pools[0]
mxact := ly.InterInhibMaxAct(ltime)
-	lpl.Inhib.Act.Avg = mat32.Max(ly.InterInhib.Gi*mxact, lpl.Inhib.Act.Avg)
+	lpl.Inhib.Act.Avg = math32.Max(ly.InterInhib.Gi*mxact, lpl.Inhib.Act.Avg)
ly.Inhib.Layer.Inhib(&lpl.Inhib)
ly.PoolInhibFmGeAct(ltime)
ly.InhibFmPool(ltime)
19 changes: 7 additions & 12 deletions agate/network.go
@@ -5,14 +5,13 @@
package agate

import (
"github.com/emer/emergent/emer"
"github.com/emer/emergent/prjn"
"github.com/emer/emergent/relpos"
"github.com/emer/leabra/deep"
"github.com/emer/leabra/glong"
"github.com/emer/leabra/leabra"
"github.com/emer/leabra/pcore"
"github.com/goki/ki/kit"
"github.com/emer/emergent/v2/emer"
"github.com/emer/emergent/v2/prjn"
"github.com/emer/emergent/v2/relpos"
"github.com/emer/leabra/v2/deep"
"github.com/emer/leabra/v2/glong"
"github.com/emer/leabra/v2/leabra"
"github.com/emer/leabra/v2/pcore"
)

// agate.Network has methods for configuring specialized AGate network components
@@ -21,10 +21,6 @@ type Network struct {
deep.Network
}

-var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
-
-var NetworkProps = deep.NetworkProps

// Defaults sets all the default parameters for all layers and projections
func (nt *Network) Defaults() {
nt.Network.Defaults()
6 changes: 3 additions & 3 deletions agate/neuron.go
@@ -5,9 +5,9 @@
package agate

import (
"github.com/emer/leabra/deep"
"github.com/emer/leabra/glong"
"github.com/emer/leabra/pcore"
"github.com/emer/leabra/v2/deep"
"github.com/emer/leabra/v2/glong"
"github.com/emer/leabra/v2/pcore"
)

var (