
massive rename of prjn -> path; projection -> pathway
rcoreilly committed May 3, 2024
1 parent af59982 commit afcc556
Showing 64 changed files with 506 additions and 608 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -47,7 +47,7 @@ See [python README](https://github.com/emer/leabra/blob/master/python/README.md)

* Go uses `interfaces` to represent abstract collections of functionality (i.e., sets of methods). The `emer` package provides a set of interfaces for each structural level (e.g., `emer.Layer` etc) -- any given specific layer must implement all of these methods, and the structural containers (e.g., the list of layers in a network) are lists of these interfaces. An interface is implicitly a *pointer* to an actual concrete object that implements the interface.
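
  As a quick sketch of that pattern (hypothetical `Layer` methods for illustration, not the actual `emer.Layer` API): any concrete type that defines all of the interface's methods satisfies it implicitly, and the interface value holds a pointer to the concrete object:

```Go
package main

import "fmt"

// Layer is an interface: a set of methods. Any type that defines all of
// them satisfies the interface, with no explicit "implements" declaration.
type Layer interface {
	Name() string
	NUnits() int
}

// MyLayer is a concrete type; *MyLayer satisfies Layer implicitly.
type MyLayer struct {
	Nm    string
	Units []float32
}

func (ly *MyLayer) Name() string { return ly.Nm }
func (ly *MyLayer) NUnits() int  { return len(ly.Units) }

func main() {
	// A network's list of layers is a list of interface values, each
	// (implicitly) a pointer to a concrete layer object.
	layers := []Layer{&MyLayer{Nm: "V1", Units: make([]float32, 100)}}
	for _, ly := range layers {
		fmt.Println(ly.Name(), ly.NUnits()) // V1 100
	}
}
```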

* To allow for specialized algorithms to extend the basic Leabra algorithm functionality, we have additional algorithm-specific interfaces in `leabra/leabra/leabra.go`, called `LeabraNetwork`, `LeabraLayer`, and `LeabraPrjn` -- all functions should go through this interface so that the final actual function called can be either the default version defined on `leabra.Layer` or a more specialized type (e.g., for simulating the PFC, hippocampus, BG etc). This is what it looks like for example:
* To allow for specialized algorithms to extend the basic Leabra algorithm functionality, we have additional algorithm-specific interfaces in `leabra/leabra/leabra.go`, called `LeabraNetwork`, `LeabraLayer`, and `LeabraPath` -- all functions should go through this interface so that the final actual function called can be either the default version defined on `leabra.Layer` or a more specialized type (e.g., for simulating the PFC, hippocampus, BG etc). This is what it looks like for example:

```Go
func (nt *Network) InitActs() {
	// ... (rest of the example collapsed in the diff view)
```
@@ -62,7 +62,7 @@ func (nt *Network) InitActs() {

* The emer interfaces are designed to support generic access to network state, e.g., for the 3D network viewer, but specifically avoid anything algorithmic. Thus, they allow viewing of any kind of network, including PyTorch backprop nets in the eTorch package.

* There are 3 main levels of structure: `Network`, `Layer` and `Prjn` (projection). The Network calls methods on its Layers, and Layers iterate over both `Neuron` data structures (which have only a minimal set of methods) and the `Prjn`s, to implement the relevant computations. The `Prjn` fully manages everything about a projection of connectivity between two layers, including the full list of `Synapse` elements in the connection. There is no "ConGroup" or "ConState" level as was used in C++, which greatly simplifies many things. The Layer also has a set of `Pool` elements, one for each level at which inhibition is computed (there is always one for the Layer, and then optionally one for each Sub-Pool of units; *Pool* is the new simpler term for "Unit Group" from C++ emergent).
* There are 3 main levels of structure: `Network`, `Layer` and `Path` (pathway). The Network calls methods on its Layers, and Layers iterate over both `Neuron` data structures (which have only a minimal set of methods) and the `Path`s, to implement the relevant computations. The `Path` fully manages everything about a pathway of connectivity between two layers, including the full list of `Synapse` elements in the connection. There is no "ConGroup" or "ConState" level as was used in C++, which greatly simplifies many things. The Layer also has a set of `Pool` elements, one for each level at which inhibition is computed (there is always one for the Layer, and then optionally one for each Sub-Pool of units; *Pool* is the new simpler term for "Unit Group" from C++ emergent).
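
  In outline, the containment structure reads roughly like this (a schematic sketch with illustrative field names only; the real leabra types carry many more parameters and methods):

```Go
// Package sketch: schematic of the three structural levels,
// Network -> Layer -> Path, plus per-unit Neuron state, per-group Pool
// inhibition state, and per-connection Synapse state. Illustrative only.
package sketch

type Network struct {
	Layers []*Layer
}

type Layer struct {
	Neurons   []Neuron // minimal per-unit state
	Pools     []Pool   // one for the whole Layer, plus one per Sub-Pool
	RecvPaths []*Path  // pathways coming into this layer
	SendPaths []*Path  // pathways leaving this layer
}

// Path fully manages one pathway of connectivity between two layers,
// including the full list of synapses.
type Path struct {
	Send, Recv *Layer
	Syns       []Synapse
}

type Neuron struct{ Act float32 }
type Pool struct{ Inhib float32 }
type Synapse struct{ Wt float32 }
```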

* Layers have a `Shape` property, using the `tensor.Shape` type (see `table` package), which specifies their n-dimensional (tensor) shape. Standard layers are expected to use a 2D Y*X shape (note: dimension order is now outer-to-inner, i.e., *RowMajor*), and a 4D shape then enables `Pools` ("unit groups") as hypercolumn-like structures within a layer that can have their own local level of inhibition, and are also used extensively for organizing patterns of connectivity.
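
  For instance, row-major (outer-to-inner) indexing of a 4D pooled shape can be sketched in plain Go (standing in for the actual `tensor.Shape` machinery):

```Go
package main

import "fmt"

func main() {
	// 4D layer shape, outer-to-inner (RowMajor): [PoolY, PoolX, UnitY, UnitX]
	shape := []int{2, 3, 4, 5} // a 2x3 grid of pools, each 4x5 units

	// Row-major offset of unit (uy, ux) within pool (py, px) into the
	// flat backing slice of values.
	offset := func(py, px, uy, ux int) int {
		return ((py*shape[1]+px)*shape[2]+uy)*shape[3] + ux
	}
	fmt.Println(offset(1, 2, 0, 3)) // 103
}
```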

@@ -78,7 +78,7 @@ Here are some of the additional supporting packages, organized by overall functi

* [netview](netview) provides the `NetView` interactive 3D network viewer, implemented in the GoGi 3D framework.

* [prjn](prjn) is a separate package for defining patterns of connectivity between layers (i.e., the `ProjectionSpec`s from C++ emergent). This is done using a fully independent structure that *only* knows about the shapes of the two layers, and it returns a fully general bitmap representation of the pattern of connectivity between them. The `leabra.Prjn` code then uses these patterns to do all the nitty-gritty of connecting up neurons. This makes the projection code *much* simpler compared to the ProjectionSpec in C++ emergent, which was involved in both creating the pattern and also all the complexity of setting up the actual connections themselves. This should be the *last* time any of those projection patterns need to be written (having re-written this code too many times in the C++ version as the details of memory allocations changed).
* [path](path) is a separate package for defining patterns of connectivity between layers (i.e., the `ProjectionSpec`s from C++ emergent). This is done using a fully independent structure that *only* knows about the shapes of the two layers, and it returns a fully general bitmap representation of the pattern of connectivity between them. The `leabra.Path` code then uses these patterns to do all the nitty-gritty of connecting up neurons. This makes the pathway code *much* simpler compared to the ProjectionSpec in C++ emergent, which was involved in both creating the pattern and also all the complexity of setting up the actual connections themselves. This should be the *last* time any of those pathway patterns need to be written (having re-written this code too many times in the C++ version as the details of memory allocations changed).
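
  The division of labor can be sketched as follows (a hypothetical `Pattern` interface standing in for the real package API): a pattern sees *only* the two layer shapes and returns a connectivity bitmap, and the `Path` code does the actual wiring.

```Go
package main

import "fmt"

// Pattern knows only the shapes of the two layers, and returns a fully
// general recvN x sendN connectivity bitmap (hypothetical interface).
type Pattern interface {
	Connect(sendShape, recvShape []int) []bool
}

// Full connects every sending unit to every receiving unit.
type Full struct{}

func (f *Full) Connect(sendShape, recvShape []int) []bool {
	sendN, recvN := prod(sendShape), prod(recvShape)
	bm := make([]bool, recvN*sendN)
	for i := range bm {
		bm[i] = true
	}
	return bm
}

func prod(shape []int) int {
	n := 1
	for _, s := range shape {
		n *= s
	}
	return n
}

func main() {
	var pt Pattern = &Full{}
	bm := pt.Connect([]int{2, 2}, []int{3, 3})
	fmt.Println(len(bm)) // 36 potential connections; the Path code wires them up
}
```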

* [relpos](relpos) provides relative positioning of layers (right of, above, etc).

2 changes: 1 addition & 1 deletion actrf/README.md
@@ -2,7 +2,7 @@ Docs: [GoDoc](https://pkg.go.dev/github.com/emer/emergent/actrf)

Package actrf provides activation-based receptive field computation, otherwise known as *reverse correlation* or *spike-triggered averaging*. It simply computes the activation-weighted average of other *source* patterns of activation -- i.e., sum(act * src) / sum(src), which then shows you the patterns of source activity for which a given unit was active.
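
In code form, per (unit, source-element) pair accumulated over trials, that is just sum(act*src)/sum(src); a minimal illustrative sketch (not the package implementation):

```Go
package main

import "fmt"

// actWeightedAvg returns sum(act*src)/sum(src) for one unit against one
// source element across trials: the average of act weighted by src.
func actWeightedAvg(act, src []float32) float32 {
	var sumProd, sumSrc float32
	for t := range act {
		sumProd += act[t] * src[t]
		sumSrc += src[t]
	}
	if sumSrc == 0 {
		return 0
	}
	return sumProd / sumSrc
}

func main() {
	act := []float32{1, 0, 1, 0}          // unit activity per trial
	src := []float32{0.8, 0.2, 0.6, 0.4}  // source activity per trial
	fmt.Println(actWeightedAvg(act, src)) // 0.7
}
```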

The RF's are computed and stored in 4D tensors, where the outer 2D are the 2D projection of the activation tensor (e.g., the activations of units in a layer), and the inner 2D are the 2D projection of the source tensor.
The RF's are computed and stored in 4D tensors, where the outer 2D are the 2D pathway of the activation tensor (e.g., the activations of units in a layer), and the inner 2D are the 2D pathway of the source tensor.

This results in a nice standard RF plot that can be visualized in a tensor grid view.

8 changes: 4 additions & 4 deletions actrf/actrf.go
@@ -56,8 +56,8 @@ func (af *RF) Init(name string, act, src tensor.Tensor) {
// does nothing if shape is already correct.
// return shape ints
func (af *RF) InitShape(act, src tensor.Tensor) []int {
aNy, aNx, _, _ := tensor.Prjn2DShape(act.Shape(), false)
sNy, sNx, _, _ := tensor.Prjn2DShape(src.Shape(), false)
aNy, aNx, _, _ := tensor.Projection2DShape(act.Shape(), false)
sNy, sNx, _, _ := tensor.Projection2DShape(src.Shape(), false)
[Check failures in actrf/actrf.go lines 59-60 (GitHub Actions / build): undefined: tensor.Projection2DShape]
oshp := []int{aNy, aNx, sNy, sNx}
if tensor.EqualInts(af.RF.Shp.Sizes, oshp) {
return oshp
@@ -102,14 +102,14 @@ func (af *RF) Add(act, src tensor.Tensor, thr float32) {
aNy, aNx, sNy, sNx := shp[0], shp[1], shp[2], shp[3]
for sy := 0; sy < sNy; sy++ {
for sx := 0; sx < sNx; sx++ {
tv := float32(tensor.Prjn2DValue(src, false, sy, sx))
tv := float32(tensor.Projection2DValue(src, false, sy, sx))
[Check failure in actrf/actrf.go line 105 (GitHub Actions / build): undefined: tensor.Projection2DValue]
if tv < thr {
continue
}
af.SumSrc.AddScalar([]int{sy, sx}, float64(tv))
for ay := 0; ay < aNy; ay++ {
for ax := 0; ax < aNx; ax++ {
av := float32(tensor.Prjn2DValue(act, false, ay, ax))
av := float32(tensor.Projection2DValue(act, false, ay, ax))
[Check failure in actrf/actrf.go line 112 (GitHub Actions / build): undefined: tensor.Projection2DValue]
af.SumProd.AddScalar([]int{ay, ax, sy, sx}, float64(av*tv))
}
}
4 changes: 2 additions & 2 deletions actrf/doc.go
@@ -10,8 +10,8 @@ which then shows you the patterns of source activity for which a given unit was
active.
The RF's are computed and stored in 4D tensors, where the outer 2D are the
2D projection of the activation tensor (e.g., the activations of units in
a layer), and the inner 2D are the 2D projection of the source tensor.
2D pathway of the activation tensor (e.g., the activations of units in
a layer), and the inner 2D are the 2D pathway of the source tensor.
This results in a nice standard RF plot that can be visualized in a tensor
grid view.
10 changes: 5 additions & 5 deletions actrf/running.go
@@ -9,21 +9,21 @@ import "cogentcore.org/core/tensor"
// RunningAvg computes a running-average activation-based receptive field
// for activities act relative to source activations src (the thing we're projecting rf onto)
// accumulating into output out, with time constant tau.
// act and src are projected into a 2D space (tensor.Prjn2D* methods), and
// act and src are projected into a 2D space (tensor.Projection2D* methods), and
// resulting out is 4D of act outer and src inner.
func RunningAvg(out *tensor.Float32, act, src tensor.Tensor, tau float32) {
dt := 1 / tau
cdt := 1 - dt
aNy, aNx, _, _ := tensor.Prjn2DShape(act.Shape(), false)
tNy, tNx, _, _ := tensor.Prjn2DShape(src.Shape(), false)
aNy, aNx, _, _ := tensor.Projection2DShape(act.Shape(), false)
tNy, tNx, _, _ := tensor.Projection2DShape(src.Shape(), false)
[Check failures in actrf/running.go lines 17-18 (GitHub Actions / build): undefined: tensor.Projection2DShape]
oshp := []int{aNy, aNx, tNy, tNx}
out.SetShape(oshp, "ActY", "ActX", "SrcY", "SrcX")
for ay := 0; ay < aNy; ay++ {
for ax := 0; ax < aNx; ax++ {
av := float32(tensor.Prjn2DValue(act, false, ay, ax))
av := float32(tensor.Projection2DValue(act, false, ay, ax))
[Check failure in actrf/running.go line 23 (GitHub Actions / build): undefined: tensor.Projection2DValue]
for ty := 0; ty < tNy; ty++ {
for tx := 0; tx < tNx; tx++ {
tv := float32(tensor.Prjn2DValue(src, false, ty, tx))
tv := float32(tensor.Projection2DValue(src, false, ty, tx))
[Check failure in actrf/running.go line 26 (GitHub Actions / build): undefined: tensor.Projection2DValue]
oi := []int{ay, ax, ty, tx}
oo := out.Shape().Offset(oi)
ov := out.Values[oo]
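
The remainder of RunningAvg is collapsed above; from the visible `dt := 1/tau; cdt := 1 - dt`, the per-cell update it implies is the standard exponential running average, sketched standalone here (illustrative, not the collapsed code itself):

```Go
package main

import "fmt"

// runningAvg folds sample x into avg with time constant tau:
// avg = (1-dt)*avg + dt*x, with dt = 1/tau.
func runningAvg(avg, x, tau float32) float32 {
	dt := 1 / tau
	return (1-dt)*avg + dt*x
}

func main() {
	avg := float32(0)
	for _, x := range []float32{1, 1, 1, 1} {
		avg = runningAvg(avg, x, 4)
		fmt.Println(avg) // 0.25, 0.4375, 0.578..., 0.684...: approaches 1
	}
}
```
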
4 changes: 2 additions & 2 deletions decoder/softmax.go
@@ -242,7 +242,7 @@ type softMaxForSerialization struct {
Weights []float32 `json:"weights"`
}

// Save saves the decoder weights to given file path.
// Save saves the decoder weights to given file paths.
// If path ends in .gz, it will be gzipped.
func (sm *SoftMax) Save(path string) error {
file, err := os.Create(path)
@@ -265,7 +265,7 @@ func (sm *SoftMax) Save(path string) error {
return encoder.Encode(softMaxForSerialization{Weights: sm.Weights.Values})
}

// Load loads the decoder weights from given file path.
// Load loads the decoder weights from given file paths.
// If the shape of the decoder does not match the shape of the saved weights,
// an error will be returned.
func (sm *SoftMax) Load(path string) error {
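
A usage sketch for the save/load round trip, based on the visible signatures; the `decoder` import path and the zero-value construction are assumptions (in practice the decoder would be initialized first):

```Go
package main

import (
	"log"

	"github.com/emer/emergent/v2/decoder" // assumed module path
)

func main() {
	var sm decoder.SoftMax // assumed: a constructed/initialized decoder
	// A path ending in .gz is gzip-compressed on save.
	if err := sm.Save("softmax.json.gz"); err != nil {
		log.Fatal(err)
	}
	if err := sm.Load("softmax.json.gz"); err != nil {
		log.Fatal(err)
	}
}
```
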
10 changes: 5 additions & 5 deletions doc.go
@@ -10,7 +10,7 @@ This top-level of the repository has no functional code -- everything is organiz
into the following sub-repositories:
* emer: defines the primary structural interfaces for emergent, at the level of
Network, Layer, and Prjn (projection). These contain no algorithm-specific code
Network, Layer, and Path (pathway). These contain no algorithm-specific code
and are only about the overall structure of a network, sufficient to support general
purpose tools such as the 3D NetView. It also houses widely used support classes used
in algorithm-specific code, including things like MinMax and AvgMax, and also the
@@ -23,14 +23,14 @@ and easier support for making permuted random lists, etc.
* netview provides the NetView interactive 3D network viewer, implemented in the Cogent Core 3D framework.
* prjn is a separate package for defining patterns of connectivity between layers
* path is a separate package for defining patterns of connectivity between layers
(i.e., the ProjectionSpecs from C++ emergent). This is done using a fully independent
structure that *only* knows about the shapes of the two layers, and it returns a fully general
bitmap representation of the pattern of connectivity between them. The leabra.Prjn code
bitmap representation of the pattern of connectivity between them. The leabra.Path code
then uses these patterns to do all the nitty-gritty of connecting up neurons.
This makes the projection code *much* simpler compared to the ProjectionSpec in C++ emergent,
This makes the pathway code *much* simpler compared to the ProjectionSpec in C++ emergent,
which was involved in both creating the pattern and also all the complexity of setting up the
actual connections themselves. This should be the *last* time any of those projection patterns
actual connections themselves. This should be the *last* time any of those pathway patterns
need to be written (having re-written this code too many times in the C++ version as the details
of memory allocations changed).
4 changes: 2 additions & 2 deletions econfig/README.md
@@ -26,7 +26,7 @@ Docs: [GoDoc](https://pkg.go.dev/github.com/emer/emergent/econfig)

* Is a replacement for `ecmd` and includes the helper methods for saving log files etc.

* A `map[string]any` type can be used for deferred raw params to be applied later (`Network`, `Env` etc). Example: `Network = {'.PFCLayer:Layer.Inhib.Layer.Gi' = '2.4', '#VSPatchPrjn:Prjn.Learn.LRate' = '0.01'}` where the key expression contains the [params](../params) selector : path to variable.
* A `map[string]any` type can be used for deferred raw params to be applied later (`Network`, `Env` etc). Example: `Network = {'.PFCLayer:Layer.Inhib.Layer.Gi' = '2.4', '#VSPatchPath:Path.Learn.LRate' = '0.01'}` where the key expression contains the [params](../params) selector : path to variable.
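
  As a sketch of that in Go syntax (same keys as the example above):

```Go
package main

import "fmt"

func main() {
	// Deferred raw params: "<params selector>:<variable path>" keys in a
	// map[string]any, applied to the Network later.
	network := map[string]any{
		".PFCLayer:Layer.Inhib.Layer.Gi": "2.4",
		"#VSPatchPath:Path.Learn.LRate":  "0.01",
	}
	fmt.Println(network)
}
```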

* Supports full set of `Open` (file), `OpenFS` (takes fs.FS arg, e.g., for embedded), `Read` (bytes) methods for loading config files. The overall `Config()` version uses `OpenWithIncludes` which processes includes -- others are just for single files. Also supports `Write` and `Save` methods for saving from current state.

@@ -47,7 +47,7 @@ Docs: [GoDoc](https://pkg.go.dev/github.com/emer/emergent/econfig)
```toml
[Params.Network]
"#Output:Layer.Inhib.Layer.Gi" = 0.7
"Prjn:Prjn.Learn.LRate.Base" = 0.05
"Path:Path.Learn.LRate.Base" = 0.05
```

* Field tag `default:"value"`, used in the [Cogent Core](https://cogentcore.org/core) GUI, sets the initial default value and is shown for the `-h` or `--help` usage info.
2 changes: 1 addition & 1 deletion econfig/econfig_test.go
@@ -145,7 +145,7 @@ func TestArgs(t *testing.T) {
cfg := &TestConfig{}
SetFromDefaults(cfg)
// note: cannot use "-Includes=testcfg.toml",
args := []string{"-save-wts", "-nogui", "-no-epoch-log", "--NoRunLog", "--runs=5", "--run", "1", "--TAG", "nice", "--PatParams.Sparseness=0.1", "--Network", "{'.PFCLayer:Layer.Inhib.Gi' = '2.4', '#VSPatchPrjn:Prjn.Learn.LRate' = '0.01'}", "-Enum=TestValue2", "-Slice=[3.2, 2.4, 1.9]", "leftover1", "leftover2"}
args := []string{"-save-wts", "-nogui", "-no-epoch-log", "--NoRunLog", "--runs=5", "--run", "1", "--TAG", "nice", "--PatParams.Sparseness=0.1", "--Network", "{'.PFCLayer:Layer.Inhib.Gi' = '2.4', '#VSPatchPath:Path.Learn.LRate' = '0.01'}", "-Enum=TestValue2", "-Slice=[3.2, 2.4, 1.9]", "leftover1", "leftover2"}
allArgs := make(map[string]reflect.Value)
FieldArgNames(cfg, allArgs)
leftovers, err := ParseArgs(cfg, args, allArgs, true)
2 changes: 1 addition & 1 deletion econfig/enum.go
@@ -10,7 +10,7 @@ package econfig
type TestEnum int32 //enums:enum

// note: we need to add the Layer extension to avoid naming
// conflicts between layer, projection and other things.
// conflicts between layer, pathway and other things.

const (
TestValue1 TestEnum = iota
4 changes: 2 additions & 2 deletions elog/README.md
@@ -261,7 +261,7 @@ Iterate over layers of interest (use `LayersByClass` function). It is *essential
}
```

Here's how to log a projection variable:
Here's how to log a pathway variable:

```Go
ss.Logs.AddItem(&elog.Item{
	// ... @@ -271,7 +271,7 @@ (intervening lines collapsed in the diff view)
Range: minmax.F32{Max: 1},
Write: elog.WriteMap{
etime.Scope(etime.Train, etime.Trial): func(ctx *elog.Context) {
ffpj := cly.RecvPrjn(0).(*axon.Prjn)
ffpj := cly.RecvPath(0).(*axon.Path)
ctx.SetFloat32(ffpj.GScale.AvgMax)
}, etime.Scope(etime.AllModes, etime.Epoch): func(ctx *elog.Context) {
ctx.SetAgg(ctx.Mode, etime.Trial, stats.Mean)
	// ... (rest collapsed in the diff view)
```
2 changes: 1 addition & 1 deletion emer/README.md
@@ -2,7 +2,7 @@ Docs: [GoDoc](https://pkg.go.dev/github.com/emer/emergent/v2/emer)

Package emer provides minimal interfaces for the basic structural elements of neural networks
including:
* emer.Network, emer.Layer, emer.Unit, emer.Prjn (projection that interconnects layers)
* emer.Network, emer.Layer, emer.Unit, emer.Path (pathway that interconnects layers)

These interfaces are intended to be just sufficient to support visualization and generic
analysis kinds of functions, but explicitly avoid exposing ANY of the algorithmic aspects,
2 changes: 1 addition & 1 deletion emer/doc.go
@@ -5,7 +5,7 @@
/*
Package emer provides minimal interfaces for the basic structural elements of neural networks
including:
* emer.Network, emer.Layer, emer.Unit, emer.Prjn (projection that interconnects layers)
* emer.Network, emer.Layer, emer.Unit, emer.Path (pathway that interconnects layers)
These interfaces are intended to be just sufficient to support visualization and generic
analysis kinds of functions, but explicitly avoid exposing ANY of the algorithmic aspects,