A nomenclature for Cortical Columns and related concepts.
By Gideon Kowadlo and David Rawlinson
In our last blog post, we discussed the repeating functional columnar structure of the neocortex, and the inconsistent terminology used to discuss it throughout the literature. As mentioned in that post, the column is an important concept for understanding the function of the neocortex and, as a consequence, for designing algorithms that are inspired by it. We therefore require a clear nomenclature for discussing and working with these concepts.
As promised, here is a follow-up post with definitions of columns and associated concepts. The definitions are based on a paper by Rinkus (introduced in the previous post). For decades it was widely accepted that the structure of columns in the neocortex is uniform across species and individuals. Recent studies have shown that this is not entirely correct (summarised here). Rinkus provides a well-founded functional basis for the definition of columns. This approach is more meaningful and robust, and directly relevant to understanding the neocortex algorithmically.
Illustration of layers and columns in the neocortex.
Reproduced from “Basic Cerebral Cortex Function with Emphasis on Vision” by Ben Best.
Defining the cortical layers is necessary for any discussion of the cortex. The cortex is a surface that consists of several layers of cells. The density, morphology and function of cells vary between layers. The distribution of connections to other layers varies for each layer, but is relatively constant within a layer.
Although cells in any layer may connect to cells in all other layers, they only do so with cells within the same macrocolumn.
This means that columns extend through all cortex layers. Columns are organised perpendicularly to layers. Since the layers consist of different patterns of cell connectivity and type, layer distinctions are also functional distinctions.
Typically 5–7 layers (depending on how sub-layers are distinguished), described as:
- L1 Molecular layer (non-cellular, just axons)
- [L2, L3] Small pyramidal cells (of two sizes)
- L4 Spherical neurons
- [L5a, L5b] Large pyramidal cells (a & b often distinguished)
- L6 Multiform layer
Macrocolumn (also referred to as a Region or Hypercolumn)
“The function of a macrocolumn is to store sparse distributed representations of its overall input patterns, and to act as a recognizer of those patterns.” (Rinkus)
Overall input includes bottom-up input from the thalamus and lower cortical areas, top-down input from higher cortical areas, and horizontal input from adjacent cortical areas. Together, this is also referred to as the context. The macrocolumn responds to context-dependent input patterns.
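As a rough sketch of this idea (our own illustration, not from Rinkus, with invented sizes), the "overall input" can be modelled as the concatenation of the three input sources into a single binary context vector:

```python
import numpy as np

# Hypothetical sizes for the three input sources (illustrative only).
N_BOTTOM_UP, N_TOP_DOWN, N_HORIZONTAL = 64, 32, 32

def overall_input(bottom_up, top_down, horizontal):
    """Concatenate the three input sources into one binary 'context' vector."""
    return np.concatenate([bottom_up, top_down, horizontal])

rng = np.random.default_rng(0)
bu = (rng.random(N_BOTTOM_UP) < 0.1).astype(int)   # sparse bottom-up input
td = (rng.random(N_TOP_DOWN) < 0.1).astype(int)    # sparse top-down input
hz = (rng.random(N_HORIZONTAL) < 0.1).astype(int)  # sparse horizontal input

x = overall_input(bu, td, hz)
print(x.shape)  # (128,)
```

The same input pattern from below can then produce different macrocolumn responses depending on the top-down and horizontal portions of the vector.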
A standard definition of a macrocolumn is a set of cells that share the same receptive field. In our definition, we specify that all cells in the macrocolumn do not necessarily share the same learned receptive field, but do share the same potential receptive field.
- 300–600 μm across
- 60–80 minicolumns per macrocolumn
Minicolumn
A subset of cells in the macrocolumn containing one winner-take-all (WTA) cell for a given macrocolumn context (overall input pattern). According to this definition, the function of the minicolumn is to enforce sparseness.
The fact that there is only one winner per minicolumn results in an SDR across the macrocolumn. The macrocolumn output therefore contains a signal from one winning cell in each minicolumn, in each layer (~70 cells in total per layer). In most implementations, WTA is realised with a competitive process.
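A minimal sketch of this process (our own toy model, using the illustrative sizes from the text: ~70 minicolumns of ~20 cells each) selects one winning cell per minicolumn, yielding a macrocolumn SDR with exactly one active cell per minicolumn:

```python
import numpy as np

N_MINICOLUMNS, CELLS_PER_MINICOLUMN = 70, 20  # illustrative sizes from the text

def macrocolumn_sdr(activations):
    """Winner-take-all within each minicolumn.

    activations: (N_MINICOLUMNS, CELLS_PER_MINICOLUMN) array of cell responses.
    Returns a binary SDR of the same shape with exactly one 1 per row.
    """
    winners = activations.argmax(axis=1)       # index of the winning cell per minicolumn
    sdr = np.zeros_like(activations, dtype=int)
    sdr[np.arange(len(winners)), winners] = 1  # activate only the winner in each row
    return sdr

rng = np.random.default_rng(42)
a = rng.random((N_MINICOLUMNS, CELLS_PER_MINICOLUMN))  # stand-in for cell responses
sdr = macrocolumn_sdr(a)
print(sdr.sum())  # 70: one winning cell per minicolumn
```

The resulting code is sparse by construction: sparsity is fixed at 1/20 of the cells in each layer, regardless of the input.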
A standard definition of a minicolumn is that all cells within it describe a similar feature within the receptive field of the macrocolumn. This will be true in most cases, but it emerges from the function, which is the basis of our definition.
- ~20 cells (physically localised)
- 20–50 μm
Potential Receptive Field
A set of input bits that can be connected to a cell.
A set of axons that could potentially be synapsed by the dendrites of a neuron.
Learned Receptive Field
The actual set of input bits synapsed to a cell after learning and the effects of mutual inhibition or self-organisation with its neighbours.
The synapses formed by the dendrites of a neuron on input axons.
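The distinction between the two receptive fields can be sketched as follows (a toy model of our own, not from the paper; the permanence-threshold mechanism is borrowed from HTM-style learning and is an assumption here). The potential field is a fixed mask of inputs a cell could synapse onto; the learned field is the subset where synapses have actually formed:

```python
import numpy as np

rng = np.random.default_rng(1)
N_INPUTS = 100

# Potential receptive field: the input bits this cell *could* connect to.
potential = rng.random(N_INPUTS) < 0.5  # boolean mask of candidate inputs

# Permanences exist only for potential synapses; a synapse counts as formed
# (part of the learned receptive field) once its permanence crosses a threshold.
# Learning would adjust these values; here they are random for illustration.
permanence = np.where(potential, rng.random(N_INPUTS), 0.0)
CONNECTED_THRESHOLD = 0.5
learned = permanence >= CONNECTED_THRESHOLD

# The learned field is always a subset of the potential field.
assert not np.any(learned & ~potential)
print(learned.sum(), "of", potential.sum(), "potential inputs are connected")
```

Mutual inhibition and self-organisation between neighbouring cells would shape which permanences grow, so neighbouring cells with the same potential field end up with different learned fields.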
Many researchers believe that the set of active cells in a single macrocolumn layer can be described as a Sparse Distributed Representation (SDR). We assume this to be the case in our definitions. SDRs can be understood as having the following properties:
Attribute: a subset of an SDR that has some semantic meaning; 1 or more bits, NOT the whole set of active bits in an SDR.
Compositionality of SDRs emerges from the fact that an SDR contains many attributes in combination.
A distributed representation is one that consists of multiple attributes; those attributes can exist independently, be shared between representations, and overlap.
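These properties can be illustrated with a small toy example (our own, with invented bit positions): two SDRs over the same bit space can share active bits, and the size of the overlap measures how many attributes they share:

```python
import numpy as np

N = 64  # size of the representation (illustrative)

def sdr_from_indices(indices, n=N):
    """Build a binary SDR with the given active bit positions."""
    v = np.zeros(n, dtype=int)
    v[list(indices)] = 1
    return v

# Two representations that share some attributes (bits 3 and 9).
cat = sdr_from_indices({3, 9, 17, 40})
dog = sdr_from_indices({3, 9, 25, 55})

overlap = int(np.sum(cat & dog))  # number of shared active bits
print(overlap)  # 2 shared attribute bits
```

Because representations overlap in proportion to shared attributes, similarity between inputs can be read directly off the bit patterns, without decoding them.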