Now that we have the algorithmic building blocks of vertebrate spinal control, we can outline a computational architecture to implement it. For the geometric reasons above, each building block must represent continuous spacetime. But a finite brain can only contain a finite number of spacetime blocks (aka computational modules). A modular representation of continuous spacetime raises practical questions. How can the modules communicate? What scales must the modules span? How can they together tile space and time? What arrangements are most efficient?

**Modules must communicate with spikes.** It is well known that transmitting information losslessly requires quantization. This means that even continuous spacetime modules must communicate with quantized signals. So while a module's internal representation will be equivalent to a moving 3-D image, its external representation or interface format must be built of discrete events. A minimal event would be a single spike, created when the continuous amplitude of a filtered function of the image crosses some detection threshold; a more elaborate event might be a volley of such spikes, which together approximate a moving waveform in the image.
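
The threshold-crossing idea can be sketched in code. Below is a minimal illustration (all function names hypothetical, not a claim about actual neural coding): a delta-modulation encoder emits a signed "spike" whenever the continuous signal drifts a threshold away from the last reconstructed level, and a decoder rebuilds a step approximation of the original from those discrete events alone.

```python
import numpy as np

def encode_spikes(signal, threshold):
    """Emit (time, sign) spikes whenever the signal moves more than
    `threshold` away from the running reconstruction level."""
    spikes = []
    level = signal[0]
    for t, x in enumerate(signal):
        while x - level >= threshold:
            level += threshold
            spikes.append((t, +1))
        while level - x >= threshold:
            level -= threshold
            spikes.append((t, -1))
    return spikes

def decode_spikes(spikes, n, x0, threshold):
    """Rebuild a piecewise-constant approximation by integrating spikes."""
    out = np.full(n, x0, dtype=float)
    level, last = x0, 0
    for t, s in spikes:
        out[last:t] = level          # hold the level until the next event
        level += s * threshold
        last = t
    out[last:] = level
    return out
```

The reconstruction error stays below the detection threshold everywhere, which is the sense in which quantized events can stand in for a continuous signal.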

**Modules must span multiple scales.** The granularity of a module need not be permanently fixed. In fact, a given module of fixed size and memory might represent a spacetime block of any size and temporal extent, depending on the coarseness (and thus scale) of its representation: the coarsest module represents all space, while the finest ones represent details in that space. A set of many modules together thus forms a distributed representation of space and time covering many scales.
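
A fixed-memory module representing variable extents can be sketched as follows (a toy numpy illustration, assuming a square field with extents divisible by the module size; `module_view` is a hypothetical name): the same 4x4 buffer holds either a coarse view of the whole field or a fine view of a small patch, depending on the `extent` parameter.

```python
import numpy as np

def module_view(field, center, extent, size=4):
    """Fill a fixed (size x size) module from a 2-D field.
    Large `extent` -> coarse view of a large region;
    small `extent` -> fine view of a small region."""
    n = field.shape[0]
    cy, cx = center
    half = extent // 2
    # clamp the window so the full extent fits inside the field
    y0 = min(max(cy - half, 0), n - extent)
    x0 = min(max(cx - half, 0), n - extent)
    patch = field[y0:y0 + extent, x0:x0 + extent]
    # block-average down to the module's fixed resolution
    step = extent // size
    return patch.reshape(size, step, size, step).mean(axis=(1, 3))
```

With a 16x16 field, `extent=16` yields the coarsest module (all space, averaged), while `extent=4` reproduces a 4x4 patch at full detail, both in the same amount of module memory.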

**Modules must tile space.** While the physical arrangement of modules in a brain is limited only by aggregate considerations like net wiring length, the portions of space they represent ("receptive fields" in the colloquial sense) must overlap in both scale and space, just as the "image blocks" do in multi-scale JPEG and similar compression algorithms, but with the borders and their derivatives smoothed so that no discrete jumps or blockiness are visible. Such a scheme is efficient because of sub-sampling: it represents only the most necessary details at any moment, not all details possible.
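
The smoothed-border requirement is analogous to overlap-add windowing in signal processing (this sketch is an analogy, not the JPEG algorithm itself): periodic Hann windows at 50% overlap sum to exactly one in the interior, so tiles blended with these weights cover space seamlessly, with no visible block edges.

```python
import numpy as np

def hann_tiling(n, block):
    """Sum of overlapping periodic Hann windows, hopped by block // 2.
    Away from the edges the coverage is exactly 1, so weighted tiles
    blend with no discontinuity at their borders."""
    hop = block // 2
    k = np.arange(block)
    win = 0.5 - 0.5 * np.cos(2.0 * np.pi * k / block)  # periodic Hann
    cover = np.zeros(n)
    for s in range(0, n - block + 1, hop):
        cover[s:s + block] += win
    return cover
```

Because each interior point is shared by exactly two windows whose weights sum to one, every tile can be subsampled and processed independently while the assembled whole remains smooth.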

**Sub-sampling time allows recollecting the past by narrative compression.** The logic of sub-sampling applies to time as well. If one could freeze-frame the occasional image from the entire distributed representation, that stored event could prime the circuit into replaying past experience by "filling in" the temporal evolution between discrete events. Several events stored in succession could then form an approximation of past history. Filling-in between discrete events based on assumptions of continuity is already what modules do in converting spikes into moving 3-D waveforms; the only new component is storing the events themselves as a form of persistent or episodic memory.
The process of recording specific episodes, and later reconstituting from them an approximation of spatiotemporal history, is identical to the function performed by human narrative in all its forms (spoken, written, filmed). It can be called "narrative compression" to identify its informational function.
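The store-and-replay scheme can be sketched as keyframe compression (a toy illustration, with a one-dimensional trajectory standing in for the full moving 3-D image, and linear interpolation standing in for the richer continuity-based filling-in described above; function names are hypothetical):

```python
import numpy as np

def store_episodes(frames, every):
    """Keep only every `every`-th frame as a stored discrete event."""
    return [(t, frames[t]) for t in range(0, len(frames), every)]

def replay(episodes, n_frames):
    """Fill in between stored events by interpolation -- the discrete
    analogue of the continuity assumption used by the modules."""
    times = [t for t, _ in episodes]
    values = [v for _, v in episodes]
    return np.interp(np.arange(n_frames), times, values)
```

A 33-frame trajectory reduces to 5 stored events, and replay reconstructs the intervening history to within the error permitted by the continuity assumption; that ratio of stored events to reconstructed experience is the compression the episodic memory buys.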

**Episodic operations are coarser and slower than real-time ones.** A stored episode is a compression of sensory experience, and hence far coarser than the original. Likewise, the timescale between episodes is far longer than the timescale each one contains, so episodic time is coarse-grained too. So any episodic computations (including memory, cognition, logic, and verbal communication) must be far slower and lower-resolution than the real-time substrate of which they are composed.