This theory treats psychology as the specific form of computer science that holds for bipedal mammals: in particular, continuous 4-D motor control, external orientation, social resonance, and metabolic/emotional regulation. It thus makes strong claims about the topology and timescale of the highest-bandwidth operations like sensation and balance, and weak or no predictions about coarse, slow, symbolic operations like language, decisions, and memory.
This theory quantifies "the unconscious" as whatever happens at a timescale faster than conscious thought. But in information theory, the information density of a process increases as its timescale shrinks: by the Nyquist theorem, the bandwidth a channel can carry is proportional to its sampling frequency, and thus inversely proportional to its sampling interval. Since "thoughts" as we usually know them occur at a few per second, while neural spikes can be discerned with fidelity from milliseconds down to microseconds, the vast, vast majority of computational power, bandwidth, and representational capacity must be strictly unconscious, and will so dominate processing that "consciousness" in the usual senses becomes undefinable and unimportant.
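The scale of the disparity can be made concrete with a back-of-envelope calculation. All numbers here are illustrative assumptions (a few thoughts per second, millisecond spike resolution, a commonly cited neuron count), not measurements; the point is only the order of magnitude.

```python
# Back-of-envelope comparison of conscious vs. unconscious bandwidth.
# All values are illustrative assumptions, not empirical claims.

conscious_rate_hz = 3        # assumed: a few discrete thoughts per second
spike_resolution_s = 1e-3    # assumed: spikes resolvable at ~1 millisecond
neurons = 86e9               # commonly cited human neuron count

# By Nyquist, a channel sampled every dt seconds can carry at most
# 1 / (2 * dt) Hz of bandwidth.
unconscious_bw_per_channel = 1 / (2 * spike_resolution_s)   # 500 Hz
conscious_bw = conscious_rate_hz / 2                        # 1.5 Hz

ratio_single_channel = unconscious_bw_per_channel / conscious_bw
unconscious_total = unconscious_bw_per_channel * neurons

print(f"one spiking channel outpaces thought by ~{ratio_single_channel:.0f}x")
print(f"whole-brain unconscious bandwidth: ~{unconscious_total:.1e} Hz")
```

Even a single spiking channel, under these assumptions, carries hundreds of times the bandwidth of the conscious stream; multiplied across billions of neurons, the ratio is astronomical.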
In other words, in this view what we popularly call "consciousness" and "cognition" are a small, surface island in a vast ocean. The island of consciousness represents a complex and inconsistent arrangement of low-bandwidth, coarse, unreliable, discrete processes united only in their declaration that they and they alone constitute mental reality, and that the surrounding ocean is defined only as "not conscious." Yet in fact clunky consciousness could never possibly keep track of slippery unconsciousness. "Consciousness" is both a distracting subject of study and a blind and untrustworthy arbiter of mental function.
The untrackable machinations of unconscious processing are made even less auditable by what I call the "cloaking paradox," which observes that any efficient continuous simulation would rather spend resources on smoothing its own discontinuities (i.e. flaws) than on recording and compensating for them. Strictly speaking, a brain is already doing the best it can, so if it can't fix a problem, the best solution is to bypass the problem and thereby avoid spending resources on it at all. This is why we aren't visually aware of the blind spots in our retinas, of the unused muscle fibers deep in our joints, or of our numerous mental flaws and inconsistencies. We hide our flaws from ourselves out of computational necessity.
And those flaws are enormous. All of us believe it when we say "we have mental flaws," but only a computer scientist can appreciate the utter impossibility of what we take for granted: reconstituting a whole world from information trickling through two retinal pinholes, or reconstructing the past by splicing a few blurry mental keyframes. When one understands how deeply our brains fool us, one pales.
This theory accepts the reality of cognitive biases, identifying them as Bayesian priors essential to the process of decompressing sensory input. Biases toward interpolation and extrapolation are obvious consequences of any system using continuous-time 3-D simulation; biases toward the self are necessary for establishing consistent reference frames. Cognitive biases are not faults, but algorithmic optimizations.
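The "bias as prior" claim can be sketched with the standard conjugate update for two Gaussian beliefs. This is a toy model of my own choosing, not the author's formalism: a strong expectation (the prior) fused with a noisy observation yields an estimate pulled toward the expectation, which looks like a bias but is the statistically optimal combination.

```python
# Minimal sketch of a cognitive bias as a Bayesian prior (toy model).
# Fusing a prior belief with a noisy observation, both Gaussian.

def fuse_gaussians(mu_prior, var_prior, mu_obs, var_obs):
    """Posterior of two Gaussian beliefs (standard conjugate update)."""
    w = var_obs / (var_prior + var_obs)          # weight given to the prior
    mu_post = w * mu_prior + (1 - w) * mu_obs
    var_post = 1 / (1 / var_prior + 1 / var_obs)
    return mu_post, var_post

# Expectation says an object sits at position 0; a noisy glimpse says 10.
mu, var = fuse_gaussians(mu_prior=0.0, var_prior=1.0, mu_obs=10.0, var_obs=4.0)
print(mu, var)  # estimate lands at 2.0: "biased" toward the prior, optimally
```

The estimate lands far from the raw observation, yet on average over many noisy glimpses this pull toward expectation minimizes error: the bias is the optimization.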
This theory's model for social communication is "active resonance," as might exist between any actively-vibrated elastic physical objects, like bells. Thus the mechanisms of contagion for fear and enthusiasm are the same as for yawns: micro-timing vibratory signals conveyed by the body and voice. This channel is high-bandwidth and thus non-symbolic. The effect is the same intrinsic mimicry attributed to "mirror neurons," but arrived at here (again) by computational inevitability rather than experimental evidence.
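Resonance between actively-driven oscillators can be sketched with the simplest standard model of phase entrainment, two coupled Kuramoto oscillators. This is my illustrative stand-in, not a claim about actual neural or bodily mechanisms: two oscillators with slightly different natural frequencies drift apart when uncoupled, but even weak coupling locks their phases together.

```python
import math

# Toy sketch of "active resonance" as phase entrainment between two
# coupled oscillators (Kuramoto model; illustrative assumption only).

def entrain(w1, w2, coupling, dt=0.001, steps=20000):
    """Integrate two coupled phase oscillators; return the final phase gap."""
    p1, p2 = 0.0, 2.0                    # start well out of phase
    for _ in range(steps):
        p1 += dt * (w1 + coupling * math.sin(p2 - p1))
        p2 += dt * (w2 + coupling * math.sin(p1 - p2))
    return (p2 - p1) % (2 * math.pi)     # phase gap, wrapped to [0, 2*pi)

uncoupled = entrain(5.0, 5.3, coupling=0.0)   # phases keep drifting apart
coupled = entrain(5.0, 5.3, coupling=2.0)     # phases lock nearly together
print(uncoupled, coupled)
```

With coupling, the phase gap collapses to a small constant offset rather than growing with the frequency mismatch: the two "bells" ring together without exchanging a single symbol.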