Question: What statistical assumptions must a brain or any representational system make in order to represent the world as well as we do?
My hypothesis: A brain must employ continuous 3-D priors (e.g., Newtonian momentum in spacetime) in order to convert high-dimensional sensory input into a high-resolution representation of a 3-D world (Elastic Nanocomputation).
Refutation: Choose a reasonable efficiency metric (e.g., 4-D resolution per unit sensory bandwidth), estimate human performance on it, and exhibit a simulation that performs equally well without employing 3-D priors in its algorithms.
(Every successful 3-D system I am aware of, including self-driving cars, UAVs, industrial robots, and imaging systems, has 3-D space explicitly built into its software; I know of none that learns the world's dimensionality from scratch.)
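As a toy illustration of what "learning dimensionality from scratch" could mean in the simplest case, the sketch below (my own construction, not from the text) generates a latent 3-D "world", projects it through a random linear map into a 50-channel "sensor" space, and then recovers the intrinsic dimensionality from the PCA eigenvalue spectrum alone, with no 3-D prior built in. The specific numbers (2000 samples, 50 channels, 1% variance threshold) are arbitrary choices for the demonstration; real sensory streams are nonlinear and far harder.

```python
import numpy as np

# Latent world: points in 3-D space (the system is never told this).
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 3))

# Sensors: a fixed random linear map into 50 channels, plus a little noise.
sensor_map = rng.normal(size=(3, 50))
observations = latent @ sensor_map + 0.01 * rng.normal(size=(2000, 50))

# Estimate intrinsic dimensionality from the PCA eigenvalue spectrum:
# count directions carrying more than 1% of the leading variance.
cov = np.cov(observations, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
estimated_dim = int(np.sum(eigvals > 0.01 * eigvals[0]))
print(estimated_dim)  # recovers 3 without any 3-D prior
```

This only shows that dimensionality is recoverable from data in a linear toy setting; it does not settle the hypothesis, which concerns matching human-level 4-D resolution per unit sensory bandwidth.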