Random compressed coding with neurons

January 13, 2026

AUTHORS

Simone Blanco Malerba, Mirko Pieropan, Yoram Burak, Rava Azeredo da Silveira

Cell Rep. 2025 Mar 25;44(3):115412. doi: 10.1016/j.celrep.2025.115412. Epub 2025 Mar 19.

ABSTRACT

Classical models of efficient coding in neurons assume simple mean responses ("tuning curves"), such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses that impart the neural population code with high accuracy. But do highly accurate codes require fine-tuning of the response properties? We address this question using a simple model: a population of neurons with random, spatially extended, and irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. At the optimal smoothness of the tuning curves, where local and global errors balance out, the neural population compresses information about a continuous stimulus into a low-dimensional representation, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such "compressed efficient coding." Efficient codes do not require a finely tuned design; they emerge robustly from irregularity or randomness.
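To make the model class in the abstract concrete, here is a minimal sketch (not the authors' code) of a population with random, irregular tuning curves drawn from a Gaussian process, decoded by template matching (maximum likelihood under Gaussian noise). All parameter values (length scale, population size, noise level) are illustrative assumptions; shortening the length scale `ell` makes the curves more irregular, which sharpens local resolution but raises the chance of large, global decoding errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus grid and squared-exponential GP covariance; the length scale `ell`
# controls tuning-curve smoothness (an assumed, illustrative value).
n_stim = 200
x = np.linspace(0.0, 1.0, n_stim)
ell = 0.1
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
K += 1e-8 * np.eye(n_stim)  # jitter for numerical stability

# Population of random tuning curves: independent GP samples per neuron.
n_neurons = 30
L = np.linalg.cholesky(K)
tuning = (L @ rng.standard_normal((n_stim, n_neurons))).T  # (neurons, stimuli)

# Noisy population response to one true stimulus, decoded by picking the
# stimulus whose template is closest to the observed response vector.
true_idx = 120
noise_sd = 0.5
r = tuning[:, true_idx] + noise_sd * rng.standard_normal(n_neurons)
sq_err = ((tuning - r[:, None]) ** 2).sum(axis=0)
decoded_idx = int(np.argmin(sq_err))
print("decoding error:", abs(x[decoded_idx] - x[true_idx]))
```

With many neurons and moderate noise, the decoded stimulus typically lands near the true one; rare trials where a distant template happens to match the noisy response correspond to the "catastrophic, global errors" the abstract describes.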

Keywords: CP: Neuroscience; Gaussian process; efficient coding; neural coding; receptive field.

Copyright © 2025 The Author(s). Published by Elsevier Inc. All rights reserved.

PMID: 40111998 | DOI: 10.1016/j.celrep.2025.115412
