Merging information in the entorhinal cortex: what we can learn from robotics experiments
Philippe Gaussier
ETIS Lab, ENSEA, University of Cergy-Pontoise, CNRS, France
Place recognition is a complex process involving both idiothetic and allothetic information. It can be performed from global visual information or from the sequential exploration of local views. Low-resolution global visual information provides a useful context for place recognition. Yet its use for homing and precise place recognition is problematic, since this information is not robust to environmental changes: the presence of other agents, the displacement of objects… Using local views extracted around feature points improves generalization capabilities for visual navigation and allows the building of a visual compass. We suppose that the visual information coming from the temporal and parietal cortical areas (‘what’ and ‘where’ information) is merged at the level of the entorhinal cortex (EC) thanks to conjunctive cells and short-term memories, which allow the building of an efficient code for view or place recognition. In other work, we have supposed that path integration information encoded at the cortical level (RSC or PPC) is projected and folded onto EC through several modulo operations, ending in ‘grid cells’: a strong yet efficient compression of the cortical activity. Applying the same principle to visual information creates grid cells sensitive to visual information, similar to those found in primates. Hence, we advocate a model of the hippocampal system in which EC builds a compressed code of the cortical activity, allowing the hippocampus proper to learn and predict transitions between complex states.
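To make the conjunctive merging step concrete, the sketch below illustrates one way a what/where code of this kind could be computed: the ‘what’ vector (landmark recognition) and the ‘where’ vector (landmark azimuth) are combined as an outer product, one unit per (landmark, azimuth) pair, and places are recognized by similarity to stored codes. This is a minimal toy model, not the implementation used in the robotics experiments; all names, dimensions, and data (N_LANDMARKS, N_AZIMUTHS, conjunctive_code, the random snapshots) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_LANDMARKS = 5   # 'what': identity of recognized local views (hypothetical size)
N_AZIMUTHS = 36   # 'where': azimuth bins of 10 degrees (hypothetical size)

def conjunctive_code(what, where):
    """Outer product of a landmark-recognition vector ('what') and an
    azimuth vector ('where'): one unit per (landmark, azimuth) pair,
    a toy stand-in for EC conjunctive cells."""
    return np.outer(what, where).ravel()

def place_cell_activity(code, learned_codes):
    """Place recognition as cosine similarity between the current
    conjunctive code and codes stored when each place was learned."""
    sims = learned_codes @ code
    norms = np.linalg.norm(learned_codes, axis=1) * (np.linalg.norm(code) + 1e-9)
    return sims / norms

# Learn two places from random 'what'/'where' snapshots (fabricated toy data).
snapshots = [(rng.random(N_LANDMARKS), rng.random(N_AZIMUTHS)) for _ in range(2)]
learned = np.stack([conjunctive_code(w, a) for w, a in snapshots])

# Recognition degrades gracefully under partial occlusion: zero out one landmark
# and the first place still wins by a clear margin.
what, where = snapshots[0]
what_occluded = what.copy()
what_occluded[0] = 0.0
print(place_cell_activity(conjunctive_code(what_occluded, where), learned))
```

The distributed code is what gives the generalization mentioned above: losing one local view removes only a slice of the conjunctive units, leaving the similarity ranking intact.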
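The modulo folding of path integration can likewise be sketched in a few lines. Assuming, hypothetically, that path integration delivers a 2-D position estimate, projecting it onto preferred directions 60° apart, wrapping each projection modulo a fixed spacing, and combining the resulting periodic ‘stripe’ codes yields a hexagonal, grid-cell-like firing map. The function names, the cosine tuning, and the product combination below are illustrative simplifications, not the exact model.

```python
import numpy as np

def modulo_projection(pos, direction_deg, spacing):
    """Project a 2-D position onto a preferred direction, then wrap it
    modulo the spacing: a 1-D periodic 'stripe' code (the folding step)."""
    theta = np.deg2rad(direction_deg)
    proj = pos[..., 0] * np.cos(theta) + pos[..., 1] * np.sin(theta)
    phase = (proj % spacing) / spacing            # phase in [0, 1)
    return np.cos(2 * np.pi * phase) * 0.5 + 0.5  # smooth periodic tuning in [0, 1]

def grid_activity(pos, spacing=0.5, directions=(0.0, 60.0, 120.0)):
    """Conjunction (here, a product) of stripe codes along directions
    60 degrees apart yields a hexagonal, grid-cell-like firing pattern."""
    act = np.ones(pos.shape[:-1])
    for d in directions:
        act *= modulo_projection(pos, d, spacing)
    return act

# Evaluate over a 2-D arena: the firing map shows a hexagonal lattice whose
# period is set only by the modulo spacing, independent of arena size.
xs = np.linspace(0.0, 2.0, 200)
grid_x, grid_y = np.meshgrid(xs, xs)
pos = np.stack([grid_x, grid_y], axis=-1)
firing_map = grid_activity(pos)
print(firing_map.shape, firing_map.max())
```

The sketch also shows why the folding acts as a compression: a small, bounded set of phases re-encodes an unbounded position signal, which is the sense in which the grid code is ‘strong yet efficient’.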