This paper presents the Visual Place Cell Encoding (VPCE) model, a
biologically inspired computational framework for simulating place cell-like
activation using visual input. Drawing on evidence that visual landmarks play a
central role in spatial encoding, the proposed VPCE model activates visual
place cells by clustering high-dimensional appearance features extracted from
images captured by a robot-mounted camera. Each cluster center defines a
receptive field, and activation is computed based on visual similarity using a
radial basis function. We evaluate whether the resulting activation patterns
correlate with key properties of biological place cells, including spatial
proximity, orientation alignment, and boundary differentiation. Experiments
demonstrate that the VPCE can distinguish between visually similar yet
spatially distinct locations and adapt to environmental changes such as the
insertion or removal of walls. These results suggest that structured visual
input, even in the absence of motion cues or reward-driven learning, is
sufficient to generate place-cell-like spatial representations and support
biologically inspired cognitive mapping.
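For intuition, the following is a minimal sketch of the activation scheme described above, assuming k-means clustering of appearance feature vectors and a Gaussian radial basis function; the function names, cluster count, and width parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_place_cells(features, n_cells=32, seed=0):
    """Cluster appearance features; each cluster center serves as the
    receptive-field center of one visual place cell (hypothetical helper)."""
    km = KMeans(n_clusters=n_cells, random_state=seed, n_init=10).fit(features)
    return km.cluster_centers_

def place_cell_activation(feature, centers, sigma=1.0):
    """Gaussian RBF activation: cells whose centers are most visually
    similar to the current image feature respond most strongly."""
    d2 = np.sum((centers - feature) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Stand-in features; in practice these would be high-dimensional appearance
# descriptors extracted from robot-mounted camera images.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(500, 128))
centers = fit_place_cells(train_features, n_cells=16)
activations = place_cell_activation(rng.normal(size=128), centers, sigma=5.0)
```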