Memory is the process of encoding, storing, and retrieving information,
allowing humans to retain experiences, knowledge, skills, and facts over time,
and serving as the foundation for growth and effective interaction with the
world. It plays a crucial role in shaping our identity, making decisions,
learning from past experiences, building relationships, and adapting to
changes. In the era of large language models (LLMs), memory refers to the
ability of an AI system to retain, recall, and use information from past
interactions to improve its future responses. Although previous research and
reviews have described memory mechanisms in detail, a systematic review is
still lacking that analyzes the relationship between the memory of LLM-driven
AI systems and human memory, and that examines how human memory can inspire
the construction of more powerful memory systems. To this end, in this paper,
we present a comprehensive survey on the memory of LLM-driven AI systems. In
particular, we first conduct
a detailed analysis of the categories of human memory and relate them to the
memory of AI systems. Second, we systematically organize existing
memory-related work and propose a categorization method based on three
dimensions (object, form, and time) and eight quadrants. Finally, we illustrate
some open problems regarding the memory of current AI systems and outline
possible future directions for memory in the era of large language models.
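The three-dimension, eight-quadrant categorization follows from each dimension admitting two values, so the quadrants are the 2 × 2 × 2 combinations. The sketch below enumerates them; the concrete per-dimension labels are illustrative placeholders, not necessarily the paper's own terms.

```python
from itertools import product

# Hypothetical binary values for the survey's three dimensions
# (object, form, time); the labels are assumptions for illustration.
DIMENSIONS = {
    "object": ("personal", "system"),
    "form": ("parametric", "non-parametric"),
    "time": ("short-term", "long-term"),
}

# Cartesian product of the three binary dimensions: 2 ** 3 = 8 quadrants.
quadrants = [dict(zip(DIMENSIONS, combo))
             for combo in product(*DIMENSIONS.values())]

print(len(quadrants))  # 8
```

Any survey entry can then be placed in exactly one quadrant by assigning it one value per dimension.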