We introduce Large Language Model-Assisted Preference Prediction (LAPP), a
novel framework for robot learning that enables efficient, customizable, and
expressive behavior acquisition with minimal human effort. Unlike prior
approaches that rely heavily on reward engineering, human demonstrations,
motion capture, or expensive pairwise preference labels, LAPP leverages large
language models (LLMs) to automatically generate preference labels from raw
state-action trajectories collected during reinforcement learning (RL). These
labels are used to train an online preference predictor, which in turn guides
the policy optimization process toward satisfying high-level behavioral
specifications provided by humans. Our key technical contribution is the
integration of LLMs into the RL feedback loop through trajectory-level
preference prediction, enabling robots to acquire complex skills including
subtle control over gait patterns and rhythmic timing. We evaluate LAPP on a
diverse set of quadruped locomotion and dexterous manipulation tasks and show
that it achieves efficient learning, higher final performance, faster
adaptation, and precise control of high-level behaviors. Notably, LAPP enables
robots to master highly dynamic and expressive tasks such as quadruped
backflips, which remain out of reach for standard LLM-generated or handcrafted
rewards. Our results highlight LAPP as a promising direction for scalable
preference-driven robot learning.
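
To make the described loop concrete, below is a minimal sketch of trajectory-level preference prediction in the spirit of the abstract: an LLM-style labeler compares pairs of trajectories, the labels train a preference predictor with a Bradley-Terry objective, and the predictor's score can then shape policy optimization. All names here (query_llm_preference, PreferencePredictor, the feature dimensions) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of LLM-assisted preference prediction for RL.
# Not the paper's code; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class PreferencePredictor(nn.Module):
    """Scores a trajectory summary; a higher score means more preferred."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, traj_feats: torch.Tensor) -> torch.Tensor:
        return self.net(traj_feats).squeeze(-1)


def query_llm_preference(traj_a: torch.Tensor, traj_b: torch.Tensor) -> int:
    """Placeholder for an LLM call that compares two raw state-action
    trajectories against a high-level behavior specification.
    Returns 1 if traj_a is preferred, 0 otherwise (hypothetical stand-in:
    here we simply prefer the trajectory with the larger mean feature)."""
    return int(traj_a.mean() > traj_b.mean())


def train_predictor(predictor, pairs, labels, epochs: int = 50):
    """Bradley-Terry style training on LLM-labeled trajectory pairs."""
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(epochs):
        for (fa, fb), y in zip(pairs, labels):
            sa, sb = predictor(fa), predictor(fb)
            # P(a preferred over b) under a Bradley-Terry model
            p_a = torch.sigmoid(sa - sb)
            loss = -(y * torch.log(p_a + 1e-8)
                     + (1 - y) * torch.log(1 - p_a + 1e-8))
            opt.zero_grad()
            loss.backward()
            opt.step()


if __name__ == "__main__":
    feat_dim = 8
    predictor = PreferencePredictor(feat_dim)
    # Fake trajectory features standing in for summaries of RL rollouts.
    trajs = [torch.randn(feat_dim) for _ in range(20)]
    pairs = [(trajs[i], trajs[i + 1]) for i in range(0, 20, 2)]
    labels = [query_llm_preference(a, b) for a, b in pairs]
    train_predictor(predictor, pairs, labels)
    # The trained predictor's score could then be added to (or replace)
    # the task reward during policy optimization.
    print("example preference score:", predictor(trajs[0]).item())
```

In this sketch the predictor is retrained as new LLM-labeled pairs arrive, mirroring the online setting the abstract describes; the exact trajectory featurization, prompting scheme, and reward shaping are assumptions rather than details from the paper.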