Large Language Model (LLM)-based in-application assistants, or copilots, can
automate software tasks, but users often prefer learning by doing, raising
questions about the optimal level of automation for an effective user
experience. We investigated two automation paradigms by designing and
implementing a fully automated copilot (AutoCopilot) and a semi-automated
copilot (GuidedCopilot) that automates trivial steps while offering
step-by-step visual guidance. In a user study (N=20) across data analysis and
visual design tasks, GuidedCopilot outperformed AutoCopilot in user control,
software utility, and learnability, especially for exploratory and creative
tasks, while AutoCopilot saved time for simpler visual tasks. A follow-up
design exploration (N=10) enhanced GuidedCopilot with task- and state-aware
features, including in-context preview clips and adaptive instructions. Our
findings highlight the critical role of user control and tailored guidance in
designing the next generation of copilots that enhance productivity, support
diverse skill levels, and foster deeper software engagement.
arXiv:2504.15549v1