Traffic safety remains a critical global challenge, with traditional Advanced
Driver-Assistance Systems (ADAS) often struggling in dynamic real-world
scenarios due to fragmented sensor processing and susceptibility to adversarial
conditions. This paper reviews the transformative potential of Multimodal Large
Language Models (MLLMs) in addressing these limitations by integrating
cross-modal data such as visual, spatial, and environmental inputs to enable
holistic scene understanding. Through a comprehensive analysis of MLLM-based
approaches, we highlight their capabilities in enhancing perception,
decision-making, and adversarial robustness, while also examining the role of
key datasets (e.g., KITTI, DRAMA, ML4RoadSafety) in advancing research.
Furthermore, we outline future directions, including real-time edge deployment,
causality-driven reasoning, and human-AI collaboration. By positioning MLLMs as
a cornerstone for next-generation traffic safety systems, this review
underscores their potential to revolutionize the field, offering scalable,
context-aware solutions that proactively mitigate risks and improve overall
road safety.