News Release

Can traffic accident reports aid visual accident anticipation?

Peer-Reviewed Publication

Tsinghua University Press

Comparison of traditional and proposed accident anticipation models.

The traditional approach (top) relies on object detection, depth estimation, and optical flow processed through a Graph Convolutional Network for accident anticipation. In contrast, our proposed method (bottom) integrates domain knowledge and a Large Language Model (GPT-4o) to enhance interpretability and provide more context-aware feedback.

Credit: Communications in Transportation Research

To address this question, a research team from the University of Macau designed a dual-branch vision-language framework that incorporates domain knowledge as a mediating factor, allowing them to evaluate the role of textual information in visual reasoning tasks.

 

They published their study on 14 October 2025 in Communications in Transportation Research.

 

“In the field of accident anticipation, textual accident reports and traffic accident videos are traditionally studied in isolation. However, we argue that there exists an intrinsic relationship between the two modalities. Benefiting from recent advances in vision-language models, we are able to explore this relationship and assess the contribution of accident reports to accident anticipation tasks,” says Yanchen Guan, a researcher at the Department of Civil Engineering at the University of Macau.

 

Domain-Enhanced Dual-Branch Model

 

Real-time traffic accident prediction is a critical component of safety systems in autonomous vehicles. By anticipating potential accidents, autonomous systems can identify and respond to imminent hazards in a timely manner, thereby reducing the risk of injury and property damage. However, achieving high-performance and interpretable accident prediction under constrained computational resources remains a significant challenge.

 

In the study, the research team proposes a deep learning architecture that integrates visual and textual features through domain knowledge, aiming to develop a lightweight, high-accuracy, and interpretable real-time traffic accident prediction system. The framework extracts accident-related factors from accident reports as prior knowledge to assist scene-level accident anticipation. Finally, it leverages this prior knowledge to guide a large language model in generating contextually appropriate driving suggestions and archiving predicted traffic accidents.
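The core idea of the dual-branch design can be illustrated with a toy sketch. The code below is hypothetical and not the authors' implementation: the factor names, risk weights, and the simple weighted fusion are illustrative assumptions standing in for the actual vision branch, report-mining pipeline, and learned fusion described in the paper.

```python
# Hypothetical sketch of the dual-branch idea (not the authors' code).
# A vision branch scores each frame for risk; a knowledge branch scores
# the same frame against accident-related factors mined from textual
# reports. Domain knowledge mediates the fusion of the two modalities.

from dataclasses import dataclass, field

# Illustrative domain-knowledge priors (factor -> risk weight).
# In the real framework these would be extracted from accident reports.
DOMAIN_PRIORS = {"jaywalking": 0.9, "sudden_brake": 0.8, "lane_change": 0.5}

@dataclass
class Frame:
    visual_risk: float                       # stand-in for the vision-branch output, in [0, 1]
    factors: list = field(default_factory=list)  # scene factors detected in the frame

def knowledge_risk(frame: Frame) -> float:
    """Risk implied by the report-derived factors present in the frame."""
    if not frame.factors:
        return 0.0
    return max(DOMAIN_PRIORS.get(f, 0.0) for f in frame.factors)

def anticipate(frame: Frame, alpha: float = 0.6) -> float:
    """Fuse the two branches; alpha weights the vision branch."""
    return alpha * frame.visual_risk + (1 - alpha) * knowledge_risk(frame)

frame = Frame(visual_risk=0.7, factors=["sudden_brake"])
print(f"accident risk: {anticipate(frame):.2f}")  # 0.6*0.7 + 0.4*0.8 -> accident risk: 0.74
```

In the paper's actual system, the knowledge branch additionally conditions a large language model (GPT-4o), so the same factors that raise the risk score also steer the generated driving suggestions.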

 

Domain Knowledge Is Helpful for Accident Anticipation

The proposed model is evaluated on three real-world datasets—DAD, CCD, and A3D—and achieves strong performance across all benchmarks. The results demonstrate that incorporating domain knowledge as a mediating layer to decompose traffic scenes into contributing factors not only assists the model in making accurate predictions, but also guides the attention of large language models toward accident-inducing elements, thereby enabling the generation of targeted driving suggestions.

 

This study provides valuable insights into the domain of traffic accident prediction and presents a high-accuracy, computationally efficient inference framework. It reveals the underlying connections between textual and visual data, introducing a new research direction that integrates multimodal information for interpretable and efficient accident anticipation. These findings have practical implications for enhancing the safety systems of autonomous vehicles.

 

Future work can further explore the latent correspondence between accident reports and video data, transforming large-scale textual records into richly annotated visual data to support autonomous driving model training. In addition to binary accident prediction, future studies may also refine scene understanding and deliver context-aware, scenario-specific driving recommendations.

 

The above research is published in Communications in Transportation Research (COMMTR), a fully open access journal co-published by Tsinghua University Press and Elsevier. COMMTR publishes peer-reviewed, high-quality research representing important advances in emerging transport systems. COMMTR is also among the first transportation journals to make the Replication Package mandatory, helping researchers, practitioners, and the general public understand and advance existing knowledge. At its discretion, Tsinghua University Press will pay the open access fee for all papers published in 2025.

 

 

About Communications in Transportation Research

Communications in Transportation Research was launched in 2021 with academic support from Tsinghua University and the China Intelligent Transportation Systems Association. The Editors-in-Chief are Professor Xiaobo Qu, a member of the Academia Europaea, from Tsinghua University, and Professor Shuai’an Wang from Hong Kong Polytechnic University. The journal mainly publishes high-quality, original research and review articles of significant importance to emerging transportation systems. It aims to serve as an international platform for showcasing and exchanging innovative achievements in transportation and related fields, fostering academic exchange and development between China and the global community.

It has been indexed in SCIE, SSCI, Ei Compendex, Scopus, CSTPCD, CSCD, OAJ, DOAJ, TRID, and other databases, and was selected as a Q1 Top Journal in the Engineering and Technology category of the Chinese Academy of Sciences (CAS) Journal Ranking List. In 2022, it was selected as a High-Starting-Point new journal project of the “China Science and Technology Journal Excellence Action Plan”. In 2024, it was selected for the “High-Level International Scientific and Technological Journals” development support project and was also chosen as an English Journal Tier project of the “China Science and Technology Journal Excellence Action Plan Phase II”. In 2024, it received its first impact factor (2023 IF) of 12.5, ranking first (1/58, Q1) among all journals in the “TRANSPORTATION” category. In 2025, its 2024 IF was announced as 14.5, maintaining the top position (1/61, Q1) in the same category.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.