Demonstration results of event-based depth estimation for autonomous driving (IMAGE)
Caption
Comparison of depth estimation results across different models for autonomous driving. Each column shows the same driving scene under challenging conditions. Competing methods (SCSNet and SE-CFF) often blur object boundaries or miss fine details, while URNet produces clearer and smoother depth maps. The yellow boxes highlight regions where URNet better preserves object shapes, such as pedestrians and roadside barriers, demonstrating its stronger ability to recover precise depth even in low-texture or complex areas.
Credit
Visual Intelligence, Tsinghua University Press
Usage Restrictions
News organizations may use or redistribute this image, with proper attribution, as part of news coverage of this paper only.
License
Original content