Exploring Event-driven Dynamic Context for Accident Scene Segmentation
The robustness of semantic segmentation on edge cases of traffic scenes is a vital factor for the safety of intelligent transportation. However, most critical scenes of traffic accidents are extremely dynamic and previously unseen, which seriously harms the performance of semantic segmentation methods. In addition, the latency of traditional cameras during high-speed driving further reduces contextual information in the time dimension. Therefore, we propose to extract dynamic context from event-based data, which has a much higher temporal resolution, to enhance static RGB images, even for those from traffic accidents with motion blur, collisions, deformations, overturns, etc. Moreover, in order to evaluate segmentation performance in traffic accidents, we provide a pixel-wise annotated accident dataset, namely DADA-seg, which contains a variety of critical scenarios from traffic accidents. Our experiments indicate that event-based data can provide complementary information to stabilize semantic segmentation under adverse conditions by preserving the fine-grained motion of fast-moving foreground (crash objects) in accidents. Our approach achieves a +8.2% performance gain on the proposed accident dataset, exceeding more than 20 state-of-the-art semantic segmentation methods. The proposal has been demonstrated to be consistently effective for models learned on multiple source databases, including Cityscapes, KITTI-360, BDD, and ApolloScape.
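To make the idea of event-driven dynamic context concrete, the following is a minimal illustrative sketch (not the authors' code): it assumes a common pipeline in which the asynchronous event stream is rasterized into a temporal voxel grid and fused with RGB features in a two-stream segmentation network via simple concatenation. All module names, shapes, and the fusion scheme here are assumptions for illustration.

import torch
import torch.nn as nn

def events_to_voxel_grid(x, y, t, p, bins, height, width):
    """Accumulate events (x, y, timestamp, polarity) into a bins-channel voxel grid."""
    grid = torch.zeros(bins, height, width)
    t = (t - t.min()) / max(float(t.max() - t.min()), 1e-9)  # normalize timestamps to [0, 1]
    b = torch.clamp((t * bins).long(), max=bins - 1)         # temporal bin index per event
    grid.index_put_((b, y.long(), x.long()), p.float(), accumulate=True)
    return grid

class RGBEventFusionSeg(nn.Module):
    """Two-stream encoder with naive concatenation fusion (hypothetical, for illustration)."""
    def __init__(self, bins=5, classes=19):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.evt_enc = nn.Sequential(nn.Conv2d(bins, 32, 3, 2, 1), nn.ReLU())
        self.head = nn.Sequential(
            nn.Conv2d(64, classes, 1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )

    def forward(self, rgb, voxel):
        # Concatenate RGB appearance features with event-based motion features.
        fused = torch.cat([self.rgb_enc(rgb), self.evt_enc(voxel)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    n = 1000  # synthetic event stream for a 64x64 sensor
    xs, ys = torch.randint(0, 64, (n,)), torch.randint(0, 64, (n,))
    ts, ps = torch.rand(n), torch.randint(0, 2, (n,)) * 2 - 1  # polarity in {-1, +1}
    voxel = events_to_voxel_grid(xs, ys, ts, ps, bins=5, height=64, width=64)
    out = RGBEventFusionSeg()(torch.rand(1, 3, 64, 64), voxel.unsqueeze(0))
    print(out.shape)  # torch.Size([1, 19, 64, 64])

The intuition behind such a fusion is that the event channels retain high-temporal-resolution motion cues for fast-moving foreground objects that may be blurred or distorted in the RGB frame.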