Fusion of satellite and street view data for urban traffic accident hotspot identification
International Journal of Applied Earth Observation and Geoinformation (IF 7.5), Pub Date: 2024-04-30, DOI: 10.1016/j.jag.2024.103853
Wentong Guo, Cheng Xu, Sheng Jin

As the number of vehicles and the volume of traffic swell in urban centers, cities have experienced a concomitant increase in traffic accidents. Proactively identifying accident-prone hotspots in urban environments holds the promise of preventing traffic mishaps, thereby curtailing the incidence of accidents and reducing property damage. This research introduces the Two-Branch Contextual Feature-Guided Converged Network (TCFGC-Net) utilizing multimodal satellite and street view data. Designed to extract global structural features from satellite imagery and dynamic continuous features from street view imagery, the model aims to improve the accuracy of detecting urban accident hotspots. For the satellite imagery branch, we propose the Contextual Feature Coupled Convolutional Neural Network (Trans-CFCCNN) designed to extract global spatial features and discern feature correlations across adjacent regions. For the street view imagery branch, we develop the Sequential Feature Recurrent Attention Network (SFRAN) to assimilate and integrate dynamic scene features captured from successive street view images. We designed the Multi-Branch Feature Adaptive Fusion Structure (MBFAF) to aggregate different branch features for accurate identification of accident hotspots. Experimental results show that the model performs well, with an overall accuracy of 93.7 %. Ablation studies confirm that relative to standalone street view and satellite branch analyses, implementing multimodal fusion enhances the model's accuracy by 12.05 % and 17.86 %, respectively. The innovative fusion structure proposed herein garners a 4.22 % increase in model accuracy, outpacing conventional feature concatenation techniques. Furthermore, the model outperforms existing deep learning models in terms of overall efficacy. Additionally, to showcase the efficacy of the proposed model structure, we utilize Class Activation Maps (CAM) to provide visual interpretability for the model. These results suggest that the dual-branch fusion model effectively decreases false alarm occurrences and directs the model's focus toward regions more pertinent to accident hotspots. Finally, the code and model used for identifying hotspots of urban traffic accidents in this study are available for access: .
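To make the described architecture concrete, the sketch below outlines a two-branch fusion classifier in PyTorch: a satellite-image encoder standing in for Trans-CFCCNN, a street-view sequence encoder standing in for SFRAN, and a learned weighting standing in for MBFAF. The abstract does not specify the actual layers, dimensions, or fusion mechanism, so every module name, backbone choice, and the gating scheme here is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of a two-branch satellite + street-view fusion classifier.
# TCFGC-Net's real internals (Trans-CFCCNN, SFRAN, MBFAF) are not given in the
# abstract; the encoders, feature sizes, and gated fusion below are assumptions.
import torch
import torch.nn as nn


class SatelliteBranch(nn.Module):
    """Stand-in for Trans-CFCCNN: a small CNN producing a global feature vector."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, x):                       # x: (B, 3, H, W) satellite tile
        return self.proj(self.backbone(x).flatten(1))   # (B, out_dim)


class StreetViewBranch(nn.Module):
    """Stand-in for SFRAN: per-image CNN features fed to a GRU over the sequence."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.GRU(32, out_dim, batch_first=True)

    def forward(self, seq):                     # seq: (B, T, 3, H, W) street-view sequence
        b, t = seq.shape[:2]
        f = self.cnn(seq.flatten(0, 1)).flatten(1).view(b, t, -1)
        _, h = self.rnn(f)
        return h[-1]                            # (B, out_dim) last hidden state


class AdaptiveFusionClassifier(nn.Module):
    """Stand-in for MBFAF: learn per-branch weights, then classify hotspot vs. non-hotspot."""
    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        self.sat = SatelliteBranch(dim)
        self.sv = StreetViewBranch(dim)
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, sat_img, sv_seq):
        fs, fv = self.sat(sat_img), self.sv(sv_seq)
        w = self.gate(torch.cat([fs, fv], dim=-1))       # (B, 2) adaptive branch weights
        fused = w[:, :1] * fs + w[:, 1:] * fv            # weighted sum of branch features
        return self.head(fused)                          # (B, num_classes) logits


if __name__ == "__main__":
    model = AdaptiveFusionClassifier()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])
```

The gated weighted sum is only one plausible reading of "adaptive fusion"; the paper's MBFAF may instead use attention over spatial feature maps or a learned concatenation, which this sketch does not attempt to reproduce.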
