Transformer‐based framework for accurate segmentation of high‐resolution images in structural health monitoring
Computer-Aided Civil and Infrastructure Engineering (IF 9.6). Pub Date: 2024-04-21. DOI: 10.1111/mice.13211
M. Azimi, T. Y. Yang

High-resolution image segmentation is essential in structural health monitoring (SHM), enabling accurate detection and quantification of structural components and damage. However, conventional convolutional neural network (CNN)-based segmentation methods face limitations in real-world deployment, particularly when processing high-resolution images, for which they produce low-resolution outputs. This study introduces a novel framework named Refined-Segment Anything Model (R-SAM) to overcome these challenges. R-SAM leverages the state-of-the-art zero-shot SAM to generate unlabeled segmentation masks and subsequently employs the DEtection Transformer (DETR) model to label the instances. The key feature and contribution of R-SAM is its refinement module, which improves the accuracy of the masks generated by SAM without requiring extensive data annotation or fine-tuning. The effectiveness of the proposed framework was assessed through qualitative and quantitative analyses across diverse case studies, including multiclass segmentation, simultaneous segmentation and tracking, and 3D reconstruction. The results demonstrate that R-SAM outperforms state-of-the-art CNN-based segmentation models, with a mean intersection-over-union of 97% and a mean boundary accuracy of 87%. In addition, the high coefficients of determination achieved in target-free tracking case studies highlight its versatility in addressing various challenges in SHM.
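For readers unfamiliar with the headline metric, the following is a minimal sketch of how intersection-over-union (IoU) is computed between a predicted and a ground-truth binary mask. This is a generic illustration of the metric, not the authors' evaluation code, and the toy masks are invented for the example:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks.

    Returns |pred AND gt| / |pred OR gt|; defined as 1.0 when both
    masks are empty (no foreground to disagree on).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

# Toy example: two 6x6 square masks overlapping on a 10x10 grid.
pred = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True        # 36 foreground pixels
gt = np.zeros((10, 10), dtype=bool)
gt[4:10, 4:10] = True        # 36 foreground pixels, 16 shared with pred
print(round(mask_iou(pred, gt), 4))  # 16 / (36 + 36 - 16) ≈ 0.2857
```

A mean IoU (mIoU) such as the 97% reported above is then simply this score averaged over all evaluated masks or classes.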

Updated: 2024-04-21