Progressive Self-Supervised Pretraining for Hyperspectral Image Classification
IEEE Transactions on Geoscience and Remote Sensing (IF 8.2), Pub Date: 2024-05-06, DOI: 10.1109/tgrs.2024.3397740
Peiyan Guan, Edmund Y. Lam

Self-supervised learning has demonstrated considerable success in hyperspectral image (HSI) classification when limited labeled data are available. However, inherent dissimilarities among HSIs require self-supervised pretraining from scratch for each HSI dataset, and pretraining on a large amount of unlabeled data can be time-consuming. In addition, the poor quality of some HSIs can limit the performance of self-supervised learning (SSL) algorithms. To address these issues, we propose to enhance self-supervised pretraining on HSIs with transfer learning. We introduce a progressive self-supervised pretraining (PSP) framework that acquires a strong initialization for the final pretraining on the target HSI dataset by sequentially performing self-supervised pretraining on datasets that are increasingly similar to the target HSI: first on a large general vision dataset and then on a related HSI dataset. This sequential strategy enables the model to progressively move from domain-general vision knowledge to target-specific hyperspectral knowledge. To mitigate catastrophic forgetting in sequential training, we develop a regularization method, called self-supervised elastic weight consolidation (SS-EWC), that imposes adaptive constraints on changes to model parameters. Thorough classification experiments on various HSI datasets demonstrate that our framework significantly and consistently improves self-supervised pretraining on HSIs in terms of both convergence speed and representation quality. Furthermore, our framework exhibits high generalizability and can be applied to various SSL algorithms. Transfer learning continues to prove its usefulness in self-supervised settings.
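The abstract does not specify how SS-EWC computes its adaptive constraints, but it is described as a variant of elastic weight consolidation (EWC), whose core idea is a quadratic penalty that anchors each parameter to its value from the previous pretraining stage, weighted by a per-parameter importance estimate (classically the Fisher information). A minimal sketch of that standard EWC penalty, with illustrative toy values (the function and variable names here are assumptions, not from the paper):

```python
import numpy as np

def ewc_penalty(params, anchor_params, importance, lam=1.0):
    """EWC-style regularizer: quadratic distance from the parameters of the
    previous pretraining stage, weighted per-parameter by an importance
    estimate (e.g., diagonal Fisher information). Large importance values
    discourage drift in parameters deemed critical to earlier knowledge."""
    return 0.5 * lam * sum(
        float(np.sum(f * (p - a) ** 2))
        for p, a, f in zip(params, anchor_params, importance)
    )

# Toy example: two parameter tensors carried over from an earlier stage.
anchor = [np.array([1.0, 2.0]), np.array([[0.5]])]
importance = [np.array([10.0, 0.1]), np.array([[1.0]])]
current = [np.array([1.1, 3.0]), np.array([[0.5]])]

penalty = ewc_penalty(current, anchor, importance, lam=2.0)
```

In sequential training, this penalty would be added to the SSL objective at each later stage; SS-EWC presumably replaces the label-based Fisher estimate with one derived from the self-supervised loss, but the exact formulation is given in the paper, not here.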

Updated: 2024-05-06