One-stage self-distillation guided knowledge transfer for long-tailed visual recognition
International Journal of Intelligent Systems (IF 7), Pub Date: 2022-09-09, DOI: 10.1002/int.23068
Yuelong Xia 1,2,3, Shu Zhang 1,2,3, Jun Wang 2,3, Wei Zou 1,2,3, Juxiang Zhou 2,3, Bin Wen 1,2,3

Deep learning has achieved remarkable progress in visual recognition on balanced data sets but still performs poorly on the long-tailed data distributions found in the real world. Existing methods mainly decouple the problem into two-stage training, that is, representation learning followed by classifier training, or into multistage training based on knowledge distillation, which adds training steps and extra computation cost. In this paper, we propose a conceptually simple yet effective One-stage Long-tailed Self-Distillation framework, called OLSD, which unifies representation learning and classifier training in a single training stage. For representation learning, we draw samples from two different sampling distributions, mix them up, and feed them into two branches, where a collaborative consistency loss is introduced to enforce consistency between the branches; we theoretically show that the proposed mixup naturally produces a tail-majority mixed distribution. For classifier training, we introduce balanced self-distillation guided knowledge transfer to improve generalization, and we theoretically show that the proposed knowledge transfer implicitly minimizes not only the cross-entropy but also the KL divergence between head-to-tail and tail-to-head predictions. Extensive experiments on long-tailed CIFAR10/100, ImageNet-LT, and the multilabel long-tailed VOC-LT demonstrate the effectiveness of the proposed method.
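To make the one-stage idea concrete, the following is a minimal, hypothetical PyTorch sketch of a training step in the spirit described above: one batch drawn with uniform (instance-balanced) sampling and one with class-balanced sampling are mixed up and passed through two branches, which are trained with a mixup cross-entropy plus a symmetric KL term acting as the collaborative consistency / self-distillation signal. The names (`model_a`, `model_b`, `batch_uniform`, `batch_balanced`), the loss weights, and the temperature are illustrative assumptions, not the authors' implementation; in practice the two branches would typically share a backbone.

```python
# Hypothetical sketch of a one-stage long-tailed training step (not the paper's code).
import torch
import torch.nn.functional as F

def mixup(x_a, y_a, x_b, y_b, alpha=1.0):
    """Convex combination of a uniformly sampled batch and a class-balanced batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x_a + (1.0 - lam) * x_b
    return x, y_a, y_b, lam

def train_step(model_a, model_b, batch_uniform, batch_balanced, optimizer, T=2.0):
    (x_u, y_u), (x_c, y_c) = batch_uniform, batch_balanced
    x, y1, y2, lam = mixup(x_u, y_u, x_c, y_c)

    logits_a = model_a(x)  # branch associated with the head-biased (uniform) view
    logits_b = model_b(x)  # branch associated with the tail-biased (balanced) view

    # Mixup cross-entropy on both branches.
    ce = lambda z: lam * F.cross_entropy(z, y1) + (1.0 - lam) * F.cross_entropy(z, y2)
    loss_ce = ce(logits_a) + ce(logits_b)

    # Symmetric KL between the two branches' softened predictions: one direction
    # transfers head-to-tail knowledge, the other tail-to-head.
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    kl = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="batchmean") \
       + F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="batchmean")

    loss = loss_ce + 0.5 * (T ** 2) * kl  # 0.5 is an assumed weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because both the cross-entropy and the consistency terms are optimized jointly in this single step, no separate classifier-retraining or teacher-pretraining stage is needed, which is the computational advantage the abstract emphasizes.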

Updated: 2022-09-09