Self-supervised learning for medical image data with anatomy-oriented imaging planes
Medical Image Analysis (IF 10.9), Pub Date: 2024-03-21, DOI: 10.1016/j.media.2024.103151
Tianwei Zhang, Dong Wei, Mengmeng Zhu, Shi Gu, Yefeng Zheng

Self-supervised learning has emerged as a powerful tool for pretraining deep networks on unlabeled data, before transfer learning to target tasks with limited annotation. The relevance of the pretraining pretext task to the target task is crucial to the success of transfer learning. Various pretext tasks have been proposed that exploit properties of medical image data (e.g., three-dimensionality), making them more relevant to medical image analysis than generic pretext tasks designed for natural images. However, previous work has rarely paid attention to data with anatomy-oriented imaging planes, e.g., standard cardiac magnetic resonance imaging views. As these imaging planes are defined according to the anatomy of the imaged organ, pretext tasks that effectively exploit this information can pretrain networks to gain knowledge about the organ of interest. In this work, we propose two complementary pretext tasks for this group of medical image data based on the spatial relationships of the imaging planes. The first learns the relative orientation between imaging planes and is implemented as regressing their intersecting lines. The second exploits parallel imaging planes to regress their relative slice locations within a stack. Both pretext tasks are conceptually straightforward and easy to implement, and they can be combined in multitask learning for better representation learning. Thorough experiments on two anatomical structures (heart and knee) and representative target tasks (semantic segmentation and classification) demonstrate that the proposed pretext tasks are effective in pretraining deep networks, yielding markedly improved performance on the target tasks and outperforming other recent approaches.
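The first pretext task regresses the line along which two imaging planes intersect. The geometric target for such a regression can be derived from standard plane metadata (e.g., DICOM-style position and orientation fields). The sketch below is a minimal illustration of that geometry, not the authors' implementation: given each plane as a point and a unit normal, it computes the intersection line's direction and one point on it.

```python
import numpy as np

def plane_intersection(p1, n1, p2, n2):
    """Intersection line of two planes, each given as (point, normal).

    Returns (point_on_line, unit_direction).
    Raises ValueError if the planes are (near-)parallel.
    """
    p1, n1 = np.asarray(p1, float), np.asarray(n1, float)
    p2, n2 = np.asarray(p2, float), np.asarray(n2, float)

    # The intersection line is orthogonal to both normals.
    d = np.cross(n1, n2)
    if np.linalg.norm(d) < 1e-8:
        raise ValueError("planes are parallel; no unique intersection line")

    # Find one point on the line: it must satisfy both plane equations
    # n_i . x = n_i . p_i; the extra constraint d . x = 0 pins down a
    # single solution out of the infinitely many points on the line.
    A = np.stack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    x0 = np.linalg.solve(A, b)
    return x0, d / np.linalg.norm(d)

# Example: the plane z = 0 and the plane x = 0 intersect along the y-axis.
x0, d = plane_intersection([0, 0, 0], [0, 0, 1], [0, 0, 0], [1, 0, 0])
```

For the second pretext task (relative slice location), the analogous target is scalar: the signed offset of each parallel slice along the shared stack normal, which reduces to a dot product of the slice position with that normal.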
