DermSynth3D: Synthesis of in-the-wild annotated dermatology images
Medical Image Analysis (IF 10.9), Pub Date: 2024-03-26, DOI: 10.1016/j.media.2024.103145
Ashish Sinha, Jeremy Kawahara, Arezou Pakzad, Kumar Abhishek, Matthieu Ruthven, Enjie Ghorbel, Anis Kacem, Djamila Aouada, Ghassan Hamarneh

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, and body parts, as well as bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of the data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available.
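The core blending step described above can be illustrated with a minimal sketch. This is not the paper's actual implementation (which uses a differentiable renderer and top-down placement rules); it is a simplified alpha-blend of a lesion pattern onto a 2D UV texture map, the kind of texture that would subsequently be wrapped onto the 3D body mesh. The function name `blend_lesion` and the toy texture values are illustrative assumptions.

```python
import numpy as np

def blend_lesion(skin_texture, lesion_texture, mask, alpha=0.9):
    """Alpha-blend a lesion pattern onto a skin texture map.

    A simplified, non-differentiable stand-in for DermSynth3D's blending:
    `mask` (H, W) selects the lesion region on the UV texture, and `alpha`
    controls lesion opacity inside that region. Textures are (H, W, 3),
    float values in [0, 1].
    """
    weight = mask[..., None].astype(np.float32) * alpha  # (H, W, 1), broadcast over RGB
    return weight * lesion_texture + (1.0 - weight) * skin_texture

# Toy 4x4 RGB textures: a uniform skin tone and a darker lesion pattern.
skin = np.full((4, 4, 3), 0.8, dtype=np.float32)
lesion = np.full((4, 4, 3), 0.2, dtype=np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # lesion occupies the central 2x2 region

blended = blend_lesion(skin, lesion, mask)
# Outside the mask the skin tone is untouched; inside, the result is
# an alpha-weighted mix of lesion and skin.
```

In the full framework this blended texture would be rendered from sampled camera viewpoints under varied lighting, with the same mask providing the per-pixel lesion segmentation labels for free.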
