High-Resolution Volumetric Reconstruction for Clothed Humans
ACM Transactions on Graphics (IF 6.2), Pub Date: 2023-08-21, DOI: 10.1145/3606032
Sicong Tang, Guangyuan Wang, Qing Ran, Lingzhi Li, Li Shen, Ping Tan

We present a novel method for reconstructing clothed humans from a sparse set of, e.g., 1–6 RGB images. Despite impressive results from recent works employing deep implicit representation, we revisit the volumetric approach and demonstrate that better performance can be achieved with proper system design. The volumetric representation offers significant advantages in leveraging 3D spatial context through 3D convolutions, and the notorious quantization error is largely negligible at a reasonably large yet affordable volume resolution, e.g., 512. To handle memory and computation costs, we propose a sophisticated coarse-to-fine strategy with voxel culling and subspace sparse convolution. Our method starts with a discretized visual hull to compute a coarse shape and then focuses on a narrow band near the coarse shape for refinement. Once the shape is reconstructed, we adopt an image-based rendering approach, which computes the colors of surface points by blending input images with learned weights. Extensive experimental results show that our method reduces the mean point-to-surface (P2S) error of state-of-the-art methods by more than 50%, achieving approximately 2 mm accuracy at a volume resolution of 512. Additionally, images rendered from our textured model achieve a higher peak signal-to-noise ratio (PSNR) compared to state-of-the-art methods.
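The coarse-to-fine narrow-band idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a sphere stands in for the discretized visual hull, and the band width and grid resolutions are illustrative. It shows why voxel culling pays off — after upsampling the coarse shape, only a thin band of voxels around its surface remains active, so subsequent (sparse) 3D convolutions need to touch only a small fraction of the fine grid.

```python
import numpy as np

def coarse_occupancy(res):
    # Stand-in for the discretized visual hull: a centered sphere.
    coords = (np.indices((res, res, res)) + 0.5) / res - 0.5
    return np.linalg.norm(coords, axis=0) < 0.35

def narrow_band_mask(coarse, fine_res, band_voxels=2):
    # Upsample the coarse occupancy to the fine grid, then keep only
    # voxels within `band_voxels` of the coarse surface (voxel culling).
    scale = fine_res // coarse.shape[0]
    fine = coarse.repeat(scale, 0).repeat(scale, 1).repeat(scale, 2)
    # Surface voxels: occupied voxels with at least one empty 6-neighbor.
    p = np.pad(fine, 1, constant_values=False)
    all_nbrs = (p[:-2, 1:-1, 1:-1] & p[2:, 1:-1, 1:-1] &
                p[1:-1, :-2, 1:-1] & p[1:-1, 2:, 1:-1] &
                p[1:-1, 1:-1, :-2] & p[1:-1, 1:-1, 2:])
    band = fine & ~all_nbrs
    # Dilate the surface layer `band_voxels` times to form the band.
    for _ in range(band_voxels):
        p = np.pad(band, 1, constant_values=False)
        band = (band |
                p[:-2, 1:-1, 1:-1] | p[2:, 1:-1, 1:-1] |
                p[1:-1, :-2, 1:-1] | p[1:-1, 2:, 1:-1] |
                p[1:-1, 1:-1, :-2] | p[1:-1, 1:-1, 2:])
    return band

coarse = coarse_occupancy(16)          # coarse stage, e.g. 16^3
band = narrow_band_mask(coarse, 64)    # fine stage, e.g. 64^3
print(f"active voxels: {band.sum()} / {band.size} "
      f"({band.sum() / band.size:.1%})")
```

In the actual pipeline the refinement network would evaluate only these band voxels (via subspace sparse convolution) at each resolution step up to 512, rather than the full dense grid.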




Updated: 2023-08-21