
Textured Mesh Quality Assessment: Large-scale Dataset and Deep Learning-based Quality Metric

Published: 05 June 2023

Abstract

Over the past decade, three-dimensional (3D) graphics have become highly detailed to mimic the real world, causing their size and complexity to explode. Certain applications and device constraints necessitate their simplification and/or lossy compression, which can degrade their visual quality. Thus, to ensure the best Quality of Experience, it is important to evaluate visual quality accurately in order to drive the compression and find the right compromise between visual quality and data size. In this work, we focus on subjective and objective quality assessment of textured 3D meshes. We first establish a large-scale dataset, which includes 55 source models quantitatively characterized in terms of geometric, color, and semantic complexity, and corrupted by combinations of five types of compression-based distortions applied to the geometry, texture mapping, and texture image of the meshes. This dataset contains over 343k distorted stimuli. We propose an approach to select a challenging subset of 3,000 stimuli, for which we collected 148,929 quality judgments from over 4,500 participants in a large-scale crowdsourced subjective experiment. Leveraging our subject-rated dataset, we propose a learning-based quality metric for 3D graphics. Our metric demonstrates state-of-the-art results on our dataset of textured meshes and on a dataset of distorted meshes with vertex colors. Finally, we present an application of our metric and dataset to explore the influence of distortion interactions and content characteristics on the perceived quality of compressed textured meshes.
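The "state-of-the-art results" claimed above are conventionally measured by correlating a metric's predictions with the subjective Mean Opinion Scores (MOS) of a dataset like the one described. The sketch below illustrates that standard evaluation with SROCC (rank-order agreement) and PLCC (linear agreement); the score values are synthetic placeholders, not data from the paper's dataset.

```python
# Hedged illustration: benchmarking an objective quality metric against
# subjective scores. SROCC measures monotonic (rank) agreement, PLCC
# measures linear agreement. All numbers below are made up for the example.
from scipy.stats import pearsonr, spearmanr

# MOS (subjective quality, e.g. on a 1-5 scale) for six distorted stimuli,
# and the objective metric's predictions for the same stimuli.
mos = [4.5, 3.8, 3.1, 2.6, 1.9, 1.2]
pred = [0.92, 0.81, 0.70, 0.55, 0.40, 0.21]

srocc = spearmanr(mos, pred).correlation  # rank-order correlation
plcc = pearsonr(mos, pred)[0]             # linear correlation

print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```

Since the illustrative predictions are strictly monotonic in MOS, SROCC is exactly 1.0 here; in practice, a fitted logistic mapping is often applied to the predictions before computing PLCC, as recommended by VQEG.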


Supplemental Material

tog-22-0011-file004.mov — Supplementary video (mov, 127.3 MB)



Published in

ACM Transactions on Graphics, Volume 42, Issue 3
June 2023, 181 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3579817

              Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

              Publisher

              Association for Computing Machinery

              New York, NY, United States

              Publication History

              • Published: 5 June 2023
              • Online AM: 14 April 2023
              • Accepted: 9 March 2023
              • Revised: 16 December 2022
              • Received: 23 February 2022

              Qualifiers

              • research-article
