
2020, 2(1): 43-55 | Published: 2020-02-20

DOI: 10.1016/j.vrih.2019.12.001

View Synthesis from multi-view RGB data using multi-layered representation and volumetric estimation



Abstract:

Background: Aiming at free-viewpoint exploration of complicated scenes, this paper presents a method for interpolating views among multiple RGB cameras. Methods: We combine the idea of a cost volume, which represents 3D information, with 2D semantic segmentation of the scene to accomplish view synthesis of complicated scenes. The cost volume is used to estimate the depth and confidence maps of the scene, and a multi-layer representation and resolution of the data is used to optimize the view synthesis of the main object. Results/Conclusions: By applying different treatments to different layers of the volume, we can handle complicated scenes containing multiple persons and plentiful occlusions. We also propose a view-interpolation → multi-view-reconstruction → view-interpolation pipeline to iteratively optimize the result. We test our method on a variety of multi-view scenes and generate decent results.
Keywords: View interpolation; Cost volume; Multi-layer processing; Multi-view reconstruction; Iterative optimization
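The abstract describes estimating per-pixel depth and confidence from a cost volume before the layered processing and iterative refinement. Since the full paper is not reproduced on this page, the Python sketch below only illustrates the general plane-sweep cost-volume idea under assumed conventions: the function names, the camera-pose convention, the absolute-difference photo-consistency cost, and the best-to-second-best confidence heuristic are illustrative assumptions, not the authors' implementation.

"""
Minimal plane-sweep cost-volume sketch (NumPy only).
NOT the paper's implementation: conventions, cost metric, and the
confidence heuristic are assumptions made for this illustration.
"""
import numpy as np


def homography_for_depth(K_ref, K_src, R, t, depth):
    """Homography mapping reference pixels into a source view, assuming a
    fronto-parallel plane at `depth` in the reference frame.
    (R, t) is the relative pose: X_src = R @ X_ref + t."""
    n = np.array([[0.0, 0.0, 1.0]])                      # plane normal in ref frame
    return K_src @ (R + (t.reshape(3, 1) @ n) / depth) @ np.linalg.inv(K_ref)


def warp_to_reference(src_img, H, shape):
    """Warp a grayscale source image into the reference view with
    nearest-neighbour sampling; unobserved pixels become NaN."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x N homogeneous
    p = H @ pix
    u = np.rint(p[0] / p[2]).astype(int)
    v = np.rint(p[1] / p[2]).astype(int)
    valid = (u >= 0) & (u < src_img.shape[1]) & (v >= 0) & (v < src_img.shape[0])
    out = np.full(h * w, np.nan)
    out[valid] = src_img[v[valid], u[valid]]
    return out.reshape(h, w)


def sweep_cost_volume(ref_img, src_views, K_ref, depths):
    """Build a cost volume over depth hypotheses and reduce it to a depth map
    plus a simple confidence map (best vs. second-best cost ratio).
    `src_views` is a list of (src_img, K_src, R, t) tuples."""
    h, w = ref_img.shape
    volume = np.zeros((len(depths), h, w))
    for di, d in enumerate(depths):
        costs = []
        for src_img, K_src, R, t in src_views:
            H = homography_for_depth(K_ref, K_src, R, t, d)
            warped = warp_to_reference(src_img, H, (h, w))
            costs.append(np.abs(warped - ref_img))            # photo-consistency
        volume[di] = np.nanmean(costs, axis=0)                # average over views
    volume = np.nan_to_num(volume, nan=1e6)                   # never-seen pixels: large cost
    best = np.argmin(volume, axis=0)
    depth_map = np.asarray(depths)[best]
    sorted_costs = np.sort(volume, axis=0)
    confidence = 1.0 - sorted_costs[0] / (sorted_costs[1] + 1e-6)
    return depth_map, confidence

In a pipeline of the kind outlined in the abstract, such a confidence map would plausibly help assign pixels to layers (e.g., segmented foreground persons versus background) so that each layer can be treated and blended differently, and the interpolated views could then be fed back into multi-view reconstruction for the next iteration of refinement.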

Cite this article:

Zhaoqi SU, Tiansong ZHOU, Kun LI, David BRADY, Yebin LIU. View Synthesis from multi-view RGB data using multi-layered representation and volumetric estimation. Virtual Reality & Intelligent Hardware, 2020, 2(1): 43-55 DOI:10.1016/j.vrih.2019.12.001

