
2021, 3(2): 171–181. Published Date: 2021-04-20

DOI: 10.1016/j.vrih.2020.12.004

Cumulus cloud modeling from images based on VAE-GAN


Abstract:

Background
Cumulus clouds are important elements in creating virtual outdoor scenes. Modeling a cumulus cloud with a specific shape is difficult owing to the fluid nature of clouds. Image-based modeling is an efficient way to address this problem, but because cloud shapes are so complex, modeling a cloud from a single image remains an open problem.
Methods
In this study, a deep learning-based method was developed to model 3D cumulus clouds from a single image. The method employs a three-dimensional autoencoder network that combines a variational autoencoder (VAE) with a generative adversarial network (GAN). First, a 3D cloud shape is mapped to a unique code in the latent space using the proposed autoencoder. The decoder parameters are then fixed, and a shape-reconstruction network, trained on rendered images, is substituted for the encoder. To train the presented models, we constructed a 3D cumulus dataset containing 200 cumulus models, each rendered under different lighting parameters.
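The paper's exact network architecture is not reproduced on this page. As a minimal, self-contained sketch of the VAE half of the pipeline, the code below implements the standard reparameterization trick (which produces the stochastic latent code fed to the decoder) and the KL-divergence term that regularizes the latent space toward a unit Gaussian; the latent dimension of 128 is a hypothetical choice, not the paper's. In the second training stage described above, the decoder trained this way would be frozen and an image encoder trained to map rendered images into the same latent space.

```python
import math
import random

random.seed(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: z_i = mu_i + sigma_i * eps_i, eps ~ N(0, 1).

    Lets gradients flow through the sampling step during VAE training.
    """
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

# Hypothetical 128-dimensional latent code for one voxelized cloud shape.
mu = [0.0] * 128          # encoder-predicted latent mean
log_var = [0.0] * 128     # encoder-predicted log-variance (0 -> unit variance)

z = reparameterize(mu, log_var)           # stochastic sample fed to the decoder
print(len(z))                             # 128
print(kl_divergence(mu, log_var) == 0.0)  # True: posterior equals the prior here
```

The KL term vanishes exactly when the posterior matches the standard-normal prior, which is why the zero-mean, unit-variance example above yields zero; during real training it acts as a penalty pulling each shape's code toward a well-structured latent space.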
Results
Qualitative experiments showed that the proposed autoencoder learns more structural details of 3D cumulus shapes than existing approaches. Furthermore, modeling experiments on rendered images demonstrated the effectiveness of the reconstruction model.
Conclusion
The proposed autoencoder network learns a latent space of 3D cumulus cloud shapes, and the presented reconstruction architecture models a cloud from a single image. Experiments demonstrated the effectiveness of both models.
Keywords: 3D cloud model; 3D autoencoder network; generative adversarial network

Cite this article:

Zili ZHANG, Yunchi CEN, Fan ZHANG, Xiaohui LIANG. Cumulus cloud modeling from images based on VAE-GAN. Virtual Reality & Intelligent Hardware, 2021, 3(2): 171–181. DOI: 10.1016/j.vrih.2020.12.004

