
2020, 2(3): 1-2    Published Date: 2020-6-20

DOI: 10.3724/SP.J.2096-5796.2020.02.03


3D sensing is the main channel through which humans, or robotic agents, understand and interact with each other and with the real world. Accordingly, many 3D acquisition technologies and devices have been developed and applied in emerging applications such as autonomous systems, augmented reality, and digital production. A typical 3D visual system takes RGB and/or range images of an object or scene and generates 3D geometry. There are several classic solutions for different settings, for example, structure from motion (SfM) for unordered image collections and simultaneous localization and mapping (SLAM) for image sequences ordered along the temporal axis. A recent research trend has been to introduce deep learning into many conventional operations, such as pose estimation, spatial computation, and scene recognition. Besides 3D geometry, the modeling of texture, material, and lighting properties is also part of 3D visual processing.
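To make the classic geometric pipeline concrete, the following is a minimal two-view structure-from-motion sketch in Python using OpenCV: match local features, estimate the relative camera pose from the essential matrix, and triangulate the correspondences into 3D points. The image paths and the intrinsic matrix K are placeholder assumptions, and the sketch omits the bundle adjustment and multi-view bookkeeping a full SfM system would require.

```python
# Minimal two-view SfM sketch (illustrative only; placeholder images and intrinsics).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match local features.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate inlier correspondences into 3D points.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask.ravel() > 0
X_h = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
points_3d = (X_h[:3] / X_h[3]).T
print(points_3d.shape)   # (num_inliers, 3)
```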
In this special issue, we introduce six articles that illustrate recent progress in sensors and algorithms for 3D modeling and processing. The selected works include three surveys and three research articles, covering topics ranging from 3D acquisition and processing, point cloud registration, and image-based 3D reconstruction to video-based viewpoint interpolation.
Zheng et al. provide a survey on shape from shading (SfS), also known as the photometric stereo method, which estimates the surface normal at each pixel in order to compute its 3D coordinate. Unlike SLAM and SfM, which are formulated around changes in camera pose, SfS models changes in illumination. Because real-world objects exhibit unknown reflectance and global illumination effects, traditional SfS methods are effective only for limited categories of reflectance. Most recent work in this area leverages deep learning tools to improve robustness and expand capability. The survey covers data-driven methods that use neural networks to represent the reflectance model. The authors extensively study the differences and relationships among learning-based methods from the perspectives of input, network architecture, and data collection, and demonstrate their ability to approximate general reflectance where traditional hand-crafted models usually fail.
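For readers unfamiliar with the classical formulation that these learning-based methods generalize, the following is a minimal Lambertian photometric-stereo sketch in numpy: given m images of a static view under known directional lights, the per-pixel normal and albedo follow from a linear least-squares solve. The synthetic data and the Lambertian assumption are exactly the simplifications that the surveyed methods aim to move beyond.

```python
# Classical Lambertian photometric stereo: I = albedo * (L @ n) per pixel.
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray):
    """images: (m, H, W) grayscale intensities; lights: (m, 3) unit light directions.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
    m, h, w = images.shape
    I = images.reshape(m, -1)                        # (m, H*W)
    G = np.linalg.lstsq(lights, I, rcond=None)[0]    # solves lights @ G = I, G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)               # (H*W,)
    normals = G / np.maximum(albedo, 1e-8)           # normalize columns to unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Toy usage: a synthetic flat patch lit from three known directions.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), max(l @ true_n, 0.0)) for l in L])
n, rho = photometric_stereo(imgs, L)
print(np.round(n[0, 0], 3))  # approximately [0, 0, 1]
```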
Two surveys, by Wang et al. and Zhang et al., study point cloud based 3D modeling methods, where the point cloud is obtained either from a depth sensor or a large-scale laser scanner and, after a few processing steps, is registered to build a 3D model. Wang et al. review the latest advances in 3D reconstruction for LiDAR-based mobile mapping systems (MMS) and their applications to urban modeling. Zhang et al. focus on point cloud registration and study different network designs; as a comprehensive survey, it also discusses feature extraction, object extraction, semantic segmentation, and motion estimation.
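As a point of reference for the learning-based registration networks surveyed by Zhang et al., the following is a minimal classical point-to-point ICP sketch (numpy and scipy), alternating nearest-neighbor correspondence search with a closed-form SVD (Kabsch) rigid-transform estimate. The synthetic point clouds and iteration count are illustrative assumptions, not part of the surveyed methods.

```python
# Minimal point-to-point ICP: correspondences -> rigid fit -> apply -> repeat.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src (N,3) onto dst (N,3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """Aligns source (N,3) to target (M,3); returns the transformed source."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iters):
        _, idx = tree.query(current)                      # nearest-neighbor correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current

# Toy usage: recover a known small rotation and translation.
rng = np.random.default_rng(0)
target = rng.random((500, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = (target - 0.1) @ Rz.T
aligned = icp(source, target)
print(np.abs(aligned - target).max())   # small residual after alignment
```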
Domain knowledge provides strong priors for modeling specific objects and scenes, such as human hands and street buildings. Li et al. propose a neural hand reconstruction method that takes a single RGB image as input. They use a UV map as an intermediate representation of the 3D hand to reduce the complexity of the mapping between the 2D image and the 3D shape. Hu et al. discover roof structure units and their linking topology based on the visual perception rules of proximity, similarity, and continuity; they use a bottom-up merging scheme and represent the model with a hierarchical topology tree.
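As a hypothetical illustration of why a UV map simplifies the 2D-to-3D mapping (not the actual pipeline of Li et al.), the sketch below treats a network output as an H x W x 3 position map whose pixels store 3D surface coordinates; mesh vertices are then read off at the template's fixed UV locations, so shape regression reduces to an image-to-image prediction. The function name, array shapes, and the 778-vertex hand template (e.g., MANO) are assumptions for illustration.

```python
# Hypothetical UV position-map decoding: look up 3D vertex positions at template UVs.
import numpy as np

def vertices_from_position_map(pos_map: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """pos_map: (H, W, 3) predicted 3D coordinates per UV pixel.
    uv: (V, 2) template UV coordinates in [0, 1] for each mesh vertex.
    Returns (V, 3) vertex positions (nearest-pixel lookup for brevity)."""
    h, w, _ = pos_map.shape
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)  # flip v axis
    return pos_map[rows, cols]

# Toy usage with random data standing in for a network prediction.
pred = np.random.rand(64, 64, 3).astype(np.float32)
template_uv = np.random.rand(778, 2)     # e.g., 778 vertices as in the MANO hand template
verts = vertices_from_position_map(pred, template_uv)
print(verts.shape)   # (778, 3)
```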
Free-viewpoint navigation is important in VR and other 3D platforms. However, realistic novel view generation remains a difficult problem, even with the help of a 3D model. Wang et al. introduce a deep learning based method for free-viewpoint video generation that interpolates a set of synchronized video sequences.
With the advent of new sensing technologies and devices, 3D visual processing and reconstruction are called for in a wide range of applications, each with its own scene complexity and hardware cost considerations. Great challenges come with exciting opportunities. These technologies are on the verge of revolutionizing the way we communicate, entertain, and manufacture. We hope that this special issue helps bring these new technological frontiers to the research community and serves the needs of related applications.
Baoquan CHEN
June 2, 2020
