
Virtual Reality & Intelligent Hardware


2020, Vol. 2, No. 1. Publish Date: 2020-02


Article

Co-axial depth sensor with an extended depth range for AR/VR applications

DOI: 10.1016/j.vrih.2019.10.004

2020, 2(1) : 1-11

Background
A depth sensor is an essential element of virtual and augmented reality devices, digitizing the user's environment in real time. The currently popular technologies include stereo, structured light, and Time-of-Flight (ToF). The stereo and structured-light methods require a baseline separation between multiple sensors for depth sensing, and both suffer from a limited measurement range. ToF depth sensors offer the largest depth range but the lowest depth-map resolution. To overcome these problems, we propose a co-axial depth-map sensor that is potentially more compact and cost-effective than conventional structured-light depth cameras. It extends the depth range while maintaining a high depth-map resolution, and it also provides a high-resolution 2D image alongside the 3D depth map.
Methods
This depth sensor is constructed with a projection path and an imaging path, combined by a beamsplitter into a co-axial design. In the projection path, a cylindrical lens is inserted to add extra optical power in one direction, creating an astigmatic pattern. For depth measurement, the astigmatic pattern is projected onto the test scene, and the depth information is then calculated from the contrast change of the reflected pattern image in two orthogonal directions. To extend the depth measurement range, we place an electrically focus-tunable lens at the system stop and tune its power, implementing an extended depth range without compromising depth resolution.
Results
In the depth measurement simulation, we project a resolution target onto a white screen moving along the optical axis and tune the focus-tunable lens power for three depth measurement subranges: near, middle, and far. In each subrange, as the test screen moves away from the depth sensor, the horizontal contrast of the reflected image keeps increasing while the vertical contrast keeps decreasing. Therefore, the depth information can be obtained by computing the contrast ratio between features in orthogonal directions.
Conclusions
The proposed depth-map sensor can perform depth measurement over an extended depth range with a co-axial design.
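The contrast-ratio depth cue described in this abstract can be sketched as follows. This is a minimal illustration with an assumed gradient-based contrast metric and hypothetical helper names (`orientation_contrast`, `depth_cue`), not the authors' implementation:

```python
import numpy as np

def orientation_contrast(patch):
    """Mean absolute finite-difference contrast of a pattern patch in
    two orthogonal directions (an illustrative contrast metric; the
    paper's exact measure may differ)."""
    gx = np.abs(np.diff(patch, axis=1)).mean()  # horizontal contrast
    gy = np.abs(np.diff(patch, axis=0)).mean()  # vertical contrast
    return gx, gy

def depth_cue(patch, eps=1e-9):
    """Ratio of horizontal to vertical contrast. Per the abstract, as
    the scene recedes, the horizontal contrast of the reflected
    astigmatic pattern rises while the vertical contrast falls, so this
    ratio grows with depth and could be mapped to metric depth with a
    per-subrange calibration curve."""
    gx, gy = orientation_contrast(patch)
    return gx / (gy + eps)

# Synthetic reflected-pattern patches: a "near" patch dominated by
# vertical-direction detail and a "far" patch dominated by
# horizontal-direction detail.
t = np.linspace(0, 8 * np.pi, 64)
h = np.tile(np.sin(t), (64, 1))           # varies horizontally
v = np.tile(np.sin(t)[:, None], (1, 64))  # varies vertically
near_patch = 0.2 * h + 1.0 * v
far_patch = 1.0 * h + 0.2 * v

assert depth_cue(far_patch) > depth_cue(near_patch)
```

In practice, a per-subrange calibration (one curve for each setting of the focus-tunable lens) would map this dimensionless ratio to metric depth.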
A smart assistance system for cable assembly by combining wearable augmented reality with portable visual inspection

DOI: 10.1016/j.vrih.2019.12.002

2020, 2(1) : 12-27

Background
Assembly guided by paper documents is the most widespread approach in aircraft cable assembly. The process is complicated and requires highly skilled assembly workers. Wearable Augmented Reality (AR) and portable visual inspection technologies can be exploited to improve the efficiency and quality of cable assembly.
Methods
In this study, we propose a smart assistance system for cable assembly that combines wearable AR with portable visual inspection. Specifically, a portable visual device based on binocular vision and deep learning is developed to realize fast detection and recognition of the cable brackets installed on aircraft airframes. A Convolutional Neural Network (CNN) is then developed to read the text on cables from images acquired by the camera of the wearable AR device. An authoring tool is developed to create and manage the assembly process, realizing visual guidance of cable assembly on a wearable AR device. The system is applied to cable assembly on an aircraft bulkhead prototype.
Results
The results show that the system can recognize the number, types, and locations of brackets and correctly read the text on aircraft cables. The authoring tool can assist users who lack professional programming experience in establishing a process plan, i.e., an AR-based assembly outline for cable assembly.
Conclusions
The system can provide quick assembly guidance for aircraft cables with text, images, and 3D models. It helps reduce dependency on paper documents, labor intensity, and the error rate.
Virtual assembly framework for performance analysis of large optics

DOI: 10.1016/j.vrih.2020.01.001

2020, 2(1) : 28-42

Background
A longstanding technological challenge exists in the precise assembly design and performance optimization of large optics in high-power laser facilities, which combines many complex problems involving mechanics, materials, and laser-beam physics.
Methods
In this study, an augmented virtual assembly framework based on a multiphysics analysis and digital simulation is presented for the assembly optimization of large optics. This framework focuses on the fundamental impact of the structural and assembly parameters of a product on its optical performance; three-dimensional simulation technologies improve the accuracy and measurability of the impact. Intelligent iterative computation algorithms have been developed to optimize the assembly plan of large optics, which are significantly affected by a series of constraints including dynamic loads and nonlinear ambient excitations.
Results
Finally, using a 410-mm-aperture frequency converter as the study case, we present a detailed illustration and discussion to validate the performance of the proposed system in large optics assembly and installation engineering.
View Synthesis from multi-view RGB data using multi-layered representation and volumetric estimation

DOI: 10.1016/j.vrih.2019.12.001

2020, 2(1) : 43-55

Background
Aiming at free-view exploration of complicated scenes, this paper presents a method for interpolating views among multiple RGB cameras.
Methods
In this study, we combine the idea of a cost volume, which represents 3D information, with 2D semantic segmentation of the scene to accomplish view synthesis of complicated scenes. We use the cost volume to estimate the depth and confidence maps of the scene, and use a multi-layer representation and resolution of the data to optimize the view synthesis of the main object.
Results/Conclusions
By applying different treatments to different layers of the volume, we can handle complicated scenes containing multiple persons and plentiful occlusions. We also propose a view interpolation → multi-view reconstruction → view interpolation pipeline to iteratively optimize the result. We test our method on varying multi-view scene data and generate decent results.
Survey on path and view planning for UAVs

DOI: 10.1016/j.vrih.2019.12.004

2020, 2(1) : 56-69

Background
In recent decades, unmanned aerial vehicles (UAVs) have developed rapidly and been widely applied in many domains, including photography, reconstruction, monitoring, and search and rescue. In such applications, one key issue is path and view planning, which tells UAVs exactly where to fly and how to search.
Methods
With specific consideration of three popular UAV applications (scene reconstruction, environment exploration, and aerial cinematography), we present a survey that should assist researchers in positioning and evaluating their works in the context of existing solutions.
Results/Conclusions
The survey should also help newcomers and practitioners in related fields quickly gain an overview of the vast literature. In addition to the current research status, we analyze and elaborate on the advantages, disadvantages, and potential explorative trends for each application domain.
Study of ghost image suppression in polarized catadioptric virtual reality optical systems

DOI: 10.1016/j.vrih.2019.10.005

2020, 2(1) : 70-78

Background
This paper introduces a polarized catadioptric virtual reality optical system. Focusing on the serious ghost images in this system, the root causes are analyzed based on its design principles and optical structure.
Methods
The distribution of stray light is simulated using LightTools, and three major ghost paths are selected using the area of the diffuse spot, $S_d$, and the energy ratio of the stray light, $K$, as evaluation metrics. A method to suppress the ghost image is proposed that optimizes the structure of the optical system by controlling the focal power of the ghost-image path.
Results/Conclusions
The results show that $S_d$ for the ghost-image path increases by 40% and $K$ decreases by 40% after optimization. The ghost image is effectively suppressed, which provides a theoretical basis and technical support for ghost suppression in virtual reality optical systems.
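The two evaluation quantities used above can be illustrated with a short sketch. The function `ghost_metrics`, the 5% spot-detection threshold, and the synthetic irradiance map are assumptions for illustration, not the paper's actual definitions:

```python
import numpy as np

def ghost_metrics(irradiance, signal_mask, pixel_area=1.0):
    """Evaluate one ghost path from a simulated image-plane irradiance
    map (e.g., exported from a ray-tracing tool such as LightTools).
    Sd is the area of the diffuse ghost spot; K is the ratio of
    stray-light energy to total energy on the image plane."""
    stray = irradiance * ~signal_mask    # light outside the intended image
    threshold = 0.05 * irradiance.max()  # assumed spot-detection threshold
    Sd = np.count_nonzero(stray > threshold) * pixel_area
    K = stray.sum() / irradiance.sum()
    return Sd, K

# Synthetic example: a bright intended-image region plus a dim ghost spot.
img = np.zeros((100, 100))
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True   # intended image region
img[mask] = 1.0             # signal irradiance
img[5:15, 5:15] = 0.5       # ghost spot (stray light)

Sd, K = ghost_metrics(img, mask)
assert Sd == 100.0 and 0.0 < K < 1.0
```

Under this reading, a larger $S_d$ spreads the same stray energy over more area (a dimmer, less visible ghost), while a smaller $K$ means less stray energy overall, which is consistent with the reported optimization direction.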
Study on the adaptability of augmented reality smartglasses for astigmatism based on holographic waveguide grating

DOI: 10.1016/j.vrih.2019.12.003

2020, 2(1) : 79-85

Background
Augmented reality (AR) smartglasses are widely regarded as the next generation of smart devices to replace mobile phones. At present, however, AR smartglasses are usually designed for normal human vision; to experience them properly, users with abnormal vision must first wear corrective lenses.
Methods
To allow people with astigmatism to use AR smartglasses without wearing corrective lenses, a cylindrical-lens waveguide grating is designed in this study based on the principle of holographic waveguide gratings. First, a cylindrical-lens waveguide substrate is constructed to deflect external light so that users can view the real world normally. Then, a variable-period grating structure is built on the cylindrical-lens waveguide substrate to emit light from the virtual image in the optical engine normally toward the human eye. Finally, the structural parameters of the grating are optimized to improve the diffraction efficiency.
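For context, a variable-period grating of the kind described above is governed locally by the standard grating equation; this is a sketch under assumed notation ($n_i$, $n_d$ the refractive indices on the incident and diffracted sides, $\theta_i$, $\theta_m$ the corresponding angles, $\Lambda(x)$ the local grating period), not the paper's own formulation:

```latex
% m-th diffraction order at lateral position x on the waveguide:
n_d \sin\theta_m(x) - n_i \sin\theta_i = \frac{m\,\lambda}{\Lambda(x)}
```

Varying $\Lambda(x)$ across the cylindrical substrate lets the designer steer the diffracted virtual-image light toward the eye, while the substrate's cylindrical power handles the real-world view for the astigmatic eye.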
Results
The results show that the cylindrical-lens waveguide grating structure allows people with astigmatism to wear AR smartglasses directly. The total light utilization rate reaches 90% with excellent imaging uniformity; the brightness difference is less than 0.92%, and the vertical field of view is 10°.
Conclusions
This research serves as a guide for AR product designs for people with long- or short-sightedness and promotes the development of such products.