
2020, Vol. 2, No. 6. Publish Date: 2020-12


Review

A survey on monocular 3D human pose estimation

2020, 2(6) : 471-500

DOI:10.1016/j.vrih.2020.04.005

Recovering human pose from RGB images and videos has drawn increasing attention in recent years owing to minimal sensor requirements and applicability in diverse fields such as human-computer interaction, robotics, video analytics, and augmented reality. Although a large amount of work has been devoted to this field, 3D human pose estimation based on monocular images or videos remains a very challenging task due to difficulties such as depth ambiguity, occlusion, background clutter, and lack of training data. In this survey, we summarize recent advances in monocular 3D human pose estimation. We provide a general taxonomy covering existing approaches and analyze their capabilities and limitations. We also present a summary of extensively used datasets and metrics, and provide a quantitative comparison of some representative methods. Finally, we conclude with a discussion of realistic challenges and open problems for future research directions.
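Quantitative comparisons in this literature commonly report the mean per-joint position error (MPJPE), usually in millimeters and often after root-joint alignment, as in Human3.6M-style protocols. As a minimal sketch of that metric (the function names and the 17-joint layout below are illustrative assumptions, not taken from the survey):

import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error: Euclidean distance per joint,
    # averaged over joints and frames. pred, gt: (n_frames, n_joints, 3).
    return np.linalg.norm(pred - gt, axis=-1).mean()

def root_aligned_mpjpe(pred, gt, root=0):
    # Translate both skeletons so the root joint (e.g., the pelvis)
    # sits at the origin before measuring, as in common protocols.
    return mpjpe(pred - pred[:, root:root + 1], gt - gt[:, root:root + 1])

# Hypothetical example: 10 frames of a 17-joint skeleton.
rng = np.random.default_rng(0)
gt = rng.normal(size=(10, 17, 3))
pred = gt + rng.normal(scale=0.01, size=gt.shape)
print(f"root-aligned MPJPE: {root_aligned_mpjpe(pred, gt):.4f}")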

Article

Object registration using an RGB-D camera for complex product augmented assembly guidance

2020, 2(6) : 501-517

DOI:10.1016/j.vrih.2020.01.004

Background
Augmented assembly guidance aims to help users complete assembly operations more efficiently through augmented reality technology, overcoming the limitations of traditional assembly guidance, which is limited in content and monotonous in presentation. Object registration is one of the key technologies in the augmented assembly guidance process; it determines the position and orientation of virtual assembly guidance information in the real assembly environment.
Methods
This paper presents an object registration method based on an RGB-D camera, which combines the Lucas-Kanade (LK) optical flow algorithm with the Iterative Closest Point (ICP) algorithm. An augmented assembly guidance system for complex products is built using this method. Meanwhile, to evaluate the effectiveness of the proposed method, we also implemented object registration based on the augmented reality SDK Vuforia.
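As a rough illustration of how these two named ingredients are commonly combined (a hedged sketch, not the authors' implementation; the helper names and thresholds are assumptions), LK optical flow can track 2D features between RGB frames while ICP refines the object pose against the depth-derived point cloud:

import cv2
import open3d as o3d

def track_lk(prev_gray, curr_gray, prev_pts):
    # Pyramidal Lucas-Kanade optical flow: follow feature points
    # (float32, shape (N, 1, 2)) from the previous frame to the current one.
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

def refine_pose_icp(model_pcd, scene_pcd, init_pose):
    # Point-to-point ICP refines a coarse pose of the object model
    # against the point cloud reconstructed from the RGB-D depth map.
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scene_pcd,
        max_correspondence_distance=0.02,  # meters; illustrative value
        init=init_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation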
Results
An engine model and a complex weapon cabin are taken as cases to verify this work. The results show that the proposed registration method is more accurate and stable than the Vuforia-based one, and that the augmented assembly guidance system built on it considerably reduces users' assembly time compared with traditional assembly.
Conclusions
Therefore, we conclude that the proposed object registration method can be applied effectively in augmented assembly guidance systems and can considerably enhance assembly efficiency.
A multichannel human-swarm robot interaction system in augmented reality

2020, 2(6) : 518-533

DOI:10.1016/j.vrih.2020.05.006

Background
The growing number of robots has put forward new requirements for human-robot interaction. One of the problems in human-swarm robot interaction is how to achieve natural, efficient, and accurate interaction between humans and swarm robot systems. To address this, this paper proposes a new type of natural human-swarm interaction system.
Methods
Through the cooperation between a three-dimensional (3D) gesture interaction channel and a natural language instruction channel, natural and efficient interaction between a human and swarm robots is achieved.
Results
First, a 3D lasso technique realizes batch selection of swarm robots through oriented bounding boxes. Second, control instruction labels for swarm-oriented robots are defined; each label is integrated with the 3D gesture and natural language channels through instruction-label filling. Finally, natural language instructions are understood by a text classifier based on the maximum entropy model. A head-mounted augmented reality display device is used as the visual feedback channel.
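A maximum entropy text classifier is equivalent to multinomial logistic regression over text features, so a minimal sketch of such an instruction classifier can be built with scikit-learn (the utterances and label set below are hypothetical placeholders, not the paper's data):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical command utterances with instruction labels.
utterances = [
    "move the selected robots forward",
    "stop all robots",
    "form a line on the left side",
    "return to the charging station",
]
labels = ["MOVE", "STOP", "FORMATION", "RETURN"]

# Multinomial logistic regression on bag-of-words features is the
# standard realization of a maximum entropy text classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)
print(clf.predict(["all robots stop now"]))  # expected: ['STOP']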
Conclusions
Experiments on robot selection verify the feasibility and usability of the system.
Affine transformation of virtual 3D object using 2D localization of fingertips

2020, 2(6) : 534-555

DOI:10.1016/j.vrih.2020.10.001

Background
Interactions with virtual 3D objects in the virtual reality (VR) environment, using finger gestures captured by a wearable 2D camera, have emerging real-life applications.
Method
This paper presents a two-stage convolutional neural network approach: one stage for hand detection and another for fingertip detection. In the VR environment, a virtual 3D object is transformed with affine parameters using the gesture of the thumb and index finger.
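A two-point pinch gesture determines a 2D similarity transform directly: the fingertip distance gives scale, the fingertip-vector angle gives rotation, and the midpoint shift gives translation. The sketch below is an illustrative reconstruction of that idea, not the paper's exact parameterization:

import numpy as np

def transform_from_fingertips(ref_thumb, ref_index, cur_thumb, cur_index):
    # Estimate scale, rotation (radians), and translation from thumb/index
    # fingertip positions (x, y) in a reference frame and the current frame.
    ref_thumb, ref_index = np.asarray(ref_thumb, float), np.asarray(ref_index, float)
    cur_thumb, cur_index = np.asarray(cur_thumb, float), np.asarray(cur_index, float)

    ref_vec, cur_vec = ref_index - ref_thumb, cur_index - cur_thumb
    scale = np.linalg.norm(cur_vec) / np.linalg.norm(ref_vec)  # pinch ratio
    rotation = np.arctan2(cur_vec[1], cur_vec[0]) - np.arctan2(ref_vec[1], ref_vec[0])
    translation = (cur_thumb + cur_index) / 2 - (ref_thumb + ref_index) / 2
    return scale, rotation, translation

s, r, t = transform_from_fingertips((100, 200), (180, 200), (90, 195), (210, 220))
print(f"scale={s:.2f}, rotation={np.degrees(r):.1f} deg, translation={t}")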
Results
To evaluate the performance of the proposed system, one existing egocentric fingertip database and another developed for this study are employed, so that learning involves the large variations common in real life. Experimental results show that the proposed fingertip detection system outperforms existing systems in detection precision.
Conclusion
The interaction performance of the proposed system in the VR environment is higher than that of existing systems in terms of estimation error and the correlation between the ground-truth and estimated affine parameters.
Virtual 3D environment for exploring the spatial ability of students

2020, 2(6) : 556-568

DOI:10.1016/j.vrih.2020.08.001

Background
Spatial ability is a unique type of intelligence; it can be distinguished from other forms of intelligence and plays an essential role in an individual's success in many academic fields, particularly in this era of technology. Instruction-assisted 3D technology can display stereo graphics and promote students' understanding of the geometric structure and characteristics of graphics. Spatial ability comprises several aspects, yet few software programs are available for training its different aspects in senior high school students. This study aims to explore an effective method for training the spatial ability of senior high school students and to promote the development of their independent inquiry ability.
Methods
First, an inquiry design strategy for improving students' spatial ability is proposed. Based on this strategy, Unity3D was used to develop a 3D inquiry environment in which Leap Motion enables gesture interaction. Finally, the researchers carried out experience-based activities and issued user-experience questionnaires to verify the effect of the spatial ability inquiry environment, and used interviews to understand participants' experience of exploring the 3D inquiry environment with the Leap Motion device.
Results
Thirty-two learners participated in the experiment. Learners gave high scores for perceived usefulness and willingness to use. Compared with perceived ease of use and perceived usefulness, the average score for the application effect was relatively low. In terms of willingness to use, most learners expressed their willingness to use a similar inquiry environment for spatial ability training in the future.
Conclusions
The spatial ability inquiry environment can help learners better understand different concepts, and users showed a strong willingness to continue using it. The environment also updates the teaching approach to a certain extent and emphasizes the student's central role.
Effects of virtual-real fusion on immersion, presence, and learning performance in laboratory education

2020, 2(6) : 569-584

DOI:10.1016/j.vrih.2020.07.010

Background
Virtual-real fusion techniques have become increasingly popular in recent years, and several previous studies have applied them to virtual reality (VR) laboratory education. However, without a basis for evaluating the effects of virtual-real fusion on VR education, many developers have chosen to abandon this expensive and complex set of techniques.
Methods
In this study, we experimentally investigate the effects of virtual-real fusion on immersion, presence, and learning performance. Each participant was randomly assigned to one of three conditions: a PC environment (PCE) operated with a mouse; a VR environment (VRE) operated with controllers; or a VR environment running virtual-real fusion (VR-VRFE), operated with real hands.
Results
The analysis of variance (ANOVA) and t-test results for presence and self-efficacy show significant differences for the PCE*VR-VRFE condition pair. Furthermore, the results show significant differences in the intrinsic value of learning performance for the PCE*VR-VRFE and VRE*VR-VRFE pairs, and a marginally significant difference was found for immersion.
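As a sketch of that analysis pattern (the scores below are synthetic placeholders, not the study's data), a one-way ANOVA across the three conditions can be followed by a pairwise t-test such as the PCE*VR-VRFE comparison:

import numpy as np
from scipy import stats

# Synthetic placeholder presence scores for the three conditions.
rng = np.random.default_rng(1)
pce = rng.normal(4.0, 0.8, 20)      # PC environment
vre = rng.normal(5.0, 0.8, 20)      # VR environment
vr_vrfe = rng.normal(5.5, 0.8, 20)  # VR with virtual-real fusion

# One-way ANOVA across all three conditions...
f_stat, p_anova = stats.f_oneway(pce, vre, vr_vrfe)
# ...then an independent-samples t-test on the PCE*VR-VRFE pair.
t_stat, p_pair = stats.ttest_ind(pce, vr_vrfe)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"PCE*VR-VRFE t-test: t={t_stat:.2f}, p={p_pair:.4f}")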
Conclusions
The results suggest that virtual-real fusion can offer improved immersion, presence, and self-efficacy compared to traditional PC environments, as well as a better intrinsic value of learning performance compared to both PC and VR environments. The results also suggest that virtual-real fusion offers a lower sense of presence compared to traditional VR environments.