
Article In Press

Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality


Available Online: 2019-05-20

Although VSLAM/VISLAM has achieved great success, it remains difficult to quantitatively evaluate the localization results of different SLAM systems from the perspective of augmented reality, due to the lack of an appropriate benchmark. In practical AR applications, a variety of challenging situations (e.g., fast motion, strong rotation, severe motion blur, dynamic interference) are easily encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, for a good AR experience, the frequency of camera tracking loss should be minimized, and recovery from the failure status should be fast and accurate. Existing SLAM datasets/benchmarks generally only evaluate pose accuracy, and their camera motions are rather simple and do not fit the common cases of mobile AR applications well. With the above motivation, we build a new visual-inertial dataset, together with a series of evaluation criteria for AR. We also review existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select eight representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and the corresponding evaluation tools are available at the benchmark website.
Gesture-based target acquisition in virtual and augmented reality


Available Online: 2019-05-15

Background: Gesture is a basic interaction channel that humans frequently use to communicate in daily life. In this paper, we explore gesture-based approaches for target acquisition in virtual and augmented reality. A typical process of gesture-based target acquisition is as follows: when a user intends to acquire a target, she performs a gesture with her hands, head, or other parts of the body; the computer senses and recognizes the gesture and infers the most probable target.
We build a mental model and a behavior model of the user to study two key parts of this interaction process. The mental model describes how a user thinks up a gesture for acquiring a target, and can be regarded as the intuitive mapping between gestures and targets. The behavior model describes how the user moves body parts to perform the gesture, i.e., the relationship between the gesture the user intends to perform and the signals the computer senses.
In this paper, we present and discuss three pieces of research that focus on the mental model and behavior model of gesture-based target acquisition in VR and AR.
We show that, by leveraging these two models, interaction experience and performance can be improved in VR and AR environments.
Overview of 3D scene viewpoints


Available Online: 2019-03-26


Research on three-dimensional (3D) scene viewpoints is a frontier problem in computer graphics and virtual reality technology. It has broad applications in virtual scene understanding, image-based scene modeling, and visualization computing. With the development of computer graphics and the vigorous growth of graphics data, analyzing the optimal viewpoints of complex scenes becomes increasingly important. High-quality viewpoints of a 3D scene can guide observers to comprehensively understand the geometric information and features of the virtual environment, to discover the hidden relations among hierarchical components, and to explore the 3D scene more efficiently. Such viewpoints are also very important for simplifying global-illumination and visualization computation in mesh optimization and rendering. This research has a wide range of applications, including 3D model retrieval, visual feature extraction, user attention analysis, geometric information analysis, ray tracing, molecular visualization, and intelligent scene computing.