
2019, Volume 1, Issue 1 (Published: February 2019)


Editorial

On the launch of Virtual Reality & Intelligent Hardware

DOI:10.3724/SP.J.2096-5796.2018.0000

2019, 1(1) : 1-1


Research Article

Design of finger gestures for locomotion in virtual reality

DOI:10.3724/SP.J.2096-5796.2018.0007

2019, 1(1) : 1-9

Background
Within a virtual environment (VE), the control of locomotion (e.g., self-travel) is critical for creating a realistic and functional experience. When using a head-mounted display (HMD), the direction of locomotion is usually determined by the direction the head is pointing, while forward or backward motion is controlled with a handheld controller. However, handheld devices can be difficult to use while the eyes are covered by an HMD. Freehand gestures, tracked with a camera or a data glove, have the advantage of eliminating the need to look at a hand controller, but the design of hand or finger gestures for this purpose has not been well developed.
Methods
This study used a depth-sensing camera to track fingertip location (curling and straightening the fingers), which was converted into forward or backward self-travel in the VE. Fingertip position was mapped to self-travel velocity using a function with three parameters: a region of zero velocity (dead zone) around the relaxed hand position, a linear relationship between fingertip position and velocity (slope, or β) beginning at the edge of the dead zone, and an exponent that makes the position-to-velocity mapping nonlinear rather than linear. Using an HMD, participants moved forward along a virtual road and stopped at a target on the road by controlling self-travel velocity with finger flexion and extension. Each of the three mapping-function parameters was tested at three levels. Outcomes measured included usability ratings, fatigue, nausea, and time to complete the tasks.
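For illustration, a minimal sketch of the three-parameter position-to-velocity mapping described in the Methods is shown below. The function name and the default parameter values are assumptions made for this sketch, not the values tested in the study.

```python
import math

def fingertip_to_velocity(displacement, dead_zone=0.01, beta=1.0, exponent=1.0):
    """Map fingertip displacement from the relaxed hand position (meters)
    to self-travel velocity in the VE (meters/second).

    Positive displacement (finger extension) drives forward travel;
    negative displacement (finger curling) drives backward travel.
    """
    magnitude = abs(displacement)
    if magnitude <= dead_zone:
        return 0.0  # inside the dead zone: no self-travel
    # Only the distance beyond the dead-zone edge contributes to velocity.
    effective = magnitude - dead_zone
    speed = beta * effective ** exponent
    return math.copysign(speed, displacement)

# Illustrative comparison of a linear (exponent = 1.0) and an expansive
# (exponent = 1.5) mapping for a 3 cm fingertip extension.
print(fingertip_to_velocity(0.03, exponent=1.0))
print(fingertip_to_velocity(0.03, exponent=1.5))
```

With an exponent above 1.0, excursions just beyond the dead zone produce comparatively low velocities, giving finer low-speed control; this is one plausible reading of the preference for exponents of 1.0 or greater reported in the Results.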
Results
Twenty subjects participated, but five did not complete the study due to nausea. The size of the dead zone had little effect on performance or usability. Subjects preferred lower β values, which were associated with better subjective ratings of control and reduced time to complete the task, especially for large targets. Exponent values of 1.0 or greater were preferred and reduced the time to complete the task, especially for small targets.
Conclusions
Small finger movements can be used to control the velocity of self-travel in a VE. The functions used to convert fingertip position to movement velocity influence both usability and performance.

Review

Prospects and challenges in augmented reality displays

DOI:10.3724/SP.J.2096-5796.2018.0009

2019, 1(1) : 10-20

Augmented reality (AR) displays are attracting significant attention and effort. In this paper, we review the adopted device configurations of see-through displays, summarize the current development status, and highlight future challenges in micro-displays. A brief introduction to optical gratings is presented to help readers understand the challenging design of grating-based waveguides for AR displays. Finally, we discuss the most recent progress in diffraction gratings and its implications.
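As general background for the grating-based waveguides mentioned above, the textbook grating equation (a standard relation, not taken from this paper) links the grating period d, the wavelength λ, and the incidence and diffraction angles for order m, with angles measured under a consistent sign convention:

```latex
% General diffraction-grating relation (textbook form):
% a grating of period d diffracts light of wavelength \lambda incident
% at angle \theta_i into order m at angle \theta_m.
d\left(\sin\theta_m - \sin\theta_i\right) = m\,\lambda
```

In a diffractive waveguide combiner, the in-coupling grating period is chosen so that the diffracted order propagates beyond the critical angle and is trapped inside the waveguide by total internal reflection.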
Data fusion methods in multimodal human computer dialog

DOI:10.3724/SP.J.2096-5796.2018.0010

2019, 1(1) : 21-38

In multimodal human-computer dialog, non-verbal channels such as facial expression, posture, and gesture, combined with spoken information, are also important in the dialog process. Despite the high performance of single-channel behavior computing, accurately understanding users' intentions from their multimodal behaviors remains a great challenge. One reason is that multimodal information fusion still needs improvement in theory, methodology, and practical systems. This paper presents a review of data fusion methods in multimodal human-computer dialog. We first introduce the cognitive assumptions of single-channel processing and then discuss their implementation in human-computer dialog; for the task of multimodal information fusion, several computing models are presented after an introduction to the principles of multimodal data fusion. Finally, practical examples of multimodal information fusion methods are introduced, and possible important breakthroughs of data fusion methods in future multimodal human-computer interaction applications are discussed.
A review on image-based rendering

DOI:10.3724/SP.J.2096-5796.2018.0004

2019, 1(1) : 39-54

Image-based rendering is important in both computer graphics and computer vision, and it is widely used in virtual reality technology. For more than two decades, extensive research has been conducted on image-based rendering, and the resulting methods can be divided into two categories according to whether geometric information about the scene is utilized. Following this classification, we introduce some classical methods as well as representative methods proposed in recent years. We also compare and analyze the basic principles, advantages, and disadvantages of the different methods. Finally, some suggestions are given for future research directions in image-based rendering techniques.
A survey on image and video stitching

DOI:10.3724/SP.J.2096-5796.2018.0008

2019, 1(1) : 55-83

Image/video stitching is a technology for overcoming the field-of-view (FOV) limitation of images/videos. It stitches multiple overlapping images/videos to generate a wide-FOV image/video, and has been used in various fields such as sports broadcasting, video surveillance, street view, and entertainment. This survey reviews image/video stitching algorithms, with a particular focus on those developed in recent years. Image stitching first computes the correspondences between multiple overlapping images, deforms and aligns the matched images, and then blends the aligned images to generate a wide-FOV image. Seamless stitching methods are commonly adopted to eliminate potential flaws such as ghosting and blurring caused by parallax or by objects moving across the overlapping regions. Video stitching is a further extension of image stitching. It usually applies image stitching algorithms to selected frames of the original videos to generate a stitching template, and subsequent frames are then stitched according to that template. Video stitching is more complicated when there are moving objects or severe camera movement, because these factors introduce jitter, shakiness, ghosting, and blurring. Foreground detection techniques are usually incorporated into stitching to eliminate ghosting and blurring, while video stabilization algorithms are adopted to address jitter and shakiness. This paper further discusses panoramic stitching as a special extension of image/video stitching; panoramic stitching is currently the most widely used application of stitching. This survey reviews the latest image/video stitching methods and introduces the fundamental principles, advantages, and weaknesses of image/video stitching algorithms. Image/video stitching faces long-standing challenges such as wide baselines, large parallax, and low texture in the overlapping region. New technologies, such as deep learning-based semantic correspondence and 3D image stitching, may present new opportunities to address these issues. Finally, this survey discusses the challenges of image/video stitching and proposes potential solutions.
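As a rough illustration of the basic image-stitching pipeline summarized above (correspondence estimation, alignment, and compositing), the sketch below uses OpenCV feature matching and homography estimation for two overlapping images. The file names are placeholders, and the seam finding and blending steps that the survey describes for removing ghosting and visible seams are omitted for brevity.

```python
import cv2
import numpy as np

# Minimal two-image stitching sketch: detect features, match them, estimate a
# homography, then warp and paste. "left.jpg"/"right.jpg" are placeholder names.
left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatches caused by parallax or moving objects.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right image into the left image's frame and overlay the left image.
h, w = left.shape[:2]
pano = cv2.warpPerspective(right, H, (w * 2, h))
pano[0:h, 0:w] = left
cv2.imwrite("panorama.jpg", pano)
```

A full stitching system would extend this with bundle adjustment over many images, seam-cut optimization, and multi-band blending; OpenCV's higher-level cv2.Stitcher class wraps such a pipeline.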
Gesture interaction in virtual reality

DOI:10.3724/SP.J.2096-5796.2018.0006

2019, 1(1) : 84-112

With the development of virtual reality (VR) and human-computer interaction technology, how to use natural and efficient interaction methods in virtual environments has become a popular research topic. Gesture is one of the most important means of human communication and can effectively express users' demands. Over the past few decades, gesture-based interaction has made significant progress. This article focuses on gesture interaction technology, discussing the definition and classification of gestures, input devices for gesture interaction, and gesture recognition technology. The application of gesture interaction technology in virtual reality is examined, existing problems in current gesture interaction are summarized, and future developments are discussed.
Application of augmented reality in craniomaxillofacial plastic surgery

DOI:10.3724/SP.J.2096-5796.2018.0002

2019, 1(1) : 113-120

Augmented reality (AR), a field that has grown out of virtual reality (VR), has received considerable attention from industry and research institutions in recent years and has developed rapidly. By combining real and virtual content through three-dimensional registration, it has been widely applied in many fields. This paper comprehensively reviews recent applications of AR technology in craniomaxillofacial plastic surgery. It introduces domestic and international applications of VR and AR technology in surgery from two perspectives, preoperative application and intraoperative navigation, and describes in detail the key technologies involved in applying AR, including display, registration, and interaction technologies, covering the major technical aspects of AR. Finally, it discusses the current difficulties in applying AR technology to surgery and offers an outlook on future development trends of AR in surgical applications.