
Current Issue

2019, Vol. 1, No. 3    Publish Date: June 2019


Editorial

Human-computer interactions for virtual reality

DOI:10.3724/SP.J.2096-5796.2019.00001

2019, 1(3) : 1-2


Review

Review of studies on target acquisition in virtual reality based on the crossing paradigm

DOI:10.3724/SP.J.2096-5796.2019.0006

2019, 1(3) : 251-264

Crossing is a fundamental paradigm for target selection in human-computer interaction systems. This paradigm was first introduced to virtual reality (VR) interactions by Tu et al., who investigated its performance in comparison to pointing, and concluded that crossing is generally no less effective than pointing and has unique advantages. However, owing to the characteristics of VR interactions, there are still many factors to consider when applying crossing to a VR environment. Thus, this review summarizes the main techniques for object selection in VR and crossing-related studies. Then, factors that may affect crossing interactions are analyzed from the perspectives of the input space and visual space. The aim of this study is to provide a reference for future studies on target selection based on the crossing paradigm in virtual reality.

Article

Tactile sensitivity in ultrasonic haptics: Do different parts of the hand and different rendering methods have an impact on the perceptual threshold?

DOI:10.3724/SP.J.2096-5796.2019.0009

2019, 1(3) : 265-275

Background
Ultrasonic tactile representation utilizes focused ultrasound to create tactile sensations on the bare skin of a user’s hand that is not in contact with a device. This study is a preliminary investigation on whether different ultrasonic haptic rendering methods have an impact on the perceptual threshold.
Methods
We conducted experiments using the adaptive step method to obtain participants' perceptual thresholds. We examined (1) whether different parts of the hand have different perceptual thresholds; (2) whether the perceptual threshold differs when the ultrasonic focus point is stationary versus moving along different trajectories; (3) whether different moving speeds of the ultrasonic focus point influence the perceptual threshold; and (4) whether adding a DC offset to the modulating wave has an impact on the perceptual threshold.
Results
The results show that the center of the palm is more sensitive to ultrasonic haptics than the fingertip; compared with a fast-moving focus point, the palm is more sensitive to a stationary or slow-moving focus point. When the modulating wave has a DC offset, the palm is sensitive to a much smaller modulation amplitude.
Conclusion
For future ultrasonic tactile representation systems, dynamic intensity adjustment is required to compensate for the differences in perceptual thresholds under different rendering methods and achieve more realistic ultrasonic haptics.
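As an illustration of the "adaptive step method" mentioned above, the following is a minimal sketch of a 1-up/1-down staircase procedure of the kind commonly used to estimate perceptual thresholds; the function names, parameters, and the simulated observer are assumptions for illustration, not the study's actual protocol.

```python
# Minimal 1-up/1-down staircase sketch (hypothetical, for illustration):
# lower the stimulus amplitude after each detected trial, raise it after
# each miss, and estimate the threshold from the reversal points.
def staircase(respond, start=1.0, step=0.1, max_reversals=8):
    """Estimate a perceptual threshold.

    respond(amplitude) -> True if the participant feels the stimulus.
    Returns the mean amplitude over the collected reversal points.
    """
    amplitude = start
    last_direction = None  # +1 = raising, -1 = lowering
    reversals = []
    while len(reversals) < max_reversals:
        direction = -1 if respond(amplitude) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(amplitude)  # direction flipped: a reversal
        last_direction = direction
        amplitude = max(step, amplitude + direction * step)
    return sum(reversals) / len(reversals)

# Simulated deterministic observer with a true threshold of 0.45
# (illustrative only); the staircase converges near that value.
est = staircase(lambda a: a >= 0.45)
```

Real experiments would use a probabilistic observer and a transformed (e.g., 2-down/1-up) rule, but the reversal-averaging logic is the same.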
Gesture-based target acquisition in virtual and augmented reality

DOI:10.3724/SP.J.2096-5796.2019.0007

2019, 1(3) : 276-289

Background
Gesture is a basic interaction channel that humans frequently use to communicate in daily life. In this paper, we explore gesture-based approaches for target acquisition in virtual and augmented reality. A typical process of gesture-based target acquisition is as follows: when a user intends to acquire a target, she performs a gesture with her hands, head, or other body parts; the computer senses and recognizes the gesture and infers the most likely target.
Methods
We build a mental model and a behavior model of the user to study two key parts of the interaction process. The mental model describes how the user thinks up a gesture for acquiring a target, and can be viewed as the intuitive mapping between gestures and targets. The behavior model describes how the user moves body parts to perform the gesture, and the relationship between the gesture the user intends to perform and the signals the computer senses.
Results
In this paper, we present and discuss three pieces of research that focus on the mental model and behavior model of gesture-based target acquisition in VR and AR.
Conclusions
We show that by leveraging these two models, interaction experience and performance can be improved in VR and AR environments.
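The abstract above describes inferring "the most likely target" from a sensed gesture. A minimal sketch of one common way to do this (not the authors' method) is to rank candidate targets by their angular deviation from the user's pointing ray and pick the smallest; the function and its signature are assumptions for illustration.

```python
import math

# Hypothetical sketch: pick the candidate target whose direction from the
# ray origin deviates least in angle from the sensed pointing direction.
def infer_target(origin, direction, targets):
    """Return the target best aligned with the ray (origin, direction).

    origin, direction, targets: 3D vectors as tuples; direction need not
    be normalized.
    """
    def angle_to(target):
        v = tuple(target[i] - origin[i] for i in range(3))
        dot = sum(v[i] * direction[i] for i in range(3))
        nv = math.sqrt(sum(c * c for c in v)) or 1e-9
        nd = math.sqrt(sum(c * c for c in direction)) or 1e-9
        # Clamp to [-1, 1] to guard against floating-point drift.
        return math.acos(max(-1.0, min(1.0, dot / (nv * nd))))
    return min(targets, key=angle_to)

# Usage: a ray pointing mostly along +x selects the target on the x axis.
chosen = infer_target((0.0, 0.0, 0.0), (0.9, 0.1, 0.0),
                      [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Practical systems weight this angular score with priors such as target size or interaction history, which is where the mental and behavior models discussed above come in.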
Virtual fire drill system supporting co-located collaboration

DOI:10.3724/SP.J.2096-5796.2019.0012

2019, 1(3) : 290-302

Background
Owing to display-mode restrictions, most virtual reality systems that place multiple people in the same physical space render the scene from the position and perspective of a single user, so the other users see that same scene, resulting in visual discomfort.
Methods
To improve the experience of multi-user co-located collaboration, we propose a fire drill system supporting co-located collaboration, in which three co-located users collaborate to complete a virtual firefighting mission. First, with multi-view stereoscopic projective display technology and ultra-wideband (UWB) positioning, co-located users can roam independently and, by wearing dedicated shutter glasses, watch the virtual scene from the correct perspective for their own position, thus carrying out different virtual tasks; this improves the flexibility of co-located collaboration. Second, we design a simulated firefighting water gun based on a micro-electromechanical system (MEMS) sensor, through which users can interact with the virtual environment, providing a better interactive experience. Finally, we develop a workbench comprising a holographic display module and a multi-touch operation module for virtual scene assembly and virtual environment control.
Results
The controller can use the workbench to adjust the virtual layout in real time and control the virtual task process, increasing the flexibility and playability of the system.
Conclusions
Our work can be employed in a wide range of related virtual reality applications.
Influence of multi-modality on moving target selection in virtual reality

DOI:10.3724/SP.J.2096-5796.2019.0013

2019, 1(3) : 303-315

Background
Owing to recent advances in virtual reality (VR) technologies, effective user interaction with dynamic content in 3D scenes has become a research hotspot. Moving target selection is a basic interactive task, and research on user performance in such tasks is significant for user interface design in VR. Unlike existing studies on static target selection, moving target selection in VR is affected by changes in target speed, angle, and size, and research on some key factors is still lacking.
Methods
This study designs an experimental scenario in which users play badminton in VR. By varying seven modality-cue conditions (visual, auditory, haptic, and their combinations), five moving speeds, and four serving angles, we study the effect of these factors on user performance and subjective feelings during moving target selection in VR.
Results
The results show that the moving speed of the shuttlecock has a significant impact on user performance. The serving angle has a significant impact on the hitting rate, but not on the hitting distance. Under combined modalities, user performance in moving-target acquisition is mainly influenced by vision, and adding further modalities can improve performance. Although the hitting distance increases in the trimodal condition, the hitting rate decreases.
Conclusion
This study analyses the results on user performance and subjective perception, and then provides suggestions on combining modality cues in different scenarios.
Swarm robotics platform for intelligent interaction

DOI:10.3724/SP.J.2096-5796.2019.0019

2019, 1(3) : 316-329

Background
This paper introduces a versatile edutainment platform based on a swarm robotics system that supports multiple interaction methods. We aim to create a reusable, open-ended tangible tool for a variety of educational and entertainment scenarios by utilizing the unique advantages of swarm robots, such as flexible mobility, mutual perception, and free control of the number of robots.
Methods
Compared with conventional tangible user interfaces, the swarm user interface (SUI) offers more flexible locomotion and more controllable widgets. However, research on SUIs is still limited to system construction; higher-level interaction modes and compelling applications have not been sufficiently studied.
Results
This study illustrates possible interaction modes for swarm robotics and feasible application scenarios based on these fundamental interaction modes. We also discuss the implementation of the swarm robotics system (both software and hardware), and then design several simple experiments to verify its location accuracy.
Trajectory prediction model for crossing-based target selection

DOI:10.3724/SP.J.2096-5796.2019.0017

2019, 1(3) : 330-340

Background
Crossing-based target selection may attain lower error rates and higher interaction speed in some cases. Most research in the target selection field focuses on analyzing interaction results. Moreover, as trajectories play a much more important role in crossing-based target selection than in other interaction techniques, an ideal trajectory model can help designers predict interaction results during the target selection process rather than only at its end.
Methods
In this paper, a trajectory prediction model for crossing-based target selection tasks is proposed with reference to a dynamic model theory.
Results
Simulation results demonstrate that our model performs well in predicting trajectories, endpoints, and hitting time for target-selection motion; the average errors for trajectories, endpoints, and hitting time were 17.28%, 2.73mm, and 11.50%, respectively.
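The abstract refers to "a dynamic model theory" for pointing trajectories. As an illustration of that family of models (not the paper's actual model), the classic minimum-jerk profile of Flash and Hogan predicts a smooth point-to-point trajectory from only the start point, end point, and movement duration:

```python
# Minimum-jerk position profile (Flash & Hogan), shown here only as a
# well-known example of a dynamic trajectory model -- NOT the model
# proposed in the paper above.
def minimum_jerk(x0, xf, T, t):
    """Position at time t (0 <= t <= T) of a minimum-jerk movement
    from x0 to xf completed in duration T."""
    tau = t / T  # normalized time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Usage: the movement starts at x0, ends at xf, and passes through the
# spatial midpoint at the temporal midpoint (the profile is symmetric).
start = minimum_jerk(0.0, 1.0, 1.0, 0.0)
mid = minimum_jerk(0.0, 1.0, 1.0, 0.5)
end = minimum_jerk(0.0, 1.0, 1.0, 1.0)
```

A trajectory prediction model of the kind the paper proposes would fit such a dynamic profile to the partially observed movement and extrapolate the endpoint and hitting time before the motion completes.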