
Human-computer interaction

Human-computer interaction is an important subject in the development and popularization of information technologies: it is not only a frontier technology in computer science but also a key enabling technology for virtual reality (VR). In recent years, Chinese researchers have made significant advances in human-computer interaction. To systematically present China's latest advances in this area and thereby provide an impetus for the development of VR and other related fields, we solicited articles for this special issue from experts in the field, who also participated in the review process. The following articles have been selected for publication in this special issue.

Human-computer interactions for virtual reality

2019, 1(3): 1-2




Review of studies on target acquisition in virtual reality based on the crossing paradigm

2019, 1(3): 251-264


Crossing is a fundamental paradigm for target selection in human-computer interaction systems. It was first introduced to virtual reality (VR) interaction by Tu et al., who compared its performance with that of pointing and concluded that crossing is generally no less effective than pointing and has unique advantages. However, owing to the characteristics of VR interaction, many factors must still be considered when applying crossing in a VR environment. This review therefore summarizes the main object-selection techniques in VR and the crossing-related studies, and then analyzes the factors that may affect crossing interactions from the perspectives of the input space and the visual space. The aim is to provide a reference for future studies on crossing-based target selection in VR.


Tactile sensitivity in ultrasonic haptics: Do different parts of hand and different rendering methods have an impact on perceptual threshold?

2019, 1(3): 265-275


Ultrasonic tactile representation uses focused ultrasound to create tactile sensations on the bare skin of a user's hand without any contact with a device. This study is a preliminary investigation of whether different ultrasonic haptic rendering methods affect the perceptual threshold.
We conducted experiments using an adaptive step (staircase) method to obtain participants' perceptual thresholds, examining (1) whether different parts of the palm have different perceptual thresholds; (2) whether the perceptual threshold differs between a stationary ultrasonic focus point and one moving along different trajectories; (3) whether the moving speed of the focus point influences the threshold; and (4) whether adding a DC offset to the modulating wave affects the threshold.
The results show that the center of the palm is more sensitive to ultrasonic haptics than the fingertip, and that the palm is more sensitive to a stationary or slowly moving focus point than to a fast-moving one. When the modulating wave has a DC offset, the palm is sensitive to a much smaller modulation amplitude.
Future ultrasonic tactile representation systems should therefore adjust intensity dynamically to compensate for the differences in perceptual threshold across rendering methods and thus achieve more realistic ultrasonic haptics.
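The abstract does not specify the adaptive step procedure in detail; a common form is a one-up/one-down staircase that lowers the stimulus level after each detection and raises it after each miss, estimating the threshold from the reversal points. A minimal sketch, with an illustrative simulated observer (the starting level, step size, and `true_threshold` are assumptions, not the paper's parameters):

```python
import statistics

def staircase(respond, start=1.0, step=0.1, reversals_needed=8):
    """One-up/one-down adaptive staircase.

    `respond(level)` returns True if the stimulus was detected.
    The level decreases after a detection and increases after a miss;
    the threshold estimate is the mean level at the reversal points.
    """
    level, going_down = start, None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        direction = respond(level)  # True -> step down next, False -> up
        if going_down is not None and direction != going_down:
            reversal_levels.append(level)  # staircase changed direction
        going_down = direction
        level += -step if direction else step
    return statistics.mean(reversal_levels)

# Simulated deterministic observer with a known threshold (illustrative).
true_threshold = 0.42
estimate = staircase(lambda lvl: lvl >= true_threshold)
```

The staircase oscillates around the observer's threshold, so the reversal mean lands close to 0.42; real experiments add randomized trial order and noisy responses on top of this skeleton.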
Gesture-based target acquisition in virtual and augmented reality

2019, 1(3): 276-289


Gesture is a basic interaction channel that humans frequently use to communicate in daily life. In this paper, we explore gesture-based approaches to target acquisition in virtual and augmented reality. A typical gesture-based acquisition process works as follows: when a user intends to acquire a target, the user performs a gesture with the hands, head, or another part of the body; the computer senses and recognizes the gesture and infers the most probable target.
We build a mental model and a behavior model of the user to study two key parts of this interaction process. The mental model describes how the user thinks up a gesture for acquiring a target, and can be expressed as the intuitive mapping between gestures and targets. The behavior model describes how the user moves the body to perform the gesture, that is, the relationship between the gesture the user intends to perform and the signals the computer senses.
We present and discuss three pieces of research that focus on these two models of gesture-based target acquisition in VR and AR, and we show that leveraging the two models can improve interaction experience and performance in VR and AR environments.
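The abstract does not give the inference step; one standard way to "infer the most possible target" from a noisy sensed gesture endpoint is Bayesian inference with a Gaussian sensing model. A minimal sketch under illustrative assumptions (uniform prior over targets, isotropic noise; the target positions and `sigma` below are made up for the example):

```python
import math

def infer_target(observed, targets, sigma=1.0):
    """Return the index of the most probable target.

    Assumes a uniform prior over targets and isotropic Gaussian noise on
    the sensed gesture endpoint, so the posterior is proportional to the
    Gaussian likelihood of the observation given each target center.
    """
    def likelihood(center):
        d2 = sum((o - c) ** 2 for o, c in zip(observed, center))
        return math.exp(-d2 / (2 * sigma ** 2))
    scores = [likelihood(c) for c in targets]
    return max(range(len(targets)), key=scores.__getitem__)

# Illustrative 2D targets and a sensed endpoint near the second one.
targets = [(0.0, 0.0), (3.0, 1.0), (-2.0, 4.0)]
best = infer_target((2.6, 1.2), targets)  # nearest center is (3.0, 1.0)
```

A learned behavior model would replace the uniform prior and the fixed Gaussian with distributions fitted to how users actually perform gestures.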
Virtual fire drill system supporting co-located collaboration

2019, 1(3): 290-302


In most virtual reality systems that place multiple users in the same physical space, the restrictions of the display mode mean that the scene is rendered from the position and perspective of a single user, so the other users see the same image from the wrong viewpoint, resulting in visual confusion.
To improve the experience of multi-user co-located collaboration, we propose a fire drill system in which three co-located users collaborate to complete a virtual firefighting mission. First, using multi-view stereoscopic projective display technology and ultra-wideband (UWB) positioning, co-located users wearing dedicated shutter glasses can roam independently and see the virtual scene from the correct perspective for their own position, carrying out different virtual tasks; this improves the flexibility of co-located collaboration. Second, we design a simulated firefighting water gun based on a micro-electromechanical system (MEMS) sensor, through which users can interact with the virtual environment, providing a better interactive experience. Finally, we develop a workbench comprising a holographic display module and a multi-touch operation module for virtual scene assembly and virtual environment control.
A controller can use the workbench to adjust the virtual layout in real time and steer the virtual task process, increasing the flexibility and playability of the system.
Our work can be employed in a wide range of related virtual reality applications.
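The abstract does not describe the rendering math, but a standard way to give each tracked co-located user a position-correct view of a fixed projection screen is an off-axis (asymmetric) perspective frustum computed from the head position. A minimal sketch under simplifying assumptions (screen centered at the origin in the z = 0 plane, eye coordinates already in screen space; all dimensions illustrative):

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Frustum extents (left, right, bottom, top) at the near plane for
    an eye at `eye` = (x, y, z), z > 0, viewing a screen of size
    screen_w x screen_h centered at the origin in the z = 0 plane.
    Similar triangles scale the screen edges from eye distance z to near.
    """
    ex, ey, ez = eye
    scale = near / ez
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# A centered eye yields a symmetric frustum; an off-center eye skews it.
sym = off_axis_frustum((0.0, 0.0, 2.0), 4.0, 3.0, 0.1)
skew = off_axis_frustum((1.0, 0.0, 2.0), 4.0, 3.0, 0.1)
```

These extents feed directly into a glFrustum-style projection; with time-multiplexed shutter glasses, the same computation runs once per tracked user (and per eye) every frame.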
Influence of multi-modality on moving target selection in virtual reality

2019, 1(3): 303-315


Owing to recent advances in virtual reality (VR) technologies, effective user interaction with dynamic content in 3D scenes has become a research hotspot. Moving-target selection is a basic interactive task, and research on user performance in such tasks is significant for user-interface design in VR. Unlike the well-studied static case, moving-target selection in VR is affected by changes in target speed, angle, and size, and some of these key factors have not yet been studied.
This study designs an experimental scenario in which users play badminton in VR. By varying seven modality-cue conditions (visual, auditory, haptic, and their combinations), five moving speeds, and four serving angles, we study the effect of these factors on performance and subjective experience in moving-target selection in VR.
The results show that the moving speed of the shuttlecock has a significant effect on user performance. The serving angle has a significant effect on the hit rate but not on the hit distance. Under combined modalities, moving-target acquisition performance is mainly driven by vision, and adding further modalities can improve user performance; however, although the hit distance increases in the trimodal condition, the hit rate decreases.
Based on this analysis of user performance and subjective perception, the study provides suggestions on combining modality cues in different scenarios.
Swarm robotics platform for intelligent interaction

2019, 1(3): 316-329


This paper introduces a versatile edutainment platform based on a swarm robotics system that supports multiple interaction methods. We aim to create a reusable, open-ended tangible tool for a variety of educational and entertainment scenarios by exploiting the unique advantages of swarm robots, such as flexible mobility, mutual perception, and free control of the number of robots.
Compared with conventional tangible user interfaces, the swarm user interface (SUI) offers more flexible locomotion and more controllable widgets. However, research on SUIs is still largely limited to system construction; higher-level interaction modes and concrete applications have not been sufficiently studied.
This study illustrates possible interaction modes for swarm robotics and feasible application scenarios built on these fundamental modes. We also discuss the implementation of the swarm robotics system (both software and hardware) and design several simple experiments to verify its positioning accuracy.
Trajectory prediction model for crossing-based target selection

2019, 1(3): 330-340


Crossing-based target selection can achieve lower error rates and higher interaction speed in some cases. Most research in the target-selection field focuses on analyzing the outcome of the interaction. Moreover, because trajectories play a much more important role in crossing-based selection than in other interaction techniques, an ideal trajectory model can help designers predict interaction results during the selection process rather than only at its end.
In this paper, a trajectory prediction model for crossing-based target-selection tasks is proposed with reference to dynamic model theory.
Simulation results demonstrate that our model performs well in predicting trajectories, endpoints, and hitting times for target-selection motion; the average errors for trajectories, endpoints, and hitting times were 17.28%, 2.73 mm, and 11.50%, respectively.
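The abstract does not reproduce the paper's dynamic model. As an illustration of the kind of baseline such trajectory predictors build on, the classic minimum-jerk model for point-to-point aimed movement is sketched below (the start/end points and duration are illustrative assumptions, not the paper's parameters):

```python
def minimum_jerk(x0, x1, T, t):
    """Minimum-jerk position at time t for a movement from x0 to x1 of
    duration T: x(t) = x0 + (x1 - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    with tau = t / T. Velocity and acceleration are zero at both ends.
    """
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

# Sample a 0 -> 100 mm movement lasting 0.5 s at its start, midpoint, end.
start = minimum_jerk(0.0, 100.0, 0.5, 0.0)   # 0.0
mid = minimum_jerk(0.0, 100.0, 0.5, 0.25)    # 50.0 (symmetric profile)
end = minimum_jerk(0.0, 100.0, 0.5, 0.5)     # 100.0
```

Fitting such a model to a partially observed trajectory is what allows endpoint and hitting-time predictions to be made mid-movement, before the selection completes.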