
Accepted


A swarm robot platform for intelligent interaction

DOI:10.3724/SP.J.2096-5796.2019.0019

Accepted Date:2019-05-17


This paper introduces a versatile edutainment platform based on a swarm robotics system that supports multiple interaction methods. By exploiting the unique advantages of swarm robots, such as flexible mobility, mutual perception, and free control of the number of robots, we aim to create an open-ended tangible tool that can be reused in a variety of educational and entertainment scenarios. Compared with tangible user interfaces (TUIs), swarm user interfaces (SUIs) offer more flexible locomotion and more controllable widgets; however, most research on SUIs is still limited to system construction, and the higher-level interaction modes, along with concrete applications, have not been studied further. This paper illustrates possible interaction modes for swarm robotics and feasible application scenarios based on these fundamental interaction modes. We also discuss the implementation of the swarm robotics system, including software and hardware, and then design several simple experiments to verify the location accuracy of the swarm robotic system.
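
The location-accuracy experiments are not detailed in the abstract; the snippet below is only a minimal sketch of how per-robot positioning error might be computed from tracked versus ground-truth coordinates. The function name, the example values, and the choice of mean Euclidean error as the metric are assumptions for illustration, not the authors' actual protocol.

```python
import numpy as np

def mean_position_error(tracked_xy, truth_xy):
    """Mean Euclidean distance (same units as the input) between
    tracked robot positions and ground-truth positions."""
    tracked_xy = np.asarray(tracked_xy, dtype=float)
    truth_xy = np.asarray(truth_xy, dtype=float)
    return float(np.linalg.norm(tracked_xy - truth_xy, axis=1).mean())

# Hypothetical example: three robots, positions in millimetres.
print(mean_position_error([[0, 0], [100, 50], [200, 200]],
                          [[2, 1], [98, 53], [205, 198]]))
```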

The influence of multi-modality on moving target selection in virtual reality

DOI:10.3724/SP.J.2096-5796.2019.0013

Accepted Date:2019-05-16


With recent advances in virtual reality (VR) technologies, how to enable users to interact effectively with dynamic content in 3D scenes has become a research hotspot. Moving target selection is a basic interactive task, and research on user performance in such tasks is significant for user interface design in VR. Unlike existing studies on static target selection, moving target selection in VR is affected by changes in target speed, angle, and size, and research on some of these key factors is lacking. This paper designs an experimental scenario in which users play badminton in virtual reality. By adding seven kinds of modality cues, namely vision, audition, haptics, and their combinations, five moving speeds, and four serving angles, we study how these factors affect user performance and subjective feelings during moving target selection in VR. The results show that the moving speed of the shuttle has a significant impact on user performance. The serving angle has a significant impact on the hitting rate but no significant impact on the hitting distance. Under combined modalities, the acquisition of moving targets is mainly driven by vision, and adding additional modalities can improve user performance. Although the hitting distance increases in the trimodal condition, the hitting rate decreases. This paper analyses the results for user performance and subjective perception, and then gives suggestions on combinations of modality cues for different scenarios.
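
As a reading aid only, the sketch below enumerates a full factorial design of the kind described (seven modality conditions, five speeds, four serving angles); the level labels are placeholders, not the paper's actual speeds and angles.

```python
from itertools import product

# Placeholder levels; the paper's concrete speed and angle values are not listed here.
modalities = ["V", "A", "H", "V+A", "V+H", "A+H", "V+A+H"]
speeds = ["s1", "s2", "s3", "s4", "s5"]
angles = ["a1", "a2", "a3", "a4"]

conditions = list(product(modalities, speeds, angles))
print(len(conditions))  # 7 * 5 * 4 = 140 combinations
```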

Effect of haptic feedback on the virtual lab about friction

DOI:10.3724/SP.J.2096-5796.2019.0020

Accepted Date:2019-05-14


With the increasing use of multimedia devices in education in recent years, more haptic devices for education have gradually been adopted and developed. Compared with the visual and auditory channels, the development of applications using the haptic channel is still at an early stage, and it is not clear how force feedback influences the instructional effect of educational applications or the subjective feeling of users. In this paper, we designed an educational application with a haptic device (Haply) to explore the effect of force feedback on self-directed learning. Subjects in the experimental group used the designed application to learn about friction by themselves with force feedback, while subjects in the control group studied the same material without force feedback. A post-test and a questionnaire were designed to assess the learning outcome. The experimental results indicate that force feedback is beneficial: using the haptic device can improve the effectiveness of the application and motivate students' enthusiasm.
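
The abstract does not reproduce the lesson's rendering code; the following is a minimal, hypothetical sketch of Coulomb-style friction force feedback of the kind such a lesson might display on a planar haptic device. It does not use the actual Haply API, and the function name, thresholds, and example values are assumptions; only the underlying physics (static friction limited by mu_s * N, kinetic friction opposing motion) is standard.

```python
def friction_force(normal_force, mu_static, mu_kinetic, tangential_velocity,
                   applied_tangential_force, eps=1e-4):
    """Return the tangential friction force (N) opposing motion.

    Static regime: cancel the applied force up to mu_static * N.
    Kinetic regime: constant mu_kinetic * N opposing the velocity sign.
    """
    if abs(tangential_velocity) < eps:  # effectively at rest
        limit = mu_static * normal_force
        return -max(-limit, min(limit, applied_tangential_force))
    return -mu_kinetic * normal_force * (1 if tangential_velocity > 0 else -1)

# Hypothetical values: a 2 N normal load on a block being pushed.
print(friction_force(2.0, 0.5, 0.3, tangential_velocity=0.0,
                     applied_tangential_force=0.4))   # static: -0.4 N
print(friction_force(2.0, 0.5, 0.3, tangential_velocity=0.05,
                     applied_tangential_force=0.4))   # kinetic: -0.6 N
```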

Virtual fire drill system supporting co-located collaboration

DOI:10.3724/SP.J.2096-5796.2019.0012

Accepted Date:2019-04-25


Background Due to the restriction of the display mode, in most virtual reality systems where multiple people share the same physical space, the program renders the scene based on the position and perspective of a single user, so the other users see the same scene, resulting in visual confusion. Methods To improve the experience of multi-user co-located collaboration, we propose a fire drill system supporting co-located collaboration, in which three co-located users collaborate to complete a virtual firefighting mission. First, with multi-view stereoscopic projective display technology and ultra-wideband (UWB) positioning, co-located users can roam independently and, by wearing dedicated shutter glasses, watch the virtual scene from the correct perspective based on their own positions and thus carry out different virtual tasks, which improves the flexibility of co-located collaboration. Second, we design a simulated firefighting water gun using a micro-electromechanical system (MEMS) sensor, through which users can interact with the virtual environment, providing a better interactive experience. Finally, we develop a workbench comprising a holographic display module and a multi-touch operation module for virtual scene assembly and virtual environment control. Results The controller can use the workbench to adjust the virtual layout in real time and control the virtual task process, increasing the flexibility and playability of the system. Conclusions Our work can be employed in a wide range of related virtual reality applications.
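
As a hedged illustration of the per-user perspective idea (each co-located user sees the scene rendered from his or her own tracked position), the sketch below builds a standard look-at view matrix from a UWB-reported head position. The function name, the fixed scene focus point, and the example positions are assumptions, not the system's actual rendering pipeline.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed look-at view matrix (4x4) for one tracked user."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)
    s = np.cross(f, up); s /= np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

# Hypothetical UWB head positions (metres) of three co-located users,
# all looking toward the same virtual fire source at the origin.
for pos in [(1.0, 1.7, 2.0), (-1.5, 1.6, 2.5), (0.2, 1.8, 3.0)]:
    view = look_at(pos, target=(0.0, 0.0, 0.0))
```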

Tactile sensitivity in ultrasonic haptics: do different parts of hand and rendering methods have an impact on perceptual threshold?

DOI:10.3724/SP.J.2096-5796.2019.0009

Accepted Date:2019-04-24


Ultrasonic tactile representation uses ultrasound focusing to create tactile sensations on the bare skin of a user's hand without contact with a device. This paper is a preliminary study of whether different ultrasonic haptic rendering methods affect the perceptual threshold. We examine 1) whether different parts of the palm have different perceptual thresholds; 2) whether the perceptual threshold differs when the ultrasonic focal point is stationary and when it moves along different trajectories; 3) whether different moving speeds of the focal point influence the perceptual threshold; and 4) whether the perceptual threshold differs when the modulating wave does and does not have a DC offset. A user study is conducted, and the results show that the center of the palm is more sensitive to ultrasonic haptics than the fingertip; that, compared with a fast-moving focal point, the palm is more sensitive to stationary and slow-moving focal points; and that when the modulating wave has a DC offset, the palm is sensitive to a much smaller modulation amplitude. These results could be of value for the design of future ultrasonic tactile representation systems, because more realistic ultrasonic haptics requires dynamic adjustment of intensity to compensate for the differences in perceptual thresholds under different rendering methods.
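
The abstract does not state how the thresholds were estimated; below is a minimal sketch of a simple 1-up/1-down staircase, one common way to estimate a detection threshold for modulation amplitude. It is an assumption about methodology, not the authors' protocol, and the simulated observer at the end is purely hypothetical.

```python
def staircase_threshold(responds, start=1.0, step=0.1, reversals_needed=6):
    """Estimate a detection threshold with a 1-up/1-down staircase.

    `responds(amplitude)` should return True if the stimulus was felt.
    The threshold estimate is the mean amplitude over the recorded reversals.
    """
    amplitude, direction = start, -1
    reversal_amps = []
    while len(reversal_amps) < reversals_needed:
        felt = responds(amplitude)
        new_direction = -1 if felt else +1   # decrease after "felt", increase otherwise
        if new_direction != direction:       # a reversal of direction
            reversal_amps.append(amplitude)
            direction = new_direction
        amplitude = max(step, amplitude + new_direction * step)
    return sum(reversal_amps) / len(reversal_amps)

# Hypothetical simulated observer that feels anything above 0.35 (arbitrary units).
print(staircase_threshold(lambda a: a > 0.35))
```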

Trajectory prediction model for crossing-based target selection

DOI:10.3724/SP.J.2096-5796.2019.0017

Accepted Date:2019-04-01


Background Crossing-based target selection can achieve lower error rates and higher interaction speed in some cases. Most research in the target selection field focuses on analyzing the final interaction results. Moreover, because trajectories play a much more important role in crossing-based target selection than in other interaction techniques, an accurate trajectory model can help interface designers predict the interaction result during the process of target selection rather than only at the end of the whole process. Methods In this paper, a trajectory prediction model for crossing-based target selection tasks is proposed with reference to dynamic model theory. Results Simulation results demonstrate that our model predicts trajectories, endpoints, and hitting time well for target-selection motion; the average errors for trajectories, endpoints, and hitting time were 17.28%, 2.73 mm, and 11.50%, respectively.
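
The error figures quoted above (17.28%, 2.73 mm, 11.50%) come from the paper; the snippet below is only a hedged sketch of how such per-trial errors might be aggregated, assuming the endpoint error is a Euclidean distance in millimetres and the percentage errors are relative errors averaged over trials. These assumptions, the function names, and the example values are illustrative and not taken from the paper.

```python
import numpy as np

def endpoint_error_mm(predicted_xy, actual_xy):
    """Euclidean distance between predicted and actual endpoints (mm)."""
    return float(np.linalg.norm(np.asarray(predicted_xy, dtype=float)
                                - np.asarray(actual_xy, dtype=float)))

def relative_error_percent(predicted, actual):
    """Relative error of a scalar quantity such as hitting time, in percent."""
    return abs(predicted - actual) / abs(actual) * 100.0

# Hypothetical trials: (predicted hitting time s, actual hitting time s).
times = [(0.52, 0.50), (0.47, 0.55), (0.61, 0.58)]
print(np.mean([relative_error_percent(p, a) for p, a in times]))
print(endpoint_error_mm((10.0, 4.0), (8.5, 5.0)))
```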

A review of studies on target acquisition in virtual reality based on the crossing paradigm

DOI:10.3724/SP.J.2096-5796.2019.0006

Accepted Date:2019-03-28


Crossing is a fundamental paradigm for target selection in human-computer interaction systems. The paradigm was first introduced to virtual reality interaction by Tu et al. [1], who investigated its performance in comparison with pointing and concluded that crossing is generally no less effective than pointing and also offers unique advantages. However, owing to the characteristics of VR interaction, many factors must still be considered when applying crossing to VR environments. Hence, this review first summarizes the main techniques for object selection in VR and crossing-related studies. It then analyzes the factors that may affect crossing interaction from the perspectives of the input space and the visual space. The aim of this paper is to provide references for future studies of target selection based on the crossing paradigm in virtual reality.

Edge vector based large graph visualization and interactive exploration

DOI:10.3724/SP.J.2096-5796.2019.0010

Accepted Date:2019-03-22


The demand for graph analysis is increasing, and graph layouts of high quality and high readability are important for such analysis. In past years, we have investigated this topic and proposed a unified framework for graph layout and exploration. The framework maintains readability during the layout and interaction process by controlling both edge lengths and directions rather than lengths alone; with it, we can model most existing layout constraints as well as develop new ones. For interactive exploration of the details of a graph, we extend the framework to a new focus+context fisheye view. Traditional fisheye views for exploring large graphs introduce substantial distortions that often reduce the readability of paths and other interesting structures. Using edge directions as constraints for graph layout optimization allows us not only to reduce spatial and temporal distortions during fisheye zooms, but also to improve the readability of the graph structure. Furthermore, the framework enables us to optimize fisheye lenses for specific tasks and to design a family of new lenses. We implement the framework with GPU parallel computing, which allows us to process large graphs with up to 10,000 nodes at interactive rates.
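
The abstract states that the framework constrains edge directions as well as lengths. The sketch below is a minimal, hypothetical gradient-descent layout that penalizes deviation of each edge vector from a target length and direction; it illustrates that idea only and is not the authors' formulation, constraint set, or GPU implementation.

```python
import numpy as np

def edge_vector_layout(n_nodes, edges, targets, iters=500, lr=0.05):
    """Gradient descent on sum over edges of || (p_j - p_i) - t_e ||^2.

    edges   : list of (i, j) node index pairs.
    targets : list of target edge vectors t_e (desired length times direction).
    """
    pos = np.random.default_rng(0).normal(size=(n_nodes, 2))
    for _ in range(iters):
        grad = np.zeros_like(pos)
        for (i, j), t in zip(edges, targets):
            diff = (pos[j] - pos[i]) - np.asarray(t, dtype=float)
            grad[j] += 2.0 * diff
            grad[i] -= 2.0 * diff
        pos -= lr * grad
    return pos

# Hypothetical 3-node path whose two edges should both point to the right
# with unit length, so the layout converges to a horizontal line.
layout = edge_vector_layout(3, [(0, 1), (1, 2)], [(1.0, 0.0), (1.0, 0.0)])
```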