
2020, Vol. 2, No. 4. Publish Date: 2020-08

Editorial

VR and experiment simulation

DOI:10.3724/SP.J.2096-5796.2020.02.04

2020, 2(4) : 1-1


Article

Multimodal interaction design and application in augmented reality for chemical experiment

DOI:10.1016/j.vrih.2020.07.005

2020, 2(4) : 291-304

Background
Augmented reality (AR) classrooms have become an interesting research topic in the field of education, but two limitations remain. First, most researchers use cards to operate experiments, and handling a large number of cards is difficult and inconvenient for users. Second, most users conduct experiments only in the visual modality, and such single-modality interaction greatly reduces the users' sense of real interaction. To address these problems, we propose a Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which builds on visual and tactile feedback in AR. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment.
Methods
The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained with a convolutional neural network to recognize gestures in AR, and trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping between the real hand and the virtual model to achieve the fusion of gestures and the virtual model.
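As a rough illustration of the coordinate mapping and grasp-triggered feedback described above, the following Python sketch maps a tracked hand position into virtual-model space and fires vibration only for a recognized five-finger grasp. The function names, gesture label, and transform values are illustrative assumptions, not the paper's implementation:

```python
def map_hand_to_virtual(hand_pos, scale, offset):
    """Map a tracked real-hand coordinate into virtual-model space
    via a per-axis scale and offset (assumed calibration values)."""
    return tuple(p * s + o for p, s, o in zip(hand_pos, scale, offset))

def on_gesture(label, vibrate):
    """Trigger vibration feedback only when a five-finger grasp
    gesture is recognized; other gestures produce no feedback."""
    if label == "five_finger_grasp":
        vibrate()
        return True
    return False

events = []
virtual = map_hand_to_virtual((1.0, 2.0, 3.0),
                              scale=(2, 2, 2), offset=(0.5, 0.5, 0.5))
on_gesture("five_finger_grasp", vibrate=lambda: events.append("vibrate"))
```

In a real system the gesture label would come from the CNN classifier and the scale/offset from a calibration step between the hand tracker and the AR scene.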
Results
The average gesture recognition accuracy was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL); compared with traditional virtual simulation experiments, the overall operation load of ARCL is reduced by 29.42%.
Conclusions
We achieve real-time fusion of the gesture, virtual model, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' sense of real operation and their interaction efficiency.
Thermal perception method of virtual chemistry experiments

DOI:10.1016/j.vrih.2020.07.003

2020, 2(4) : 305-315

Background
To address the difficulty of perceiving temperature in virtual chemistry experiments, we propose a temperature-sensing simulation method for virtual chemistry experiments.
Methods
We construct a virtual chemistry experiment temperature simulation platform, based on which a wearable temperature generation device is developed. The typical middle school virtual experiments of concentrated sulfuric acid dilution and ammonium nitrate dissolution are conducted to verify the actual effect of the device.
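One simple way to drive such a wearable device is a first-order lag toward the virtual experiment's current temperature, clamped to a skin-safe range. The sketch below is a minimal illustration under assumed gain and limit values, not the paper's control law:

```python
def step_device_temp(current, target, k=0.2, t_min=5.0, t_max=45.0):
    """One control step: move the device temperature a fraction k of the
    way toward the virtual experiment temperature, then clamp it to an
    assumed skin-safe range [t_min, t_max] in degrees Celsius."""
    nxt = current + k * (target - current)
    return max(t_min, min(t_max, nxt))

# Simulate an exothermic dilution: the virtual mixture heats to 80 C,
# but the device saturates at its safe upper limit.
temps = [25.0]
for _ in range(30):
    temps.append(step_device_temp(temps[-1], target=80.0))
```

The clamp is what keeps a strongly exothermic reaction, such as diluting concentrated sulfuric acid, from producing an unsafe surface temperature on the wearer's skin.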
Results
The platform can present near-real-world experimental situations. The performance of the device not only matches the temperature-sensing characteristics of human skin, but also tracks the temperature changes of the virtual chemistry experiments in real time.
Conclusions
It is demonstrated that this temperature-sensing simulation method can represent exothermic and endothermic chemistry experiments, helping students understand the principles of thermal energy transformation in chemical reactions while effectively avoiding the dangers posed by traditional chemistry-experiment teaching. Although the method is not yet convenient enough to operate, it enhances the immersion of virtual chemistry experiments.
Virtual & augmented reality for biological microscope in experiment education

DOI:10.1016/j.vrih.2020.07.004

2020, 2(4) : 316-329

Background
Mixed-reality technologies, including virtual reality (VR) and augmented reality (AR), are considered promising tools for science teaching and learning, with the potential to foster positive emotions, motivate autonomous learning, and improve learning outcomes.
Methods
In this study, a technology-aided biological microscope learning system based on VR/AR is presented. The structure of the microscope is described by a detailed three-dimensional (3D) model in which each component is represented and the topological interrelationships and associations among components are established. The interactive behavior of the model was specified, and a standard operating guide was compiled. The motion control of components was simulated based on collision detection. Combining immersive VR equipment with AR technology, we developed a virtual microscope subsystem and a mobile virtual microscope guidance system.
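Collision-detection-based motion control of a component, as mentioned above, can be sketched with axis-aligned bounding boxes (AABBs): a part moves until its box would intersect another part's box. This toy Python version uses assumed geometry and is only an illustration of the technique, not the system's engine code:

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box collision test; each box is a tuple
    (min_x, min_y, min_z, max_x, max_y, max_z)."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def move_until_blocked(pos, step, obstacle, size=1.0, max_steps=100):
    """Lower a unit-sized component along -y in fixed steps, stopping at
    the last position before its box would collide with the obstacle
    (e.g. an objective lens stopping above the microscope stage)."""
    for _ in range(max_steps):
        nxt = pos - step
        box = (0, nxt, 0, size, nxt + size, size)
        if aabb_overlap(box, obstacle):
            return pos
        pos = nxt
    return pos
```

A game engine would use richer collision shapes and continuous sweeps, but the stop-before-penetration logic is the same.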
Results
The system consisted of a VR subsystem and an AR subsystem. The VR subsystem focused on simulating microscope operation and the associated interactive behaviors, allowing users to observe and operate the components of the 3D microscope model through natural interactions in an immersive scenario. The AR subsystem allowed participants to use a mobile terminal to take a picture of a microscope in a textbook and then display the structure and functions of the instrument, along with the relevant operating guidance. This allowed students to use the system flexibly before or after class, without time or space constraints. Users could also switch between the VR and AR subsystems.
Conclusions
The system is useful for helping learners (especially K-12 students) recognize a microscope's structure and grasp the required operational skills by simulating operations through an interactive process. In the future, such technology-assisted education could become a successful learning platform in an open learning space.
Interaction design for paediatric emergency VR training

DOI:10.1016/j.vrih.2020.07.006

2020, 2(4) : 330-344

Background
Virtual reality (VR) healthcare training has seen increasing adoption and support, but efforts are still required to mitigate usability concerns.
Methods
This study conducted a usability evaluation of an in-use emergency medicine VR training application that runs on commercial VR hardware with a standard interaction design. Nine users without prior VR experience but with relevant medical expertise completed two simulation scenarios, for a total of 18 recorded sessions. After each session they completed NASA Task Load Index and System Usability Scale questionnaires, and their performance was recorded to track user errors.
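The System Usability Scale mentioned above has a standard published scoring rule: ten items rated 1-5, where odd items contribute (rating - 1), even items contribute (5 - rating), and the sum is scaled by 2.5 to a 0-100 range. A direct Python implementation:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for 10 items rated 1-5.
    Odd-numbered items (index 0, 2, ...) contribute (rating - 1);
    even-numbered items contribute (5 - rating); the sum is scaled
    by 2.5 onto a 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

All-neutral responses (ten 3s) yield the midpoint score of 50; scores meaningfully above 68 are conventionally read as above-average usability.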
Results and Conclusions
Our results showed a medium (and potentially optimal) workload and an above-average System Usability Scale score. Several factors improved significantly between users' first and second sessions, notably the Performance evaluation. However, the user errors most strongly correlated with usability were tied not to the interaction design but to a limited 'possibility space'. Suggestions for closing this 'gulf of execution' were presented, including voice control and hand tracking, which have only now become feasible for this commercial product with the availability of the Oculus Quest headset. Moreover, wider implications for VR medical training were outlined, and potential next steps toward a standardized design identified.
An intelligent navigation experimental system based on multi-mode fusion

DOI:10.1016/j.vrih.2020.07.007

2020, 2(4) : 345-353

At present, most experimental teaching systems lack operator guidance, so users often do not know what to do during an experiment; this increases the user load and decreases students' learning efficiency. To solve the problem of insufficient system interactivity and guidance, this paper proposes an experimental navigation system based on multi-mode fusion. The system first obtains user information through sensing hardware, then intelligently perceives the user's intention and the progress of the experiment from the acquired information, and finally provides multimodal intelligent navigation for the user. As an innovative aspect of this study, the multi-mode navigation guides users through their experiments, reducing the user load and enabling them to complete their experiments effectively. The results show that this system can guide users through their experiments, effectively reduce the user load during the interaction process, and improve efficiency.
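The perceive-progress-then-prompt loop described above can be caricatured as a step tracker: sensed user actions mark steps complete, and the system always prompts for the first unfinished step. The step names and prompt wording below are invented for illustration; the real system fuses multiple sensing modalities rather than a simple action list:

```python
# Hypothetical fixed experiment procedure; a real system would load this
# from the experiment definition and infer completion from sensor fusion.
STEPS = ["pick_beaker", "add_water", "add_solute", "stir"]
PROMPTS = {s: f"Next: please {s.replace('_', ' ')}" for s in STEPS}

def next_guidance(completed_actions):
    """Infer experiment progress from the sensed completed actions and
    return the navigation prompt for the first unfinished step."""
    for step in STEPS:
        if step not in completed_actions:
            return PROMPTS[step]
    return "Experiment complete"
```

The same prompt string could then be rendered in several modalities at once, for example on-screen text plus synthesized speech, which is the "multi-modal" part of the navigation.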
Virtual simulation experiment of the design and manufacture of a beer bottle-defect detection system

DOI:10.1016/j.vrih.2020.07.002

2020, 2(4) : 354-367

Background
Machine learning-based beer bottle-defect detection is a complex technology that runs automatically; however, it consumes considerable memory, is expensive, and poses a certain danger when training novice operators. Moreover, some topics, such as digital image processing and computer vision, are difficult to learn from experimental lectures. Meanwhile, virtual simulation experiments have been widely used to good effect in education. A virtual simulation of the design and manufacture of a beer bottle-defect detection system not only helps students increase their image-processing knowledge, but also improves their ability to solve complex engineering problems and design complex systems.
Methods
The hardware models for the experiment (camera, light source, conveyor belt, power supply, manipulator, and computer) were built using the 3DS MAX modeling and animation software. The Unreal Engine 4 (UE4) game engine was utilized to build a virtual design room, design the interactive operations, and simulate the system operation.
Results
The results showed that the virtual simulation system received much better experimental feedback, facilitating the design and manufacture of a beer bottle-defect detection system. The functional modules of the detection system, including a basic experimental operation menu, power switch, image shooting, image processing, and manipulator grasping, allowed students (or virtual designers) to easily build a detection system by retrieving basic models from the model library and assembling the beer-bottle transportation, image shooting, image processing, defect detection, and defective-product removal stages. The virtual simulation experiment was completed with image processing as its main body.
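A minimal stand-in for the image-processing stage of such a pipeline is a threshold-and-count check: binarize a grayscale image of the bottle mouth and flag a defect when too many pixels are dark, since chips and cracks typically image as dark regions. This toy Python version (plain nested lists, assumed threshold values) only illustrates the idea; the described system's detectors are far more elaborate:

```python
def detect_mouth_defect(gray, threshold=128, max_dark_ratio=0.05):
    """Toy bottle-mouth check: count pixels in a grayscale image
    (nested lists of 0-255 values) that fall below an intensity
    threshold, and flag a defect if their share exceeds an assumed
    maximum dark-pixel ratio."""
    pixels = [p for row in gray for p in row]
    dark = sum(1 for p in pixels if p < threshold)
    return dark / len(pixels) > max_dark_ratio

# A uniformly bright mouth region passes; a dark crack-like column fails.
good = [[200] * 8 for _ in range(8)]
bad = [row[:] for row in good]
for i in range(8):
    bad[i][3] = 10
```

In the virtual experiment, a detector like this would sit between the image-shooting module and the manipulator that removes defective bottles.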
Conclusions
By focusing mainly on bottle-mouth defect detection, the detection system dedicates more attention to the user and the task. With more detailed tasks available, the virtual system will eventually yield much better results as a training tool for image-processing education. In addition, a novel visual perception-thinking pedagogical framework enables better comprehension than the traditional lecture-tutorial style.
Cloud-to-end rendering and storage management for virtual reality in experimental education

DOI:10.1016/j.vrih.2020.07.001

2020, 2(4) : 368-380

Background
Real-time 3D rendering and interaction is important for virtual reality (VR) experimental education. Unfortunately, standard end-computing methods prohibitively escalate computational costs. Thus, reducing or distributing these requirements needs urgent attention, especially in light of the COVID-19 pandemic.
Methods
In this study, we design a cloud-to-end rendering and storage system for VR experimental education comprising two model types: background and interactive. The cloud server renders the background models and sends the results to the end terminal as a video stream. Interactive models are then lightweight-rendered and blended at the end terminal. An improved 3D warping and hole-filling algorithm is also proposed to improve image quality when the user's viewpoint changes.
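The core of 3D warping and hole filling can be shown on a single scanline: pixels are forward-warped by a disparity when the viewpoint shifts, positions that receive no pixel become holes, and holes are then filled from a nearby valid neighbor. This 1D integer-disparity sketch is a deliberately simplified illustration, not the paper's improved algorithm:

```python
def warp_row(row, disparity):
    """Forward-warp one scanline by an integer disparity; target
    positions that receive no source pixel remain holes (None)."""
    out = [None] * len(row)
    for x, color in enumerate(row):
        nx = x + disparity
        if 0 <= nx < len(out):
            out[nx] = color
    return out

def fill_holes(row):
    """Fill each hole from the nearest valid pixel to its left, a
    simple background-propagation hole-filling strategy."""
    last = None
    filled = []
    for p in row:
        if p is not None:
            last = p
        filled.append(last if p is None else p)
    return filled
```

A real warper uses per-pixel depth to compute disparity and blends several reference views, but the hole problem, and the need to fill it plausibly, is exactly the one shown here.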
Results
We build three scenes to test image quality and network latency. The results show that our system renders 3D experimental education scenes with higher image quality and lower latency than existing cloud rendering systems.
Conclusions
Our study is the first to use cloud and lightweight rendering for VR experimental education. The results demonstrate that our system provides a good rendering experience without excessive computational cost.