
Online First


Effects of virtual-real fusion on immersion, presence, and learning performance in laboratory education

DOI:10.3724/SP.J.2096-5796.20.00043

Published online: 2020-09-11
Background
Virtual-real fusion techniques, which blend real objects into virtual-reality (VR) environments, have become increasingly popular in recent years, and several previous studies have applied them to laboratory education. However, lacking a basis for evaluating the effects of virtual-real fusion on educational VR, many developers have chosen to abandon this expensive and complex set of techniques.
Methods
In this study, we experimentally investigate the effects of virtual-real fusion on immersion, presence, and learning performance. Each participant was randomly assigned to one of three conditions: a PC environment (PCE) operated by mouse; a VR environment (VRE) operated by controllers; or a VR environment running virtual-real fusion (VR-VRFE), operated by real hands.
Results
The analysis of variance (ANOVA) and t-test results for presence and self-efficacy show significant differences between the PCE*VR-VRFE condition pair. Furthermore, the results show significant differences in the intrinsic value of learning performance for pairs PCE*VR-VRFE and VRE*VR-VRFE, and a marginally significant difference was found for the immersion group.
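The analysis pipeline reported here, an omnibus ANOVA followed by pairwise comparisons, can be sketched in a few lines. The ratings below are synthetic (the means, spreads, and group sizes are invented for illustration), and the statistics are textbook forms, not the study's actual code:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def welch_t(a, b):
    """Welch's t statistic for one pairwise comparison."""
    va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
    return (np.mean(a) - np.mean(b)) / np.sqrt(va / len(a) + vb / len(b))

# Synthetic 7-point presence ratings for the three conditions.
rng = np.random.default_rng(0)
pce = rng.normal(4.2, 0.8, 20)    # PC environment
vre = rng.normal(5.0, 0.8, 20)    # VR environment
vrfe = rng.normal(5.6, 0.8, 20)   # VR with virtual-real fusion

F = one_way_anova_f([pce, vre, vrfe])
t = welch_t(pce, vrfe)            # the PCE*VR-VRFE pair
```

A large F flags some between-condition difference; the pairwise t then localizes it to a specific condition pair.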
Conclusions
The results suggest that virtual-real fusion can offer improved immersion, presence, and self-efficacy compared to traditional PC environments, as well as a better intrinsic value of learning performance compared to both PC and VR environments. The results also suggest that virtual-real fusion offers a lower sense of presence compared to traditional VR environments.
Multimodal teaching, learning and training in virtual reality: a review and case study

DOI:10.3724/SP.J.2096-5796.20.00020

Published online: 2020-09-09
Digital learning research increasingly encompasses an array of different meanings, spaces, processes, and teaching strategies in pursuit of a global perspective on constructing the student learning experience. Multimodality is an emergent phenomenon that may influence how digital learning is designed, especially when employed in highly interactive and immersive learning environments such as Virtual Reality (VR). VR environments may aid students' efforts to be active learners by consciously attending to, and reflecting on, critique, leveraging reflexivity and the novel meaning-making most likely to lead to conceptual change. This paper employs eleven industrial case studies to highlight the application of multimodal VR-based teaching and training as a pedagogically rich strategy that may be designed, mapped, and visualized through distinct VR design elements and features. The outcomes of the use cases help establish in-VR multimodal teaching as an emerging discourse that couples system-design paradigms with embodied, situated, and reflective praxis in spatial, emotional, and temporal VR learning environments.
Emotional dialog generation via multiple classifiers based on a generative adversarial network

DOI:10.3724/SP.J.2096-5796.20.00042

Published online: 2020-08-12
Background
Human-machine dialog generation is an essential topic of research in the field of natural language processing. Generating high-quality, diverse, fluent, and emotional conversation is a challenging task. Based on continuing advancements in artificial intelligence and deep learning, new methods have come to the forefront in recent times. In particular, the end-to-end neural network model provides an extensible conversation generation framework that has the potential to enable machines to understand semantics and automatically generate responses. However, neural network models come with their own set of questions and challenges. The basic conversational model framework tends to produce universal, meaningless, and relatively "safe" answers.
Methods
Based on generative adversarial networks (GANs), a new emotional dialog generation framework called EMC-GAN is proposed in this study to address the task of emotional dialog generation. The proposed model comprises one generative model and three discriminative models. The generator is based on the basic sequence-to-sequence (Seq2Seq) dialog generation model, and the aggregate discriminative model for the overall framework consists of a basic discriminative model, an emotion discriminative model, and a fluency discriminative model. The basic discriminative model distinguishes generated fake sentences from real sentences in the training corpus. The emotion discriminative model evaluates whether the emotion conveyed via the generated dialog agrees with a pre-specified emotion, and directs the generative model to generate dialogs that correspond to the category of the pre-specified emotion. Finally, the fluency discriminative model assigns a score to the fluency of the generated dialog and guides the generator to produce more fluent sentences.
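One way to read the aggregate discriminative model is as a weighted combination of the three discriminator scores that rewards the generator, as in policy-gradient GAN training for text. The weights, baseline, and function names below are illustrative assumptions, not taken from the paper:

```python
import math

def aggregate_reward(d_basic, d_emotion, d_fluency, weights=(0.4, 0.3, 0.3)):
    """Combine the three discriminator scores (each in [0, 1]) into a
    single reward for the generator. Weights are illustrative only."""
    wb, we, wf = weights
    return wb * d_basic + we * d_emotion + wf * d_fluency

def policy_gradient_loss(token_log_probs, reward, baseline=0.5):
    """REINFORCE-style generator loss: scale the generated tokens'
    log-probabilities by the baseline-subtracted aggregate reward."""
    advantage = reward - baseline
    return -advantage * sum(token_log_probs)

# A generated reply judged realistic (0.8), on-emotion (0.6), fluent (0.9).
reward = aggregate_reward(0.8, 0.6, 0.9)
loss = policy_gradient_loss([math.log(0.5), math.log(0.4)], reward)
```

A reply that scores above the baseline yields a positive advantage, so minimizing the loss raises the probability of its tokens; a below-baseline reply is pushed down.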
Results
Based on the experimental results, this study confirms the superiority of the proposed model over similar existing models with respect to emotional accuracy, fluency, and consistency.
Conclusions
The proposed EMC-GAN model is capable of generating consistent, smooth, and fluent dialog that conveys pre-specified emotions, and exhibits better performance with respect to emotional accuracy, consistency, and fluency compared to its competitors.
Virtual reality research and development in NTU

DOI:10.3724/SP.J.2096-5796.20.00028

Published online: 2020-08-12
In 1981, Nanyang Technological Institute was established in Singapore to train engineers and accountants to keep up with the fast-growing economy of the country. In 1991, the institute was upgraded to Nanyang Technological University (NTU). NTU has been ranked the world's top young university for six consecutive years in the Quacquarelli Symonds (QS) World University Rankings. Virtual Reality (VR) research began at NTU in the late 1990s. NTU's colleges, schools, institutes, and centers have all contributed to the excellence of its VR research. This article briefly describes the VR research directions and activities at NTU.
VR and AR in human performance research: An NUS experience

DOI:10.3724/SP.J.2096-5796.20.00021

Published online: 2020-08-12
With the constant drive to improve efficiency and safety in workspaces and training in Singapore, there is a need to explore varying technologies and their capabilities to this end. The ability of Virtual Reality (VR) and Augmented Reality (AR) to create an immersive experience tying together the virtual and physical environments, coupled with information-filtering capabilities, makes it possible to introduce these technologies into the training process and workspace. This paper surveys current research trends, findings, and limitations of VR and AR with respect to their effect on human performance, specifically in Singapore, and our experience at the National University of Singapore (NUS).
VEGO: A novel design towards customizable and adjustable head-mounted display for VR

DOI:10.3724/SP.J.2096-5796.20.00034

Published online: 2020-07-30
Background
Virtual Reality (VR) technologies have advanced rapidly and have been applied to a wide spectrum of sectors in the past few years. VR provides an immersive experience by generating virtual images and presenting them to the user through a head-mounted display (HMD), a primary component of VR. An HMD normally contains several hardware components, e.g., a housing pack, a micro LCD display, a microcontroller, and optical lenses. Adjusting a VR HMD to accommodate the user's inter-pupil distance (IPD) and eye focus power is important for the user's experience with VR.
Methods
Although various methods have been developed for IPD and focus adjustment in VR HMDs, their increased cost and complexity impede users who wish to assemble their own VR HMD for various purposes, e.g., DIY teaching. In this paper, we present a novel design for building a customizable and adjustable VR HMD in a cost-effective manner. A modular design methodology is adopted, and the VR HMD can be easily printed with 3D printers. The design also features an adjustable IPD and a variable distance between the optical lens and the display, which helps mitigate the vergence-accommodation conflict.
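Why a variable lens-display distance matters can be seen from the thin-lens relation: moving the display relative to the lens moves the virtual image, and hence the accommodation distance. The focal length and spacing below are invented numbers for illustration, not the paper's design values:

```python
def virtual_image_distance(focal_length_m, display_distance_m):
    """Thin-lens relation 1/f = 1/d_o + 1/d_i solved for the image
    distance d_i. With the display inside the focal length (d_o < f),
    d_i is negative: a magnified virtual image on the display side."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / display_distance_m)

# Illustrative numbers: a 50 mm lens with the micro display 45 mm away
# places the virtual image 0.45 m in front of the lens, so sliding the
# display shifts where the eye must focus.
d_i = virtual_image_distance(0.050, 0.045)   # -0.45 m (virtual image)
```

Nudging the display a few millimetres therefore retunes the focus plane toward the distance of the virtual content, which is the adjustment this HMD design exposes.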
Results
A prototype of the customizable and adjustable VR HMD was successfully built with off-the-shelf components. A VR software program running on a Raspberry Pi board was developed to demonstrate the VR effects. A user study with 20 participants was conducted, yielding positive feedback on our novel design.
Conclusions
Modular design can be successfully applied to building VR HMDs with 3D printing. It helps promote the wide application of VR at affordable cost while offering flexibility and adjustability.
VR industrial applications: A Singapore perspective

DOI:10.3724/SP.J.2096-5796.20.00017

Published online: 2020-07-14
Virtual Reality (VR) has been around for a long time but has come into the spotlight only recently. From an industrial perspective, this article serves as a proverbial scalpel to dissect the different use cases and commercial applications of VR in Singapore. Before researching the Singapore market, we examine how VR has evolved. At the moment, the global annual budget for VR (and augmented reality) is on an upward trend, with the training sector leading growth in market value. VR in Singapore has also seen rapid development in recent years. We discuss some of the Singapore government's initiatives to promote the commercial adoption of VR for the digital economy of the nation. To address the mass adoption of VR, we present VRcollab's business solutions for the construction and building industry. The year 2020 is one of the most important in VR's history.
Development of augmented reality serious games with a vibrotactile feedback jacket

DOI:10.3724/SP.J.2096-5796.20.00022

Published online: 2020-07-01
Background
In the past few years, augmented reality (AR) has rapidly advanced and has been applied in different fields. One successful class of AR applications is immersive and interactive serious games, which can be used for education and learning purposes.
Methods
In this project, a prototype of an AR serious game is developed and demonstrated. Gamers utilize a head-mounted device and a vibrotactile feedback jacket to explore and interact with the AR serious game. Fourteen vibration actuators are embedded in the vibrotactile feedback jacket to generate an immersive AR experience. These vibration actuators are triggered in accordance with the designed game scripts. Various vibration patterns and intensity levels are synthesized in different game scenes. This article presents the details of the entire software development of the AR serious game, including game scripts, game scenes with AR effects design, signal processing flow, behavior design, and communication configuration. Graphics computations are processed using the graphics processing unit in the system.
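A vibration pattern of the kind described, per-actuator intensities triggered by game events, can be represented as a list of frames sent to the jacket. The actuator indices, frame format, and intensity scale below are illustrative assumptions, not the project's actual protocol:

```python
ACTUATOR_COUNT = 14  # as in the jacket described above

def pulse_pattern(actuators, intensity, repeats):
    """Build a frame list for the jacket: each frame maps an actuator
    index (0-13) to a PWM-style intensity in [0, 255], alternating an
    'on' frame with an 'off' frame. Format is illustrative only."""
    assert all(0 <= a < ACTUATOR_COUNT for a in actuators)
    on = {a: intensity for a in actuators}
    off = {a: 0 for a in actuators}
    return [on, off] * repeats

# An "impact on the chest" game event might drive four chest-mounted
# actuators (hypothetical indices 0-3) with three strong pulses.
chest = [0, 1, 2, 3]
frames = pulse_pattern(chest, intensity=220, repeats=3)
```

A game script then maps each scene event to such a pattern, and the communication layer streams the frames to the jacket's microcontroller at a fixed rate.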
Results/Conclusions
The performance of the AR serious game prototype is evaluated and analyzed. The computation loads and resource utilization of normal game scenes and heavy computation scenes are compared. With 14 vibration actuators placed at different body positions, various vibration patterns and intensity levels can be generated by the vibrotactile feedback jacket, providing different real-world feedback. The prototype of this AR serious game can be valuable in building large-scale AR or virtual reality educational and entertainment games. Possible future improvements of the proposed prototype are also discussed in this article.
A survey on monocular 3D human pose estimation

DOI:10.3724/SP.J.2096-5796.20.00023

Published online: 2020-06-18
Recovering human pose from RGB images and videos has drawn increasing attention in recent years owing to its minimal sensor requirements and applicability in diverse fields such as human-computer interaction, robotics, video analytics, and augmented reality. Although a large amount of work has been devoted to this field, 3D human pose estimation based on monocular images or videos remains a very challenging task due to a variety of difficulties such as depth ambiguity, occlusion, background clutter, and lack of training data. In this survey, we summarize recent advances in monocular 3D human pose estimation. We provide a general taxonomy to cover existing approaches and analyze their capabilities and limitations. We also present a summary of extensively used datasets and metrics, and provide a quantitative comparison of some representative methods. Finally, we conclude with a discussion on realistic challenges and open problems for future research directions.
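Among the metrics such surveys summarize, the mean per-joint position error (MPJPE) is the field's standard accuracy measure for 3D pose estimation; a minimal sketch, using the 17-joint skeleton of the Human3.6M convention as an example:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: the average Euclidean distance
    between predicted and ground-truth 3D joints (typically reported
    in millimetres). pred, gt: arrays of shape (num_joints, 3)."""
    return np.linalg.norm(pred - gt, axis=1).mean()

gt = np.zeros((17, 3))                  # 17-joint skeleton (Human3.6M)
pred = gt + np.array([3.0, 0.0, 4.0])   # every joint off by a 3-4-5 vector
err = mpjpe(pred, gt)                   # 5.0
```

Papers often also report a Procrustes-aligned variant (PA-MPJPE) that removes global rotation, translation, and scale before measuring the error.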
A multichannel human-swarm robot interaction system in augmented reality

DOI:10.3724/SP.J.2096-5796.20.00014

Published online: 2020-05-29
Background
The growing number of robots has created new requirements for human-robot interaction. One problem in human-swarm robot interaction is how to naturally achieve efficient and accurate interaction between humans and swarm robot systems. To address this, this paper proposes a new type of natural human-swarm interaction system.
Methods
Through cooperation between a three-dimensional (3D) gesture interaction channel and a natural language instruction channel, natural and efficient interaction between a human and swarm robots is achieved.
Results
First, a 3D lasso technique enables batch picking of swarm robots through oriented bounding boxes. Second, control instruction labels for swarm-oriented robots are defined; each label is integrated with the 3D gesture and natural language channels through instruction label filling. Finally, natural language instructions are understood by a text classifier based on the maximum entropy model. A head-mounted augmented reality display device serves as the visual feedback channel.
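The oriented-bounding-box batch-picking step can be sketched as a point-in-OBB membership test over the robots' positions. The interface below is illustrative, not the system's actual API:

```python
import numpy as np

def robots_in_obb(positions, center, axes, half_extents):
    """Return indices of robots whose positions fall inside an oriented
    bounding box. `axes` is a 3x3 matrix whose rows are the box's unit
    axes; `half_extents` are the box half-sizes along those axes."""
    local = (positions - center) @ axes.T   # project into box coordinates
    inside = np.all(np.abs(local) <= half_extents, axis=1)
    return np.nonzero(inside)[0]

# Three robots; the lasso gesture yields an axis-aligned unit box here.
robots = np.array([[0.5, 0.2, 0.0], [3.0, 0.0, 0.0], [-0.4, -0.1, 0.2]])
picked = robots_in_obb(robots, center=np.zeros(3), axes=np.eye(3),
                       half_extents=np.array([1.0, 1.0, 1.0]))
```

In the described system, a gesture-drawn lasso would define the box's center, orientation, and extents, and the selected indices are then bound to the filled instruction label.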
Conclusions
The robot-selection experiments verify the feasibility and usability of the system.
Object registration using an RGB-D camera for complex product augmented assembly guidance

DOI:10.3724/SP.J.2096-5796.19.00006

Published online: 2020-05-19
Background
Augmented assembly guidance aims to help users complete assembly operations more efficiently and quickly through augmented reality technology, overcoming the limitations of traditional assembly guidance, which is monotonous in content and presentation. Object registration is one of the key technologies in the augmented assembly guidance process, as it determines the position and orientation of virtual guidance information in the real assembly environment.
Methods
This paper presents an object registration method based on an RGB-D camera, which combines the Lucas-Kanade (LK) optical flow algorithm and the Iterative Closest Point (ICP) algorithm. An augmented assembly guidance system for complex products is built using this method. For comparison, we also implemented object registration based on the augmented reality SDK Vuforia.
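The ICP half of such a registration pipeline alternates nearest-neighbour matching with a closed-form rigid-transform solve (the Kabsch/SVD step). A minimal single-iteration sketch on point arrays, not the paper's full LK-plus-ICP implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best rigid transform (R, t)
    in closed form via SVD (Kabsch)."""
    # Nearest-neighbour correspondences (brute force).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: align the centred point sets.
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T       # reflection-free rotation
    t = mc - R @ sc
    return R, t

# A small pure translation is recovered in one step when the
# nearest-neighbour correspondences are correct.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = src + np.array([0.1, 0.05, -0.02])
R, t = icp_step(src, dst)
```

In a full pipeline, this step repeats until the mean matched distance converges; the LK optical flow stage supplies the frame-to-frame motion that keeps ICP's initial guess close enough to converge.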
Results
An engine model and a complex piece of weapon-cabin equipment are taken as cases to verify this work. The results show that the proposed registration method is more accurate and stable than the Vuforia-based one, and that the augmented assembly guidance system built on it greatly reduces the user's assembly time compared with traditional assembly.
Conclusions
Therefore, we conclude that the proposed object registration method is well suited to augmented assembly guidance systems and can considerably enhance assembly efficiency.