
Article In Press


Emotional dialog generation via multiple classifiers based on a generative adversarial network


Available Online: 2020-08-12

Human-machine dialog generation is an essential topic of research in the field of natural language processing. Generating high-quality, diverse, fluent, and emotional conversation is a challenging task. Based on continuing advancements in artificial intelligence and deep learning, new methods have come to the forefront in recent times. In particular, the end-to-end neural network model provides an extensible conversation generation framework that has the potential to enable machines to understand semantics and automatically generate responses. However, neural network models come with their own set of questions and challenges. The basic conversational model framework tends to produce universal, meaningless, and relatively "safe" answers.
Based on generative adversarial networks (GANs), a new emotional dialog generation framework called EMC-GAN is proposed in this study to address the task of emotional dialog generation. The proposed model comprises a generative and three discriminative models. The generator is based on the basic sequence-to-sequence (Seq2Seq) dialog generation model, and the aggregate discriminative model for the overall framework consists of a basic discriminative model, an emotion discriminative model, and a fluency discriminative model. The basic discriminative model distinguishes generated fake sentences from real sentences in the training corpus. The emotion discriminative model evaluates whether the emotion conveyed via the generated dialog agrees with a pre-specified emotion, and directs the generative model to generate dialogs that correspond to the category of the pre-specified emotion. Finally, the fluency discriminative model assigns a score to the fluency of the generated dialog and guides the generator to produce more fluent sentences.
Based on the experimental results, this study confirms the superiority of the proposed model over similar existing models with respect to emotional accuracy, fluency, and consistency.
The proposed EMC-GAN model is capable of generating consistent, smooth, and fluent dialog that conveys pre-specified emotions, and exhibits better performance with respect to emotional accuracy, consistency, and fluency compared to its competitors.
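The three discriminator scores described above must be combined into a single training signal for the generator. A minimal sketch of one plausible aggregation scheme (the weights, function name, and convex combination are illustrative assumptions; the abstract does not specify how the scores are combined):

```python
def aggregate_reward(d_basic, d_emotion, d_fluency, weights=(0.4, 0.3, 0.3)):
    """Combine the three discriminator scores (each assumed in [0, 1])
    into a single reward signal for the Seq2Seq generator.

    d_basic   : real-vs-fake score from the basic discriminator
    d_emotion : agreement with the pre-specified emotion category
    d_fluency : fluency score of the generated sentence

    The weighting scheme is a stand-in; EMC-GAN's actual combination
    rule is not given in the abstract.
    """
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights should sum to 1"
    return w1 * d_basic + w2 * d_emotion + w3 * d_fluency
```

In a policy-gradient style GAN for text, a reward of this form would scale the generator's log-likelihood gradient for each generated sentence.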
Virtual reality research and development in NTU


Available Online: 2020-08-12

In 1981, Nanyang Technological Institute was established in Singapore to train engineers and accountants to keep pace with the country's fast-growing economy. In 1991, the institute was upgraded to Nanyang Technological University (NTU). NTU has been ranked the world's top young university for six consecutive years in the Quacquarelli Symonds (QS) World University Rankings. Virtual Reality (VR) research began at NTU in the late 1990s, and NTU's colleges, schools, institutes, and centers have all contributed to the excellence of its VR research. This article briefly describes the VR research directions and activities at NTU.
VR and AR in human performance research: An NUS experience


Available Online: 2020-08-12

With Singapore's mindset of constantly improving efficiency and safety in the workplace and in training, there is a need to explore various technologies and their capabilities to fulfil this need. The ability of Virtual Reality (VR) and Augmented Reality (AR) to create an immersive experience tying together the virtual and physical environments, coupled with their information-filtering capabilities, opens the possibility of introducing these technologies into training processes and workspaces. This paper surveys current research trends, findings, and limitations of VR and AR with respect to their effects on human performance, specifically in Singapore, along with our experience at the National University of Singapore (NUS).
VEGO: A novel design towards customizable and adjustable head-mounted display for VR


Available Online: 2020-07-30

Virtual Reality (VR) technologies have advanced rapidly and have been applied to a wide spectrum of sectors in the past few years. VR provides an immersive experience by generating virtual images and presenting them to the user through a head-mounted display (HMD), a primary component of any VR system. An HMD typically contains a set of hardware components, e.g., a housing pack, a micro LCD display, a microcontroller, and optical lenses. Adjusting a VR HMD to accommodate the user's inter-pupillary distance (IPD) and eye focus power is important for the user's VR experience.
Although various methods have been developed for IPD and focus adjustment in VR HMDs, their cost and complexity put them out of reach of users who wish to assemble their own VR HMD for purposes such as DIY teaching. In this paper, we present a novel design for building a customizable and adjustable VR HMD in a cost-effective manner. A modular design methodology is adopted, and the VR HMD can easily be printed with 3D printers. The design also features adjustable IPD and a variable distance between the optical lens and the display, which helps mitigate the vergence-accommodation conflict.
A prototype of the customizable and adjustable VR HMD was successfully built with off-the-shelf components. A VR software program running on a Raspberry Pi board was developed to demonstrate the VR effects. A user study with 20 participants was conducted, yielding positive feedback on the novel design.
Modular design can be successfully applied to building VR HMDs with 3D printing. It helps promote the wide application of VR at affordable cost while offering flexibility and adjustability.
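The adjustable lens-to-display distance relates to the vergence-accommodation conflict through basic thin-lens optics: moving the display relative to the lens changes the distance of the virtual image the eye must accommodate to. A hedged sketch of that relationship (the focal length and distances below are illustrative numbers, not the paper's optical parameters):

```python
def virtual_image_distance(focal_mm, screen_mm):
    """Thin-lens estimate of the perceived virtual-image distance for an
    HMD magnifier lens: 1/f = 1/d_o + 1/d_i, with the display placed
    inside the focal length so the image is virtual (d_i < 0).

    focal_mm  : focal length of the lens, in mm
    screen_mm : lens-to-display distance, in mm (must be < focal_mm)

    Returns the apparent distance of the virtual image from the lens,
    in mm, as a positive number.
    """
    if screen_mm >= focal_mm:
        raise ValueError("display must sit inside the focal length")
    d_i = 1.0 / (1.0 / focal_mm - 1.0 / screen_mm)  # negative: virtual image
    return -d_i
```

For example, with a 50 mm lens, moving the display from 45 mm toward the lens pulls the virtual image closer, which is the mechanism an adjustable lens-to-display distance exploits to better match vergence and accommodation.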
Cloud-to-end rendering and storage management for virtual reality in experimental education


Available Online: 2020-07-28

Real-time 3D rendering and interaction is important for virtual reality (VR) experimental education. Unfortunately, standard end-computing methods prohibitively escalate computational costs. Thus, reducing or distributing these requirements needs urgent attention, especially in light of the COVID-19 pandemic.
In this study, we design a cloud-to-end rendering and storage system for VR experimental education comprising two models: background and interactive. The cloud server renders items in the background and sends the results to an end terminal in a video stream. Interactive models are then lightweight-rendered and blended at the end terminal. An improved 3D warping and hole-filling algorithm is also proposed to improve image quality when the user’s viewpoint changes.
We build three scenes to test image quality and network latency. The results show that our system can render 3D experimental education scenes with higher image quality and lower latency than other cloud rendering systems.
Our study is the first to use cloud and lightweight rendering for VR experimental education. The results demonstrate that our system provides a good rendering experience without excessive computational cost.
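The 3D warping step mentioned above is, at its core, a depth-based reprojection from the rendered viewpoint into the user's new viewpoint. A minimal sketch of that standard reprojection for a single pixel (the paper's improved warping and its hole-filling pass are not reproduced here; the function name and parametrization are ours):

```python
import numpy as np

def warp_point(px, depth, K, R, t):
    """Forward 3D warping of one pixel: back-project a pixel with known
    depth into 3D, then reproject it into a new viewpoint.

    px    : (u, v) pixel coordinates in the source view
    depth : depth of the pixel along the source camera's z-axis
    K     : 3x3 camera intrinsics (assumed shared by both views)
    R, t  : rotation and translation from the source to the target view

    Hole filling handles pixels that become disoccluded after warping
    and is a separate pass not shown here.
    """
    u, v = px
    p3d = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project
    p_new = K @ (R @ p3d + t)                                  # reproject
    return p_new[:2] / p_new[2]                                # dehomogenize
```

Applying this per pixel to the cloud-rendered depth image produces the novel view shown at the end terminal while the next video frame is in flight.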
Virtual simulation experiment of the design and manufacture of a beer bottle-defect detection system


Available Online: 2020-07-28

Machine learning-based beer bottle-defect detection is a complex technology that runs automatically; however, it consumes considerable memory, is expensive, and poses a certain danger when training novice operators. Moreover, some topics, such as digital image processing and computer vision, are difficult to teach through experimental lectures. Virtual simulation experiments, by contrast, have been widely used to good effect in education. A virtual simulation of the design and manufacture of a beer bottle-defect detection system will not only help students increase their image-processing knowledge but also improve their ability to solve complex engineering problems and design complex systems.
The hardware models for the experiment (camera, light source, conveyor belt, power supply, manipulator, and computer) were built using the 3DS MAX modeling and animation software. The Unreal Engine 4 (UE4) game engine was utilized to build a virtual design room, design the interactive operations, and simulate the system operation.
The results showed that the virtual simulation system received much better experimental feedback, which facilitated the design and manufacture of a beer bottle-defect detection system. The specialized functional modules of the detection system (a basic experimental operation menu, power switch, image shooting, image processing, and manipulator grasping) allowed students, acting as virtual designers, to easily build a detection system by retrieving basic models from the model library and assembling the beer-bottle transportation, image shooting, image processing, defect detection, and defective-product removal steps. The virtual simulation experiment was completed with image processing as its main body.
By focusing mainly on bottle mouth-defect detection, the detection system dedicates more attention to the user and the task. With more detailed tasks available, the virtual system should eventually yield much better results as a training tool for image-processing education. In addition, a novel visual perception-thinking pedagogical framework enables better comprehension than the traditional lecture-tutorial style.
An intelligent navigation experimental system based on multi-mode fusion


Available Online: 2020-07-24

At present, most experimental teaching systems lack operator guidance, so users often do not know what to do during an experiment; this increases the user load and decreases students' learning efficiency. To solve the problem of insufficient system interactivity and guidance, an experimental navigation system based on multi-mode fusion is proposed in this paper. The system first obtains user information by sensing through hardware devices, then intelligently perceives the user's intention and the progress of the experiment from the information acquired, and finally carries out multi-modal intelligent navigation for the user. As an innovative aspect of this study, an intelligent multi-mode navigation system guides users in conducting experiments, thereby reducing the user load and enabling users to complete their experiments effectively. The results prove that this system can guide users in completing their experiments, effectively reduce the user load during the interaction process, and improve efficiency.
Multimodal interaction design and application in augmented reality for chemical experiment


Available Online: 2020-07-24

Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. First, most researchers use cards to operate experiments, and a large number of cards causes difficulty and inconvenience for users. Second, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' sense of realistic interaction. To solve these problems, we propose a Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in AR. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment.
The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained with a convolutional neural network to recognize gestures in AR, and trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between the real hand and the virtual model to achieve the fusion of gestures and the virtual model.
The average accuracy rate of gesture recognition was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL), and the overall operation load of ARCL is thus reduced by 29.42%, in comparison to traditional simulation virtual experiments.
We achieve real-time fusion of gestures, virtual models, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' sense of realistic operation and their interaction efficiency.
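The coordinate mapping between the real hand and the virtual model described above amounts to a calibrated transform from camera space into the model's frame, plus a trigger for the five-finger grasp. A minimal sketch (the similarity-transform parametrization and the distance-based grasp check are illustrative stand-ins; the paper uses a CNN gesture classifier and does not publish its mapping):

```python
import numpy as np

def map_hand_to_model(hand_pos, scale, R, t):
    """Map a tracked hand position from camera coordinates into the
    virtual model's frame via a similarity transform (rotation R,
    uniform scale, translation t), obtained from a one-time calibration."""
    return scale * (R @ np.asarray(hand_pos, dtype=float)) + t

def is_grasping(finger_tips, palm, radius=0.06):
    """Crude five-finger-grasp check: all five fingertips lie within
    `radius` metres of the palm centre. An illustrative stand-in for
    the trained CNN classifier; in ARGEV a detected grasp would then
    trigger the jacketless vibration feedback on the equipment."""
    palm = np.asarray(palm, dtype=float)
    return all(np.linalg.norm(np.asarray(f, dtype=float) - palm) < radius
               for f in finger_tips)
```

Once a grasp is detected, the mapped hand position can be used to attach the virtual beaker or flask to the hand each frame.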
Interaction design for paediatric emergency VR training


Available Online: 2020-07-21

Virtual reality (VR) in healthcare training has seen increased adoption and support, but efforts are still required to mitigate usability concerns.
We conducted a usability study of an in-use emergency medicine VR training application, which runs on commercially available VR hardware with a standard interaction design. Nine users without prior VR experience but with relevant medical expertise completed two simulation scenarios, for a total of 18 recorded sessions. They completed NASA Task Load Index and System Usability Scale questionnaires after each session, and their performance was recorded to track user errors.
Our results showed a medium (and potentially optimal) workload and an above-average System Usability Scale score. There was significant improvement in several factors between users' first and second sessions, notably an increased Performance rating. However, the user errors correlating most strongly with usability were not directly tied to interaction design but to a limited 'possibility space'. Suggestions for closing this 'gulf of execution' are presented, including voice control and hand tracking, which have only now become feasible for this commercial product with the availability of the Oculus Quest headset. Moreover, wider implications for VR medical training are outlined, and potential next steps toward a standardized design are identified.
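The System Usability Scale score reported above follows the standard public SUS scoring rule, which can be sketched as follows (the formula is the standard one; only the helper name is ours):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert
    responses (item 1 is responses[0]).

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to 0-100.
    A score around 68 is conventionally taken as average usability.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

Averaging per-session scores across the 18 recorded sessions would then give the study-level SUS figure.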
Thermal perception method of virtual chemistry experiments


Available Online: 2020-07-15

To address the difficulty of conveying temperature in virtual chemistry experiments, we propose a temperature-sensing simulation method for virtual chemistry experiments.
We construct a virtual chemistry experiment temperature simulation platform, on top of which a wearable temperature generation device is developed. Typical middle-school virtual experiments, the dilution of concentrated sulfuric acid and the dissolution of ammonium nitrate, are conducted to verify the actual effect of the device.
The platform is capable of presenting near real-world experimental situations. The performance of the device not only meets the temperature-sensing characteristics of human skin but also matches the temperature changes of virtual chemistry experiments in real time.
It is demonstrated that this temperature-sensing simulation method can represent exothermic and endothermic chemistry experiments, which helps students understand the principles of thermal energy transformation in chemical reactions while effectively avoiding the dangers posed in traditional chemistry experiment teaching. Although the method is not yet convenient enough for users to operate, it enhances the immersion of virtual chemistry experiments.
VR industrial applications: A Singapore perspective


Available Online: 2020-07-14

Virtual Reality (VR) has been around for a long time but has come into the spotlight only recently. From an industrial perspective, this article serves as a proverbial scalpel to dissect the different use cases and commercial applications of VR in Singapore. Before examining the Singapore market, we review how VR has evolved. At the moment, the global annual budget for VR (and augmented reality) is on an upward trend, with the training sector leading growth in market value. VR in Singapore has also seen rapid development in recent years. We discuss some of the Singapore government's initiatives to promote the commercial adoption of VR for the nation's digital economy. To address the mass adoption of VR, we present VRcollab's business solutions for the construction and building industry. 2020 is one of the most important years for VR in history.
Virtual & augmented reality for biological microscope in experiment education


Available Online: 2020-07-07

Mixed-reality technologies, including virtual reality (VR) and augmented reality (AR), are considered promising tools for science teaching and learning processes that could foster positive emotions, motivate autonomous learning, and improve learning outcomes.
In this study, a technology-aided biological microscope learning system based on VR/AR is presented. The structure of the microscope is described by a detailed three-dimensional (3D) model: each component is represented with its topological interrelationships, and the associations among components are established. The interactive behavior of the model was specified, and a standard operating guide was compiled. The motion control of components was simulated based on collision detection. Combining immersive VR equipment and AR technology, we developed a virtual microscope subsystem and a mobile virtual microscope guidance system.
The system consisted of a VR subsystem and an AR subsystem. The focus of the VR subsystem was to simulate operating the microscope and associated interactive behaviors that allowed users to observe and operate the components of the 3D microscope model by means of natural interactions in an immersive scenario. The AR subsystem allowed participants to use a mobile terminal that took a picture of a microscope from a textbook and then displayed the structure and functions of the instrument, as well as the relevant operating guidance. This flexibly allowed students to use the system before or after class without time and space constraints. The system allowed users to switch between the VR and AR subsystems.
The system is useful for helping learners (especially K-12 students) to recognize a microscope's structure and grasp the required operational skills by simulating operations using an interactive process. In the future, such technology-assisted education would be a successful learning platform in an open learning space.
Development of augmented reality serious games with a vibrotactile feedback jacket


Available Online: 2020-07-01

In the past few years, augmented reality (AR) has rapidly advanced and has been applied in many different fields. One successful AR application is the immersive and interactive serious game, which can be used for education and learning purposes.
In this project, a prototype AR serious game is developed and demonstrated. Gamers use a head-mounted device and a vibrotactile feedback jacket to explore and interact with the AR serious game. Fourteen vibration actuators are embedded in the jacket to generate an immersive AR experience. These actuators are triggered in accordance with the designed game scripts, and various vibration patterns and intensity levels are synthesized in different game scenes. This article presents the details of the entire software development of the AR serious game, including the game scripts, game scenes with AR effects, signal processing flow, behavior design, and communication configuration. Graphics computations are processed on the system's graphics processing unit.
The performance of the AR serious game prototype is evaluated and analyzed. The computation loads and resource utilization of normal game scenes and heavy computation scenes are compared. With 14 vibration actuators placed at different body positions, various vibration patterns and intensity levels can be generated by the vibrotactile feedback jacket, providing different real-world feedback. The prototype of this AR serious game can be valuable in building large-scale AR or virtual reality educational and entertainment games. Possible future improvements of the proposed prototype are also discussed in this article.
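The mapping from scripted game events to actuator patterns can be organized as a simple lookup of per-actuator intensity frames. An illustrative sketch of that idea (the event names, actuator indices, intensities, and durations below are invented for illustration; the article does not publish its game scripts):

```python
NUM_ACTUATORS = 14  # actuators embedded in the vibrotactile jacket

# event name -> list of (actuator_indices, intensity 0-255, duration_ms)
PATTERNS = {
    "impact_front": [([0, 1, 2, 3], 220, 120)],
    "heartbeat":    [([6, 7], 180, 80), ([6, 7], 0, 200), ([6, 7], 180, 80)],
}

def frames_for(event):
    """Expand a named game event into per-actuator intensity frames
    that a firmware loop could stream to the jacket over its
    communication link."""
    frames = []
    for indices, intensity, duration_ms in PATTERNS[event]:
        vec = [0] * NUM_ACTUATORS
        for i in indices:
            vec[i] = intensity
        frames.append((vec, duration_ms))
    return frames
```

Different game scenes would register their own entries in the pattern table, giving each scene its distinct vibration patterns and intensity levels.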
A survey on monocular 3D human pose estimation


Available Online: 2020-06-18

Recovering human pose from RGB images and videos has drawn increasing attention in recent years owing to minimal sensor requirements and applicability in diverse fields such as human-computer interaction, robotics, video analytics, and augmented reality. Although a large amount of work has been devoted to this field, 3D human pose estimation based on monocular images or videos remains a very challenging task due to a variety of difficulties such as depth ambiguity, occlusion, background clutter, and lack of training data. In this survey, we summarize recent advances in monocular 3D human pose estimation. We provide a general taxonomy to cover existing approaches and analyze their capabilities and limitations. We also present a summary of extensively used datasets and metrics, and provide a quantitative comparison of some representative methods. Finally, we conclude with a discussion on realistic challenges and open problems for future research directions.
A multichannel human-swarm robot interaction system in augmented reality


Available Online: 2020-05-29

The deployment of large numbers of robots has created new requirements for human-robot interaction. One problem in human-swarm robot interaction is how to naturally achieve efficient and accurate interaction between humans and swarm robot systems. To address this, this paper proposes a new type of natural human-swarm interaction system.
Through the cooperation of a three-dimensional (3D) gesture interaction channel and a natural language instruction channel, natural and efficient interaction between a human and swarm robots is achieved.
First, a 3D lasso technique realizes batch selection of swarm robots through oriented bounding boxes. Second, control instruction labels for swarm-oriented robots are defined; each instruction label is integrated with the 3D gesture and natural language channels through instruction label filling. Finally, natural language instructions are understood by a text classifier based on the maximum entropy model. A head-mounted augmented reality display device serves as the visual feedback channel.
Experiments on selecting robots verify the feasibility and usability of the system.
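The maximum entropy text classifier mentioned above is equivalent to multinomial logistic regression over text features. A self-contained sketch under that equivalence (the instruction texts, labels, bag-of-words features, and training hyperparameters below are illustrative, not the paper's):

```python
import numpy as np

def featurize(texts, vocab):
    """Bag-of-words counts over a fixed vocabulary."""
    X = np.zeros((len(texts), len(vocab)))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            if word in vocab:
                X[i, vocab[word]] += 1
    return X

def train_maxent(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic regression (equivalently, a maximum entropy
    classifier) trained with batch gradient descent on the softmax
    cross-entropy. Weights only, no bias term, for brevity."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
        W += lr * X.T @ (Y - P) / len(y)       # gradient ascent step
    return W

def predict(W, X):
    return (X @ W).argmax(axis=1)
```

In the paper's system, the classes would be the predefined control instruction labels; here class 0 stands for a hypothetical "move" instruction and class 1 for "stop".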
Object registration using an RGB-D camera for complex product augmented assembly guidance


Available Online: 2020-05-19

Augmented assembly guidance aims to help users complete assembly operations more efficiently and quickly through augmented reality technology, breaking through the limitations of traditional assembly guidance, which is monotonous in content and presentation. Object registration is one of the key technologies in the augmented assembly guidance process, as it determines the location and orientation of virtual assembly guidance information in the real assembly environment.
This paper presents an object registration method based on an RGB-D camera that combines the Lucas-Kanade (LK) optical flow algorithm with the Iterative Closest Point (ICP) algorithm. An augmented assembly guidance system for complex products is built using this method. Meanwhile, to compare the effectiveness of the proposed method, we also implemented object registration based on the augmented reality SDK Vuforia.
An engine model and a complex weapon-cabin device are taken as cases to verify this work. The results show that the registration method proposed in this paper is more accurate and stable than the Vuforia-based method, and that the augmented assembly guidance system built on it greatly reduces the user's assembly time compared with traditional assembly.
Therefore, we conclude that the object registration method presented in this paper is well suited to augmented assembly guidance systems and can considerably enhance assembly efficiency.
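The ICP half of the registration pipeline above repeatedly solves a rigid-alignment subproblem between matched point sets from the RGB-D camera. A sketch of that closed-form SVD (Kabsch) step, which is the standard core of each ICP iteration and not specific to the paper's improved combination with LK optical flow:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (rows are corresponding 3D points), via the SVD-based closed form
    solved inside each ICP iteration.

    Full ICP alternates this step with re-estimating correspondences
    (nearest neighbours) until the alignment converges.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In the guidance system, the recovered R and t would place the virtual assembly instructions over the physical part each frame.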