
2022, Vol. 4, No. 2. Publish Date: 2022-04

Intelligent interaction in mixed reality

2022, 4(2): 1-2




Navigation in virtual and real environment using brain-computer interface: a progress report

2022, 4(2): 89-114


A brain-computer interface (BCI) bypasses the peripheral nervous system and communicates directly with surrounding devices. Navigation technology using BCI has developed from exploring prototype paradigms in the virtual environment (VE) to accurately executing the operator's locomotion intention through a powered wheelchair or mobile robot in a real environment. This paper summarizes BCI navigation applications in both real and virtual environments over the past 20 years. Horizontal comparisons were conducted between the various paradigms applied to BCI and their unique signal-processing methods. Owing to the shift in control mode from synchronous to asynchronous, the development trend of navigation applications in the VE is also reviewed. The contrast between high-level and low-level commands serves as the main line to review the two major applications of BCI navigation in real environments: mobile robots and unmanned aerial vehicles (UAVs). Finally, applications of BCI navigation in scenarios outside the laboratory, research challenges including human factors in navigation interaction design, and the feasibility of hybrid BCI for navigation are discussed in detail.


Design and evaluation of window management operations in AR headset+smartphone interface

2022, 4(2): 115-131


The combination of an augmented reality (AR) headset and a smartphone can simultaneously provide a wider display and precise touch input, and could redefine the way we use applications today. However, users are deprived of these benefits because the two devices operate independently, with no intuitive and direct interactions between them. In this study, we conduct a formative investigation to understand the window management requirements and interaction preferences of using an AR headset and a smartphone simultaneously, and report the insights gained. In addition, we introduce an example vocabulary of window management operations for the AR headset + smartphone interface, which allows users to manipulate windows in a virtual space and shift windows between devices efficiently and seamlessly.
Designing Generation Y interactions: The case of YPhone

2022, 4(2): 132-152


With an increasing number of products becoming digital, mobile, and networked, attention to the quality of interactions with such products is becoming more relevant. Although the quality of such interactions has been addressed in several scientific studies, little attention has been paid to their implementation in real-life and everyday contexts. This paper describes the development of a novel office phone prototype, called YPhone, which demonstrates the application of a specific set of Generation Y interaction qualities (instantaneous, playful, collaborative, expressive, responsive, and flexible) in the context of office work. The working prototype supports office workers in experiencing new types of interactions, and was put into practice through a series of evaluations. We found that the playful, expressive, responsive, and flexible qualities elicit greater trust than the instantaneous and collaborative qualities. Such qualities can be grouped, although the grouping may differ across evaluated products, and researchers must be cautious about generalizations. The overall evaluation was positive, with some valuable suggestions regarding user interactions and features.
Motivation effect of animated pedagogical agent's personality and feedback strategy types on learning in virtual training environment

2022, 4(2): 153-172


The personality and feedback of an animated pedagogical agent (APA) are vital social-emotional features that render the agent perceptually believable. Their effects on learning during virtual training need to be examined.
In this paper, an explanation model is proposed to clarify the underlying mechanism by which these two features affect learners, and two studies were conducted to investigate it. In Study 1, the effect of the APA's personality type and feedback strategy on flow experience and performance was examined, revealing significant effects of the feedback strategy on flow and performance and a marginally significant effect of the personality type on performance. To explore the mechanism behind these effects, a theoretical model is proposed that distinguishes between intrinsic and extrinsic motivation effects. In Study 2, the model was evaluated: the APA's personality type significantly influenced factors on the path of the extrinsic motivation effect rather than the intrinsic one, whereas the feedback strategy affected factors on the path of the intrinsic motivation effect. These results validate the proposed model. Distinguishing the two motivation effects is necessary to understand the respective effects of an APA's personality and feedback features on learning experiences and outcomes.
EyeGaze: Hybrid eye tracking approach for handheld mobile devices

2022, 4(2): 173-188


Eye-tracking technology for mobile devices has made significant progress. However, owing to limited computing capacity and complex usage contexts, conventional image-feature-based methods cannot extract features accurately, which degrades performance. This study proposes a hybrid approach that combines appearance-based and feature-based eye-tracking methods. Face and eye-region detection is performed to obtain inputs for the appearance model, which detects the feature points. The feature points are then used to generate feature vectors, such as the corner center-pupil center vector, from which the gaze-fixation coordinates are calculated. To identify the feature vectors with the best performance, we compared different vectors under varying image resolutions and illumination conditions; an average gaze-fixation accuracy of 1.93° of visual angle was achieved at an image resolution of 96 × 48 pixels with light sources illuminating the eye from the front. Compared with existing methods, the proposed method improves gaze-fixation accuracy and usability.
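The corner center-pupil center (CC-PC) feature vector mentioned in this abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the affine calibration mapping and the function names (`ccpc_vector`, `fit_gaze_mapping`, `estimate_gaze`) are illustrative assumptions, and real systems may use a polynomial mapping instead.

```python
import numpy as np

def ccpc_vector(corner_center, pupil_center):
    # Feature vector pointing from the eye-corner center to the pupil
    # center, both given in image pixel coordinates.
    return np.asarray(pupil_center, float) - np.asarray(corner_center, float)

def fit_gaze_mapping(vectors, screen_points):
    # Fit an affine map screen = [vx, vy, 1] @ A (A has shape (3, 2))
    # from calibration samples via least squares.
    X = np.hstack([vectors, np.ones((len(vectors), 1))])
    A, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
    return A

def estimate_gaze(A, vector):
    # Map a single CC-PC vector to an on-screen gaze coordinate.
    return np.hstack([vector, 1.0]) @ A

# Synthetic calibration: a known affine map stands in for a real
# multi-point calibration session (hypothetical numbers).
true_A = np.array([[30.0, 0.0],
                   [0.0, 25.0],
                   [640.0, 360.0]])
rng = np.random.default_rng(0)
vecs = rng.uniform(-10, 10, size=(9, 2))
targets = np.hstack([vecs, np.ones((9, 1))]) @ true_A

A = fit_gaze_mapping(vecs, targets)
gaze = estimate_gaze(A, ccpc_vector((100.0, 60.0), (102.0, 59.0)))
# CC-PC vector is (2, -1), so gaze ≈ [700.0, 335.0]
```

With noiseless synthetic data the least-squares fit recovers the affine map exactly; a real pipeline would collect per-user calibration targets and handle detection noise in the corner and pupil positions.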