

2022, 4(3): 1-3

Published Date: 2022-6-20    DOI: 10.3724/SP.J.2096-5796.2022.04.03


With the recent rapid development of 5G technology, the use of sensor networks, especially wireless sensor networks (WSNs), has boosted advances in augmented reality (AR), supporting decision making in AR environments. Such decision making requires artificial intelligence (AI) techniques capable of adapting to changes in AR environments, creating systems that evolve autonomously over time. It is therefore important to apply new information fusion techniques that allow information to be processed at both low and high levels to improve the accuracy of such systems.
AR is one of the key technologies that will facilitate a major paradigm shift in the way users interact with data, and it has recently been recognized as a viable solution to many critical needs and problems. AR can be used to visualize data from hundreds of sensors simultaneously, overlaying relevant and actionable information on the user's environment through a headset. WSNs under the umbrella of AI-5G bring intelligence to AR technology, making it much faster and supporting far greater data flow. With easier and more accessible use for a variety of functions beyond video gaming, widespread adoption seems likely. In summary, AR in the era of AI-5G is an exciting upcoming wave in which vast repositories of data will enable an AR lens on real-world scenarios, providing near-immediate insight at a previously unimaginable level of depth.
Similar to another recent special issue on "Intelligent interaction in mixed reality", this special issue comprises three survey papers and two technical papers focusing on the latest research findings in AR, AI-5G, and mobile AR technologies for various applications, with authors from six countries: South Korea, Pakistan, Iraq, Japan, Norway, and the USA. Starting with the first survey paper, "Perceptual quality assessment of panoramic stitched contents for immersive applications: a prospective survey", Hayat Ullah et al. state: "The recent advancements in the field of Virtual Reality (VR) and Augmented Reality (AR) have a substantial impact on modern-day technology by digitizing everything related to human life and opening the doors to the next-generation Software Technology (Soft Tech). VR and AR technology provide astonishing immersive contents with the help of high-quality stitched panoramic contents and 360° imagery that are widely used in the education, gaming, entertainment, and production sectors. The immersive quality of VR and AR contents is greatly dependent on the perceptual quality of panoramic or 360° images; in fact, a minor visual distortion can significantly degrade the overall quality. Thus, to ensure the quality of constructed panoramic contents for VR and AR applications, numerous Stitched Image Quality Assessment (SIQA) methods have been proposed to assess the quality of panoramic contents before use in VR and AR. In this survey, we provide a detailed overview of the SIQA literature and exclusively focus on objective SIQA methods presented to date. For better understanding, the objective SIQA methods are classified into two classes, namely Full-Reference SIQA and No-Reference SIQA approaches. Each class is further categorized into traditional and deep learning-based methods, and their performance on the SIQA task is examined.
Further, we shortlist the publicly available benchmark SIQA datasets and the evaluation metrics used for quality assessment of panoramic contents. Finally, we highlight the current challenges in this area based on the existing SIQA methods and suggest future research directions that need to be targeted for further improvement in the SIQA domain".
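As a toy illustration of the full-reference idea described in this survey, a stitched panorama can be scored against a reference image with pixel-wise PSNR. This is only a minimal sketch of the general full-reference paradigm, not a method from the paper; real SIQA metrics additionally model stitching-specific distortions such as ghosting and seam misalignment.

```python
import math

def psnr(reference, stitched, max_val=255.0):
    """Peak signal-to-noise ratio between flattened reference and stitched pixel lists."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, stitched)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no distortion at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# Tiny synthetic example: a "stitched" strip with a dark seam artifact.
ref = [100] * 16
seam = [100] * 15 + [0]  # one pixel dropped to black at the stitch boundary
assert psnr(ref, ref) == float("inf")
assert psnr(ref, seam) < psnr(ref, [100] * 16)  # seam distortion lowers the score
```

No-reference SIQA methods, by contrast, must estimate such degradations without access to `ref`, which is why the survey treats the two classes separately.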
In the second survey paper, titled "Serious games in science education: a systematic literature review", Mohib et al. critically discuss serious games in science education as follows: "Teaching science through computer games, simulations, and artificial intelligence (AI) is an increasingly active research field. To this end, we conducted a systematic literature review on serious games for science education to reveal research trends and patterns. We discussed the role of Virtual Reality (VR), AI, and Augmented Reality (AR) games in teaching science subjects such as physics. Specifically, we covered the research spanning 2011 to 2021, investigated country-wise concentration and the most common evaluation methods, and discussed the positive and negative aspects of serious games in science education, as well as attitudes towards the use of serious games in education in general".
The third survey, titled "Privacy-preserving deep learning techniques for wearable sensor-based big data applications", by Hamza et al., explains their contribution as follows: "Wearable technologies have the potential to become a valuable influence on human daily life, where they may enable observing the world in new ways, including, for example, through augmented reality (AR) applications. Wearable technology uses electronic devices that may be carried as accessories, worn as clothing, or even embedded in the user's body. Although the potential benefits of smart wearables are numerous, their extensive and continual usage creates several privacy concerns and tricky information security challenges. In this paper, we present a comprehensive survey of recent privacy-preserving big data analytics applications based on wearable sensors. We highlight the fundamental features of security and privacy for wearable device applications. Then, we examine the utilization of deep learning algorithms with cryptography and determine their usability for wearable sensors. We also present a case study on privacy-preserving machine learning techniques. Herein, we theoretically and empirically evaluate the privacy-preserving deep learning framework's performance. We explain the implementation details of a case study of a secure prediction service using the convolutional neural network (CNN) model and the Cheon-Kim-Kim-Song (CKKS) homomorphic encryption algorithm. Finally, we explore the obstacles and gaps in the deployment of practical real-world applications. Following this comprehensive overview, we identify the most important obstacles that must be overcome and discuss some interesting future research directions".
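The core idea behind such secure prediction services is that a server can compute on ciphertexts it cannot read. CKKS itself operates on packed vectors of approximate reals and is far too involved to reproduce here; the sketch below instead uses textbook Paillier encryption (a much simpler additively homomorphic scheme, with deliberately insecure toy parameters) purely to illustrate the principle of computing a weighted sum over encrypted sensor readings.

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier key generation; real deployments use ~2048-bit primes."""
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1  # standard simplified generator choice
    # mu = L(g^lam mod n^2)^-1 mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic property: multiplying ciphertexts adds plaintexts, and raising a
# ciphertext to a power multiplies its plaintext by a known scalar.
pk, sk = paillier_keygen()
n2 = pk[0] ** 2
readings = [12, 7, 30]   # e.g., encrypted wearable sensor values
weights = [2, 3, 1]      # server-side model weights (public to the server)
cts = [encrypt(pk, m) for m in readings]
acc = 1
for c, w in zip(cts, weights):
    acc = acc * pow(c, w, n2) % n2  # server computes without seeing the readings
assert decrypt(sk, acc) == sum(m * w for m, w in zip(readings, weights))  # 75
```

Only the key holder (the wearable's owner) can decrypt the final score, which is the property the paper's CNN-plus-CKKS case study relies on, generalized there to approximate arithmetic over real-valued feature vectors.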
The fourth paper, titled "Deepdive: a learning-based approach for virtual camera in immersive contents", is a technical work about the "Deepdive" tool by Irfan et al., who summarize their contribution as follows: "A 360° video stream provides users a choice of viewing one's own point of interest inside the immersive contents. Performing head or hand manipulations to view the interesting scene in a 360° video is very tedious, and the user may miss the frame of interest during head/hand movement or even lose it. Meanwhile, automatically extracting the user's point of interest (UPI) in a 360° video is very challenging because of subjectivity and differences in comfort. To handle these challenges and provide users the best and most visually pleasant view, we propose an automatic approach utilizing two CNN models: an object detector and an aesthetic scorer for the scene. The proposed framework is threefold: pre-processing, the Deepdive architecture, and a view-selection pipeline. In the first fold, an input 360° video frame is divided into three sub-frames, each with a 120° view. In the second fold, each sub-frame is passed through the CNN models to extract visual features and calculate an aesthetic score. Finally, the decision pipeline selects the sub-frame with the salient object based on the detected object and the calculated aesthetic score. Compared with other state-of-the-art techniques, which are domain-specific (e.g., supporting only sports 360° videos), our system supports most 360° video genres. Performance evaluation of the proposed framework on our own data, collected from various websites, indicates good performance for different categories of 360° videos".
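The three-fold pipeline quoted above can be sketched at a high level. The scoring functions below are hypothetical stand-ins for the paper's two CNNs (the object detector and the aesthetic model), and the 0.6/0.4 blend weights are illustrative assumptions, not values from the paper.

```python
def split_into_subframes():
    """Divide a 360° frame into three 120° sub-frames, as (start, end) in degrees."""
    return [(i * 120, (i + 1) * 120) for i in range(3)]

def select_view(subframes, object_score, aesthetic_score, w_obj=0.6, w_aes=0.4):
    """Pick the sub-frame maximizing a weighted blend of the two model scores."""
    def combined(view):
        return w_obj * object_score(view) + w_aes * aesthetic_score(view)
    return max(subframes, key=combined)

# Hypothetical per-sub-frame scores standing in for CNN outputs.
obj = {(0, 120): 0.2, (120, 240): 0.9, (240, 360): 0.4}.get
aes = {(0, 120): 0.8, (120, 240): 0.5, (240, 360): 0.3}.get
views = split_into_subframes()
assert select_view(views, obj, aes) == (120, 240)  # 0.6*0.9 + 0.4*0.5 = 0.74, the maximum
```

Running this per frame yields a virtual-camera track through the 360° video without any head or hand manipulation, which is the user-facing effect the paper targets.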
The last and fifth technical paper of this issue, "AR-assisted children book for smart teaching and learning of Turkish alphabets", is about AR-based smart teaching and learning by Ahmed et al. [6]. They summarize their contribution as follows: "Augmented reality (AR), virtual reality (VR), and remote-controlled devices are driving the need for a better 5G infrastructure to support faster data transmission. This paper emphasizes that mobile AR is a viable and widespread solution that can easily scale to millions of end-users and educators, since it is lightweight, low-cost, and cross-platform. Low-efficiency smart devices and long latency for real-time interactions over regular mobile networks are major barriers to using AR in education. The good news is that the upcoming 5G cellular networks can mitigate some of these issues via network slicing, device-to-device communication, and mobile edge computing. In this paper, we rely on technology to solve some of these problems. The proposed software monitors Image Targets on a printed book and renders 3D objects and alphabet models. In addition, the application considers phonetics: the sound (phonetic) and 3D representation of a letter are played as soon as its image target is detected. The Turkish alphabet 3D models were created in Adobe Photoshop and used in Unity3D with the Vuforia SDK. The proposed application teaches the Turkish alphabet and phonetics by using 3D object models, 3D letters, and 3D phrases including those letters and sounds".
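The detect-then-play behavior described above boils down to a lookup from a recognized Image Target to its letter assets. The sketch below is a hypothetical, engine-agnostic rendition of that logic (the asset table, target IDs, and callback names are all invented for illustration; the actual app implements this inside Unity3D with Vuforia's target-detection callbacks).

```python
# Hypothetical asset table: each Image Target maps to a 3D model and a phonetic clip.
ASSETS = {
    "target_A": {"letter": "A", "model": "models/A.obj", "sound": "audio/A.wav"},
    "target_B": {"letter": "B", "model": "models/B.obj", "sound": "audio/B.wav"},
}

def on_target_detected(target_id, render, play):
    """When the tracker reports a target, render its 3D letter and play its phonetic."""
    asset = ASSETS.get(target_id)
    if asset is None:
        return None  # unknown marker: do nothing
    render(asset["model"])
    play(asset["sound"])
    return asset["letter"]

rendered, played = [], []
assert on_target_detected("target_A", rendered.append, played.append) == "A"
assert rendered == ["models/A.obj"] and played == ["audio/A.wav"]
assert on_target_detected("unknown", rendered.append, played.append) is None
```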
Despite the above contributions, various topics related to this call remain uncovered and will probably be studied in future works. Such studies include "Intelligent Image Stitching for VR Applications", "AR/VR-based Road Safety Training Systems for Kids", "Summarization of 360° Videos for Developing Intelligent VR Applications", and "Deep Activity Recognition for 360° Videos". Readers should stay tuned for these and other related articles in future issues of the Virtual Reality & Intelligent Hardware journal.
Finally, the guest editor of this special issue would like to express his sincere gratitude to all the authors who submitted their papers for consideration, and special gratitude to the reviewers and journal staff for their hard work and for arranging time to provide feedback to the authors. The guest editor also wishes to thank the Editor-in-Chief, Yong-Tian WANG, for the opportunity to edit this special issue and for giving the authors the opportunity to present their work in the Virtual Reality & Intelligent Hardware journal.
Muhammad KHAN
31 May 2022

Reference