Virtual Reality & Intelligent Hardware
On attaining user-friendly hand gesture interfaces to control existing GUIs
Background Hand gesture interfaces are dedicated programs that principally perform hand tracking and hand gesture prediction to provide alternative controls and interaction methods. They take full advantage of one of the most natural ways of interacting and communicating, and thus offer a novel input modality that shows great promise in the field of HCI. Developing a flexible and rich hand gesture interface is known to be a time-consuming and arduous task. Previous studies have shown the significance of the finite state machine (FSM) approach for mapping detected gestures to GUI actions. Methods In our hand gesture interface, we broadened the FSM approach by utilizing gesture-specific attributes, such as the distance between hands, the distance from the camera, and the time of occurrence, to enable users to perform unique GUI actions. These attributes are obtained from hand gestures detected by the RealSense SDK employed in our interface. By means of these gesture-specific attributes, users can activate static gestures and perform them as dynamic gestures. We also provide supplementary features to enhance the efficiency, convenience, and user-friendliness of the interface. Moreover, we developed a complementary application that records hand gestures by capturing hand keypoints, depth images, and color images, so that hand gesture datasets can be generated with ease. Results We conducted a small-scale user study with fifteen subjects to test and evaluate our hand gesture interface. According to the anonymous feedback, the interface is adequately easy and self-explanatory to use. We also received constructive feedback on minor flaws in the responsiveness of the interface. Conclusions We proposed a hand gesture interface, along with key concepts for attaining user-friendliness and effectiveness, for controlling existing GUIs.
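The attribute-extended FSM idea above can be sketched in a few lines. This is a hedged illustration only: the state names, the "pinch" gesture label, the thresholds, and the Gesture fields are our assumptions, not the paper's implementation or the RealSense SDK API.

```python
# Minimal sketch of an FSM that maps detected static gestures to GUI
# actions, extended with gesture-specific attributes (hold duration,
# distance from the camera). All names and thresholds are illustrative.

class Gesture:
    def __init__(self, name, camera_distance_m, timestamp):
        self.name = name                        # e.g. "pinch", "open_palm"
        self.camera_distance_m = camera_distance_m
        self.timestamp = timestamp              # seconds

class GestureFSM:
    HOLD_SECONDS = 0.5       # a static pose held this long becomes "activated"
    NEAR_THRESHOLD_M = 0.4   # attribute: distance from the camera

    def __init__(self):
        self.state = "idle"
        self.hold_start = None

    def update(self, gesture):
        """Feed one detected gesture per frame; return a GUI action or None."""
        if self.state == "idle" and gesture.name == "pinch":
            self.state, self.hold_start = "holding", gesture.timestamp
        elif self.state == "holding":
            if gesture.name != "pinch":
                self.state = "idle"             # pose broken before activation
            elif gesture.timestamp - self.hold_start >= self.HOLD_SECONDS:
                self.state = "active"
                # the distance attribute selects between two GUI actions
                if gesture.camera_distance_m < self.NEAR_THRESHOLD_M:
                    return "zoom_in"
                return "select"
        elif self.state == "active" and gesture.name != "pinch":
            self.state = "idle"                 # release ends the gesture
        return None

fsm = GestureFSM()
frames = [Gesture("pinch", 0.3, t * 0.1) for t in range(10)]
actions = [fsm.update(g) for g in frames]       # fires once the hold elapses
```

Holding the pose turns a static gesture into an activated one, and the per-gesture attribute (here, camera distance) disambiguates which GUI action fires.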
COMTIS: Customizable touchless interaction system for large screen visualization
Large screen visualization systems have been widely utilized in many industries. Such systems can help illustrate the working states of different production systems. However, efficient interaction with such systems remains a focus of related research. In this paper, we propose a touchless interaction system based on an RGB-D camera, using a novel bone-length constraining method. The proposed method optimizes the joint data collected from RGB-D cameras, yielding more accurate and more stable results even on very noisy data. Users can customize the system by modifying its FSM and reuse gestures across multiple scenarios, reducing the number of gestures that need to be designed and memorized. We then tested the system in two cases. In the first case, we illustrate the process of improving the gesture designs in our system and testing them through a user study. In the second case, we apply the system in the mining industry and conduct a user study in which users found the system easy to use.
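The core intuition behind a bone-length constraint can be conveyed with a toy one-pass projection: each child joint is pulled onto a sphere of the calibrated bone length around its parent. The skeleton layout and lengths below are illustrative assumptions; the paper's optimization over noisy RGB-D joint streams is more involved than this sketch.

```python
import math

# Hedged sketch of a bone-length constraint on noisy skeleton joints.
# joint index -> (parent index, calibrated bone length in meters);
# the two-bone chain here is illustrative, e.g. shoulder->elbow->wrist.
BONES = {1: (0, 0.30), 2: (1, 0.25)}

def constrain(joints):
    """joints: list of [x, y, z]; returns a copy with bone lengths enforced."""
    out = [list(p) for p in joints]
    for child, (parent, length) in sorted(BONES.items()):
        px, py, pz = out[parent]
        cx, cy, cz = out[child]
        dx, dy, dz = cx - px, cy - py, cz - pz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if d > 1e-9:
            s = length / d  # rescale the bone vector to the known length
            out[child] = [px + dx * s, py + dy * s, pz + dz * s]
    return out

# noisy camera measurements stretch the first bone to 0.4 m
noisy = [[0.0, 0.0, 0.0], [0.4, 0.0, 0.0], [0.4, 0.1, 0.0]]
clean = constrain(noisy)
```

Processing joints in parent-to-child order keeps each corrected bone anchored at its already-corrected parent, which is why the loop iterates over `sorted(BONES.items())`.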
Survey on lightweighting methods of huge 3D models for online Web3D visualization
Background With the rapid development of Web3D technologies, online Web3D visualization, especially of complex models or scenes, has been in great demand. Owing to the serious conflict between Web3D system load and the resource consumption of processing these huge models, this paper reviews lightweighting methods for huge 3D models in online Web3D visualization. Methods Observing the geometric redundancy introduced by man-made operations during the modeling procedure, we elaborate on several categories of related lightweighting work that aim to reduce the data volume and resource consumption of Web3D visualization. Results Comparing the methods, we summarize the characteristics of each. Among the reviewed methods, geometric redundancy removal, which achieves lightweighting by detecting and removing repeated components, is an appropriate approach for current online Web3D visualization. Meanwhile, learning-based algorithms, though not yet practical, are a promising direction. Conclusions An efficient lightweighting method for online Web3D visualization should consider various aspects, including the characteristics of the original data; combinations or extensions of existing methods; and even scheduling strategy, cache management, and rendering mechanism. Novel methods, especially learning-based algorithms, are worth exploring.
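Geometric redundancy removal can be illustrated with a toy deduplication pass: identical mesh components are detected by hashing their vertex data and stored once, with lightweight instance references replacing the repeats. This is a hedged sketch under strong simplifying assumptions; real systems also match components up to rotation and translation, while this version only catches exact duplicates.

```python
# Hedged sketch of geometric redundancy removal for huge 3D models:
# repeated components are stored once and referenced by index.

def deduplicate(components):
    """components: list of vertex tuples; returns (unique, instances)."""
    unique = []      # geometry stored (and transmitted) only once
    index_of = {}    # component hash key -> index into `unique`
    instances = []   # per-component reference into `unique`
    for verts in components:
        key = tuple(verts)          # hashable fingerprint of the geometry
        if key not in index_of:
            index_of[key] = len(unique)
            unique.append(verts)
        instances.append(index_of[key])
    return unique, instances

# four screws in a CAD model share one geometry; one bracket is distinct
screw = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
bracket = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
unique, instances = deduplicate([screw, screw, bracket, screw, screw])
```

Only two geometries need to be sent to the browser; the five placed components become five small index references, which is the source of the lightweighting gain.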
Object registration using an RGB-D camera for complex product augmented assembly guidance
Background Augmented assembly guidance aims to help users complete assembly operations more efficiently and quickly through augmented reality technology, overcoming the limitations of traditional assembly guidance, which is monotonous in both content and form. Object registration is one of the key technologies in the augmented assembly guidance process, as it determines the position and orientation of virtual guidance information in the real assembly environment. Methods This paper presents an object registration method based on an RGB-D camera that combines the Lucas-Kanade (LK) optical flow algorithm with the Iterative Closest Point (ICP) algorithm. An augmented assembly guidance system for complex products is built on this method. For comparison, we also implemented object registration based on the augmented reality SDK Vuforia. Results An engine model and a complex weapon cabin are taken as cases to verify this work. The results show that the proposed registration method is more accurate and stable than the Vuforia-based one, and that the guidance system built on it considerably reduces assembly time compared with traditional assembly. Conclusions We conclude that the proposed object registration method can be well applied in augmented assembly guidance systems and can considerably enhance assembly efficiency.
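The iterate-match-align structure of ICP can be conveyed with a stripped-down, translation-only sketch: repeatedly match each source point to its nearest target point, then shift the source by the mean residual. This is a hedged illustration under a strong assumption; the full algorithm (and the paper's LK+ICP pipeline) also solves for rotation, e.g. via an SVD-based rigid alignment.

```python
# Hedged, translation-only sketch of the ICP loop used in registration.
# Point clouds are plain lists of 3-tuples; matching is brute force.

def icp_translation(source, target, iters=20):
    src = [list(p) for p in source]
    for _ in range(iters):
        # 1) nearest-neighbour correspondences
        pairs = []
        for p in src:
            q = min(target, key=lambda t: sum((a - b) ** 2 for a, b in zip(p, t)))
            pairs.append((p, q))
        # 2) the mean residual is the optimal translation for these matches
        n = len(pairs)
        shift = [sum(q[i] - p[i] for p, q in pairs) / n for i in range(3)]
        for p in src:
            for i in range(3):
                p[i] += shift[i]
    return src

target = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
source = [(0.2, 0.1, 0.0), (1.2, 0.1, 0.0), (0.2, 1.1, 0.0)]  # shifted copy
aligned = icp_translation(source, target)
```

In a real pipeline, a feature tracker such as LK optical flow can supply the coarse initial pose, which ICP then refines against the depth data; ICP alone converges only from a reasonable initial guess.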
Temporal continuity of visual attention for future gaze prediction in immersive virtual reality
Eye tracking technology is receiving increasing attention in the field of virtual reality. In particular, future gaze prediction is crucial for pre-computation in many applications, such as gaze-contingent rendering, advertisement placement, and content-based design. To explore future gaze prediction, it is necessary to analyze the temporal continuity of visual attention in immersive virtual reality. In this paper, we first present the concept of temporal continuity of visual attention. We then propose a method based on the autocorrelation function to evaluate temporal continuity, and analyze it in both free-viewing and task-oriented conditions. In free-viewing conditions, we analyze a free-viewing gaze dataset and find that temporal continuity holds well only within a short time interval. In task-oriented conditions, we create a task-oriented game scene and conduct a user study to collect users' gaze data; analysis of the collected data shows that temporal continuity performs similarly to that in free-viewing conditions. Temporal continuity can be applied to future gaze prediction: when it is good, users' current gaze positions can directly predict their gaze positions in the near future. We further evaluate this prediction performance in both free-viewing and task-oriented conditions and find that current gaze can be efficiently applied to short-term future gaze prediction, while long-term gaze prediction remains to be explored.
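Measuring temporal continuity with an autocorrelation function can be sketched as follows: correlate a 1-D gaze coordinate with itself at increasing lags and watch the correlation decay. The synthetic drifting signal below stands in for recorded gaze data and is purely an illustrative assumption.

```python
import math

# Hedged sketch: sample autocorrelation of a gaze coordinate at lag k.
# High values at small lags mean current gaze predicts near-future gaze.

def autocorr(x, lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# slowly drifting synthetic "gaze" trace: high continuity at short lags
gaze_x = [math.sin(0.05 * t) for t in range(200)]
short = autocorr(gaze_x, 1)    # lag of one frame: near 1
long_ = autocorr(gaze_x, 60)   # lag of sixty frames: much weaker
```

The lag at which the autocorrelation drops below a chosen threshold gives a natural horizon for how far ahead the current gaze position remains a usable predictor.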
Edge vector based large graph visualization and interactive exploration
The demand for graph analysis is increasing, and graph layouts of high quality and high readability are important for it. In recent years, we have investigated this topic and proposed a unified framework for graph layout and exploration. The framework maintains readability during both the layout and interaction processes, controlling edge directions as well as edge lengths rather than lengths only. With it, we can model most existing layout constraints as well as develop new ones. For interactive exploration of graph details, we extend our framework to a new focus + context fisheye view. Traditional fisheye views for exploring large graphs introduce substantial distortions that often decrease the readability of paths and other interesting structures. Using edge directions as constraints for graph layout optimization allows us not only to reduce spatial and temporal distortions during fisheye zooms, but also to improve the readability of the graph structure. Furthermore, the framework enables us to optimize fisheye lenses for specific tasks and to design a family of new lenses. We implement our framework with GPU parallel computing, which allows us to process large graphs with up to 10,000 nodes at interactive rates.
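The idea of constraining edge vectors (length and direction together) can be sketched with a toy gradient descent: each edge has a desired vector, and node positions are nudged to shrink the squared difference between actual and desired edge vectors. The graph, targets, and sequential CPU loop are illustrative assumptions; the paper's framework runs such optimizations in parallel on the GPU.

```python
# Hedged sketch of edge-vector constraints for graph layout:
# minimize 0.5 * sum ||(pos[v] - pos[u]) - target_uv||^2 over node positions.

def edge_vector_layout(pos, edges, steps=300, lr=0.2):
    """pos: {node: [x, y]}; edges: {(u, v): (tx, ty)} desired edge vectors."""
    for _ in range(steps):
        for (u, v), (tx, ty) in edges.items():
            ex = pos[v][0] - pos[u][0]
            ey = pos[v][1] - pos[u][1]
            gx, gy = ex - tx, ey - ty   # gradient of 0.5 * ||e - t||^2 wrt e
            pos[v][0] -= lr * gx        # pull the endpoints toward the
            pos[v][1] -= lr * gy        # configuration that realizes the
            pos[u][0] += lr * gx        # desired edge vector
            pos[u][1] += lr * gy
    return pos

# a tiny chain: edge a->b should point right, edge b->c should point up
pos = {"a": [0.0, 0.0], "b": [0.3, 0.3], "c": [0.1, 0.8]}
edges = {("a", "b"): (1.0, 0.0), ("b", "c"): (0.0, 1.0)}
pos = edge_vector_layout(pos, edges)
```

Because the objective is written per edge vector, swapping in different target vectors is enough to express different constraints, e.g. bending targets near a lens center to build a direction-aware fisheye.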