Background Species monitoring in mega-biodiverse environments is commonly performed using bioacoustic methodologies, because species are more likely to be heard than seen. Furthermore, since bird vocalizations are reasonable estimators of biodiversity, monitoring them is of great importance for the formulation of conservation policies. However, birdsong recognition is an arduous task that requires dedicated training to master; this training is costly in time and money because the relevant information is not readily accessible on field trips or even in specialized databases. Immersive technology based on virtual reality (VR) and spatial audio may improve species monitoring by enhancing information accessibility, interaction, and user engagement. Methods This study used spatial audio, a Bluetooth controller, and a head-mounted display (HMD) to conduct an immersive training experience in VR. Participants moved through a virtual world using the Bluetooth controller, and their task was to recognize targeted birdsongs. We measured recognition accuracy and user engagement according to the User Engagement Scale. Results Experimental results revealed significantly higher engagement and accuracy for participants in the VR-based training system than in a traditional computer-based training system. All four dimensions of the User Engagement Scale received high ratings from the participants, suggesting that VR-based training provides a motivating and attractive environment for learning demanding tasks through appropriate design that exploits the sensory system and the interactivity of virtual reality. Conclusions The VR-based training system was rated significantly higher than traditional training in both accuracy and engagement. Future research will focus on developing a variety of realistic ecosystems and their associated birds to extend the training system to additional bird species.
Finally, the proposed VR-based training system must be tested with more participants and over a longer period to measure information recall and recognition mastery among users.
Background A virtual system that simulates the complete process of orthodontic bracket placement can be used for pre-clinical skill training, helping students gain confidence by performing the required tasks on a virtual patient. Methods The hardware for the virtual simulation system is built using two force-feedback devices to support bi-manual force-feedback operation. A 3D mouse is used to adjust the position of the virtual patient. A multi-threaded computational methodology is adopted to meet the frame-rate requirements. The computation threads mainly consist of a haptic thread running at >1000 Hz and a graphic thread running at >30 Hz. The graphic thread allows the graphics engine to effectively display the visual effects of biofilm removal and acid erosion through texture mapping. In the haptic thread, the physics engine adopts a hierarchical-octree collision-detection algorithm to simulate the multi-point, multi-region interaction between the tools and the virtual environment; its high efficiency keeps the computation time within 1 ms. The physics engine also performs collision detection between the tools and particles, making it possible to simulate the application and removal of colloids. Surface-contact constraints are defined in the system to ensure that the bracket neither detaches from nor sinks into the tooth during adjustment, so the simulated adjustment is more realistic and natural. Results A virtual system that simulates the complete process of orthodontic bracket bonding was developed. In addition to bracket bonding and adjustment, the system simulates the necessary auxiliary steps such as smearing, acid etching, and washing. Furthermore, the system supports personalized case training. Conclusions The system provides a new method for students to practice orthodontic skills.
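The two-rate loop structure described in this abstract (a haptic thread at >1000 Hz, a graphic thread at >30 Hz) is a standard pattern in haptic simulators. A minimal sketch in Python, purely illustrative — the actual system is not Python-based, and `step_haptics`/`render_frame` are hypothetical placeholders for the physics and graphics engines:

```python
import threading
import time

def run_loop(step, period_s, stop_event, counter):
    """Run `step` repeatedly at a fixed period until `stop_event` is set."""
    while not stop_event.is_set():
        step()
        counter[0] += 1
        time.sleep(period_s)

# Hypothetical placeholders for the real engines.
def step_haptics():
    pass   # collision detection + force response; must finish within 1 ms

def render_frame():
    pass   # texture mapping, visual effects

stop = threading.Event()
haptic_count, graphic_count = [0], [0]

# Haptic thread at ~1000 Hz, graphic thread at ~30 Hz.
t_h = threading.Thread(target=run_loop, args=(step_haptics, 0.001, stop, haptic_count))
t_g = threading.Thread(target=run_loop, args=(render_frame, 1 / 30, stop, graphic_count))
t_h.start(); t_g.start()
time.sleep(0.3)          # let the simulation run briefly
stop.set()
t_h.join(); t_g.join()

# The haptic loop completes far more iterations than the graphic loop.
print(haptic_count[0], graphic_count[0])
```

Decoupling the two rates is what lets force feedback stay stable (it needs kilohertz updates) while rendering runs at display rates.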
To reduce serious crashes, contemporary research leverages opportunities provided by technology. A potentially higher added value for reducing road trauma may lie in emerging technologies such as headset-delivered virtual reality (VR). However, no study has systematically analysed the application of such VR in road safety research. Using the PRISMA protocol, our study identified 39 papers presented at conferences or published in scholarly journals. In those sources, we found evidence of VR's applicability in studies involving different road users (drivers, pedestrians, cyclists, and passengers). A number of articles were concerned with providing evidence on the potential adverse effects of VR, such as simulator sickness. Other work compared VR with conventional simulators. VR has also contributed to the emerging field of autonomous vehicles. However, few studies leveraged the opportunities that VR presents to positively influence road users' behaviour. Based on our findings, we identified pathways for future research.
Background Feature matching technology is vital for establishing the association between virtual and real objects in virtual reality and augmented reality systems; specifically, it provides them with the ability to match a dynamic scene. Many image matching methods, most of them deep learning-based, have been proposed over the past few decades. However, vessel fracture, stenosis, artifacts, high background noise, and uneven vessel gray-scale make vessel matching in coronary angiography extremely difficult, and traditional matching methods perform poorly in this setting. Methods In this study, a topological distance-constrained feature descriptor learning model is proposed. This model regards the topology of the vasculature as the connection relationship of the centerline. The topological distance incorporates the geodesic distance between the input patches and constrains the descriptor network by maximizing the feature difference between connected and unconnected patches, thereby capturing more useful latent feature relationships. Results Matching patches from different sequences of angiographic images were generated for the experiments. The matching accuracy and stability of the proposed method are superior to those of existing models. Conclusions The proposed method solves the problem of matching coronary angiographies by generating a topological distance-constrained feature descriptor.
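The idea of constraining a descriptor network by topological connectivity can be illustrated with a contrastive-style loss in which patch pairs connected along the centerline are pulled together and unconnected pairs are pushed apart. This is a minimal sketch under stated assumptions, not the paper's exact formulation — the function name, margin, and toy descriptors are all hypothetical:

```python
import numpy as np

def topology_constrained_loss(desc_a, desc_b, topo_dist, margin=1.0):
    """Contrastive-style loss sketch: descriptor distance should be small
    for patches connected along the vessel centerline (topo_dist == 0)
    and at least `margin` for unconnected patches."""
    d = np.linalg.norm(desc_a - desc_b)
    if topo_dist == 0:                   # connected along the centerline
        return d ** 2                    # pull descriptors together
    return max(0.0, margin - d) ** 2     # push unconnected patches apart

# Toy 2-D descriptors: one connected pair, one unconnected pair.
connected = topology_constrained_loss(np.array([0.1, 0.2]),
                                      np.array([0.1, 0.3]), topo_dist=0)
unconnected = topology_constrained_loss(np.array([0.1, 0.2]),
                                        np.array([0.1, 0.3]), topo_dist=5)
print(connected, unconnected)
```

The same descriptor distance is penalised lightly for the connected pair but heavily for the unconnected pair, which is what drives the learned features of connected and unconnected patches apart.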
Background Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and can cause severe heart problems. Catheter ablation is one of the preferred procedures for the treatment of AF, and physicians qualified to perform it must be highly skilled in manipulating the relevant surgical devices. This study proposes an interactive, high-fidelity surgical simulator to facilitate efficient training and low-cost medical education. Methods We used a shared centerline model to simulate the interaction between multiple surgical devices. An improved adaptive deviation-feedback approach is proposed to accelerate the convergence of each iteration. The periodic beating of the human heart was also simulated in real time using the position-based dynamics (PBD) framework to achieve higher fidelity. We then present a novel method for handling the interaction between the devices and the beating heart mesh model. Experiments were conducted on an in-house simulator prototype to evaluate the robustness, performance, and flexibility of the proposed method. A preliminary evaluation of the simulator was performed by medical students, residents, and surgeons. Results The interaction between the surgical devices, static vascular meshes, and beating heart mesh was stably simulated at a frame rate suitable for interaction. Conclusion Our simulator is capable of simulating the catheter ablation procedure with high fidelity and provides immersive visual experiences and haptic feedback.
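The core step of the position-based dynamics (PBD) framework mentioned above is constraint projection: particle positions are moved directly so that each constraint (e.g. a mesh edge keeping its rest length) is satisfied, rather than integrating forces. A minimal sketch of one distance-constraint projection for two equal-mass particles — the values and stiffness are toy assumptions, not the simulator's actual parameters:

```python
import numpy as np

def project_distance_constraint(p1, p2, rest_len, stiffness=1.0):
    """One PBD projection step for a distance constraint between two
    equal-mass particles: move both symmetrically toward the rest length."""
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    n = delta / dist                           # unit direction p1 -> p2
    corr = stiffness * 0.5 * (dist - rest_len) * n
    return p1 + corr, p2 - corr

# Two particles on a mesh edge, stretched to twice its rest length.
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([2.0, 0.0, 0.0])
rest = 1.0

# Iterating the projection drives the edge back to its rest length.
for _ in range(20):
    p1, p2 = project_distance_constraint(p1, p2, rest, stiffness=0.5)
print(np.linalg.norm(p2 - p1))
```

With stiffness 0.5, the length error halves per iteration, so the edge converges geometrically to its rest length; in a full PBD solver many such constraints are projected each frame, which is what makes real-time simulation of a beating heart mesh feasible.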
The marching cubes algorithm is currently one of the most popular surface-rendering algorithms for three-dimensional (3D) reconstruction. It forms cube voxels from an input image and then uses 15 basic topological configurations to extract isosurfaces from the voxels. The algorithm processes each cube voxel in a traversal-based manner but does not consider the relationship between the isosurfaces in adjacent cubes; owing to this ambiguity, the final reconstructed model may contain holes. In this paper, we propose a marching cubes algorithm based on edge growth. The algorithm first extracts seed triangles, grows these seed triangles, and thereby reconstructs the entire 3D model. According to the position of the growth edge, we propose 17 topological configurations with isosurfaces. The reconstruction results showed that the algorithm reconstructs 3D models well, and it is particularly suitable when only the main contour of the 3D model is required. In addition, when the data contain multiple scattered parts, the algorithm can extract only the 3D contours of the parts connected to the seed by setting the region selected based on the seed.
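The first step of the classic marching cubes traversal referred to above is to classify each cube voxel by which of its eight corners lie inside the isosurface, yielding an 8-bit configuration index; the 256 possible indices reduce by symmetry to the 15 basic configurations mentioned in the abstract (the edge-growth variant proposed in the paper instead uses 17 growth-edge configurations). A minimal sketch of that indexing step, with toy corner values:

```python
def cube_index(corner_values, isovalue):
    """Classic marching-cubes classification: build the 8-bit index from
    which corners of a cube voxel lie below the isovalue. Bit i is set
    when corner i is inside the surface."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value < isovalue:
            index |= 1 << bit
    return index

# A voxel cut by the isosurface: corners 0 and 1 are inside (below 0.5),
# so bits 0 and 1 are set and the index is 3.
print(cube_index([0.1, 0.2, 0.9, 0.8, 0.7, 0.6, 0.9, 0.8], isovalue=0.5))
```

Index 0 (all corners outside) and index 255 (all corners inside) produce no triangles; every other index selects a triangulation from a lookup table, and the ambiguity the paper addresses arises because adjacent cubes choose their triangulations independently.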
Background As a novel approach for people to communicate directly with an external device, the study of brain-computer interfaces (BCIs) has matured considerably. However, just as individuals in real-world scenarios are expected to work in groups, BCI systems should be able to support group collaboration. Methods We proposed a 4th-order cumulants feature extraction method (CUM4-CSP) based on the common spatial patterns (CSP) algorithm. Simulation experiments conducted using motion visual evoked potential (mVEP) EEG data verified the robustness of the proposed algorithm. In addition, to allow paradigms to be chosen freely, we adopted the mVEP and steady-state visual evoked potential (SSVEP) paradigms and designed a multimodal collaborative BCI system based on the proposed CUM4-CSP algorithm. The feasibility of the proposed multimodal collaborative framework was demonstrated using a multiplayer game-control system that simultaneously supports coordinated and competitive control of external devices by two users. To verify the robustness of the proposed scheme, we recruited 30 subjects for online game-control experiments and statistically analyzed the results. Results The simulation results show that the proposed CUM4-CSP algorithm has good noise immunity. The online experimental results indicate that the subjects could reliably perform competitive game control with the selected BCI paradigm. Conclusions The proposed CUM4-CSP algorithm can effectively extract features from EEG data in noisy environments. Additionally, the proposed scheme may provide a new solution for EEG-based group BCI research.
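The standard CSP algorithm that CUM4-CSP builds on finds spatial filters that maximize the variance of one class while minimizing that of the other, via whitening and an eigendecomposition of the class covariances. A minimal NumPy sketch of plain second-order CSP on toy two-channel data — this illustrates only the baseline, not the 4th-order-cumulant extension proposed in the paper:

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Plain CSP via whitening + eigendecomposition.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform for the composite covariance Ca + Cb.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened class-A covariance give the filters;
    # sort rows by descending eigenvalue (most class-A-discriminative first).
    _, B = np.linalg.eigh(P @ Ca @ P.T)
    return (B.T @ P)[::-1]

# Toy data: class A has high variance on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(20, 2, 100)) * np.array([[3.0], [1.0]])
trials_b = rng.normal(size=(20, 2, 100)) * np.array([[1.0], [3.0]])
W = csp_filters(trials_a, trials_b)

# The first filter should emphasise channel 0 (class A's high-variance channel).
print(np.abs(W[0]))
```

CUM4-CSP replaces the second-order covariance statistics with 4th-order cumulants, which (unlike covariances) vanish for Gaussian noise, giving the noise immunity reported in the abstract.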