
2019, 1(3): 290-302   Published Date: 2019-06-20

DOI: 10.3724/SP.J.2096-5796.2019.0012

Abstract

Background
Owing to the restriction of the display mode, in most virtual reality systems in which multiple users share the same physical space, the program renders the scene based on the position and perspective of a single user, so the other users see the same picture from an incorrect viewpoint, which causes visual discomfort.
Methods
To improve the experience of multi-user co-located collaboration, we propose in this study a fire drill system supporting co-located collaboration, in which three co-located users collaborate to complete a virtual firefighting mission. First, with multi-view stereoscopic projective display technology and ultra-wideband (UWB) positioning technology, co-located users can roam independently and, by wearing dedicated shutter glasses, watch the virtual scene from the correct perspective for their own position, and thus carry out different virtual tasks, which improves the flexibility of co-located collaboration. Second, we design a simulated firefighting water-gun based on a micro-electromechanical system (MEMS) sensor, through which users can interact with the virtual environment, providing a better interactive experience. Finally, we develop a workbench, comprising a holographic display module and a multi-touch operation module, for virtual scene assembly and virtual environment control.
Results
The controller can use the workbench to adjust the virtual layout in real time and control the progress of the virtual task, which increases the flexibility and playability of the system.
Conclusions
Our work can be employed in a wide range of related virtual reality applications.


1 Introduction
In the fields of fire rescue, emergency handling, and military confrontation, most training methods simulate the real situation to improve the response capacity of relevant personnel, so as to ensure the orderly implementation of rescue work when they encounter the real situation. In the case of a fire drill in particular, the common practice is to organize relevant personnel in a relatively open area, generate real flames, and use fire-fighting tools for demonstration teaching. This consumes a great deal of manpower, material, and financial resources, and still differs considerably from a real fire scene. Therefore, a computer simulation of a fire scene can, on the one hand, promote knowledge of fire control and help people master basic fire-fighting skills; on the other hand, it can enhance the proficiency of professional fire-fighting personnel, which has guiding significance for fire decision-making.
Presently, most virtual fire drill systems support only single-person interaction. However, in real fire scenarios, the site is large and the firefighting task is complex, so a single user cannot complete it alone and multiple users need to work together as a group. Therefore, in the construction and development of a virtual fire drill system, we must consider the simulation and reproduction of the real environment, in which multiple users can collaborate while co-located and each user can watch a stereoscopic view rendered for their own position, which improves the sense of reality.
Although head-mounted display devices support multi-user co-located collaboration, they provide a completely closed display, which makes it impossible for users to see each other in reality and weakens the collaborative interaction experience. In contrast, large-screen stereoscopic projection display technology can provide an immersive stereoscopic display for many users at once. Users in front of the projection screen can conduct virtual interactive operations and communicate directly face-to-face, with a strong sense of co-located collaboration. Therefore, large-screen stereoscopic projection remains the main display method for virtual reality applications supporting multi-user co-located collaboration. However, typical commercial large-screen stereoscopic projection displays currently provide only a single-viewpoint display, so users in the same physical space all perceive the picture rendered for just one of them, which reduces the sense of reality.
Therefore, this study combines several cognitive interaction technologies, uses the three-projector array method from the literature[1] to achieve a multi-view stereoscopic projective display, and proposes a multi-user collaborative environment based on a large screen. Moreover, taking the fire drill as the application, we develop a virtual fire drill system supporting three-user co-located collaboration, in which three users can watch their own stereoscopic video by wearing dedicated shutter glasses. At the same time, ultra-wideband (UWB) wireless positioning technology is used to track the real-time position of users, which enables every user to move freely and control the corresponding virtual roaming path. To provide effective interactive operation, this study develops a simulated firefighting water-gun, which allows users to carry out firefighting operations in the virtual fire field. In addition, this paper develops a console device supporting virtual scene assembly, through which the remote controller can observe the 3D virtual scene and the users' virtual interactive process on a holographic display screen. Furthermore, the controller can adjust the layout of the scene and the task process in real time through touch interaction and other means, which improves the system's flexibility.
2 Related work
2.1 Co-located collaboration
With the increase of virtual task complexity, a single user cannot meet the task requirements, which calls for the cooperation of multiple users. Unlike distributed collaboration[2], in a co-located collaborative environment multiple users are in the same physical space to jointly complete one or more complex tasks, and the communication between users is more direct. Czernuszenko et al. describe the implementation of CAVE, a virtual reality (VR) display system[3]. In CAVE, multiple users work together in the same physical area, but the system only tracks the position of one user and renders the scene from the viewpoint of that tracked user, so the untracked users cannot see the scene corresponding to their actual positions, which affects their sense of reality. On this basis, Simon[4] proposes a new collaborative interaction paradigm: in single-view stereoscopic projective display systems, multiple users can share the same virtual environment, and each user performs a ray-casting selection task; however, to avoid affecting other users' viewing experience, users cannot move over large distances. The head-mounted display (HMD) can provide a highly immersive experience to its wearer, whereas all bystanders (non-HMD users) are excluded from the experience. Hence, ShareVR[5], proposed by Gugenheimer et al., provides a co-located experience for HMD and non-HMD users and allows them to collaborate to complete virtual tasks. Moreover, Chen et al. studied the collaborative interaction between two users in CAVE and proposed a new space-mapping algorithm, which allows two co-located users to avoid obstacles and collaborate to complete target selection and other virtual tasks[6,7].
2.2 Multi-view stereoscopic projective display technology
To break through the single-view limitation of traditional stereoscopic projective devices, some studies applied multi-view stereoscopic projective display technology. Yu et al. introduced a novel two-view VR shooting theater system that allows players to move freely[8]. The system can render multiple 3D images of the same scene in real time based on the positions of the users, and users can view their individual images by wearing active shutter glasses with a specific refresh rate. Agrawala et al. realized a two-user stereoscopic display with a CRT projector whose refresh rate was 144Hz[9]; with special active shutter glasses, each user obtains a monocular image refresh rate of 36Hz. The C1x6 system[10] realizes a six-user stereoscopic display by combining active stereoscopic projection technology and optical polarization. In this system, six projectors are employed simultaneously, and the stereoscopic videos of the six users are played successively through a time-shared projection mode. The six projectors are divided into two groups that display the left-eye and right-eye videos of the users, respectively, and stereoscopic vision is formed through discrimination by optical polarization. At the same time, corresponding shutter glasses are designed to synchronize the opening and closing of the lenses with the projection time sequence, such that each user can independently view their exclusive stereo images. Nevertheless, the required modifications of the projectors are extensive, so the realization is complex. Guan et al. propose two kinds of novel multi-user immersive display systems[1]. The first method uses a three-projector array: by modifying the color wheel of the projectors, the color image refresh rate of the projection array can reach 360Hz, and the multi-channel stereo video can be played in the same screen area by time-sharing. The second method uses two projectors with polarizers to realize the multi-view stereoscopic projection display: one projector provides left-eye images for multiple users, while the other provides right-eye images, and the six images are output in a specific sequence. In this way, users can observe different images on the same display screen with dedicated shutter glasses. Both methods provide a convenient way to construct a projection-based environment for multi-user co-located collaborative interaction.
2.3 Multi-user motion tracking technology
To obtain accurate location data, many studies have addressed multi-user motion tracking technology. GPS is a global navigation satellite system commonly used among spatial positioning methods; it is not limited in time and place and can provide reliable location and temporal information. However, in complex environments close to the ground, indoors, or in basements, the weakened signal makes GPS positioning difficult[11]. Therefore, it is necessary to study accurate indoor location algorithms. A comparison of several indoor location techniques found infrared positioning to be widely used[12]; Microsoft Kinect adopts this approach. However, because of the limited recognition range of a single Kinect, identity loss occurs when users occlude each other. Therefore, Salous et al. propose a tracking module in CAVE based on a geometric distribution of four Kinect devices[13]. Similarly, Sevrin et al. combine the motion trajectories obtained by multiple Kinect devices to detect and track the positions of multiple users[14]. Optical tracking technology[15] is also widely used in indoor positioning and provides high precision; however, as the positioning range expands, the investment in the underlying hardware must be increased, so the cost is relatively high. The UWB system can accurately track the positions of multiple users owing to its strong penetration, low power consumption, low complexity, and high positioning accuracy[16].
2.4 Interactive tools based on sensor
In current interactive cinema systems, the interactive shooting cinema is common. Here, most users sit in a fixed motion seat with three or six degrees of freedom, wear 3D glasses, and carry interactive tools such as a simulated electron gun to interact with the virtual scene in real time. Compared with traditional interactive tools such as a mouse and keyboard, interaction based on a simulated electron gun and similar tools is more real and natural, which enhances the users' sense of immersion. Liu[17] designed a new simulated gun based on a micro-electromechanical system (MEMS) sensor and calculated its posture from a magnetic sensor and an acceleration sensor. However, because the acceleration sensor is noisy when measuring fast-moving objects, this approach is not suitable for fast real-time interaction in interactive cinema systems. Qin et al. obtain the real-time posture of a simulated gun by integrating the data of a gyroscope, accelerometer, and magnetic sensor[18]. Moreover, they obtain the aiming position and direction through an established mapping between the virtual and real environments and the position data identified by Kinect. At the same time, a recoil simulator is designed and implemented to enhance the user experience and sense of reality.
2.5 Console devices supporting virtual scene assembly
In virtual reality applications, the design and layout of virtual scenes, once modeled, are mostly fixed, so the interactive content lacks freshness. For this purpose, this study develops a console device to realize real-time adjustment of virtual scenes and real-time monitoring of users' status within them. Lin et al. designed and developed a new workbench, including two horizontal liquid-crystal touch screens and a holographic display with a touch function, with the purpose of realizing the collaborative design of virtual scenes[19]. In the design process, two users design 2D graphics through the horizontal LCD touch screens and watch the corresponding 3D scene on the holographic display. The layout of the scene can be adjusted in real time through touch-screen operation, which not only ensures the flexibility of the view but also improves the efficiency of the design. Similarly, Surface, smartphone, and HMD devices[20,21] can be used to expand home design from 2D graphic design to 3D design; meanwhile, users' feedback can be used to adjust the design content in real time to enhance the effectiveness of the design process.
3 System architecture
The virtual fire drill system supporting multi-user co-located collaboration consists of two parts: a co-located drill part and a remote control part. The two parts communicate with each other through sockets. Figure 1 illustrates the system architecture.
The co-located drill part comprises a server, a three-projector array, UWB positioning equipment, dedicated shutter glasses, and a Bluetooth/2.4G simulated firefighting water-gun. The three-projector array projects multiple images onto the same projection screen, and the three co-located users can watch stereoscopic videos rendered for their real positions by wearing dedicated shutter glasses. At the same time, UWB wireless positioning technology lets each user control their virtual roaming path. In addition, the simulated firefighting water-gun sends its pose and status information to the server, which allows multiple users to conduct the firefighting mission.
The remote control part contains the console device supporting virtual scene assembly, which consists of a holographic display screen, a horizontal touch screen, and an HD camera. The holographic display screen portrays the 3D fire scene, including the layout and status of the scene and the users' positions. The horizontal touch screen and HD camera constitute the image recognition part: the server collects real-time images of the horizontal touch screen taken by the HD camera from a top view and identifies the type and location of the identifying images placed on it. Furthermore, the horizontal touch screen supports multi-touch interaction.
During the operation of the system, the server of the co-located drill part is responsible for collecting user location data, fire-source locations, and fire-extinguishing information, and transmitting them to the remote control part through the socket connection. At the same time, the remote control part transfers the relevant data of the virtual scene adjustment, including the type and location of the identifying images, to the co-located drill server. In this way, the information of the two parts is synchronized, and the state of the virtual fire scene can be adjusted in real time according to the pre-designed mapping relationship. Figure 2 shows the use case diagram.
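To make the data exchange concrete, the following C# sketch shows one possible way for the co-located drill server to push its state (user positions and active fire sources) to the remote control part over a TCP socket. The message layout, host address, and port are illustrative assumptions rather than the protocol actually used by the system.

```csharp
using System;
using System.Net.Sockets;
using System.Text;

// Minimal sketch: the co-located drill server pushes user positions and
// fire-source states to the remote control part over TCP. The message
// format, host, and port below are illustrative assumptions.
public class DrillStateSender
{
    private readonly TcpClient client;
    private readonly NetworkStream stream;

    public DrillStateSender(string host = "192.168.1.20", int port = 9000)
    {
        client = new TcpClient(host, port);   // remote control part listens here
        stream = client.GetStream();
    }

    // Send one state update: per-user positions plus active fire-source IDs.
    public void SendState(float[][] userPositions, int[] activeFireIds)
    {
        var sb = new StringBuilder("STATE");
        for (int i = 0; i < userPositions.Length; i++)
            sb.AppendFormat(";U{0}:{1:F2},{2:F2},{3:F2}",
                i, userPositions[i][0], userPositions[i][1], userPositions[i][2]);
        sb.Append(";FIRES:").Append(string.Join(",", activeFireIds)).Append('\n');

        byte[] payload = Encoding.UTF8.GetBytes(sb.ToString());
        stream.Write(payload, 0, payload.Length);
    }

    public void Close()
    {
        stream.Close();
        client.Close();
    }
}
```

The remote control part would parse such messages, and send back analogous messages describing the identifying-image adjustments, so that both parts stay synchronized as described above.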
4 System design and implementation
In the virtual fire drill system proposed in this paper, three co-located users can collaboratively carry out the virtual fire drill, while the remote controller can make real-time adjustments to the virtual scene. Figure 3 shows the practical application scene.
In the co-located drill part, three users wear UWB wireless positioning tags. The system measures each user's moving position in real time through the UWB technology, and after the virtual-real space coordinate mapping, each user can control their own virtual roaming path. The system renders an independent stereoscopic video according to the location and viewpoint of each user and projects them onto the same screen by multi-view stereoscopic projective display technology. Users then watch their own independent stereo images through dedicated shutter glasses. Thus, the three users can find different fire sources along their own virtual routes and use the simulated firefighting water-gun to aim and launch virtual water for fire extinguishing, thereby completing the firefighting task collaboratively. At the same time, in the remote control part, the controller can watch the progress of the virtual fire drill in real time through the console device supporting virtual scene assembly, and control the process of the virtual interaction and the task difficulty by adding virtual fire sources to the virtual environment. The specific implementation of each module is described below in detail.
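The real-to-virtual mapping described above can be sketched as follows in Unity C#. Each tracked user's filtered UWB position drives one camera whose image is later fed to the multi-view projection pipeline; the scale factor, origin offset, and eye height used here are assumptions for illustration.

```csharp
using UnityEngine;

// Sketch of the real-to-virtual mapping: each tracked user's UWB position
// (metres, in the room's X-Z plane) drives one Unity camera. Scale, origin
// offset, and eye height are illustrative assumptions.
public class UserViewpointDriver : MonoBehaviour
{
    public Camera[] userCameras = new Camera[3];   // one camera per co-located user
    public Vector3 roomOrigin = Vector3.zero;      // virtual point mapped to the UWB origin
    public float metersToUnits = 1.0f;             // real-to-virtual scale
    public float eyeHeight = 1.65f;                // assumed eye height above the floor

    // Called whenever the tracking module delivers fresh, filtered positions.
    public void UpdateViewpoints(Vector2[] uwbPositionsXZ)
    {
        for (int i = 0; i < userCameras.Length && i < uwbPositionsXZ.Length; i++)
        {
            Vector3 p = roomOrigin + new Vector3(
                uwbPositionsXZ[i].x * metersToUnits,
                eyeHeight,
                uwbPositionsXZ[i].y * metersToUnits);
            userCameras[i].transform.position = p;
            // Each camera renders the scene from its user's viewpoint; the
            // projection module then interleaves the three resulting images.
        }
    }
}
```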
4.1 Multi-view stereoscopic projective display module
Traditional stereoscopic projection equipment only renders the stereoscopic image on the projection screen from a single view, which limits the flexibility of co-located collaboration. Therefore, two flexible multi-user stereoscopic projection display systems have been proposed[1], and the first method is adopted in this paper. By modifying the color wheel of the projectors, the multi-channel stereo video can be played on the same screen area by time-sharing. After the modification, each projected image is composed of different users' plane images, and in each monochrome projection period the three projectors play exactly the three plane images of the same user, so as to ensure the correct synthesis of color images. At the same time, dedicated shutter glasses are designed. Furthermore, the module adopts the Texas Instruments (TI) Digital Light Processing (DLP) Link mode, which is built into the projector. Thus, when the projector plays each frame, a high-brightness light pulse with a duration of 20ms is projected onto the screen as a frame synchronization signal, and the shutter glasses receive it through a photoelectric diode. Therefore, the opening of the liquid crystal display (LCD) lenses is synchronized with the projection sequence, ensuring that each user watches an independent video.
The three-projector array system can provide three users with different videos, which mainly relies on the GPU's real-time blending and splicing of the three independent video channels. Through the modification of the projector, the image projected by each projector at each moment is no longer composed of the R, G, and B planes of a single user. Instead, one plane is extracted from each of the three video channels and mixed in; this is the mixing process. In addition, it is necessary to carry out image coincidence and splicing processing and to build a lookup table of image deformation, so as to ensure that the primary-color images played by the three projectors achieve pixel coincidence. This means that a geometric correction is carried out for the three vertically placed projectors to make their projection images coincide, thus splicing them into a complete color image. This process can be realized using Shader programming on the Unity3D platform, and its real-time performance is guaranteed by the GPU.
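The mixing step can be pictured as taking one color plane from each user's frame for every projector image. The following C# sketch illustrates this on the CPU under the simplifying assumption that, for one projector at one moment, the red plane comes from user 0, the green plane from user 1, and the blue plane from user 2; in the actual system the assignment rotates with the modified color wheel, the geometric-correction lookup table is applied, and the whole computation runs as a fragment shader on the GPU.

```csharp
using UnityEngine;

// CPU-side illustration of the per-pixel mixing performed in a Unity fragment
// shader in the real system. For one projector at one moment we assume the red
// plane comes from user 0, green from user 1, and blue from user 2; the
// geometric-correction lookup table is omitted here.
public static class MultiViewMixer
{
    public static Texture2D Mix(Texture2D user0, Texture2D user1, Texture2D user2)
    {
        int w = user0.width, h = user0.height;
        var mixed = new Texture2D(w, h, TextureFormat.RGB24, false);

        Color[] c0 = user0.GetPixels();
        Color[] c1 = user1.GetPixels();
        Color[] c2 = user2.GetPixels();
        var outPix = new Color[c0.Length];

        for (int i = 0; i < outPix.Length; i++)
        {
            // One color plane per user per projector frame.
            outPix[i] = new Color(c0[i].r, c1[i].g, c2[i].b);
        }

        mixed.SetPixels(outPix);
        mixed.Apply();
        return mixed;
    }
}
```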
4.2 Multi-user tracking module
In this study, the multi-user tracking module tracks multiple users indoors using UWB wireless positioning technology. On the one hand, this module avoids occlusion between co-located users; on the other hand, it is largely unaffected by complex lighting conditions. Moreover, this module can be combined with the multi-view stereoscopic projective display technology so that multiple users can move freely with independent virtual roaming and interaction.
The UWB wireless positioning device tracks the position of moving objects by transmitting and receiving radio waves. The UWB device employed in this study uses Time Difference of Arrival (TDOA) technology to measure the target position. To realize triangulation positioning of a moving target, at least three positioning base stations are required, including one main base station and two auxiliary base stations, and users are required to wear UWB wireless positioning tags. During operation, the main base station receives the coded signals of the auxiliary base stations and all UWB wireless tags, and the positioning server calculates the real-time distances from the main station to the auxiliary base stations and to the mobile tags. Because the positions of the two auxiliary base stations are fixed, they serve as reference positions. Finally, triangulation is used to determine the spatial coordinates of each moving tag in real time.
In the deployment of the UWB positioning base stations (Figure 4a), the location of the main base station is defined as the origin of the indoor coordinate system. We take the two perpendicular lines where the walls meet the ground as the X axis and Z axis, respectively, and the vertical direction as the Y axis. The two auxiliary base stations are set up on the X axis and Z axis to realize triangulation positioning; all stations are mounted on tripods of the same model and kept at the same height. After accurately measuring the position coordinates of the two auxiliary base stations, we use these data as conditional parameters for triangulation positioning. Experiments show that three base stations can satisfy the requirement of real-time location tracking for more than six users in an indoor area of 50-100m². Moreover, the spatial range of motion tracking can be extended by increasing the number of auxiliary base stations.
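Under the base-station layout described above (main station at the origin, auxiliary stations on the X and Z axes), the planar triangulation step can be sketched as follows; the distance values are assumed to come from the UWB positioning server.

```csharp
using UnityEngine;

// Planar triangulation sketch for the base-station layout described above:
// the main station sits at the origin, one auxiliary station on the X axis at
// (xA, 0) and one on the Z axis at (0, zB). r0, r1, r2 are the measured
// distances from the tag to the main and the two auxiliary stations.
public static class UwbTriangulation
{
    public static Vector2 LocateTagXZ(float xA, float zB, float r0, float r1, float r2)
    {
        // Derived from the circle equations x^2 + z^2 = r0^2,
        // (x - xA)^2 + z^2 = r1^2 and x^2 + (z - zB)^2 = r2^2.
        float x = (r0 * r0 - r1 * r1 + xA * xA) / (2f * xA);
        float z = (r0 * r0 - r2 * r2 + zB * zB) / (2f * zB);
        return new Vector2(x, z);
    }
}
```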
Because the tracking data generated by the UWB wireless positioning technology jitters, measures are taken to obtain stable location data. First, the data is denoised by an average filtering algorithm. Subsequently, a stable real-time position in real space is obtained and mapped to the virtual space, so that the co-located users can conduct virtual roaming. At the same time, the system renders the scene based on each user's real-time position and viewpoint, and projects the multiple images onto the same screen by the multi-view stereoscopic projective display technology. Figure 4b depicts the processing of the UWB location data.
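A minimal sketch of the averaging filter, assuming a fixed-size window over the most recent samples, is given below; the window size is an illustrative choice rather than the value used in the system.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the averaging filter applied to the raw UWB samples before the
// position is mapped into the virtual scene. The window size is an assumption.
public class PositionSmoother
{
    private readonly Queue<Vector2> window = new Queue<Vector2>();
    private readonly int windowSize;

    public PositionSmoother(int windowSize = 8) { this.windowSize = windowSize; }

    public Vector2 AddSample(Vector2 rawXZ)
    {
        window.Enqueue(rawXZ);
        if (window.Count > windowSize) window.Dequeue();

        Vector2 sum = Vector2.zero;
        foreach (var s in window) sum += s;
        return sum / window.Count;   // smoothed position fed to the viewpoint driver
    }
}
```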
4.3 Interactive device module based on MEMS sensor
In the virtual fire drill system, to accurately identify each user's aiming information, this study develops an interactive tool based on a MEMS sensor[18]: the simulated firefighting water-gun. To improve the fidelity of interaction, the appearance of the simulated firefighting water-gun is similar to the multi-function water-gun with adjustable flow used by domestic firefighters. The 3D model of the device is generated by CAD, and the solid shell is produced by 3D printing. The simulated firefighting water-gun is made of ABS plastic, which is non-toxic and odorless, with good heat and impact resistance. Figure 5 shows the simulated firefighting water-gun and its internal structure.
Figure 6 shows the structure of the simulated firefighting water-gun, which includes the controller, the MEMS sensor module, the fire-equipment switching button, and the water-flow transmitting switch. The latter three modules are inputs to the controller. The MEMS sensor module acquires the posture data of the water-gun and sends it to the controller, while the water-flow transmitting switch and the fire-equipment switching button also send their corresponding data to the controller. The controller communicates with the server through the wireless communication module.
The circuit design of the simulated firefighting water-gun[22] is based on a MEMS sensor. A MEMS is an independent, highly integrated intelligent system composed of a controller, various sensors, and micro energy sources, and it can be used in a variety of multifunctional systems. The controller of the simulated firefighting water-gun is based on the Arduino Nano MCU development board, which uses the entry-level AVR microcontroller (ATmega328). It has a high performance-to-price ratio, good stability, and needs no additional power supply. Furthermore, its Mini-B USB interface allows it to be programmed while connected directly on a breadboard, and its small size means it can easily be incorporated into various kinds of special interaction simulation tools. The wireless communication module uses 2.4G or Bluetooth, each with its own characteristics. The 2.4G wireless communication mode integrates an nRF24L01+ chip inside the simulated firefighting water-gun, which is cheap, easy to use, and cost-effective. Through 2.4G wireless transmission, the data from the simulated firefighting water-gun is transmitted to a receiver, which is connected to the server via USB and converted to TTL serial communication. Only one data line is needed for the sequential bit transmission, at low cost and with sufficient transmission speed to meet the real-time requirements of the system. The Bluetooth mode achieves wireless communication through a Bluetooth module without a data line; only a pairing operation needs to be completed in advance, so its implementation is simple. The simulated firefighting water-gun recognizes its own posture and pointing data through the built-in MPU9150 sensor chip. The MPU9150 includes a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetic sensor; it has high stability and a simple circuit implementation, and it effectively reduces the drift error of the gyroscope.
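On the server side, the data arriving through the 2.4G receiver's USB-to-serial link might be read as sketched below. The comma-separated packet layout (yaw, pitch, roll, flow switch, extinguisher type), port name, and baud rate are assumptions made for illustration; the actual firmware defines its own framing.

```csharp
using System;
using System.IO.Ports;

// Server-side sketch of reading water-gun data from the 2.4G receiver's
// USB-to-serial link. The packet layout (yaw, pitch, roll, flow switch,
// extinguisher type), port name, and baud rate are illustrative assumptions.
public class WaterGunReceiver
{
    private readonly SerialPort port;

    public WaterGunReceiver(string portName = "COM3", int baudRate = 115200)
    {
        port = new SerialPort(portName, baudRate) { NewLine = "\n", ReadTimeout = 50 };
        port.Open();
    }

    public bool TryReadSample(out float yaw, out float pitch, out float roll,
                              out bool flowOn, out int extinguisherType)
    {
        yaw = pitch = roll = 0f; flowOn = false; extinguisherType = 0;
        try
        {
            string[] fields = port.ReadLine().Trim().Split(',');
            if (fields.Length < 5) return false;
            yaw = float.Parse(fields[0]);
            pitch = float.Parse(fields[1]);
            roll = float.Parse(fields[2]);
            flowOn = fields[3] == "1";
            extinguisherType = int.Parse(fields[4]);
            return true;
        }
        catch (TimeoutException) { return false; }
    }
}
```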
Two main interactive functions can be performed when a user holds the simulated firefighting water-gun. The first is controlling the simulated flow, including its rate and direction. The built-in sensor of the simulated water-gun obtains its real-time posture to control the direction of the virtual water flow, and, as with an actual water-gun, the switch and velocity of the water flow in the virtual scene are controlled by pushing and pulling the rocker of the simulated water-gun. The second is switching the type of fire extinguisher. To realize the virtual fire-extinguishing task, three types of fire extinguishers (water, foam, and dry powder) are preset for users to choose from. Users can switch the extinguisher type in the virtual scene by operating the button on the handle of the water-gun, to match different types of virtual fire sources.
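A received water-gun sample could then drive the virtual water jet roughly as follows: the yaw and pitch orient an emitter transform, the flow switch toggles a particle system, and the selected extinguisher type must suit the fire source for extinguishing to succeed. The field names and the simple matching rule are illustrative assumptions rather than the system's actual implementation.

```csharp
using UnityEngine;

// Sketch of how a received water-gun sample might drive the virtual jet.
// Field names and the extinguisher-matching rule are illustrative assumptions.
public class VirtualWaterGun : MonoBehaviour
{
    public Transform emitter;        // nozzle of the virtual water-gun
    public ParticleSystem waterJet;  // visualises the water / foam / powder stream

    public void ApplySample(float yaw, float pitch, bool flowOn)
    {
        // Orient the emitter from the gun's measured posture.
        emitter.localRotation = Quaternion.Euler(pitch, yaw, 0f);

        // The rocker switch starts or stops the virtual stream.
        if (flowOn && !waterJet.isPlaying) waterJet.Play();
        if (!flowOn && waterJet.isPlaying) waterJet.Stop();
    }

    // A fire source only counts as extinguished when the selected
    // extinguisher type suits it (simple one-to-one matching assumed).
    public static bool Extinguishes(int extinguisherType, int fireType)
    {
        return extinguisherType == fireType;
    }
}
```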
4.4 Virtual scene assembly and console device module
In this study, a virtual scene assembly and console device (Figure 3b) is developed so that the remote controller can monitor and adjust the virtual scene environment in real time. The virtual scene and each user's virtual interactive process data are transferred from the co-located drill part to the remote control part over the network. The controller observes the virtual interactive process in real time through the corresponding display devices, and changes the layout of the virtual scene, switches the perspective, and adds virtual fire sources through touch-screen interaction or the image recognition part. Subsequently, the results of the adjustment are transmitted back to the co-located drill part through the network to realize real-time control of the interactive process.
In the remote control part, two kinds of display devices are set up for the controller to observe the virtual environment. The first is a holographic projection display screen, through which the controller observes the 3D virtual scene in real time. The holographic projection screen is formed by two pieces of glass facing each other at an angle of 45°; the glass reflects the view of the 3D virtual scene shown on an LCD screen, producing the holographic projection display effect. The second is the flat LCD touch screen, which shows a 2D top view of the virtual scene and allows the layout of the virtual scene to be adjusted through multi-touch interaction and image recognition.
To make the interaction more intuitive and natural, the image recognition part is designed. The location and movement of a token are photographed and identified by the industrial camera, and the mapping and transformation between virtual and real scene coordinates enable the controller to adjust the fire-source distribution in real time by placing and moving the token. To ensure the accuracy and timeliness of identification, tokens should be designed with distinctive and abundant feature-point structures; their colors should contrast strongly with the background, and they should contain complex line, block, or surface structures to facilitate image processing and identification. In addition, the size of the token must be considered: a large token is easy to recognize, but it does not match the scale of the current scene layout and affects viewing of the whole scene.
In the experiments, it was found that the recognition accuracy is greatly affected by the ambient lighting when the token is identified by the camera. Therefore, after the environment is configured, all the tokens are photographed vertically one by one with the industrial camera, and the image of each token obtained by the camera is taken as its recognition and matching template under the current environment. The experimental results show that this method resists ambient light interference better than using the original digital images as templates.
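Because the paper does not name a specific matching algorithm, the following self-contained C# sketch uses normalized cross-correlation as one plausible way to match a captured grayscale image against the token templates shot under the current lighting.

```csharp
using System;

// Sketch of matching a captured grayscale image against a token template
// photographed under the current lighting. Normalized cross-correlation is
// used here as one plausible matcher; the paper does not specify one.
public static class TokenMatcher
{
    // image and template are row-major grayscale arrays with values in [0, 255].
    public static (int x, int y, double score) Match(
        byte[] image, int iw, int ih, byte[] template, int tw, int th)
    {
        double bestScore = double.NegativeInfinity;
        int bestX = 0, bestY = 0;

        // Precompute template statistics.
        double tMean = 0;
        foreach (byte t in template) tMean += t;
        tMean /= template.Length;
        double tVar = 0;
        foreach (byte t in template) tVar += (t - tMean) * (t - tMean);

        for (int y = 0; y <= ih - th; y++)
        for (int x = 0; x <= iw - tw; x++)
        {
            double iMean = 0;
            for (int v = 0; v < th; v++)
                for (int u = 0; u < tw; u++)
                    iMean += image[(y + v) * iw + (x + u)];
            iMean /= template.Length;

            double cross = 0, iVar = 0;
            for (int v = 0; v < th; v++)
            for (int u = 0; u < tw; u++)
            {
                double di = image[(y + v) * iw + (x + u)] - iMean;
                double dt = template[v * tw + u] - tMean;
                cross += di * dt;
                iVar += di * di;
            }

            double score = cross / Math.Sqrt(iVar * tVar + 1e-9);
            if (score > bestScore) { bestScore = score; bestX = x; bestY = y; }
        }
        return (bestX, bestY, bestScore);
    }
}
```

Running this match against each per-environment template and keeping the template with the highest score would give both the token type and its position on the touch screen.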
5 User study
Based on the virtual fire drill system supporting co-located collaboration proposed in this paper, we recruited eight graduate students, divided into two groups of four, to test the system's usability. Each group consists of three co-located users and one remote controller. The test tasks are as follows: (1) the three co-located users move freely to control virtual roaming and carry out firefighting tasks individually; (2) the controller watches the state of the virtual scene in real time and adds fire sources; (3) the three co-located users collaborate to put out the fire. After the test tasks are completed, each participant fills out a questionnaire about system usability and flow experience, in which levels 1-7 represent the degree of agreement from low to high.
Figure 7 shows the results of the statistical analysis. In terms of system usability, the average values for the whole system, system quality, information quality, and interface quality are all above level 5, reaching a relatively consistent standard. In terms of flow experience[23], the mean values of interest, satisfaction, behavioral intention, and participation are all above level 6, which meets the basic standards. Moreover, users reported that the system has a good collaborative effect and makes conducting firefighting tasks more interesting. In addition, the controller found the two display modes intuitive, being able to see not only the top view of the scene but also its three-dimensional effect. Users generally believe that the system has practical application value. In summary, the system has good usability and a strong sense of co-located collaboration, and it improves the user experience.
6 Conclusion and future work
This study proposes a virtual drill system supporting co-located collaboration, which integrates multi-view stereoscopic projective display technology, UWB positioning technology, and a variety of interaction technologies. Moreover, this paper improves the related technologies according to problems encountered during actual use; examples include the filtering of UWB positioning data and the use of the template method in image recognition. Thus, a complex co-located collaborative interaction system with practical application value is constructed. This system enables multi-user co-located collaboration in a large-screen projective environment through multi-view stereoscopic projective display technology. Furthermore, by combining it with UWB positioning technology, the system supports free movement of multiple users, each of whom watches an exclusive stereo video based on their own position. The simulated interactive tool gives users a more realistic experience. In addition, the remote controller can interact with the virtual environment in two ways, making the system more flexible. Because all three users are in front of the same projection screen and can see each other, they can communicate directly through dialogue, which effectively improves the sense of reality and the experience of co-located collaboration. Finally, the existence of two roles in the system, i.e., the common experiencers and the controller, improves the users' sense of participation and the flexibility of the system.
The multi-user co-located collaborative working environment based on large-screen stereoscopic projection proposed in this study is also suitable for virtual training in fields such as surgery and military affairs. The system currently supports three users viewing stereo video; in the future, two sets of projector arrays can be combined with optical polarization to provide views for six users, thereby enlarging the user capacity.

References

1. Guan D D, Meng X X, Yang C L, Sun W S, Wei Y, Gai W, Bian Y L, Liu J, Sun Q H, Zhao S W. Two kinds of novel multi-user immersive display systems. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018 DOI:10.1145/3173574.3174173

2. Galambos P, Weidig C, Baranyi P, Aurich J C, Hamann B, Kreylos O. VirCA NET: A case study for collaboration in shared virtual space. In: 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom). Kosice, Slovakia, IEEE, 2012 DOI:10.1109/coginfocom.2012.6421993

3. Czernuszenko M, Pape D, Sandin D, DeFanti T, Dawe G L, Brown M D. The ImmersaDesk and Infinity Wall projection-based virtual reality displays. ACM SIGGRAPH Computer Graphics, 1997, 31(2): 46–49 DOI:10.1145/271283.271303

4. Simon A. First-person experience and usability of co-located interaction in a projection-based virtual environment. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology. Monterey, CA, USA, ACM, 2005 DOI:10.1145/1101616.1101622

5. Gugenheimer J, Stemasov E, Frommel J, Rukzio E. ShareVR. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017 DOI:10.1145/3025453.3025683

6. Chen W Y, Ladeveze N, Clavel C, Mestre D, Bourdot P. User cohabitation in multi-stereoscopic immersive virtual environment for individual navigation tasks. In: 2015 IEEE Virtual Reality (VR). Arles, Camargue Provence, France, IEEE, 2015 DOI:10.1109/vr.2015.7223323

7. Chen W Y, Ladeveze N, Clavel C, Bourdot P. Refined experiment of the altered human joystick for user cohabitation in multi-stereocopic immersive CVEs. In: 2016 IEEE Third VR International Workshop on Collaborative Virtual Environments (3DCVE). Greenville, SC, USA, IEEE, 2016 DOI:10.1109/3dcve.2016.7563558

8. Yu H D, Li H Y, Sun W S, Gai W, Cui T T, Wang C T, Guan D D, Yang Y J, Yang C L. A two-view VR shooting theater system. In: 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. Shenzhen, China, 2014, 223–226 DOI:10.1145/2670473.2670497

9. Agrawala M, Beers A C, McDowall I, Fröhlich B, Bolas M, Hanrahan P. The two-user responsive workbench. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, 1997, 327–332 DOI:10.1145/258734.258875

10. Kulik A, Kunert A, Beck S, Reichel R, Froehlich B. C1×6: A stereoscopic six-user display for co-located collaboration in shared virtual environments. ACM Transactions on Graphics, 2011, 30: 188 DOI:10.1145/2070781.2024222

11. Hegarty C J. The global positioning system (GPS). In: Springer Handbook of Global Navigation Satellite Systems. Cham: Springer International Publishing, 2017, 197–218 DOI:10.1007/978-3-319-42928-1_7

12. Xiao J H, Liu Z, Yang Y, Liu D, Han X. Comparison and analysis of indoor wireless positioning techniques. In: 2011 International Conference on Computer Science and Service System (CSSS). Nanjing, China, 2011 DOI:10.1109/csss.2011.5972088

13. Salous S, Ridene T, Newton J, Chendeb S. Study of geometric dispatching of four-kinect tracking module inside a Cave. In: Sharkey P M, Pareto L, Broeren J, Rydmark M, eds. Proceedings of the 10th International Conference on Disability, Virtual Reality and Associated Technologies, 2014, 369–372

14. Sevrin L, Noury N, Abouchi N, Jumel F, Massot B, Saraydaryan J. Preliminary results on algorithms for multi-kinect trajectory fusion in a living lab. IRBM, 2015, 36(6): 361–366 DOI:10.1016/j.irbm.2015.10.003

15. Mara J, Morgan S, Pumpa K, Thompson K. The accuracy and reliability of a new optical player tracking system for measuring displacement of soccer players. International Journal of Computer Science in Sport, 2017, 16(3): 175–184 DOI:10.1515/ijcss-2017-0013

16. Zeng Z Q, Liu S, Wang W, Wang L. Infrastructure-free indoor pedestrian tracking based on foot mounted UWB/IMU sensor fusion. In: 2017 11th International Conference on Signal Processing and Communication Systems (ICSPCS). Surfers Paradise, QLD, 2017 DOI:10.1109/icspcs.2017.8270492

17. Liu C. Firearm design of virtual shooting system based on MEMS sensor. Computer Simulation, 2013, 30(9): 415–418 (in Chinese)

18. Qin P, Yang C L, Li H Y, Bian Y L, Wang Q C, Liu J, Wang Y C, Meng X X. Virtual reality shooting recognition device and system using MEMS sensor. Journal of Computer-Aided Design & Computer Graphics, 2017, 29(11): 2083–2090 (in Chinese)

19. Lin C, Sun X W, Yue C L, Yang C L, Gai W, Qin P, Liu J, Meng X X. A novel workbench for collaboratively constructing 3D virtual environment. Procedia Computer Science, 2018, 129: 270–276 DOI:10.1016/j.procs.2018.03.075

20. Sun X W, Meng X X, Wang Y F, de Melo G, Gai W, Shi Y L, Zhao L, Bian Y L, Liu J, Yang C L. Enabling participatory design of 3D virtual scenes on mobile devices. In: Proceedings of the 26th International Conference on World Wide Web Companion, 2017 DOI:10.1145/3041021.3054173

21. Gai W, Lin C, Yang C L, Bian Y L, Shen C A, Meng X X, Wang L, Liu J, Dong M D, Niu C J. Supporting easy physical-to-virtual creation of mobile VR maze games: a new genre. In: 2017 CHI Conference on Human Factors in Computing Systems. Denver, Colorado, 2017, 5016–5028 DOI:10.1145/3025453.3025494

22. Wang Q C. The design and implementation of the fire drill system based on the simulation interactive extinguisher. Jinan: Shandong University, 2018

23. Bian Y L, Yang C L, Gao F Q, Li H Y, Zhou S S, Li H C, Sun X W, Meng X X. A framework for physiological indicators of flow in VR games: construction and preliminary evaluation. Personal and Ubiquitous Computing, 2016, 20(5): 821–832 DOI:10.1007/s00779-016-0953-5