
2020, 2(4): 345–353. Published Date: 2020-8-20

DOI: 10.1016/j.vrih.2020.07.007

Abstract

At present, most experimental teaching systems lack operator guidance, and users often do not know what to do during an experiment. The user load is therefore increased, and students' learning efficiency is decreased. To solve the problem of insufficient system interactivity and guidance, an experimental navigation system based on multi-mode fusion is proposed in this paper. The system first obtains user information through sensing hardware devices, intelligently perceives the user's intention and the progress of the experiment from the acquired information, and finally carries out multi-modal intelligent navigation for the user. As an innovative aspect of this study, intelligent multi-mode navigation is used to guide users in conducting experiments, thereby reducing the user load and enabling users to complete their experiments effectively. The results show that this system can guide users in completing their experiments, effectively reduce the user load during the interaction process, and improve learning efficiency.


1 Introduction
As important new technologies, human-computer interaction and virtual reality are becoming widely used in all fields of life. In education, to better assist teachers and students, many companies have applied human-computer interaction and virtual reality technology to their products, and virtual experiment schemes such as NOBOOK's virtual experiment system and NetDragon's 101VR classroom have been proposed. However, most virtual experimental systems lack guidance for users during the experimental process, preventing users from understanding how to conduct the experiment, thereby increasing the user load and reducing learning efficiency. Therefore, developing a way to intelligently guide the user through an experiment and remind the user of particular problems during it is of significant importance.
To solve the problem of insufficient guidance in experimental systems, an intelligent navigation system based on multi-mode fusion is proposed in this paper. The system first obtains user information through sensing hardware devices, intelligently perceives the user's intention and experimental progress from the acquired information, and finally carries out multi-modal intelligent navigation for users. The innovative aspect of this paper is as follows: intelligent multi-mode navigation is used to guide users in conducting experiments, thereby reducing the user load and enabling users to complete their experiments better. It effectively reduces the user load during an interaction and improves experimental efficiency.
The rest of this paper is organized as follows: Section 2 introduces previous studies related to the teaching system. Section 3 introduces the system design and implementation. Section 4 describes and analyzes the experimental results. Section 5 introduces the user experience. Finally, Section 6 provides some concluding remarks.
2 Related studies
In the field of education, many experts and scholars are committed to providing better learning and teaching methods for students and teachers.
At present, many educational systems lack guidance for users, which increases the user load. Therefore, to reduce the user load, an intelligent navigation system based on multimodal fusion is proposed. This system realizes multi-modal intelligent navigation through intelligent perception of the user's intention and experimental progress. This interactive mode, in which the system navigates for the user during the experiment, can guide the user to complete the experiment better, effectively reducing the user load and improving experimental efficiency.
3 System design and implementation
3.1　Hardware structure design of experimental equipment
Figure 1 shows a photograph of the hardware device. The sensors used by each device and their locations are marked on the diagram. The device captures user actions through multiple sensors and transmits them to the computer software application using the MQTT protocol. At the same time, a Kinect device captures and transmits data on the displacement of the user's hands to the computer software application.
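The sensor-to-software link can be sketched as follows. This is a minimal sketch assuming a JSON payload; the topic name, field names, and message format are illustrative assumptions, since the paper does not specify the wire format used over MQTT.

```python
import json

# Hypothetical MQTT topic for equipment events (an assumption, not from the paper).
SENSOR_TOPIC = "lab/equipment/+/events"

def parse_sensor_event(payload: bytes) -> dict:
    """Decode one MQTT message body into an action event for the fusion layer.

    The payload is assumed to be UTF-8 JSON with "device", "action", and an
    optional numeric "value" field (e.g., a tilt angle in degrees).
    """
    msg = json.loads(payload.decode("utf-8"))
    return {
        "device": msg["device"],          # e.g., "conical_flask"
        "action": msg["action"],          # e.g., "tilt", "grasp"
        "value": float(msg.get("value", 0.0)),
    }

# Example: decoding one event as it would arrive in an on_message callback.
event = parse_sensor_event(b'{"device": "conical_flask", "action": "tilt", "value": 32.5}')
```

In a full client (e.g., with the Eclipse Paho library), `parse_sensor_event` would be called inside the subscriber's message callback and the resulting event handed to the intent-perception module.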
For the hardware, a Kinect 2.0, a computer, and 3D-printed experimental simulation equipment (including sensors) are used. The software environment comprises Windows 10, Unity 2018, Visual Studio 2015, and Baidu voice. The programming language is C#.
3.2　Interactive navigation design
Navigational interactions are designed based on intentional behavior nodes and virtual scene information. An intentional behavior node is the system's perceptual description of the user's multimodal information and the basis of an interaction. The specific interaction design is as follows (Figure 2):
(1) The nodes in the intentional-behavior node set $NQ$ are filtered, and the node set $WNQ$ to be executed is obtained (see 3.2.1 for details; $NQ$ is a set of intention behavior nodes, each composed of an action, an object, and attributes, and $WNQ$ holds the nodes before checking whether they can be directly executed).
(2) Each intentional behavior node in $WNQ$ is executed if it can be; otherwise, voice guidance is provided for the user's operations (see 3.2.2 for details).
(3) The progress of the experiment is monitored, and users are guided through the experiment by the navigation system (see 3.2.3 for details).
3.2.1　Filtering of intent behavior nodes
Owing to the simultaneous operations required in a chemical experiment, the system supports the perception of dual-operation intentions. Because one or two intent behavior nodes may be produced, filtering out the nodes that the user actually wants to execute is a prerequisite for an interaction. The filtering focuses on two nodes that share the same active object but have different intents, because such nodes may not be executable at the same time (an intentional behavior node contains the active object, which is generally the piece of experimental equipment that primarily sends the signals). The processing method is as follows:
(1) The number of elements in $NQ$ is determined. If there is only one element, it is directly added to the node set $WNQ$ to be executed. If there are two elements whose active objects are the same but whose intents differ, step (2) is executed; otherwise, $NQ$ is used as the set of nodes to be executed, $WNQ$.
(2) According to the shortest intention-transformation path method $SRP(Node)$ (see 3.2.2 for details), it is determined whether each of the two active objects can reach its new intention from its current intention. If both can be reached directly, step (3) is executed; if not, $NQ$ is used as the set of nodes to be executed, $WNQ$.
(3) The user is asked to select one of the intention nodes to be executed, and the other node is set as invalid.
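The filtering steps above can be sketched as follows. The node representation and the `can_reach_directly` and `ask_user` callbacks are illustrative assumptions standing in for the $SRP(Node)$ check of 3.2.2 and the system's dialog with the user.

```python
# Sketch of intent-node filtering (steps 1-3 above).
# A node is modeled as a dict with "action", "object" (active object),
# and "intent" fields; these field names are assumptions for illustration.

def filter_nodes(NQ, can_reach_directly, ask_user):
    # Step 1: a single node goes straight into the set to be executed.
    if len(NQ) == 1:
        return list(NQ)
    a, b = NQ
    same_object = a["object"] == b["object"]
    different_intent = a["intent"] != b["intent"]
    if not (same_object and different_intent):
        return list(NQ)
    # Step 2: only if both new intents are directly reachable is there a
    # genuine conflict that the user must resolve.
    if can_reach_directly(a) and can_reach_directly(b):
        # Step 3: the node the user selects is kept; the other is invalid.
        chosen = ask_user(a, b)
        return [chosen]
    return list(NQ)

# Example: two conflicting intents on the same flask; the user picks the first.
a = {"action": "pour", "object": "flask", "intent": "pouring"}
b = {"action": "heat", "object": "flask", "intent": "heating"}
WNQ = filter_nodes([a, b], can_reach_directly=lambda n: True, ask_user=lambda x, y: x)
```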
3.2.2　Execution of the intended behavior node
The system interacts according to the filtered nodes. During node execution, the shortest intention-transformation path method $SRP(Node)$ performs path planning for the intention transformation of the nodes in $WNQ$ (the set of nodes to be executed). If a node's intention can be converted directly, the node is executed directly. If not, the user is prompted with the planned shortest intention-transformation path.
The shortest transformation path method $SRP(Node)$ is based on graph theory. For each object, an intention-transformation graph is created to represent the transitions between different intents. The shortest transformation path table $SR_{intent}$, the transformation requirement table $TPR_{intent}$, and the necessity intention table $TK_{intent}$ ($TK_{intent}$ indicates the intention to execute first when the necessary conditions are not met) are saved in the knowledge base.
Figure 3 shows the $SRP(Node)$ method. First, according to the $TPR_{intent}$ table, the method determines whether the node satisfies the intent-conversion condition.
If the conversion condition is satisfied, the shortest path $R_{node}$ by which the object's current intention is converted into the new intention is obtained from the $SR_{intent}$ table. If the path indicates that the new intent can be reached directly without other operations, the node is an executable node $Node'$; if additional operations are required before execution, $R_{node}$ becomes the planned shortest intention-transformation path $IR_{node}$.
If the transformation condition is not met, the intent that must be executed first (the required intent) is derived from the $TK_{intent}$ table. Then, according to the $SR_{intent}$ table, the shortest path from the current intention to the necessary intention is spliced with the shortest path from the necessary intention to the new intention to obtain the final planned transformation path, $IR_{node}$.
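Under the graph-theoretic description above, $SRP(Node)$ can be sketched with a breadth-first shortest-path search over an object's intention-transformation graph. The concrete data layout here is an assumption for illustration: the $SR_{intent}$ lookup is replaced by an on-the-fly BFS, $TPR_{intent}$ by a boolean `condition_met`, and $TK_{intent}$ by a `required_intent` argument.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS shortest path in an intention-transformation graph
    (adjacency dict: intent -> list of directly reachable intents)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

def srp(graph, current, new, condition_met, required_intent=None):
    """Sketch of SRP(Node): ("execute", path) when the new intent is one hop
    away, otherwise ("guide", planned_path) playing the role of IR_node."""
    if condition_met:
        path = shortest_path(graph, current, new)
        if path is not None and len(path) == 2:   # directly reachable
            return ("execute", path)
        return ("guide", path)
    # Condition not met: splice current -> required intent -> new intent.
    first = shortest_path(graph, current, required_intent)
    second = shortest_path(graph, required_intent, new)
    return ("guide", first + second[1:])

# Example intention graph for a flask (states are illustrative assumptions).
g = {"idle": ["uncapped"], "uncapped": ["filled"], "filled": ["pouring"]}
```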
3.2.3　Interactive navigation method
The interactive navigation method monitors the user's operations and experimental progress in real time, and includes voice navigation and visual navigation.
To reduce the user load, a method for navigation during an experiment is proposed herein. When the user makes a common-sense mistake (such as violating the operating method of an experimental instrument), the user is prompted according to the planned path, which not only indicates that the operation is unreasonable but also reduces the risk that the experiment cannot continue. For the key experimental knowledge, the system supports exploratory experiments (that is, the phenomena resulting from incorrect operations can be observed), so that students gain a deeper understanding of the key chemistry knowledge. During this process, the system gives feedback, explains the experimental phenomenon, and guides the user toward the correct operation. In addition, the system automatically monitors the progress of the experiment and provides voice navigation at the key nodes to guide the user's operations. Compared with the traditional method (in which a simple navigation is applied only at the beginning of the experiment), this method of navigation during the experiment better reduces the user load and reduces the risk that the experiment will not be completed because the system's operating methods are unknown.
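The in-experiment navigation described above can be sketched as a simple progress monitor that fires a voice prompt when a key node is reached. The step names and prompt texts below are illustrative assumptions, not the paper's actual navigation content; `speak` stands in for a TTS callback (Baidu voice in this system).

```python
# Minimal sketch of progress monitoring with voice prompts at key nodes
# (step names and prompt texts are assumptions, not from the paper).
KEY_STEPS = {
    "add_water": "Add water to the conical flask first.",
    "add_acid": "Pour the concentrated sulfuric acid slowly along the glass rod.",
    "stir": "Stir with the glass rod to dissipate heat.",
}

class ProgressMonitor:
    def __init__(self, speak):
        self.done = []
        self.speak = speak          # TTS callback, e.g., a Baidu voice wrapper

    def on_step(self, step):
        """Record a completed step and prompt the user at key nodes."""
        self.done.append(step)
        if step in KEY_STEPS:
            self.speak(KEY_STEPS[step])

# Example run over the dilution experiment's key steps.
prompts = []
monitor = ProgressMonitor(speak=prompts.append)
for step in ["add_water", "add_acid", "stir"]:
    monitor.on_step(step)
```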
The system presented herein uses a virtual electronic screen to guide the users. The key steps in the experiment are presented on the screen, allowing the user to follow the prompts on the screen during the operation. Visual and voice navigation are used together. Visual navigation is focused more on providing the experimental steps, whereas voice navigation is focused more on the navigation of dynamically generated operations during the experimental process.
4 Experimental results and analysis
4.1　Experimental results
The system is designed mainly in Unity3D, and the multi-mode signals are transmitted to Unity3D for fusion. In this study, the effectiveness of the proposed system is verified through an experiment on diluting concentrated sulfuric acid and a sucrose carbonization experiment.
Figure 4(a) shows an operational check conducted during an intent transformation. In the concentrated sulfuric acid dilution experiment, reagent must be present in the conical flask before the separating funnel is installed on it. Therefore, when there is no reagent in the conical flask, the user is required to first add reagent to the flask and then install the separating funnel. At this point, the user is intelligently prompted based on the $SRP(Node)$ method. Figure 4(b) shows the intelligent voice navigation. In the sucrose carbonization experiment, a glass rod should be used to stir the reagent to accelerate the reaction; thus, the system prompts the user to stir via voice navigation.
5 User experience
To evaluate the navigation-based chemical experiment system based on multimodal fusion proposed herein, 41 students from the affiliated primary school of Jinan University, Zhangqiu Middle School, Zhangqiu Wuzhong School, and Shenxian Experimental High School were invited to participate. In addition, 12 teachers were invited, for a total of 53 participants (the experiment on diluting concentrated sulfuric acid was selected).
Figure 5 shows users using NOBOOK's virtual experiment system (referred to as the NOBOOK system) and the navigation-based chemical experiment system based on multi-mode fusion proposed in this paper (referred to as the proposed system). The NOBOOK system interacts through a mouse or touch screen and applies navigation only at the beginning of the experiment. The proposed system interacts through the simulation equipment, voice, and vision, and conducts intelligent navigation throughout the experiment. Students conducted the experiment, and teachers demonstrated it, in person. Through a questionnaire and verbal descriptions, users evaluated their experience with the two systems.
Figure 6 shows a statistical graph of the user experience with both experimental systems. Each indicator in the figure is the average over all participants, and each item is scored from 0 to 5 points. For the first four items (mental requirements, physical requirements, degree of frustration, and difficulty of independently completing the experiment), a lower score indicates a better experience; for the remaining items, a higher score indicates a better experience.
Compared with the NOBOOK virtual experiment, the navigation system proposed in this paper effectively reduces the users’ mental requirements (users need to remember how to use the system), physical demands, degree of frustration, and difficulty in completing the experiment independently. The user’s learning efficiency is effectively improved, and the user feels more relaxed.
6 Conclusion
To solve the problem of an insufficient interaction and guidance found in most experimental systems, an intelligent navigation system based on multi-mode fusion is proposed. The system first obtains user information by sensing hardware devices, intelligently perceives the user intention and experimental progress according to the acquired information, and finally carries out multi-modal intelligent navigation for users.
As the innovative aspect of this paper, multi-mode intelligent navigation is applied to guide users in conducting experiments, reducing the interactive load on users and enabling them to complete their experiments effectively.
The experimental results show that the navigation interaction system proposed herein can effectively reduce the mental requirements, physical demands, degree of frustration, and difficulty of completing an experiment independently. It effectively improves the user's learning efficiency and makes the user feel more relaxed. Therefore, the proposed system can effectively reduce the user load and improve learning efficiency.
