
2020, 2(4): 354–367. Published: 2020-08-20

DOI: 10.1016/j.vrih.2020.07.002

Abstract

Background
Machine learning-based beer bottle-defect detection is a complex technology that runs automatically; however, it consumes considerable memory, is expensive, and poses a certain danger when novice operators are being trained. Moreover, some topics, such as digital image processing and computer vision, are difficult to learn from experimental lectures alone. Virtual simulation experiments, by contrast, have been widely used to good effect in education. A virtual simulation of the design and manufacture of a beer bottle-defect detection system will not only help students increase their image-processing knowledge, but also improve their ability to solve complex engineering problems and design complex systems.
Methods
The hardware models for the experiment (camera, light source, conveyor belt, power supply, manipulator, and computer) were built using the 3DS MAX modeling and animation software. The Unreal Engine 4 (UE4) game engine was utilized to build a virtual design room, design the interactive operations, and simulate the system operation.
Results
The results showed that the virtual simulation system received much better experimental feedback than the traditional experiment and facilitated the design and manufacture of a beer bottle-defect detection system. The specialized functions of the detection system's modules, including a basic experimental operation menu, power switch, image shooting, image processing, and manipulator grasping, allowed students (or virtual designers) to easily build a detection system by retrieving basic models from the model library and assembling the beer-bottle transportation, image shooting, image processing, defect detection, and defective-product removal stages. The virtual simulation experiment was completed with image processing as its main body.
Conclusions
By mainly focusing on bottle mouth-defect detection, the detection system dedicates more attention to the user and the task. With more detailed tasks available, the virtual system will eventually yield much better results as a training tool for image-processing education. In addition, a novel visual perception-thinking pedagogical framework enables better comprehension than the traditional lecture-tutorial style.

Content

1 Introduction
Digital image processing is an important course in the electronic-information field. The experiments in this course are a vital link in the teaching process, and comprehensive design experiments play an important role in cultivating students' overall competence and capacity for innovation. Currently, image-processing experiments for undergraduate students mainly involve either demonstration software with a simple operation interface or coding-based experiments with a programming toolbox. These relatively simple confirmatory or comprehensive experiments also play a certain role in cultivating students' abilities.
In the image-processing teaching phase, different scholars have explored many methods and means to boost their students' experimental perception. Strong mathematical-analysis foundations and necessary programming skills may be difficult for students to obtain from lectures. To alleviate student apprehension, software tools should enable most students to grasp concepts through abundant visual representations and experimental exercises, rather than extracting the concepts from lectures[1].
The Matlab image-processing toolbox[2-6] supports different numbers and types of experiments that build upon each other. By manually triggering the corresponding events or initializing simple parameters, the experimental results are demonstrated in a single phase, which is simple and intuitive for trainees and students. However, this approach requires students to master coding skills based on the algorithm principles and programming rules. Because it focuses on simplifying complicated details, it can easily neglect global ideas and authentic assignments that refer to real production scenarios, such as generating questions, discussing ideas, making predictions, and designing experiments[7].
Some experimental systems allow trainees to show their results in real time, or allow students to quickly observe the experimental results. Gong et al. discussed a comprehensive digital image-processing design experiment using functional calculations in Matlab[8]; an image-encryption algorithm based on a fractional Mellin transform, covering only a narrow range of knowledge, was realized in this experiment. Ma et al. conducted experiments using a combination of LabVIEW and Matlab[9]. However, these experiments were not comprehensive and lacked design-experiment visualizations.
Yuan et al. proposed web-based experiments about medical image processing in which students processed the images using a cloud web server over the internet[10]. The images were processed through the web server and the results were displayed on a web interface at the client side. However, these methods were limited by the network bandwidth and the server's processing ability. The students did not directly learn the image-processing code, and the experiments were not conducive to mastering the material. Such shortcomings deprive students of higher-order thinking opportunities and sophisticated problem-solving strategies because of the insufficient integration of a real work atmosphere and environment with the practice settings.
In recent years, virtual reality (VR) technology has developed rapidly. Because of its immersive, interactive, and imaginative characteristics[11], VR has been widely utilized in education, safety training, medical treatments, military aerospace, industrial simulations, and other fields. Virtual simulation experiments are especially of interest in the educational field. In the higher-education reform process, experimental lectures have gradually changed from a focus on experimental-scenario construction to a hierarchical experimental lecture system and platform to cultivate the students' personal abilities[12].
The diversity of educational technology, hierarchical information technology, and VR-based teaching modes play a positive role in cultivating students' innovative spirit and practical ability, and increase their thirst for learning[13,14]. Kvasznicza designed a VR-based motor lecture system that visually explained a motor's mechanical structure and working principle, along with the dynamic changes of the current and voltage[15]. Sakamoto et al. developed and designed chemical experiments based on the Unity 3D engine[16]. The students could operate instruments and add chemicals with an HTC Vive controller. This setup not only reduced the risk of chemical experiments, but also showed the macroscopic and microscopic chemical reactions in 3D form, enabling students to understand the reaction process more intuitively. Wang et al. built a virtual simulation experiment of a thermal power unit to assist the students' practice[17].
These studies created online virtual platforms that provided various simulations, identified alternative concepts in structural geometry, and showed detailed operations. Moreover, interactive questionnaires that highlight key points in the textbook can reinforce these concepts in students' minds.
However, thus far, there are no virtual simulation projects that originate from complex engineering problems and are suitable for electronic-information students to learn by themselves. In addition, no such projects have been designed into an experimental course that enables students to carry out hands-on training.
Beer bottle-defect detection is an expensive, complex automation system that integrates light, machinery, electricity, information processing, control, and other elements. Because of its high cost, risk, and other factors, it cannot be directly installed in school laboratories for teaching. In practice, such a system is only viewed from afar without close contact; its working process and principles are essentially invisible and untouchable.
Virtual reality technology makes it possible for this system to be utilized in lectures. In the virtual simulation experiment of a beer bottle-defect detection system, students play the roles of designer and producer. First, they systematically learn and understand its composition and working principles. Second, they design and construct a similar detection system based on the module functions provided by the simulation system, focusing on image recognition to detect various types of glass-bottle defects. With such a system, students are able to master key knowledge, train abilities related to multiple courses, and develop the ability to solve complex engineering problems. The proposed framework not only extends the idea of “conceptual reconstruction” through visual interaction, but also improves the experimental skills and learning behavior of students, which can be transferred to a real workspace.
2 Methods
2.1 Content design of the virtual simulation experiment
From the user's point of view, this virtual simulation experiment includes an experiment introduction, principle introduction, model library, typical image-processing algorithm introduction, interactive-model operation, simulation results after the model operation, and learning and practice functions. In the assessment case, the user is equivalent to a designer and a producer. Using the principles and methods learned previously, and after the assessment requirements are initially set, the user should design and manufacture a specific task-detection system that meets the indicators' requirements. This process should mainly include the transmission mechanism, control mechanism, rejection device, multi-step image processing, defect detection, selection and layout of the light, camera, and lens, and selection and parameter settings of the central processor (Figure 1).
From the developer's perspective, it is necessary to complete models of various devices in the system and build various items, such as a basic virtual design-studio environment, operation interface, menu items, and toolbars. The following elements should also be included: principle introduction, function introduction, algorithm introduction, interactive operation, learning and practice functions, image-processing simulation function, task setting, model selection, layout and placement, error-judgment process, evaluation, simulation operation, result-display evaluation, and performance management or other functions.
2.2 Virtual simulation-experiment development process
The main parts of the virtual detection system, such as the frame structure, components, composition, and working principle, were constructed and determined according to a prototype of the beer bottle-defect detection system. Meanwhile, the functional requirements were analyzed and determined. After importing the specific 3D models previously generated with the 3DS MAX software, the virtual scenarios were created and prepared for development by the related staff. Using the visual-script blueprints of the Unreal Engine 4 (UE4) game engine, as well as the necessary C++ programming, the experimental functions and the interactive operations of the UI (user interface) were developed, based on the image-processing flow of the beer-bottle detection.
Procedures such as image filtering, image enhancement, bottle-mouth positioning, defect segmentation, and image erosion are initially prepared as resources that are later imported into the UE4 software. These events can be triggered in UE4 by a mouse click. Meanwhile, UE4's UI can display the processing flow: when a UI button is clicked, the corresponding processing results are shown.
Figure 2 presents a detailed top-down view of the complete scenario, showing the location of each virtual instrument. Resources such as the structural schematic diagram of the detection assembly line provide the fundamental basis for students to understand and construct the scenes.
2.3 Main methodology of the virtual bottle simulation
Image segmentation is an important tool that segments image pixels into non-overlapping homogeneous regions. Intensity, shape, color, position, texture, and homogeneity are good features for selecting a homogeneous region[18].
A genetic algorithm is an adaptive global-optimization technique inspired by biological evolution. A stochastic parallel search is utilized to find the optimum solution, so the process is unlikely to fall into local optima. The main steps are listed as follows, and a sketch applying them to threshold selection is given after the list:
(1) Code a genetic representation of the solution domain;
(2) Randomly initialize the population size to allocate the entire range of possible solutions;
(3) Iteratively check the individual solutions through a fitness-based evaluation;
(4) Select a better solution, based on the fitness of each solution;
(5) Iteratively generate the second-generation population of the above solutions through a mixture of crossover and mutation operators until a termination condition is reached, e.g., a minimum criterion.
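As an illustration, the following Python sketch applies these five steps to search for a gray-level segmentation threshold, using the between-class variance defined in the next subsection as the fitness. It is a minimal example: the 8-bit threshold encoding, roulette-wheel selection, averaging crossover, and operator rates are assumptions rather than the exact settings used in the experiment.

```python
import numpy as np

def fitness(hist, k):
    """Between-class variance sigma_B^2(k), used as the fitness of threshold k."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    p1 = p[:k + 1].sum()
    p2 = 1.0 - p1
    if p1 == 0.0 or p2 == 0.0:
        return 0.0
    m1 = (levels[:k + 1] * p[:k + 1]).sum() / p1
    m2 = (levels[k + 1:] * p[k + 1:]).sum() / p2
    mg = (levels * p).sum()
    return p1 * (m1 - mg) ** 2 + p2 * (m2 - mg) ** 2

def ga_threshold(hist, pop_size=20, generations=50,
                 crossover_rate=0.8, mutation_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Steps (1)-(2): encode each individual as an 8-bit gray-level threshold
    # and randomly initialize the population over the whole solution range.
    pop = rng.integers(0, 256, size=pop_size)
    for _ in range(generations):
        # Step (3): fitness-based evaluation of every individual.
        fit = np.array([fitness(hist, int(k)) for k in pop])
        # Step (4): roulette-wheel selection favoring fitter thresholds.
        probs = fit / fit.sum() if fit.sum() > 0 else np.full(pop_size, 1.0 / pop_size)
        parents = rng.choice(pop, size=pop_size, p=probs)
        # Step (5): crossover (averaging two parents) and mutation (random reset);
        # the fixed generation count serves as the termination condition.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                children[i] = children[i + 1] = (parents[i] + parents[i + 1]) // 2
        mutate = rng.random(pop_size) < mutation_rate
        children[mutate] = rng.integers(0, 256, size=int(mutate.sum()))
        pop = children
    fit = np.array([fitness(hist, int(k)) for k in pop])
    return int(pop[np.argmax(fit)])

# Usage: hist = np.bincount(gray_image.ravel(), minlength=256)
#        k = ga_threshold(hist)
```

Because each individual is a single integer here, this is a deliberately small-scale GA; practical implementations often use binary chromosomes and elitism.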
Otsu's method (OTSU) is a binarization algorithm that exhaustively searches for the threshold that minimizes the intra-class variance, or equivalently maximizes the between-class variance. Consequently, it is similar to a one-dimensional, discrete analog of Fisher's discriminant analysis. Its procedure includes the following steps:
(1) Compute the intensity distribution $P(i)$ of each gray level $i$ for an $M \times N$ (pixel) image $f(x,y)$:

$$P(i) = \frac{\#\{(x,y) \mid f(x,y) = i\}}{M \times N}$$

(2) For each candidate threshold $k \in \{0, 1, 2, \ldots, l-1\}$, where $l$ is the number of gray levels, compute the cumulative sum $P_1(k) = \sum_{i=0}^{k} P(i)$, the class mean $m_1(k)$, the global intensity mean $m_G$, and the between-class variance

$$\sigma_B^2(k) = P_1(k)\,(m_1(k) - m_G)^2 + P_2(k)\,(m_2(k) - m_G)^2$$

where $P_2(k) = 1 - P_1(k)$ and $m_2(k)$ are the probability and mean of the second class.
(3) Search through all possible thresholds $k$ for the best one, $k^*$:

$$k^* = \arg\max_k \{\sigma_B^2(k)\}$$
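A compact NumPy sketch of this exhaustive search is shown below. It uses the standard closed form of $\sigma_B^2(k)$, which is algebraically equivalent to the two-class expression above, and assumes an 8-bit grayscale input image:

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu search over an 8-bit grayscale image of shape (M, N)."""
    # Step (1): intensity distribution P(i) = n_i / (M * N).
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    levels = np.arange(256)
    # Step (2): cumulative sum P1(k), cumulative first moment m(k), global mean mG.
    p1 = np.cumsum(p)
    m = np.cumsum(levels * p)
    mg = m[-1]
    # Between-class variance in closed form:
    # sigma_B^2(k) = (mG * P1(k) - m(k))^2 / (P1(k) * (1 - P1(k))).
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mg * p1 - m) ** 2 / (p1 * (1.0 - p1))
    sigma_b = np.nan_to_num(sigma_b)
    # Step (3): k* = argmax sigma_B^2(k).
    return int(np.argmax(sigma_b))
```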
3 Results
3.1 3D equipment models of the detection system
Following the composition of a real detection system[19,20], the equipment model of a beer bottle-detection system mainly includes transmission equipment, power-supply equipment, cameras, light sources, mechanical grabbers, work boxes, etc. These models are constructed using 3DS MAX, as shown in Figure 3. To construct the model library, FBX format-based models are imported into UE4, where the 3D model and virtual design room can be constructed, as shown in Figure 4.
3.2 Functional design and implementation
3.2.1   User-interface design and implementation
The user interface (UI) is mainly used for friendly interactions between people and the system using certain logical functions (Figure 5). The Unreal Motion Graphics UI Designer (UMG) in UE4 is used to design UI elements, e.g., widget blueprint control, the canvas panel in the blueprint editor, and any interface-related graphics, based on specific functions. ‘Widgets’ are the core of UMG. They are pre-made functions that allow the user to arrange the user interface. Login interfaces are designed for the teacher and students, according to the needs of the virtual experiment. Through an account login and password, the ‘preview’ and ‘assessment’ functional interfaces are presented for the students to learn the experimental principles and operate the virtual equipment. The user clicks a button on the interface to activate the corresponding function. Meanwhile, image-processing procedures in UE4 are formulated, based on the previous workflow shown in Figure 1.
3.2.2   Realization of the power control and transmission device
An Actor Class was established in UE4, and each model is added to it as a component so that the model can be developed functionally. This allows the Actor Class functions to be easily called multiple times during development[21]. In the proposed framework, the power supply is a significant part of the complete system. UE4 is utilized to create a new power blueprint class, and the OnClicked() event is added to the button component of the power-box model. The MultiGate() process-control function switches the power on and off when the power-switch button is clicked with the mouse. When the switch button is first clicked, the button turns on, and a TRUE value is passed to the blueprint BOOL of the transmission device. Then, the power button turns red, simulating an active power supply, and the conveyor belt starts to run. A second click of the button returns it to its original state, simulating that the power is off, and stopping the conveyor belt (Figure 6).
Based on the virtual-experiment requirements, a widely used linear-transmission device was designed using the modeling software. The Actor Class was built in UE4, and the conveyor model was added to it. The Tick() event with a Delta Seconds variable was added to the Script. The rotation speed v of the belt is calculated as follows:
v = DeltaSeconds * n * GetForwardVector()
where Delta Seconds is a variable in Tick(), * means multiplication, n is an integer variable, and GetForwardVector() acquires the direction vector of the forward movement.
The AddActorWorldOffset() function is added to the conveyor belt, v is assigned to the DeltaLocation parameter, and the conveyor belt moves at speed v. The physical properties of the conveyor belt, e.g., friction force and gravity, are also set. When the conveyor belt is working, the array of object classes contacting the belt is obtained and traversed using the GetOverlappingActors() function. Each contacting object class is then identified, and the object moves with the conveyor belt.
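The following engine-agnostic Python sketch illustrates this per-tick update; the names mirror the UE4 variables above, but the standalone loop and the sample frame times are illustrative assumptions, not UE4 code:

```python
import numpy as np

FORWARD = np.array([1.0, 0.0, 0.0])  # GetForwardVector(): belt's facing direction
N = 50                               # speed factor n (units per second)

def tick(position, delta_seconds):
    """One Tick(): v = DeltaSeconds * n * GetForwardVector(), then offset."""
    v = delta_seconds * N * FORWARD
    return position + v              # AddActorWorldOffset(v); overlapping
                                     # objects receive the same offset

bottle = np.zeros(3)
for dt in (0.016, 0.033, 0.016):     # variable frame times still yield a
    bottle = tick(bottle, dt)        # frame-rate-independent belt speed
```

Scaling the offset by the frame time is what keeps the belt speed constant regardless of the rendering frame rate.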
3.2.3   Camera adjustment
If the industrial-camera menu in the UI is clicked, the menu function keys of the CCD camera and CMOS camera pop up. If a camera menu item is clicked, a camera model is generated in the scene, and the model can be placed by dragging it with the mouse to the corresponding position. The CCD camera and CMOS camera are each equipped with two lenses with different initial parameters; clicking lens 1 or lens 2 switches between them. The lens-adjustment button in the UI can be used to adjust the focal length. In the lower-right corner of the interface, a small window displays the change of the camera's angle of view, showing that the imaging varies in proportion to the focal length (Figure 7).
The focus is adjusted through the MakePostProcessSettings() function. Each adjustment increases or decreases the focal length by 2mm; with the maximum and minimum focal-length values initialized, the focal length after adjustment is EndDistance = StartDistance ± 2.
Once the camera system is established, the EndDistance value can be assigned to the focal-distance variable of the MakePostProcessSettings() function to simulate the change of vision with the change of focal length. When 'lens stretch' is clicked, the focal length is incremented; when 'lens shorten' is clicked, the focal length is decremented, simulating the change of the camera's visual-field definition.
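A minimal sketch of this stepping logic is given below; the function name and the 8-50mm lens limits are hypothetical values for illustration:

```python
def adjust_focal(start_distance, stretch, f_min=8.0, f_max=50.0, step=2.0):
    """Return EndDistance = StartDistance +/- 2 (mm), clamped to the lens limits."""
    end_distance = start_distance + step if stretch else start_distance - step
    return min(max(end_distance, f_min), f_max)

# 'lens stretch' -> adjust_focal(f, True); 'lens shorten' -> adjust_focal(f, False)
```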
3.2.4   Realization of the elimination function
Beer bottles can be generally divided into four categories: normal bottle, bottle with a defective mouth, bottle with a defective bottom, and bottle with a defective body. On the detection assembly line, when the bottle image is processed and the system checks for defects, a normal bottle continues to move along. When the system detects a bottle with a defect, it sends an instruction through the controller, and the defective bottle is removed by the removal-module device.
When the removal module receives the removal order from the controller, the mechanical arm seizes and removes the defective bottle at the proper moment. The mechanical arm grabs the bottle using the AttachToComponent() function. It uses the time axis to set the moving path and moves according to the moving path and the moving time, following the timeline time axis. After the arm has traveled a certain distance, it drops the defective bottle away from the conveyor and returns (Figure 8).
3.3 Mouse interaction functions
The key to dragging objects with the mouse is to transform the two-dimensional screen coordinates of the mouse into the three-dimensional coordinates of the world space using the ConvertMouseLocationToWorldSpace() function. The LineTraceByChannel() function then emits a ray into the scene from the mouse position and detects what the ray hits.
The controlled character's camera is obtained through the GetPlayerCameraManager() function, and the world position of the camera is obtained through GetWorldLocation(). Starting from the camera position, a ray is emitted along the direction of the mouse click. If the ray collides with the clicked object, the LineTraceByChannel() function returns TRUE and executes the trigger event. Then, the end position of the ray collision, OutHitLocation, is assigned to the NewLocation variable of SetActorLocation(). The object detected by the ray is obtained from the hit result's HitActor field. Finally, when the left mouse button is pressed, the object is grabbed; when the left mouse button is released, the object is released.
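The geometric core of this picking operation can be written independently of UE4. The sketch below is an illustrative simplification: a bounding-sphere test stands in for the engine's collision query, and all names are hypothetical. It casts a ray from the camera and returns the hit location, mirroring the LineTraceByChannel() behavior described above:

```python
import numpy as np

def line_trace(ray_origin, ray_dir, center, radius):
    """Return the hit location on a bounding sphere, or None if the ray misses."""
    d = ray_dir / np.linalg.norm(ray_dir)     # normalized ray direction
    oc = ray_origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c                    # discriminant of |o + t*d - c|^2 = r^2
    if disc < 0.0:
        return None                           # the trace reports no collision
    t = (-b - np.sqrt(disc)) / 2.0            # nearest intersection distance
    if t < 0.0:
        return None                           # object lies behind the camera
    return ray_origin + t * d                 # analogous to OutHitLocation

# Example: camera at the origin, clicking toward a bottle centered at (5, 0, 0).
hit = line_trace(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                 np.array([5.0, 0.0, 0.0]), 0.5)
```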
3.4 Image processing and defect detection
When a bottle passes through the detection module, it is necessary to detect and identify the bottle to determine whether it has a defect. This paper takes bottle-mouth detection as an example to explain the process. The bottle mouth-detection process mainly includes image filtering, bottle-mouth positioning, image segmentation, image erosion, and defect extraction.
The image-acquisition process cannot avoid noise interference. Therefore, it is necessary to filter the image. The image-filtering process suppresses and eliminates the image noise to improve the image quality. Here, median filtering is used to process the image.
In the glass bottle-detection process, the position of the beer-bottle mouth in the image varies during transmission because of conveyor-belt fluctuations and similar factors. Therefore, before detecting bottle-mouth defects, it is necessary to locate the bottle mouth. The algorithm uses the basic property that the perpendicular bisector of any chord of a circle must pass through the circle's center. The Hough transform is then used to detect the circle and locate the bottle mouth. The process is shown in Figure 9.
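An OpenCV version of the filtering and positioning steps might look like the sketch below; the file name, median-kernel size, and Hough parameters are illustrative assumptions to be tuned for real images:

```python
import cv2

img = cv2.imread("bottle_mouth.png", cv2.IMREAD_GRAYSCALE)
filtered = cv2.medianBlur(img, 5)   # median filtering suppresses acquisition noise
# Hough circle detection locates the bottle mouth's center and radius.
circles = cv2.HoughCircles(filtered, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=40, maxRadius=120)
if circles is not None:
    cx, cy, r = circles[0][0]
    print(f"bottle mouth at ({cx:.0f}, {cy:.0f}), radius {r:.0f}")
```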
After the center of the bottle mouth is identified, the detection area can be determined. OTSU and the genetic algorithm are used for bottle mouth-defect segmentation. The genetic algorithm searches for a global optimal solution by simulating natural evolution, and the optimal threshold is determined using OTSU to segment the image.
After segmentation, the edge of the bottle-mouth image has a high contrast; however, there are gaps, so the line of the bottle-mouth edge is not continuous. The small discontinuous edge is eroded through the erosion operation. After morphologically processing the image, the defects in the original image are labeled by white points in the image, while the normal bottle-mouth image is labeled by black pixels.
We set a threshold value S for the defect area in the figure. When the area is greater than S, there is a defect, and the product is disqualified. Using this detection idea, the process of detecting the defect area of an M × N binary image is as follows:
(1) Detect the number of white pixels in the image; the initial number is T = 0;
(2) Traverse the gray value of the output image, line by line;
(3) When the gray value is detected as a white area, increment T;
(4) When T is greater than threshold S, the bottle is judged to be defective.
After preprocessing, positioning, image segmentation, and morphological processing, the number of pixels in the bottle-mouth defect area is counted, as sketched below. If T is greater than S, the bottle mouth is judged to be defective (Figure 10).
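A short OpenCV sketch of this decision chain is given below; the file name, erosion-kernel size, and the value of S are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("bottle_mouth.png", cv2.IMREAD_GRAYSCALE)
# Segment the mouth region (plain Otsu here stands in for the GA/OTSU step).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Erode away the small discontinuous edges left after segmentation.
eroded = cv2.erode(binary, np.ones((3, 3), np.uint8))
# Steps (1)-(3): traverse the image and count the white (defect) pixels.
T = int(np.count_nonzero(eroded == 255))
S = 200                              # step (4): defect-area threshold in pixels
print("defective" if T > S else "qualified")
```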
3.5 Simulation construction and detection-system operation
The main function of this part is to assess the learners or users and evaluate the depth of their learning and ability training. With the help of the previous workflow and the introduction to the detection system's working principle, the user obtains experimental-equipment models by clicking the UI menu, and then drags them with the mouse to build a complete detection system. As the system runs, images are captured by the cameras, and then processed and evaluated to determine whether or not the bottle is defective.
The learners need to place the power supply, transmission device, different work boxes, and removal device in the correct positions. Simultaneously, they need to select the correct type of light source, camera, and lens with the correct parameters, and then attempt different image-processing algorithms, according to the above steps, to detect and identify whether the glass bottle is defective. If there are defects, the bottle should be removed. Establishing the image-acquisition module, processing the image, and correctly realizing the detection are the key assessment factors.
Establishing the image-acquisition module of the detection system mainly involves setting up the proper detection cameras and light sources along the transmission device. The designed detection sequence is bottle-bottom detection, followed by bottle-mouth detection, and finally bottle-body detection (Figure 11).
(1) Bottle-bottom detection: The user selects the bottle bottom-detection camera, the bottle-bottom light source in the light-source menu, and the working box containing the mechanical arm for bottle-bottom detection. Finally, the user drags the mouse to combine them. When the system runs and the bottle passes through the bottle bottom-detection module, the camera is triggered to capture the image that will be processed for the result.
(2) Bottle-mouth detection: The user drags the ring-LED light source to the top of the light-shielding box to shine light on the bottle mouth. The user selects the camera and drags it to the top of the light source. Then, the user selects and drags the working box for bottle-mouth detection, which contains the mechanical arm. When the system runs the bottle mouth-detection module, the camera is triggered to provide feedback.
(3) Bottle-body detection: The user selects the cameras for bottle-body detection and installs three cameras beside the assembly line at 120° intervals. The glass bottle is photographed by the three cameras without rotation to capture a complete picture of the bottle body.
After a complete detection assembly line has been built, the system will stop if there is a major error; the user must identify and correct the error before the system will restart. Once the system is running, the position and brightness of the light source, as well as the focal length, shutter, and camera focus, are adjusted based on the shape and size of the given glass bottle to obtain a clear, accurately exposed image for subsequent processing and defect detection. Figure 12 shows the results of the simulated shooting, processing, and detection, which are displayed in the virtual environment.
Forty junior students from the Department of Electronics Engineering, Shandong University of Science and Technology, P. R. China, evaluated a traditional image-processing experiment and the virtual simulation experiment. Through the proposed virtual tool, the students were able to learn about the beer-bottle production cycle. In addition, they observed the production details (such as defect detection) and the role of proper operation. This gives the students a feeling of being in a virtual factory environment.
To investigate the students' degree of interest in both experimental lectures, questionnaires were designed that divided the degree into three levels: interested, general, and not interested. From the questionnaire data in Figure 13, it is evident that most of the students preferred the virtual-experiment teaching to traditional teaching. The virtual simulation experiment displays realistic scenes, which enable the students to operate the devices in the virtual scene with simple interactive devices. This can help students to better understand the experimental content. Meanwhile, it seems that the students prefer the new technological equipment for learning.
4 Discussion
In this paper, a model-based virtual simulation experiment about beer bottle-defect detection, which includes interactive design and simulated operation, was proposed. Even if the users did not understand the basic principles of image-processing theory or the detection-system algorithm, they could still experience the real design and manufacturing process in such a system. This proposed framework exercised the students’ ability to analyze and solve complex problems, and extended the operation from classroom teaching to real workflow practice. A bottle mouth-defect detection system was developed in virtual practice to obtain the desired results.
Currently, the proposed framework focuses on constructing VR scenarios, which provide a natural immersive environment with low-cost devices such as a mouse and keyboard. This method allows the individual students to feel similar situations to a large extent.
Subsequently, we will add interaction modes such as the HTC Vive, Oculus, or other VR devices. Some details will also need to be improved in the future. For example, detecting defects in the bottle body and bottom, residual liquid in the bottle, and the types and quantities of beer bottles will become possible with more effective image-processing algorithms. In addition, to enable the students to learn about control-theory hardware (such as programmable logic controllers), support for network-based operations, behavior records, and data management can be developed for additional benefits.

References

1. Lamb R, Antonenko P, Etopio E, Seccia A. Comparison of virtual reality and hands on activities in science education via functional near infrared spectroscopy. Computers & Education, 2018, 124: 14–26. DOI:10.1016/j.compedu.2018.05.014
2. Zhang G H, Ren M. Design of image processing experimental system based on Matlab GUI. Journal of Hebei North University (Natural Science Edition), 2018, 34(5): 24–28. DOI:10.3969/j.issn.1673-1492.2018.05.005
3. Lu T, Deng H L, Wang T, Cheng H, Cheng L Q, Liu L Q, Xue F. Design and implementation of image processing system. Computer Engineering & Software, 2020, 41(1): 74–78. DOI:10.3969/j.issn.1003-6970.2020.01.016
4. Guo Y Z, Li J Y, Wan B X, Zhang W W. Design of virtual experiment platform about "digital image processing" course. Techniques of Automation & Applications, 2018, 37(12): 168–170
5. Zhang G C, Wan S P, He J R. Development & design of digital image processing system based on Matlab GUI. Computer Engineering & Software, 2019, 40(11): 123–127. DOI:10.3969/j.issn.1003-6970.2019.11.027
6. Wang W C, Li J, Wang R L, Wu X J, Sun X Y. Design and development of simulation platform for digital image processing based on Matlab GUI. Experimental Technology and Management, 2019, 36(2): 141–144. DOI:10.16791/j.cnki.sjg.2019.02.034
7. Raes A, Vanneste P, Pieters M, Windey I, van den Noortgate W, Depaepe F. Learning and instruction in the hybrid virtual classroom: an investigation of students' engagement and the effect of quizzes. Computers & Education, 2020, 143: 103682. DOI:10.1016/j.compedu.2019.103682
8. Gong L H, Zhu Q B, Zhou Z H, Zhou N R. Comprehensive design experiment for digital image processing based on Matlab. Experimental Technology and Management, 2018, 35(11): 48–53. DOI:10.16791/j.cnki.sjg.2018.11.011
9. Ma H L, Zha S L, Wu Y H, Zhu Z, Jiang J L. Experimental teaching research of digital image processing based on LabVIEW and Matlab. Journal of Anqing Normal University (Natural Science Edition), 2018, 24(3): 107–109. DOI:10.13757/j.cnki.cn34-1328/n.2018.03.026
10. Yuan R, Luo M, Sun Z, Shi S Y, Xiao P, Xie Q G. RayPlus: a web-based platform for medical image processing. Journal of Digital Imaging, 2017, 30(2): 197–203. DOI:10.1007/s10278-016-9920-y
11. Zhao Q P. Overview of virtual reality. Chinese Science: Information Science, 2009, 39(1): 2–46. DOI:CNKI:SUN:PZKX.0.2009-01-003
12. Guo Q, Zhang L, Zhang F Y, Liu S D, Sun N L. Research on ultra-fast laser experiment simulation system based on virtual reality. Experiment Science and Technology, 2018, 16(6): 124–128. DOI:10.3969/j.issn.1672-4550.2018.06.030
13. Towey D, Walker J, Austin C, Kwong C F, Wei S. Developing virtual reality open educational resources in a sino-foreign higher education institution: challenges and strategies. In: 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). Wollongong, NSW, Australia, IEEE, 2018, 416–422. DOI:10.1109/tale.2018.8615167
14. Huang Y L, Zhai X, Ali S, Liu R Q. Design and implementation of traditional Chinese medicine education visualization platform based on virtual reality technology. In: 2016 8th International Conference on Information Technology in Medicine and Education (ITME). Fuzhou, China, IEEE, 2016, 499–502. DOI:10.1109/itme.2016.0119
15. Kvasznicza Z. Teaching electrical machines in a 3D virtual space. In: 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). Debrecen, Hungary, IEEE, 2017, 000385–000388. DOI:10.1109/coginfocom.2017.8268276
16. Sakamoto M, Hori M, Shinoda T, Ishizu T, Akino T, Takei A, Ito T. A study on applications for scientific experiments using the VR technology. In: 2018 International Conference on Information and Communication Technology Robotics (ICT-ROBOT). Busan, South Korea, IEEE, 2018, 1–4. DOI:10.1109/ict-robot.2018.8549868
17. Wang J S, Zhao Y Y, Zhong D P, Yan J J. Construction and practice of virtual simulation experiment teaching for thermal power unit. Higher Engineering Education Research, 2019, S1: 201–203
18. Schalkoff R J. Digital image processing and computer vision. New York, Wiley, 1989
19. Huang B, Ma S L, Wang P, Wang H J, Yang J F, Guo X Y, Zhang W D, Wang H Q. Research and implementation of machine vision technologies for empty bottle inspection systems. Engineering Science and Technology, an International Journal, 2018, 21(1): 159–169. DOI:10.1016/j.jestch.2018.01.004
20. Liu H J, Wang Y N. Development of glass bottle inspector based on machine vision. In: 2008 10th International Conference on Control, Automation, Robotics and Vision. Hanoi, Vietnam, IEEE, 2008, 785–790. DOI:10.1109/icarcv.2008.4795617
21. Epic Games. UE4 [EB/OL]. https://docs.unrealengine.com/latest/CHN/index.Html, 2018