
2021, 3(4): 261-273
Published Date: 2021-08-20    DOI: 10.1016/j.vrih.2021.08.001

Abstract

Background
A virtual system that simulates the complete process of orthodontic bracket placement can be used for pre-clinical skill training to help students gain confidence by performing the required tasks on a virtual patient.
Methods
The hardware for the virtual simulation system is built using two force feedback devices to support bi-manual operation with force feedback. A 3D mouse is used to adjust the position of the virtual patient. A multi-threaded computational methodology is adopted to satisfy the frame-rate requirements. The computation threads mainly consist of a haptic thread running at a frequency of >1000 Hz and a graphic thread running at >30 Hz. In the graphic thread, the graphics engine displays the visual effects of biofilm removal and acid erosion through texture mapping. In the haptic thread, the physics engine adopts a hierarchical octree collision-detection algorithm to simulate the multi-point and multi-region interaction between the tools and the virtual environment; its high efficiency keeps the time cost within 1 ms. The physics engine also performs collision detection between the tools and particles, making it possible to simulate the application and removal of colloids. Surface-contact constraints are defined in the system to ensure that the bracket neither detaches from nor embeds into the tooth during bracket adjustment; therefore, the simulated adjustment is more realistic and natural.
Results
A virtual system to simulate the complete process of orthodontic bracket bonding was developed. In addition to bracket bonding and adjustment, the system simulates the necessary auxiliary steps such as smearing, acid etching, and washing. Furthermore, the system supports personalized case training.
Conclusions
The system provides a new method for students to practice orthodontic skills.

Content

1 Introduction
Malocclusion typically refers to deformities of the teeth, jaw, and craniofacial region caused by congenital genetic factors or acquired environmental factors. It affects not only oral health and function but also appearance and mental health[1]. According to a survey, up to 49% of people in China suffer from malocclusion[2]. Because of its extremely high incidence, malocclusion is listed by the World Health Organization (WHO) as one of the three major oral health problems. Orthodontics aims to improve the arrangement of the teeth by correcting abnormal tooth morphology and eliminating malocclusion to achieve dental aesthetics. Currently, the main technical method for orthodontic treatment of malocclusion is fixed correction technology. Figure 1 shows examples of patients wearing orthodontic appliances.
The improvement in people's living standards has led to a significant increase in the number of people undergoing orthodontic treatment. While this trend offers increased opportunity for the development of orthodontic treatment, it may also negatively impact staff-patient relationships[3]. Owing to the relatively low doctor-patient ratio in orthodontics, an area with strong clinical operability[4], there is an urgent need for new, efficient methods of practical-skill training.
Bracket bonding is an important part of clinical skill training for orthodontic students[5,6]. The accuracy of the bracket position directly affects the outcome of the tooth alignment[7]. Because of the variation in crown anatomy and positioning errors, it is difficult to obtain an ideal bracket position in clinical practice without sufficient training. In addition, the complete procedure of bracket bonding is relatively complex, comprising 10 steps, including dental biofilm removal, bonding with a ring, placing a mouth opener, acid etching, washing, drying, coating with an adhesive base, bracket bonding, adjusting the bracket position, scraping off excess adhesive primer around the bracket, and light curing. At present, the main training method is simulated bonding training on a plaster model. After the training, the teacher visually evaluates the result and guides the students to correct brackets that were not well positioned[8]. This training method is inefficient because it consumes materials, and its effectiveness is limited by the differences in shape and force sensation between the plaster model and real teeth. Furthermore, only certain steps, such as bracket bonding, can be simulated and trained in this way, and it is difficult to train the complete process.
With the development of computer technology, force feedback oral surgery simulators have become widely used as a new skill-training method for implant, restoration, and periodontal training[9-13]. However, in the field of orthodontics, there is still no virtual simulation system that can train students on the complete clinical procedure of bracket bonding. This is mainly because of the following complexities posed by orthodontic simulation: (1) A traditional force feedback virtual simulation system can only simulate a single operation type, whereas the bracket bonding process is more complicated, requiring simulated cotton-swab application, locator measurement, and tweezer clamping. It also involves many types of operations, such as fetching and placing, and the task types are more extensive. (2) At present, the interactive environments of virtual simulation systems are limited to rigid bodies interacting with other rigid bodies or with soft bodies. However, bracket bonding requires the interaction of rigid bodies with colloids (e.g., biofilm removal, smearing, and scraping) and fluids (e.g., washing), which are more difficult to simulate. (3) Current commercial surgical simulators only provide force feedback from the tip of the tool, whereas an ideal orthodontic virtual simulation system requires accurate detection of collisions between the entire tool body and various oral tissues, including the teeth, gums, tongue, and cheeks. This is mainly because the tools tend to contact a variety of oral tissues during the operation owing to the confined space of the oral cavity. Additionally, collision detection between the tools and colloids is required. These factors introduce new challenges for the collision-detection algorithm.
Based on the characteristics of orthodontic surgery, we constructed a surgical simulator that can simulate the complete bracket bonding procedure for training purposes. The main contributions of our study are as follows. First, we constructed a virtual simulation system for bracket bonding training that can simulate the complete bracket placement procedure with force feedback. Second, we rendered biofilm scaling and acid etching of the teeth based on a texture blending method, as well as the coating and removal of colloids through particle simulation. Finally, a hierarchical sphere-octree structure was introduced to handle collision detection between the entire tool and the tissues in the oral cavity. With this method, collisions can be detected within 1 ms, meeting the 1000 Hz frequency requirement for force rendering.
2 Related work
2.1 Virtual dental simulation training system
In recent years, with the rapid development of computer technology, an increasing number of dental institutions have introduced virtual simulation systems for the practical-skill training of beginners to equip them with sufficient experience before conducting surgeries on real patients. Such simulation systems also play an important role in areas such as preoperative planning. Early virtual dental simulation systems were typically used for planning and design practice, such as the CT-image-based oral implant preoperative planning system developed by Verstreken et al., which mainly uses a mouse to interact with the user interface[14]. With the development of virtual reality technology, the construction of dental simulation systems capable of providing force feedback, as shown in Figure 2, has gradually become a research focus[15,16]. The dental simulation system developed by Dangxiao Wang of Beihang University can simulate various procedures, including periodontal treatment[17], tooth extraction[18], restoration[19], and implantation[20]. This highlights the importance of supporting multi-point and multi-region interactions between the tools and teeth during training. To date, the system has been adopted by many dental schools.
Virtual simulation systems have also been widely used in the field of orthodontics[21], mainly in orthodontic multimedia teaching. Peng et al. used a virtual simulation system to teach orthodontic bracket adhesion, allowing students to intuitively understand the adjustment methods available to place brackets in different positions and to experience an accurate representation of overall tooth movement and root control[22]. Ji developed a virtual orthodontic simulation system that can simulate the orthodontic movement of the teeth and allows deformed teeth to be adjusted manually to achieve occlusion[23]. In addition to visual perception and process training, force and movement perception, which are especially important in orthodontic bracket placement training, are a key component provided by virtual simulation systems[24]. Although virtual simulation systems with force feedback have been widely used in many other dental fields[25,26], their development and adoption in orthodontics are still in their infancy[27].
Rao et al. developed a virtual simulation system for bracket placement training that supports the use of force clues to prompt user operation, and the user can experience the real force of clamping and bonding the bracket[28]. The system was developed based on real clinical operating standards and supports safety training in an immersive environment. The system interface is shown in Figure 3 where the prompt box on the tooth represents the standard position of the brackets. When the user places the brackets, the graphical interface will evaluate the correctness of the user's operation by changing the color of the prompt box. The experimental results show that the system can provide training to a certain extent, helping users master the correct way of placing brackets. However, the system failed to simulate the complete bracket placement procedure. In addition, the training feedback on bracket adjustment was not sufficiently detailed, as the deviation between the bracket position and the standard position could not be measured accurately. Moreover, it could not accurately simulate the interaction between the tools and the teeth, gums, tongue, and other oral tissues in the oral cavity.
2.2 Orthodontic bracket placement process
Compared with other oral surgeries, the process of orthodontic bracket placement is more complicated. During the procedure, various instruments and materials such as rigid bodies, soft bodies, and colloids are involved. According to the clinical operating standards, the placement of orthodontic brackets can be divided into 10 steps, and each step has clear operating requirements and goals (Figure 4).
These steps are as follows:
(1) Cleansing of the tooth surface: Before placing the brackets, the tooth surface needs to be cleaned with a slow turbine handpiece and a polishing cup.
(2) Bonding with a ring: Before placing the brackets, it is necessary to bond the band ring or buccal tube that prevents the brackets from slipping off during wear.
(3) Acid etching: A mouth opener is placed, and an etching agent is then applied to the tooth surface.
(4) Rinsing: A water spray gun is used for 20 s to thoroughly rinse off the acid, and the saliva is suctioned.
(5) Blow drying: A moisture-proof cotton roll is placed in the mouth, and the tooth surface is blown dry. The tooth surface tends to appear chalky after acid etching and blow drying.
(6) Adhesive primer application: The adhesive primer is applied to the tooth surface for the adhesion of the bracket.
(7) Bracket bonding: The doctor sits at the 12 o'clock position relative to the patient and bonds the brackets in place in the following order: mandibular posterior teeth, maxillary posterior teeth, mandibular anterior teeth, and maxillary anterior teeth.
(8) Bracket position adjustment: As shown in Table 1, there are detailed clinical regulations on the position of the brackets. The doctor needs to use the bracket locator to measure the distance between the bracket and each tooth crown and adjust the position of the bracket according to the measurement result.
Table 1  Clinical regulations for the bracket positions

Maxillary arch     U1    U2    U3    U4    U5    U6    U7
A (+1.0 mm)        6.0   5.5   6.0   5.5   5.0   4.0   2.0
B (+0.5 mm)        5.5   5.0   5.5   5.0   4.5   3.5   2.0
C (average)        5.0   4.5   5.0   4.5   4.0   3.0   2.0
D (-0.5 mm)        4.5   4.0   4.5   4.0   3.5   2.5   2.0
E (-1.0 mm)        4.0   3.5   4.0   3.5   3.0   2.0   2.0

Mandibular arch    L1    L2    L3    L4    L5    L6    L7
A (+1.0 mm)        5.0   5.0   5.5   5.0   4.5   3.5   3.5
B (+0.5 mm)        4.5   4.5   5.0   4.5   4.0   3.0   3.0
C (average)        4.0   4.0   4.5   4.0   3.5   2.5   2.5
D (-0.5 mm)        3.5   3.5   4.0   3.5   3.0   2.0   2.0
E (-1.0 mm)        3.0   3.0   3.5   3.0   2.5   2.0   2.0

Notes: U1-U7 represent the upper teeth, and L1-L7 represent the lower teeth. Rows A-E represent the regulations for different tooth sizes. Values are the distances (in mm) between the bracket and the tooth crown.
(9) Scraping off excess adhesive around the brackets: After the brackets are bonded, the excess glue is scraped off with a probe.
(10) Light curing: The last step is to use a UV lamp to cure the gel at the bottom of the bracket, enhancing its adhesion and securely bonding it to the tooth surface.
3 Hardware
The hardware of the system is shown in Figure 5, including the hardware shell, bi-manual haptic devices, observation window, touch screen for control operation, foot pedal, 3D mouse, and graphics workstation. The observation window allows the user to observe the virtual operating scene. The virtual patient is initially in the 12-o'clock position, and the user can use the 3D mouse to adjust the position of the virtual patient.
The hardware shell houses all the above-mentioned components, and its height can be adjusted to a comfortable position for different users. The system uses two force feedback devices; thus, it can simulate the clinical bi-manual operations of orthodontic treatment. The foot pedal controls the working state of the virtual tools. For example, the user can step on the foot pedal to start the polishing cup when removing the soft dental biofilm; once the pedal is depressed, the polishing cup vibrates at a high frequency to remove the biofilm.
4 System design and implementation
Our virtual simulation system was developed using the C++ programming language, and the Unity engine was used to construct the user interface. The core of the software system is the force-rendering algorithm, which is based on a six-degrees-of-freedom configuration optimization algorithm developed by the State Key Laboratory of Virtual Reality Technology and Systems at Beihang University[29]; it can realistically simulate multi-point and multi-region non-embedding interactions between the virtual tools and the virtual environment.
The overall architecture of the system is shown in Figure 6. To improve computing efficiency, the virtual simulation system adopts a multi-threaded programming method, performing haptic rendering and graphic rendering in the haptic and graphic threads, respectively. The haptic thread is the main thread and runs at a frequency of >1000 Hz, meeting the rendering-frequency requirement for stable force feedback. Its main task is to calculate the optimized tool configuration in real time according to the collision information between the surgical tools and the virtual environment. The modules involved in the haptic thread include collision detection, optimization, and adjustment of the bracket positions. The graphic thread is a sub-thread whose main task is to display the virtual environment. During each loop, the haptic thread transmits collision information to the graphic thread to update the surface appearance of the teeth and materials as well as the positions of the virtual tools and auxiliary objects, such as the brackets.
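To make the thread split concrete, the following C++ sketch shows one plausible way to organize a >1000 Hz haptic loop and a >30 Hz graphic loop that share the optimized tool pose. It is an illustration only; all names (ToolPose, hapticLoop, graphicLoop) are ours rather than the system's actual code, which additionally communicates with the haptic device SDK and the Unity renderer.

```cpp
// Minimal sketch of the haptic/graphic thread split described above.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct ToolPose { double position[3]{}; double rotation[4]{0, 0, 0, 1}; };

std::mutex g_poseMutex;
ToolPose   g_sharedPose;             // written by the haptic thread, read by the graphic thread
std::atomic<bool> g_running{true};

void hapticLoop() {                  // main thread: >1000 Hz
    using namespace std::chrono;
    while (g_running) {
        auto start = steady_clock::now();
        ToolPose pose;               // 1) read device, 2) collision detection, 3) solve constraints
        {
            std::lock_guard<std::mutex> lock(g_poseMutex);
            g_sharedPose = pose;     // hand the optimized graphic pose to the renderer
        }
        std::this_thread::sleep_until(start + microseconds(1000));   // ~1 kHz loop
    }
}

void graphicLoop() {                 // sub-thread: >30 Hz
    using namespace std::chrono;
    while (g_running) {
        ToolPose pose;
        {
            std::lock_guard<std::mutex> lock(g_poseMutex);
            pose = g_sharedPose;
        }
        // ... update tool/bracket transforms and tooth textures, then render ...
        std::this_thread::sleep_for(milliseconds(33));               // ~30 Hz loop
    }
}

int main() {
    std::thread haptics(hapticLoop);
    std::thread graphics(graphicLoop);
    std::this_thread::sleep_for(std::chrono::seconds(1));            // run briefly for the sketch
    g_running = false;
    haptics.join();
    graphics.join();
}
```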
4.1 Graphics engine
Using the graphics engine, a real surgical operating environment is simulated by setting the lighting and viewing angles. The system uses Unity 3D as the rendering engine and realizes various other graphics rendering effects for orthodontics based on its graphics programming interface.
4.1.1 Rendering of removing soft dental biofilm
In the preparatory stage of the orthodontic procedure, the soft biofilm attached to the surface of the teeth needs to be removed. The soft biofilm is removed using a slow turbine handpiece with a polishing cup. This system adopted a texture mixing method to simulate the removal of the soft biofilm.
This system used a tooth texture and a soft biofilm texture with an alpha channel to render the tooth appearance. The two textures were blended using the alpha value according to Equation (1):

C_rgb = (1 - D_a) T_rgb + D_a · D_rgb        (1)

where C_rgb is the blending result, D is the texture of the biofilm, T is the texture of the tooth, and the subscripts rgba denote the four color channels. When the system detects a collision between the polishing cup and the tooth surface, the UV coordinates <u, v> of the collision point are obtained through the collision detection interface of Unity 3D. Through the Command Buffer interface, the calculation was performed in the texture space of the soft biofilm texture. For all pixels within Euclidean distance d of the UV coordinates of the collision point, the value of the alpha channel was set to 0 to achieve biofilm removal and show the tooth surface. The effect was calculated using Equation (2):

D_a = D_a,   if ||<u, v> - <u', v'>|| ≥ d
D_a = 0,     if ||<u, v> - <u', v'>|| < d        (2)

where <u', v'> are the texture coordinates of the collision pixel, and d is initially set to 0.01. The removal effect is shown in Figure 7.
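As a rough illustration of Equations (1) and (2), the following CPU-side C++ sketch erases the biofilm alpha around a collision point and blends the two textures per pixel. In the actual system this is performed in the biofilm texture space on the GPU through Unity's Command Buffer interface; the data layout and names here are assumptions.

```cpp
// CPU-side illustration of Equations (1) and (2).
#include <cmath>
#include <vector>

struct RGBA { float r, g, b, a; };

// Erase the biofilm alpha around the collision point <u, v> (Equation (2)).
void eraseBiofilm(std::vector<RGBA>& biofilm, int w, int h,
                  float u, float v, float d = 0.01f) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float du = x / float(w) - u;
            float dv = y / float(h) - v;
            if (std::sqrt(du * du + dv * dv) < d)
                biofilm[y * w + x].a = 0.0f;   // reveal the tooth underneath
        }
    }
}

// Blend the biofilm over the tooth using the biofilm alpha (Equation (1)).
RGBA blend(const RGBA& tooth, const RGBA& biofilm) {
    float a = biofilm.a;
    return { (1 - a) * tooth.r + a * biofilm.r,
             (1 - a) * tooth.g + a * biofilm.g,
             (1 - a) * tooth.b + a * biofilm.b,
             1.0f };
}
```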
4.1.2 Blow-dry effect rendering after acid etching
After etching, the acid-etched area must be rinsed and blow-dried, and the drying operation should reveal a chalky color in the area where the etching agent was originally applied. This effect extends the soft-biofilm texture-mixing principle by adding a texture that records the drying time. The blow-dried texture blend was calculated using Equation (3):

C_rgb = (1 - Time) T_rgb + Time · D_rgb        (3)

where C_rgb is the mixing result, T_rgb is the tooth texture, D_rgb is the color displayed after drying, and Time is the value of the drying-time texture. During the drying process, the system updates this texture over time. Initially, each pixel of the texture is 0, and the teeth show their original color. As drying progresses, the pixels in the corresponding area increase over time according to drying speed v. The value of each pixel is updated using Equation (4):

Time_{t+1} = Time_t,            if ||<u, v> - <u', v'>|| ≥ d
Time_{t+1} = Time_t + dt · v,   if ||<u, v> - <u', v'>|| < d        (4)

where dt is the system-update time step. Because Time must remain between 0 and 1, the system checks and clamps each pixel value to 1 after every update. The effects of acid etching and drying are shown in Figure 8.
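The per-pixel drying update of Equation (4), including the clamp of Time to 1, can be sketched in the same CPU-side style; the function and parameter names below are illustrative, and the drying speed is whatever value the system configures.

```cpp
// Illustration of Equation (4): per-pixel drying progress clamped to [0, 1].
#include <algorithm>
#include <cmath>
#include <vector>

void updateDryingTexture(std::vector<float>& timeTex, int w, int h,
                         float u, float vCoord, float dt,
                         float dryingSpeed, float d = 0.01f) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float du = x / float(w) - u;
            float dv = y / float(h) - vCoord;
            if (std::sqrt(du * du + dv * dv) < d) {
                float& t = timeTex[y * w + x];
                t = std::min(1.0f, t + dt * dryingSpeed);   // clamp to 1 after the update
            }
        }
    }
}
```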
4.2 Physics engine
The physics engine calculates the magnitude and direction of the collision force through information such as the speed and direction of the movement of the force feedback device. It sends the force feedback information back to the operator through the force feedback device. Collision detection is key in a simulation system that uses force feedback. The task of collision detection in this system is to detect any collisions between virtual surgical instruments and teeth and calculate the corresponding feedback force based on this.
4.2.1 Collision detection between virtual tools and teeth
The system adopts the hierarchical collision-detection algorithm based on the sphere octree, which can achieve collision detection within 1ms. The spherical tree structure is shown in Figure 9.
The collision detection of the virtual surgical instruments and teeth was implemented using a hierarchical sphere tree, as shown in Figure 10. First, collision detection was performed between parent sphere S_0 and all sub-spheres of S_1; collision detection was then performed between all sub-spheres of S_0 and S_1, and the queues Q_0 and Q_1 of all colliding spheres were obtained. Then, the same collision detection was performed on all sub-sphere pairs in the queues, and the sub-sphere pairs <S_0', S_1'> that collided in the next layer were obtained. This process was repeated until collision detection was completed on the spheres of the leaf nodes of the sphere tree.
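The following C++ sketch captures the pairwise pruning idea behind this traversal in a compact recursive form; the paper describes a level-by-level queue traversal, and the node layout and names here are assumptions for illustration.

```cpp
// Sketch of sphere-tree vs. sphere-tree collision detection by pairwise pruning.
#include <cmath>
#include <utility>
#include <vector>

struct Sphere { float x, y, z, r; };
struct Node {
    Sphere sphere;
    std::vector<Node> children;            // empty for leaf nodes
    bool isLeaf() const { return children.empty(); }
};

bool overlap(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rr = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rr * rr;
}

// Collect colliding leaf-sphere pairs between the tool tree and the tooth tree.
void collide(const Node& tool, const Node& tooth,
             std::vector<std::pair<const Node*, const Node*>>& contacts) {
    if (!overlap(tool.sphere, tooth.sphere)) return;        // prune this branch early
    if (tool.isLeaf() && tooth.isLeaf()) {
        contacts.push_back({&tool, &tooth});                 // contact at the leaf level
        return;
    }
    // Descend into the tree with the larger sphere (or the only non-leaf tree).
    if (!tool.isLeaf() && (tooth.isLeaf() || tool.sphere.r >= tooth.sphere.r)) {
        for (const Node& c : tool.children) collide(c, tooth, contacts);
    } else {
        for (const Node& c : tooth.children) collide(tool, c, contacts);
    }
}
```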
Based on the collision detection results of the sphere tree, the force feedback was calculated and fed back to the user in real time.
4.2.2 Collision detection between virtual tools and particles
We simulated the adhesive and the acid-etching agent using a particle-based fluid simulation method. To simulate the particles efficiently, we used the uFlex add-on for the Unity3D engine. Collision detection between the virtual surgical tools and the particles was mainly used to simulate adhesive removal. The system uses a cylinder to represent the probe tip and performs point-cylinder collision detection to remove the particles that collide with the probe tip. A collision is detected by calculating the projection distance d of the particle onto the cylinder axis and determining whether it lies within the cylinder length l, that is, whether 0 ≤ d ≤ l is satisfied, and whether the distance d' from the particle to the axis is less than the cylinder radius r. The adhesive primer removal effect is shown in Figure 11.
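A minimal C++ sketch of this point-cylinder test, with illustrative names, might look as follows: the particle is projected onto the cylinder axis, the projection is checked against [0, l], and the radial distance is compared with r.

```cpp
// Sketch of the particle-vs-cylinder test used to remove adhesive particles.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

// base: cylinder base point, axis: unit vector along the cylinder,
// l: cylinder length, r: cylinder radius.
bool particleHitsCylinder(Vec3 p, Vec3 base, Vec3 axis, float l, float r) {
    Vec3  v = sub(p, base);
    float d = dot(v, axis);                  // projection distance on the axis
    if (d < 0.0f || d > l) return false;     // outside the cylinder span
    Vec3  radial = sub(v, {axis.x * d, axis.y * d, axis.z * d});
    return len(radial) < r;                  // distance to the axis < radius
}
```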
4.3 Force feedback calculation
Because the penalty-based force calculation model has the disadvantage of being unstable, the system adopted a six-degree-of-freedom force feedback calculation method based on pose constraints. The force feedback was determined by the graphic pose q_g and the physical pose q_h of the virtual surgical tool. The physical pose was the pose directly obtained from the force feedback device, and the graphic pose was the optimal pose calculated according to the constraints; it was also the pose of the tool used for rendering. The graphic pose obtained by the solution meets the optimization conditions described in Equation (5):

min_{q_g}  (1/2) (q_g - q_h)^T K (q_g - q_h)
s.t.  C_1(q_g) ≥ 0,  C_2(q_g) ≥ 0,  …,  C_n(q_g) ≥ 0        (5)

where K is the stiffness matrix for each degree of freedom, and C_1, …, C_n are the collision constraints obtained from the sphere-tree collision detection, which keep the graphic pose on the surface of the tooth. Force feedback in orthodontics is the collision force between the virtual surgical tool and the tooth. Collision detection is performed between the sphere-tree model of the virtual tool in its graphic pose and the sphere tree of the tooth. A new graphic pose was solved by the constraint solver; thus, even if the physical pose was embedded in the object, the graphic pose could still be constrained on the surface. The process was updated at a frequency of 1000 Hz to ensure the stability and smoothness of the force feedback. The calculation is illustrated in Figure 12, and the force was calculated using Equation (6):

F = k (q_g - q_h)        (6)
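Assuming the constrained optimization of Equation (5) has already produced the graphic pose, Equation (6) reduces to a per-degree-of-freedom spring coupling, as in the following sketch; the 6-DOF pose layout and names are ours.

```cpp
// Minimal sketch of Equation (6): the feedback force/torque are proportional to the
// difference between the constrained graphic pose and the device (physical) pose.
#include <array>

using Pose = std::array<double, 6>;    // x, y, z, rx, ry, rz

// k holds one stiffness value per degree of freedom (the diagonal of K).
Pose feedback(const Pose& qg, const Pose& qh, const Pose& k) {
    Pose f{};
    for (int i = 0; i < 6; ++i)
        f[i] = k[i] * (qg[i] - qh[i]);     // spring-like coupling per DOF
    return f;
}
```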
4.4 Simulation of the interaction with brackets
After applying the adhesive primer, users need to bond the brackets to the teeth and use the probe to adjust the positions of the brackets to meet the process requirements. The system places a plane at the tooth tip. During the adjustment of the bracket, the system emits a ray from the center of the bracket to intersect this plane and calculates the distance from the bracket to the crown of the tooth.
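One simple way to realize this measurement is a ray-plane intersection; the following C++ sketch is an illustration under assumed names, with the plane placed at the tooth tip and the ray cast from the bracket center.

```cpp
// Sketch of the bracket-to-crown distance measurement: intersect a ray from the
// bracket center with the plane placed at the tooth tip and report the distance.
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns the distance from the bracket center to the plane along dir (unit length),
// or std::nullopt if the ray is parallel to the plane or points away from it.
std::optional<double> bracketToCrownDistance(Vec3 bracketCenter, Vec3 dir,
                                             Vec3 planePoint, Vec3 planeNormal) {
    double denom = dot(dir, planeNormal);
    if (std::fabs(denom) < 1e-9) return std::nullopt;   // parallel: no intersection
    double t = dot(sub(planePoint, bracketCenter), planeNormal) / denom;
    if (t < 0.0) return std::nullopt;                   // plane is behind the ray
    return t;
}
```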
The system uses a physically based method to simulate the adjustment of the bracket position. In each force feedback frame, the system performs collision detection between the sphere at the tip of the probe and the sphere tree of the bracket. When a collision is detected, the penalty force F on the bracket is calculated according to the depth to which the tip is inserted into the bracket. Simultaneously, the penalty torque τ on the bracket is calculated according to the position of the collision point, with its direction along the z-axis of the bracket. According to the force and torque on the bracket, the position and rotation are updated using Equations (7) to (10), and the bracket pose matrix is modified. The update of the bracket position is shown in Figure 13.
x_{t+1} = x_t + v · dt        (7)

v = (F / M) · dt        (8)

θ_{t+1} = θ_t + ω · dt        (9)

ω = (τ / m) · dt        (10)
where x, v, θ, and ω are the position, linear velocity, angle, and angular velocity of the bracket, respectively; F and τ represent the penalty force and torque, respectively; and M and m are the mass and moment of inertia of the bracket, respectively. The displacement vector can be calculated as d = v · dt; from it, the transform matrix D of the bracket is built, and the rotation matrix R is obtained with Rodrigues' rotation formula. The posture of the bracket is then updated using Equations (11) to (14):
D = [ 1    0    0    0
      0    1    0    0
      0    0    1    0
      d_x  d_y  d_z  1 ]        (11)

rot = cosθ · I_3 + (1 - cosθ) · Z + sinθ · S        (12)

where I_3 is the 3×3 identity matrix, Z is the outer product of z with itself,

Z = [ z_x·z_x  z_x·z_y  z_x·z_z
      z_y·z_x  z_y·z_y  z_y·z_z
      z_z·z_x  z_z·z_y  z_z·z_z ],

and S is the skew-symmetric matrix of z,

S = [  0    -z_z   z_y
       z_z   0    -z_x
      -z_y   z_x   0   ].

R = [ rot  0
      0    1 ]        (13)

T_{t+1} = R · T_t · D        (14)
In the above formulas, T is the transform matrix of the bracket, which transforms the coordinates of the bracket from the model coordinate system to the world coordinate system and is updated every frame; d = [d_x, d_y, d_z] is the translation vector of the bracket, and z = [z_x, z_y, z_z] represents the direction of the z-axis.
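Putting Equations (7) to (14) together, one possible C++ sketch of the per-frame bracket update is shown below; it follows the row-vector convention of the matrices above, and the data types and function names are assumptions rather than the system's actual code.

```cpp
// Sketch of the bracket pose update of Equations (7)-(14): integrate the penalty
// force/torque, build the translation matrix D and the Rodrigues rotation R about
// the bracket z-axis, and accumulate T <- R * T * D.
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
struct Vec3 { double x, y, z; };

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Translation matrix D (Equation (11)), translation stored in the last row.
Mat4 translation(const Vec3& d) {
    Mat4 m = identity();
    m[3][0] = d.x; m[3][1] = d.y; m[3][2] = d.z;
    return m;
}

// Rodrigues rotation by angle theta about unit axis z (Equations (12)-(13)).
Mat4 rodrigues(const Vec3& z, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    double K[3][3] = {{0, -z.z, z.y}, {z.z, 0, -z.x}, {-z.y, z.x, 0}};
    double zz[3] = {z.x, z.y, z.z};
    Mat4 m = identity();
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            m[i][j] = c * (i == j) + (1 - c) * zz[i] * zz[j] + s * K[i][j];
    return m;
}

// One simulation step: F and tau are the penalty force/torque, M the mass, I the
// moment of inertia (denoted m in the text), and dt the time step.
void updateBracket(Mat4& T, const Vec3& F, double tau,
                   const Vec3& zAxis, double M, double I, double dt) {
    Vec3 v = {F.x / M * dt, F.y / M * dt, F.z / M * dt};    // Equation (8)
    Vec3 d = {v.x * dt, v.y * dt, v.z * dt};                // displacement, Equation (7)
    double omega = tau / I * dt;                            // Equation (10)
    double theta = omega * dt;                              // angle change, Equation (9)
    T = multiply(multiply(rodrigues(zAxis, theta), T), translation(d));  // Equation (14)
}
```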
After the bracket posture is updated, the bracket could detach from or embed into the tooth surface; therefore, the system corrects its pose after each update. First, the axes are corrected: the system aligns the z-axis of the bracket with the normal direction of the tooth surface and uses Gram-Schmidt orthogonalization to update the directions of the x- and y-axes, as shown in Figure 14. Second, the system corrects floating or embedding errors of the bracket. The system projects the tooth spheres onto the normal direction of the bracket and sorts the queue of projection two-tuples <near, far> in the normal direction to obtain the interval of each sphere, as shown in Figure 15. Here, near is the smaller value of the projection coordinates and far is the larger value. After sorting, the maximum projection value f_max over all the spheres is obtained, which represents the projection position of the tooth surface in the normal direction. Finally, the bracket is moved to f_max along the z-axis, that is, onto the tooth surface. This process is updated every frame so that the bracket can be adjusted continuously on the tooth surface. The effect of the bracket interaction is shown in Figure 16.
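The two correction steps can be sketched as follows in C++; the data layout, the choice of the far value as the sphere center projection plus its radius, and all names are assumptions for illustration.

```cpp
// Sketch of the bracket pose correction: (1) align the bracket z-axis with the tooth
// surface normal and re-orthogonalize x/y, (2) snap the bracket onto the surface by
// projecting the tooth spheres onto the normal and moving the bracket to f_max.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3   sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3   scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   normalize(Vec3 a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }
Vec3   cross(Vec3 a, Vec3 b)   { return {a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x}; }

struct Sphere { Vec3 center; double radius; };
struct BracketFrame { Vec3 origin, x, y, z; };

// (1) Axis correction: z follows the tooth normal, x and y are re-orthogonalized.
void correctAxes(BracketFrame& b, Vec3 toothNormal) {
    b.z = normalize(toothNormal);
    b.x = normalize(sub(b.x, scale(b.z, dot(b.x, b.z))));   // Gram-Schmidt on x
    b.y = cross(b.z, b.x);                                   // y completes the frame
}

// (2) Surface snapping: take the maximum 'far' projection of the tooth spheres
// along the bracket normal and place the bracket at that height.
void snapToSurface(BracketFrame& b, const std::vector<Sphere>& toothSpheres) {
    double fMax = -1e30;
    for (const Sphere& s : toothSpheres) {
        double c = dot(sub(s.center, b.origin), b.z);        // projection of the center
        fMax = std::max(fMax, c + s.radius);                 // 'far' end of the interval
    }
    b.origin = {b.origin.x + b.z.x * fMax,
                b.origin.y + b.z.y * fMax,
                b.origin.z + b.z.z * fMax};
}
```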
When the probe collides with the bracket, the penalty force F on the bracket is added in the reverse direction to the force feedback calculation result of the virtual tool to ensure that the tool still receives feedback during the adjustment of the bracket.
4.5 Personalized training cases
To be applicable to the variety of cases encountered in actual surgery, the virtual surgery simulation system should be capable of presenting individualized cases. Figure 17 shows two different cases: the teeth in case 1 are relatively standard, whereas the teeth in case 2 show obvious characteristics of malocclusion. This indicates that the simulation system supports multiple training models and can realize diversified operation training.
5 Conclusion and future work
Based on the Unity software, this study built a virtual simulation system for orthodontic bracket placement and realized the simulation of the complete orthodontic bracket placement process. The system can help users to train in a real simulation environment with force feedback, which considerably reduces the training cost and time of dentistry students. The system supports the use of different case CT data to reconstruct a new virtual model, which provides a good foundation for personalized training.

References

1. Hou P Y. Orthodontics. Beijing: Science Press, 2011
2. Iliadi A, Koletsi D, Eliades T. Forces and moments generated by aligner-type appliances for orthodontic tooth movement: a systematic review and meta-analysis. Orthodontics & Craniofacial Research, 2019, 22(4): 248–258 DOI:10.1111/ocr.12333
3. Du H Y, Jia Y X, Zhang Y D, Liu Y. Trajectory planning of archwire bending robot. China Mechanical Engineering, 2010, 21(13): 1605–1608 (in Chinese)
4. Zhang D L, Zhou C H, Bai Y X. CN, CN201220370809.X
5. McNamara J A. Ordinary orthodontics: starting with the end in mind. World Journal of Orthodontics, 2000, 45–54
6. Alrbata R. Accurate bracket positioning as a prerequisite for ideal orthodontic finishing. International Journal of Orthodontic Rehabilitation, 2017, 8(1): 3 DOI:10.4103/2349-5243.200223
7. Bai D, Zhao Z H. Advanced strategy with positive control in orthodontics. Beijing: People's Medical Press, 2015
8. Brown M W, Koroluk L, Ko C C, Zhang K, Chen M Q, Nguyen T. Effectiveness and efficiency of a CAD/CAM orthodontic bracket system. American Journal of Orthodontics and Dentofacial Orthopedics, 2015, 148(6): 1067–1074 DOI:10.1016/j.ajodo.2015.07.029
9. Perry S, Bridges S M, Burrow M F. A review of the use of simulation in dental education. Simulation in Healthcare, 2015, 10(1): 31–37 DOI:10.1097/sih.0000000000000059
10. Bakr M, Massey W, Alexander H. Evaluation of Simodont haptic 3D virtual reality dental training simulator. International Journal of Dental Clinics, 2013, 5(4): 1–6
11. Anderson P, Ma M H, Poyade M. A haptic-based virtual reality head and neck model for dental education. Virtual, Augmented Reality and Serious Games for Healthcare 1, 2014 DOI:10.1007/978-3-642-54816-1_3
12. Wang D, Li T, Zhang Y, Hou J. Survey on multisensory feedback virtual reality dental training systems. European Journal of Dental Education, 2016, 20(4): 248–260 DOI:10.1111/eje.12173
13. Ma Y Q, Li Z K. Computer aided orthodontics treatment by virtual segmentation and adjustment. In: 2010 International Conference on Image Analysis and Signal Processing. Zhejiang, China, IEEE, 2010, 336–339 DOI:10.1109/iasp.2010.5476100
14. Verstreken K, Van Cleynenbreugel J, Martens K, Marchal G, van Steenberghe D, Suetens P. An image-guided planning system for endosseous oral implants. IEEE Transactions on Medical Imaging, 1998, 17(5): 842–852 DOI:10.1109/42.736056
15. Kusumoto N, Sohmura T, Yamada S, Wakabayashi K, Nakamura T, Yatani H. Application of virtual reality force feedback haptic device for oral implant surgery. Clinical Oral Implants Research, 2006, 17(6): 708–713 DOI:10.1111/j.1600-0501.2006.01218.x
16. Sohmura T, Hojo H, Nakajima M, Wakabayashi K, Nagao M, Iida S, Kitagawa T, Kogo M, Kojima T, Matsumura K, Nakamura T, Takahashi J. Prototype of simulation of orthognathic surgery using a virtual reality haptic device. International Journal of Oral and Maxillofacial Surgery, 2004, 33(8): 740–750 DOI:10.1016/j.ijom.2004.03.003
17. Wang D X, Zhang Y R, Hou J X, Wang Y, Lv P, Chen Y G, Zhao H. iDental: a haptic-based dental simulator and its preliminary user evaluation. IEEE Transactions on Haptics, 2012, 5(4): 332–343 DOI:10.1109/toh.2011.59
18. Wang D X, Tong H, Shi Y J, Zhang Y R. Interactive haptic simulation of tooth extraction by a constraint-based haptic rendering approach. In: 2015 IEEE International Conference on Robotics and Automation (ICRA). Seattle, WA, USA, IEEE, 2015, 278–284 DOI:10.1109/icra.2015.7139012
19. Wu J, Wang D X, Wang C C L, Zhang Y R. Toward stable and realistic haptic interaction for tooth preparation simulation. Journal of Computing and Information Science in Engineering, 2010, 10(2): 021007 DOI:10.1115/1.3402759
20. Zhao X, Zhu Z, Cong Y, Zhao Y, Zhang Y, Wang D. Haptic rendering of diverse tool-tissue contact constraints during dental implantation procedures. Frontiers in Robotics and AI, 2020, 7: 35 DOI:10.3389/frobt.2020.00035
21. Barab S A, Hay K E, Barnett M, Squire K. Constructing virtual worlds: tracing the historical development of learner practices. Cognition and Instruction, 2001, 19(1): 47–94 DOI:10.1207/s1532690xci1901_2
22. Peng C, Song J L. Application of virtual reality technology in the teaching of advanced orthodontists. Journal of Changchun Education Institute, 2014, 30(11): 94–95 (in Chinese)
23. Ji F. Research and development of virtual orthodontic system. Xi'an: Xi'an University of Science and Technology, 2006 (in Chinese)
24. Roy E, Bakr M M, George R. The need for virtual reality simulators in dental education: a review. The Saudi Dental Journal, 2017, 29(2): 41–47 DOI:10.1016/j.sdentj.2017.02.001
25. Yoshida Y, Yamaguchi S, Kawamoto Y, Noborio H, Murakami S, Sohmura T. Development of a multi-layered virtual tooth model for the haptic dental training system. Dental Materials Journal, 2011, 30(1): 1–6 DOI:10.4012/dmj.2010-082
26. Kwon H B, Park Y S, Han J S. Augmented reality in dentistry: a current perspective. Acta Odontologica Scandinavica, 2018, 76(7): 497–503 DOI:10.1080/00016357.2018.1441437
27. Gandedkar N H, Vaid N R, Darendeliler M A, Premjani P, Ferguson D J. The last decade in orthodontics: a scoping review of the hits, misses and the near misses! Seminars in Orthodontics, 2019, 25(4): 339–355 DOI:10.1053/j.sodo.2019.10.006
28. Rao G K L, Mokhtar N B, Iskandar Y H P. An integration of augmented reality technology for orthodontic education: case of bracket positioning. In: 2017 IEEE Conference on e-Learning, e-Management and e-Services (IC3e). Miri, Malaysia, IEEE, 2017, 7–11 DOI:10.1109/ic3e.2017.8409230
29. Wang D X, Zhang X, Zhang Y R, Xiao J. Configuration-based optimization for six degree-of-freedom haptic rendering for fine manipulation. In: 2011 IEEE International Conference on Robotics and Automation. Shanghai, China, IEEE, 2011, 906–912 DOI:10.1109/icra.2011.5979754