

2021, 3(4): 274-286

Published Date: 2021-8-20   DOI: 10.1016/j.vrih.2021.08.002

Abstract

Background
Compared with traditional thoracotomy, video-assisted thoracoscopic surgery (VATS) causes less trauma, allows faster recovery, and achieves higher patient compliance, but it places higher demands on surgeons. Virtual surgery training simulation systems are important and have been widely used in Europe and America. Augmented reality (AR) in surgical training simulation systems can significantly improve the training effect of virtual surgical training, although AR technology is still in its initial stage. Mixed reality has gained increased attention in technology-driven modern medicine but has yet to be used in everyday practice.
Methods
This study proposed an immersive AR training system for lobectomy in thoracoscopic surgery, using visual and haptic modeling to study the potential benefits of this critical technology. The work included immersive AR visual rendering, with soft tissue physical modeling based on a cluster-based extended position-based dynamics algorithm. Furthermore, we designed an AR haptic rendering system whose model architecture consists of multi-touch interaction points with both kinesthetic and pressure-sensitive components. Finally, based on the above theoretical research, we developed an AR interactive VATS surgical training platform.
Results
Twenty-four volunteers were recruited from the First People's Hospital of Yunnan Province to evaluate the VATS training system. Face, content, and construct validation methods were used to assess the tactile sense, visual sense, scene authenticity, and simulator performance.
Conclusions
The results of our construct validation demonstrate that the simulator is useful in improving novices' surgical skills, which are retained after a certain period of time. The AR-based video-assisted thoracoscopic system developed in this study is effective and can be used as a training device to help novices develop thoracoscopic skills.


1 Introduction
According to the "2018 Global Cancer Statistics" report published by the official journal of the American Cancer Society, lung cancer remains the malignant tumor with the highest incidence (11.6%) and mortality (18.4%)[1], and it is also the most common malignant tumor in China. Because of China's large population base, the number of new cases of and deaths from lung cancer far exceeds that of other countries, and the disease burden of lung cancer puts a large strain on the healthcare system[2,3]. Surgical treatment remains the gold standard for lung tumors. Among these surgical treatments, thoracoscopic surgery, also called video-assisted thoracic surgery (VATS), is a minimally invasive approach that has been widely adopted in thoracic surgery. Its working principle is to use modern video camera technology and advanced diagnosis and treatment techniques to make one to three small incisions, each about 2cm wide, in the chest wall. A camera transmits the scene inside the chest cavity to a screen, and doctors can then remove diseased tissue by watching the screen and operating surgical instruments based on the projected image[4-6].
There are two major principles of thoracoscopic surgery: first, to maximize the removal of tumor tissue, and second, to maximize the preservation of the patient's normal lung function. Quality of life and survival time are thus equally important factors. This places high demands on the surgeon's skills, aptitude, and experience[7]. At present, most hospital interns and residents are still trained through animal surrogates or cadavers, surgical video viewing, and observation of the operating field. There are, however, many shortcomings in these training methods, such as the scarcity of appropriate training tissue and the differences in visual-haptic perception between living animals and human tissues, which reduce the generalizability of skills trained in this way. As individualized, minimally invasive surgery planning and "precision medicine" are gradually being applied to clinical oncology, there is a coincident, urgent need for an immersive virtual surgery simulation platform that integrates surgical training, planning, and rehearsal.
As an emerging research direction in "precision medicine", the immersive augmented reality (AR) surgical training system is an interdisciplinary research field that integrates optical engineering, medicine, computer technology, biomechanics, and many other disciplines[8,9]. As an essential product of the combination of AR technology and clinical medical research, the AR-oriented surgical training system is also an area of active development in the current medical field. Compared with virtual reality surgical training systems, AR surgical training systems offer a more realistic surgical training environment, a visuo-haptic experience closer to that of human factors engineering, and a more natural immersive interactive perception. This is particularly important for doctors who use virtual surgical simulation environments for real surgical skills training, preoperative planning, surgical rehearsal, and remote-assisted surgery[10,11]. In developing these technologies, the key determinants of immersion are how to achieve immersive visualizations that accurately model human soft tissue organs based on medical image data, and how to provide trainees with natural and realistic force feedback in the tactile interaction of surgical operations. The present work takes the augmented reality-oriented thoracoscopic surgery training system as its research focus, specifically attending to the visual and tactile modeling of soft tissue cutting operations during lobectomy. We propose a soft tissue cutting model based on a cluster-constraint extended position-based dynamics algorithm and a tactile feedback model with both kinesthetic and pressure multi-tactile interaction points, and we apply them to an AR interactive VATS surgical training platform. This research not only lays the foundation for new integrated diagnosis and treatment technologies for lung tumors, but also has significant application value in enhancing the precision and minimizing the invasiveness of lung tumor surgery.
2 Methods
The AR-based visual-haptic modeling design includes three pipelines, as shown in Figure 1. The first is the visual rendering pipeline, which constructs a physical model of soft lung tissue that can be cut by force; the AR rendering (blue process) is mapped onto the geometric model. The second pipeline covers the VATS operation, for which we construct a multi-touch tactile model with both kinesthetic and pressure senses (green process). The third pipeline is the construction of the AR interactive surgery training environment, including the acquisition of the panoramic surgical environment and the integration of the training system (red process). The detailed algorithm flow is depicted in Figure 1.
2.1 Theoretical derivation of the cluster-based XPBD
The soft tissue cutting physical model based on the cluster-based extended position-based dynamics (XPBD) algorithm is critical for this research. It determines the visual effect of lung tissue deformation during cutting in the VATS surgery training process and is one of the core components of AR surgical training systems. Combining the advantages of mesh-based and particle-based cutting modeling, the core of our algorithm is as follows. By constructing new cluster-based constraints, a Lagrange multiplier formulation is introduced into the position-based dynamics (PBD) framework, thus addressing the computational complexity problem of the traditional PBD algorithm, whose behavior depends on the number of iterations and the time step. In particular, during the cutting operation, the global matrix reconstruction required after topological changes is enormously expensive in the PBD algorithm. PBD can be considered a semi-implicit integration algorithm using the Störmer-Verlet method. Through different constraint projections, each constraint function is enforced through local linearization with a mass-weighted update. The primary step of the PBD constraint solver is to calculate the position increment for each constraint[12]:
$$\Delta x = k_j s_j M^{-1} \nabla C_j(x_i)$$

where $i$ is the iteration index, $j$ is the constraint index, $k_j \in [0,1]$ is the constraint stiffness, and the scaling coefficient $s_j$ is obtained by a first-order Newton step on the constraint function:

$$s_j = \frac{-C_j(x_i)}{\nabla C_j\, M^{-1} \nabla C_j^T}$$
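To make this projection step concrete, the following minimal Python sketch applies the two equations above to a single distance constraint $C(x_1, x_2) = |x_1 - x_2| - d$. It is an illustration under our own assumptions (the paper's implementation is a C++ system), and the masses, rest length, and stiffness values are placeholders.

    import numpy as np

    def project_distance_constraint(x1, x2, w1, w2, rest_len, stiffness):
        # One PBD projection of C(x1, x2) = |x1 - x2| - rest_len.
        # w1, w2 are inverse masses; returns dx = k * s * M^-1 * grad(C).
        n = x1 - x2
        dist = np.linalg.norm(n)
        if dist < 1e-9:
            return np.zeros(3), np.zeros(3)
        n /= dist
        C = dist - rest_len
        # s = -C / (grad(C) M^-1 grad(C)^T); for a distance constraint
        # the denominator reduces to w1 + w2.
        s = -C / (w1 + w2)
        dx1 = stiffness * s * w1 * n      # gradient of C wrt x1 is  n
        dx2 = -stiffness * s * w2 * n     # gradient of C wrt x2 is -n
        return dx1, dx2

    # Illustrative use: two unit-mass particles 1.2 apart, rest length 1.0.
    x1 = np.array([0.0, 0.0, 0.0])
    x2 = np.array([1.2, 0.0, 0.0])
    dx1, dx2 = project_distance_constraint(x1, x2, 1.0, 1.0, 1.0, 1.0)
    print(x1 + dx1, x2 + dx2)   # each particle moves 0.1 toward the other

Note the mass weighting: the heavier particle (smaller inverse mass) moves less, which is exactly the mass-weighted update described above.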
Our extended PBD starts from Newton's equation of motion under forces derived from an energy potential $U(x)$:

$$M\ddot{x} = -\nabla U^T(x)$$

Here, $x = [x_1, x_2, \ldots, x_n]^T$ is the system state. We perform an implicit time discretization of the equation of motion, where the superscript $n$ denotes the time-step index:

$$M\,\frac{x^{n+1} - 2x^n + x^{n-1}}{\Delta t^2} = -\nabla U^T(x^{n+1})$$
In terms of the constraint functions $C(x) = [C_1(x), C_2(x), \ldots, C_m(x)]^T$, the potential $U(x)$ can be expressed as:

$$U(x) = \frac{1}{2}\, C(x)^T\, \alpha^{-1}\, C(x)$$

where $\alpha$ is a block-diagonal compliance matrix (the inverse of stiffness).
The elastic force is then $f_{elastic} = -\nabla U^T(x) = -\nabla C(x)^T \alpha^{-1} C(x)$, which we decompose into a directional component $\nabla C(x)^T$ and a scalar component by introducing Lagrange multipliers:

$$\lambda_{elastic} = -\tilde{\alpha}^{-1} C(x)$$

Here, $\lambda_{elastic} = [\lambda_1, \lambda_2, \ldots, \lambda_m]^T$ is the vector of constraint multipliers. We fold the time step into the compliance matrix by defining $\tilde{\alpha} = \alpha / \Delta t^2$, and substitute $\lambda$ into the expression to obtain the new discrete constrained equations of motion:

$$M(x^{n+1} - \tilde{x}) - \nabla C(x^{n+1})^T \lambda^{n+1} = 0$$

$$C(x^{n+1}) + \tilde{\alpha}\, \lambda^{n+1} = 0$$
where $\tilde{x} = 2x^n - x^{n-1} = x^n + \Delta t\, v^n$ is the predicted, or inertial, position. To solve this nonlinear system, we use a fixed-point iteration based on Newton's method. We omit the time-step superscript $(n+1)$ and denote each iterate by the subscript $i$, with each update producing iterate $(i+1)$. Additionally, we neglect the geometric stiffness and the constraint Hessian, introducing a local error of order $O(\Delta t^2)$. This approximation changes the convergence rate, but it changes neither the global error nor the fixed-point solution. Based on these approximations, the linear sub-problem solved in each iteration is:

$$\begin{bmatrix} M & -\nabla C(x_i)^T \\ \nabla C(x_i) & \tilde{\alpha} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta\lambda \end{bmatrix} = -\begin{bmatrix} 0 \\ h(x_i, \lambda_i) \end{bmatrix}$$

where $h(x_i, \lambda_i) = C(x_i) + \tilde{\alpha}\lambda_i$ is the residual of the constraint equation.
Taking the Schur complement with respect to $M$ yields the following reduced system in the unknown $\Delta\lambda$:

$$\left[\nabla C(x_i)\, M^{-1}\, \nabla C(x_i)^T + \tilde{\alpha}\right] \Delta\lambda = -C(x_i) - \tilde{\alpha}\lambda_i$$

Finally, the position update is obtained directly:

$$\Delta x = M^{-1}\, \nabla C(x_i)^T\, \Delta\lambda$$
For the solver, the Gauss-Seidel method is combined with the PBD algorithm. Taking the constraint equation with index $j$, we can directly calculate the change in its Lagrange multiplier:

$$\Delta\lambda_j = \frac{-C_j(x_i) - \tilde{\alpha}_j \lambda_{ij}}{\nabla C_j\, M^{-1}\, \nabla C_j^T + \tilde{\alpha}_j}$$
This equation is the core of our cluster-based XPBD algorithm. During constraint solving, we first calculate the $\Delta\lambda_j$ of a single constraint, and then update both the system positions and the multipliers to obtain a model of the deformation of the soft tissue after cutting.
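As an illustration of this solve, the sketch below implements the Gauss-Seidel XPBD update above for a set of distance constraints in Python. It is a sketch under our own assumptions: the paper's system is a C++ implementation with cluster constraints, which are omitted here for brevity, and gravity, compliance, and iteration count are illustrative.

    import numpy as np

    def xpbd_step(x, v, inv_mass, constraints, compliance, dt, iters=10):
        # One XPBD time step over distance constraints (i, j, rest_length).
        # Implements the Gauss-Seidel update derived above:
        #   dlambda_j = (-C_j - alpha~_j * lambda_j) / (w_i + w_j + alpha~_j)
        #   dx        = M^-1 * grad(C_j)^T * dlambda_j
        gravity = np.array([0.0, -9.8, 0.0])
        x_pred = x + dt * v                        # inertial position x~
        x_pred[inv_mass > 0] += dt * dt * gravity  # external force on free particles
        lam = np.zeros(len(constraints))           # one multiplier per constraint
        alpha_t = compliance / (dt * dt)           # alpha~ = alpha / dt^2
        for _ in range(iters):
            for k, (i, j, rest) in enumerate(constraints):
                n = x_pred[i] - x_pred[j]
                d = np.linalg.norm(n)
                if d < 1e-9:
                    continue
                n /= d
                C = d - rest
                w_sum = inv_mass[i] + inv_mass[j]
                if w_sum + alpha_t == 0.0:
                    continue
                dlam = (-C - alpha_t * lam[k]) / (w_sum + alpha_t)
                lam[k] += dlam
                x_pred[i] += inv_mass[i] * dlam * n   # gradient wrt x_i is  n
                x_pred[j] -= inv_mass[j] * dlam * n   # gradient wrt x_j is -n
        v_new = (x_pred - x) / dt
        return x_pred, v_new

Setting compliance to zero recovers an infinitely stiff constraint, while larger values soften the response independently of the iteration count and time step; this is the property that removes the iteration- and time-step dependence of traditional PBD noted above.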
2.2 Algorithm implementation
The algorithm implements part of the research plan. In the simulation of surgical cutting of a single layer of soft tissue (mesh-based), edge-triangle collision detection between the blade edge and the surface of the tissue model generates a series of intersecting mesh vertices along the scalpel edge's sweep surface. For each position, we index the intersected triangle mesh element and its adjacent vertices. The subdivided mesh boundary is projected onto a plane at a certain distance from the cutting plane. The mesh geometry and topology information are updated at the same time to generate the mesh subdivision, produce new triangles, and obtain a more refined boundary. Finally, the topological structure of the original mesh is decomposed and changed in the final step of the virtual surgical surface-cutting process. The cutting algorithm implementation process is demonstrated in Figure 2.
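The intersection step can be sketched minimally as follows (Python; treating the blade sweep as a single plane is our simplifying assumption, whereas the actual system intersects the swept blade surface with the mesh). Each mesh edge crossed by the cut plane is split, and the new vertices form the refined cut boundary described above.

    import numpy as np

    def split_edges_on_plane(vertices, triangles, p0, n):
        # Find mesh edges crossed by the blade's sweep plane (point p0,
        # normal n) and return the new vertices at the crossing points.
        d = (vertices - p0) @ n                 # signed distance per vertex
        edges = set()
        for tri in triangles:
            for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                edges.add((min(a, b), max(a, b)))
        new_points, split_edges = [], []
        for a, b in edges:
            if d[a] * d[b] < 0.0:               # edge crosses the cut plane
                t = d[a] / (d[a] - d[b])        # interpolation parameter
                new_points.append((1 - t) * vertices[a] + t * vertices[b])
                split_edges.append((a, b))
        return np.array(new_points), split_edges

The returned crossing points are then inserted into the incident triangles during retriangulation, which is the topology update shown in Figure 2.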
In the volumetric simulation of surgical cutting of bulk objects, because the volume mesh is composed of tetrahedral elements, a straight line is chosen as the implicit shape of the surgical instrument, and its swept volume reduces to a swept surface. The volume mesh dissection algorithm is similar to surface cutting in that it finds the intersection points and subdivides the original primitives. There are two types of intersections between the tetrahedral mesh and the swept surface: edge intersections and face intersections. Finally, each tetrahedron is divided into smaller elements according to its intersection state. After smoothing the edges, particles and clusters are added to realize the deformation and physical simulation after cutting. The detailed internal structure and implementation of the volumetric soft tissue model are demonstrated in Figure 3.
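The case analysis that drives this subdivision can be sketched as follows (Python; approximating the swept surface by a plane is again our simplifying assumption). The sign pattern of a tetrahedron's four vertices selects its intersection state.

    import numpy as np

    def classify_tets(vertices, tets, p0, n):
        # Classify each tetrahedron against the scalpel's swept plane
        # (point p0, normal n); the sign pattern of the four vertices
        # selects the subdivision case described above.
        d = (vertices - p0) @ n                 # signed vertex distances
        cases = []
        for tet in tets:
            pos = int(np.sum(d[tet] > 0.0))
            if pos in (0, 4):
                cases.append("untouched")       # no intersection
            elif pos in (1, 3):
                cases.append("3-edge cut")      # triangular cross-section
            else:
                cases.append("4-edge cut")      # quadrilateral cross-section
        return cases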
2.3 AR interactive VATS surgery training platform
Figure 4 shows the developed VATS-AR simulator. The surgical instruments and the force feedback device were connected through a 3D-printed linker. The operator holds the surgical instrument, allowing the three axes of the force feedback device to perform the corresponding transformations. When the virtual surgical instrument's clip interacts with a virtual object, the computer drives the force feedback device through the OpenHaptics plugin (Geomagic, USA) to apply the corresponding force[13], thereby giving the operator a realistic tactile sense. An HTC VIVE and Logitech cameras were used to provide the AR display[14,15]. The 3D model of the connector is shown in Figure 4. When the operator closes the surgical instrument, such as when grabbing a virtual object, the green button is triggered and the virtual clip closes. When the operator opens the surgical instrument, such as when releasing a virtual object, the red button is activated and the virtual clip opens.
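The force sent to such a device is typically computed by a spring-damper virtual coupling between the device position and a collision-constrained proxy on the virtual tissue surface. The sketch below illustrates that force law in Python; the OpenHaptics API itself is a C API and is not reproduced here, and the gains and force cap are illustrative assumptions, not the paper's parameters.

    import numpy as np

    def coupling_force(device_pos, device_vel, proxy_pos,
                       k=600.0, b=2.5, f_max=3.0):
        # Spring-damper virtual coupling: pull the device toward the
        # proxy (k in N/m), damp its velocity (b in N*s/m), and clamp
        # the result (f_max in N) to keep the device stable. This vector
        # would be sent to the device each servo tick (~1 kHz).
        f = k * (proxy_pos - device_pos) - b * device_vel
        norm = np.linalg.norm(f)
        if norm > f_max:
            f *= f_max / norm
        return f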
2.4 Training modules of the VATS platform
Based on the visuo-haptic rendering algorithms described above, combined with lobectomies observed in the Department of Thoracic Surgery of the First People's Hospital of Yunnan Province, the project chose the three-port standardized VATS operation as the training method. We propose an AR panoramic immersive surgical training environment to improve the immersion of surgical training. By recording the essential VATS operations and the real operating environment during actual surgery, and through AR near-eye displays, we can provide the surgeon with multi-sensory, highly immersive training surroundings. We used panoramic video of a right upper lobectomy recorded in the Thoracic Surgery Department of the First People's Hospital of Yunnan Province as the background for the operation. We designed a thoracoscopic resection operation framework divided into several scenes. Each scene contained various elements according to the definition of its stage, and the component types distinguished the features relevant to that scene. The visuo-haptic interaction in this setting connected the asynchronous rendering of the visual thread and the haptic thread through preset collision detection. Components triggered presets in the surgical scene to interact with the AR environment and render the preset surgical steps in real time at different operation nodes (Figure 5).
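One plausible way to organize such a scene-and-component framework is sketched below (Python; the class names, stages, and trigger wiring are our illustrative assumptions, not the paper's implementation).

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Component:
        # A scene element with a preset action fired on collision.
        name: str
        on_collide: Callable[[], None]

    @dataclass
    class Scene:
        # One stage of the lobectomy training flow.
        stage: str
        components: List[Component] = field(default_factory=list)

        def handle_collision(self, name: str) -> None:
            for c in self.components:
                if c.name == name:
                    c.on_collide()   # render the preset surgical step

    # Illustrative wiring for one stage of the resection framework.
    scene = Scene("vessel_dissection", [
        Component("pulmonary_artery", lambda: print("show clamping preset")),
    ])
    scene.handle_collision("pulmonary_artery")

In this organization, the haptic thread only reports collision events by name, so the visual thread can render the triggered preset asynchronously, matching the thread separation described above.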
We have completed the construction of a lightweight C++ class library framework, "ARTK", based on the marching cubes surface reconstruction algorithm; through library calls to VTK, ITK, and ARToolKit, it performs AR 3D visualization, analysis, and reconstruction of CT, MRI, and PET-CT medical images[16-18]. The CAD appearance design drawing and part of the prototype structure of the AR-based VATS training system have been completed, as shown in Figure 6. At the same time, in the early stage the project team followed the thoracic surgeons of the First People's Hospital of Yunnan Province to participate in VATS surgery recording, live surgery, training, and three-dimensional surgery scene production. We participated in 12 thoracic surgery operations in total, producing training content for AR surgery and accumulating rich clinical experience for building the system.
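For the surface reconstruction step, VTK exposes the marching cubes algorithm directly. A minimal Python sketch is shown below; the DICOM directory path and the iso-value are illustrative assumptions, not values from the paper.

    import vtk

    # Read a CT series and extract an isosurface with marching cubes.
    reader = vtk.vtkDICOMImageReader()
    reader.SetDirectoryName("ct_series/")   # hypothetical path to DICOM slices
    reader.Update()

    mc = vtk.vtkMarchingCubes()
    mc.SetInputConnection(reader.GetOutputPort())
    mc.SetValue(0, -400.0)   # illustrative iso-value (HU) near the lung boundary
    mc.Update()

    # Save the reconstructed surface for use in the AR scene.
    writer = vtk.vtkSTLWriter()
    writer.SetFileName("lung_surface.stl")
    writer.SetInputConnection(mc.GetOutputPort())
    writer.Write()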
3 Results
For the objective evaluation, after a 2-week rest for the novice group, the scores from the retention test (the average score of two trials) were compared with the baseline score (the average score of the first and second trials) and the post-test score (the average score of the 29th and 30th trials). The Wilcoxon signed-rank test was used to compare differences within groups[19]. The scores of novices and experts at baseline and post-test were compared between groups using the Mann-Whitney U test[20]. A value of p < 0.05 was taken to indicate a statistically significant difference between two data groups. The experimental data were analyzed using SPSS 20.0 (IBM Corp., Armonk, New York, USA)[21] (Table 1).
Table 1  Demographic data of the thoracoscopic surgery trainees

Item                             Group A (Novices)   Group B (Experts)
Number                           24                  6
Age/years                        27.6 (25-31)        47.8 (44-55)
Postgraduate year of training    5 (3-8)             14 (12-30)
Male/%                           79.2                83
Right-handed/%                   95.8                100
Box trainer experience           <7                  >25
VR game experience               8/24                1/6
HMD experience                   6/24                1/6
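The within- and between-group comparisons described above can also be reproduced outside SPSS. The sketch below runs both tests in Python with SciPy on hypothetical score arrays; the numbers are placeholders, not the study's data.

    import numpy as np
    from scipy.stats import wilcoxon, mannwhitneyu

    # Hypothetical per-novice scores; real data come from the trials above.
    baseline  = np.array([61, 58, 64, 55, 60, 57])   # mean of trials 1-2
    post_test = np.array([82, 79, 85, 80, 84, 78])   # mean of trials 29-30
    expert    = np.array([88, 84, 86, 90, 83, 87])   # expert scores

    # Within-group (paired) comparison: Wilcoxon signed-rank test.
    _, p_within = wilcoxon(baseline, post_test)

    # Between-group (unpaired) comparison: Mann-Whitney U test.
    _, p_between = mannwhitneyu(post_test, expert, alternative="two-sided")

    print(f"within-group p = {p_within:.4f}, between-group p = {p_between:.4f}")
    # p < 0.05 is taken as statistically significant, as in the analysis above.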
In this study, the cluster-based PBD deformation simulation algorithm was used to simulate the soft tissue deformation process, with liver tissue (the original data are lung CT data) as the simulation object. Barycentric coordinates were used to generate the coupling between the geometric model and the deformed object. The surface mesh contained 14128 triangles and 7469 vertices, whereas the volume mesh contained 3833 tetrahedrons and 1164 vertices.
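This coupling is commonly realized by binding each render-mesh vertex to a host tetrahedron with barycentric weights computed in the rest pose, then re-evaluating the vertex from the deformed tetrahedron every frame. The following compact Python sketch shows the standard embedding technique under that assumption; it is not the paper's exact code.

    import numpy as np

    def barycentric_weights(p, tet):
        # Barycentric coordinates of point p in tetrahedron tet (4x3 array).
        T = np.column_stack((tet[0] - tet[3], tet[1] - tet[3], tet[2] - tet[3]))
        w = np.linalg.solve(T, p - tet[3])
        return np.append(w, 1.0 - w.sum())

    def bind_surface(surface_pts, tet_pts, tets):
        # Bind each surface vertex to a containing tetrahedron (all
        # weights in [0, 1]); computed once in the rest pose.
        bindings = []
        for p in surface_pts:
            for t, tet in enumerate(tets):
                w = barycentric_weights(p, tet_pts[tet])
                if np.all(w >= -1e-6) and np.all(w <= 1.0 + 1e-6):
                    bindings.append((t, w))
                    break
            else:
                # Fallback: bind to the first tet (extrapolated weights).
                bindings.append((0, barycentric_weights(p, tet_pts[tets[0]])))
        return bindings

    def skin_surface(tet_pts, tets, bindings):
        # Re-evaluate each surface vertex from its host tet after deformation,
        # coupling the detailed render mesh to the coarse physics mesh.
        return np.array([w @ tet_pts[tets[t]] for t, w in bindings])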
As shown in Figure 7, the first row shows the mesh-based deformation. We set the elastic modulus and Poisson's ratio to simulate the deformation of the model under tension and pressure applied by the surgical instruments. The second and third rows of Figure 7 demonstrate the volumetric deformation results.
Table 2 shows the face and content validation of the four simulators with respect to vision, touch, and scene authenticity by six experts after the experiment. A five-point Likert scale was used to evaluate the performance of the simulators, with "1" representing "strongly disagree" and "5" representing "strongly agree" (higher scores are better)[22]. The box plot corresponding to the subjective evaluation of the simulators is shown in Figure 6. With respect to vision, the score of the AR simulator (3.56 ± 0.98) was higher than those of the Box simulator (2.39 ± 1.04) and the IVR simulator (2.78 ± 1.00) (p = 0.021). With respect to haptic perception, the score of the Box simulator (4.11 ± 0.90) was significantly better than those of the other three simulators (p = 0.032), but there were no statistically significant differences among VR, AR, and IVR (p = 0.023). With respect to scene authenticity, the score of the IVR simulator (3.28 ± 1.18) was better than those of the other three simulators (p = 0.03), but there were no statistically significant differences among Box, VR, and AR (p = 0.04).
Table 2  Face and content validation of the AR-based VATS simulator

Questionnaire         Box (Mean ± SD)   VR (Mean ± SD)   AR (Mean ± SD)   MR (Mean ± SD)
Visual perception     2.39 ± 1.04       3.17 ± 1.25      3.56 ± 0.98      2.78 ± 1.00
Haptic perception     4.11 ± 0.90       2.61 ± 0.98      2.67 ± 0.97      2.44 ± 0.86
Scene authenticity    3.28 ± 1.18       2.78 ± 1.06      3.44 ± 0.78      4.28 ± 0.89

Notes: Subjective ratings: 1 = strongly disagree, 5 = strongly agree (higher is better); VR, virtual reality; AR, augmented reality; MR, mixed reality.
Novice doctors conducted the experiments after two weeks of training. A comparison between the training data and the initial data is shown in Figure 8. In the blood vessel clamping and cutting module, the trajectory of the surgical clip was shortest in MR training, but AR training required the shortest time and produced the smallest clamping and cutting error. Compared with Box training, only some evaluation items of the three other training methods reached an equivalent training level. The most prominent of these was the rope module: judging from the experiment time and the number of rope drops, both parameters were lower than in Box training, as can be observed from the histogram. Through training, some indicators of these 24 doctors improved.
By observing the specific improvements under VR and AR, we can see that both forms of training improved performance. AR simulation training, specifically, can better combine virtual and real environments to provide doctors with a novel environment with realistic sensory effects, although it does not reach the immersion of MR; for example, AR provides no auditory perception[23]. Compared with Box, VR simulation training can be used without restrictions and can automatically record the relevant experimental data. VR simulations thus have certain advantages in objective evaluation, but compared with AR and MR simulations, they lack both visual immersion and tactile perception.
4 Discussion
In this study, an AR-based thoracoscopic surgery training system was developed. A five-point Likert scale, the Wilcoxon signed-rank test[24], the Mann-Whitney U test, and the CUSUM method were used to analyze the expert group's subjective evaluation of the four simulators, and the objective operational skills of a novice group and an expert group on three simulation tasks in the Box, VR, AR, and IVR simulators, to judge which simulator could best improve novice skills[25]. With respect to face and content validity, a five-point Likert scale was used to analyze the simulators' visual, tactile, and scene authenticity indexes. The results show that the advantages of the simulators differed across indexes. For vision, the AR simulator was better than the IVR and Box simulators. This may be because the AR technology presents a 3D image to the observer, whereas the Box and IVR simulators obtain scene information through real and virtual cameras, respectively, and present it on real and virtual display screens, without a sense of three-dimensional space. The tactile sense of the Box simulator was the strongest among the four simulators because, in the other three simulators, a computer renders the tactile feedback of virtual objects, and realistic real-time force rendering is still difficult to achieve, whereas the interactive objects in the Box simulator are real physical models. With respect to scene authenticity, the IVR simulator was superior to the other three simulators. This is because the scene in the IVR simulator was a real operating room scene captured by a 360° panoramic camera in the hospital's thoracic surgery department, giving users a strong sense of immersion.
With respect to construct validity, there were no differences between novices and experts in the VR, AR, and MR simulators at baseline[26]; however, experts scored slightly higher than novices overall. Experts were significantly better than novices in the Box simulator, because the Box simulator was previously popular in the market, and most of our experts would have participated in Box simulator training. After a certain number of training trials (novices: 30 trials; experts: 5 trials), the novices' skills were significantly better than the experts' at the post-test assessment, indicating that the four simulators were effective in improving novices' operation skills.
A comparison of the experimental data of the novice group, training group, and expert group in the rope module is depicted in Figures 8 and 9. The novice group improved their skills throughout the training process, with some novice doctors reaching a level of operation equivalent to that of some experts. With respect to time, AR required the shortest time for the expert group; in this module, the time required in the Box simulator was similar to that of the other training methods[27-29]. From the perspective of the moving distance of the surgical clip, the path of the surgical clip in VR was shorter, whereas the working space in AR was more spread out. The training improved the surgical skills of novice doctors[30]. As shown in Figure 9, the movement trajectory of the surgical clip of the novice doctors after training was close to that of the experts, and performance in the MR simulation was the best. By directly linking patient-specific data, such as 3D anatomical models, to complex surgical scenarios, MR environments can provide a rich source of information to guide the internal movements of humans and surgical robots[31].
5 Conclusion
To simulate the viscoelasticity, nonlinearity, and incompressibility of soft tissue deformation in real time, a new method integrating the viscoelastic mass-spring model (MSM) and position-based dynamics (PBD) was proposed, which we refer to as cluster-based PBD. The main contributions of this study are as follows:
(1) A new cluster-based PBD method was developed based on PBD and biomechanical properties. In this method, the spring force and external force of the MSM are combined with the constraint force produced by the PBD constraint function to modify the motion of the mass points. This method successfully controls complex deformation behavior through various constraints. Although the cost of the time step is slightly higher than that of traditional PBD methods, it has improved authenticity and stability.
(2) The deformations of the lung model were simulated to show the different structures and deformations of the organ. Experiments demonstrated that this method can simulate soft tissue deformation during liver surgery and meet the soft tissue deformation requirements of virtual surgery.
(3) The applicability of this new method was demonstrated via augmented surgery simulation.

References

1. Bray F, Ferlay J, Soerjomataram I, Siegel R L, Torre L A, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 2018, 68(6): 394-424 DOI:10.3322/caac.21492

2. World Health Organization. Latest global cancer data. World Health Organization, 2018

3. Guo Y M, Zeng H M, Zheng R S, Li S S, Barnett A G, Zhang S W, Zou X N, Huxley R, Chen W Q, Williams G. The association between lung cancer incidence and ambient air pollution in China: a spatiotemporal analysis. Environmental Research, 2016, 144: 60-65 DOI:10.1016/j.envres.2015.11.004

4. Novellis P, Bottoni E, Voulaz E, Cariboni U, Testori A, Bertolaccini L, Giordano L, Dieci E, Granato L, Vanni E, Montorsi M, Alloisio M, Veronesi G. Robotic surgery, video-assisted thoracic surgery, and open surgery for early stage lung cancer: comparison of costs and outcomes at a single institute. Journal of Thoracic Disease, 2018, 10(2): 790-798 DOI:10.21037/jtd.2018.01.123

5. Sanchez-Lorente D, Guzman R, Boada M, Carriel N, Guirao A, Molins L. Is it appropriate to perform video-assisted thoracoscopic surgery for advanced lung cancer? Future Oncology, 2018, 14(6s): 29-31 DOI:10.2217/fon-2017-0388

6. Chen S, Geraci T C, Cerfolio R J. Techniques for lung surgery: a review of robotic lobectomy. Expert Review of Respiratory Medicine, 2018, 12(4): 315-322 DOI:10.1080/17476348.2018.1448270

7. Kent M, Wang T, Whyte R, Curran T, Flores R, Gangadharan S. Open, video-assisted thoracic surgery, and robotic lobectomy: review of a national database. The Annals of Thoracic Surgery, 2014, 97(1): 236-244 DOI:10.1016/j.athoracsur.2013.07.117

8. Qin Z, Tai Y, Xia C, Peng J, Huang X, Chen Z, Li Q, Shi J. Towards virtual VATS: face and construct evaluation for peg transfer training of box, VR, AR, and MR trainer. Journal of Healthcare Engineering, 2019, 6813719 DOI:10.1155/2019/6813719

9. Herron J. Augmented reality in medical education and training. Journal of Electronic Resources in Medical Libraries, 2016, 13(2): 51-55 DOI:10.1080/15424065.2016.1175987

10. Barsom E Z, Graafland M, Schijven M P. Systematic review on the effectiveness of augmented reality applications in medical training. Surgical Endoscopy, 2016, 30(10): 4174-4183 DOI:10.1007/s00464-016-4800-6

11. Loukas C. Surgical simulation training systems: box trainers, virtual reality and augmented reality simulators. International Journal of Advanced Robotics and Automation, 2016, 1(2): 1-9 DOI:10.15226/2473-3032/1/2/00109

12. Wu J, Westermann R, Dick C. A survey of physically based simulation of cuts in deformable bodies. Computer Graphics Forum, 2015, 34(6): 161-187 DOI:10.1111/cgf.12528

13. Maciel A, Liu Y Q, Ahn W, Singh T P, Dunnican W, De S. Development of the VBLaST: a virtual basic laparoscopic skill trainer. The International Journal of Medical Robotics + Computer Assisted Surgery, 2008, 4(2): 131-138 DOI:10.1002/rcs.185

14. Pan J J, Chang J, Yang X, Liang H, Zhang J J, Qureshi T, Howell R, Hickish T. Virtual reality training and assessment in laparoscopic rectum surgery. The International Journal of Medical Robotics + Computer Assisted Surgery, 2015, 11(2): 194-209 DOI:10.1002/rcs.1582

15. Kibsgaard M, Thomsen K K, Kraus M. Simulation of surgical cutting in deformable bodies using a game engine. In: Proceedings of the 9th International Conference on Computer Graphics Theory and Applications. Lisbon, Portugal, SCITEPRESS Science and Technology Publications, 2014 DOI:10.5220/0004670403420347

16. Paulus C J, Untereiner L, Courtecuisse H, Cotin S, Cazier D. Virtual cutting of deformable objects based on efficient topological operations. The Visual Computer, 2015, 31(6/7/8): 831-841 DOI:10.1007/s00371-015-1123-x

17. Zhu B, Gu L X. A hybrid deformable model for real-time surgical simulation. Computerized Medical Imaging and Graphics, 2012, 36(5): 356-365 DOI:10.1016/j.compmedimag.2012.03.001

18. Pons-Moll G, Romero J, Mahmood N, Black M J. Dyna: a model of dynamic human shape in motion. ACM Transactions on Graphics, 2015, 34(4): 1-14 DOI:10.1145/2766993

19. Lee J H, Kim H, Kim J H, Lee S H. Soft implantable microelectrodes for future medicine: prosthetics, neural signal recording and neuromodulation. Lab on a Chip, 2016, 16(6): 959-976 DOI:10.1039/c5lc00842e

20. Pan J J, Yan S Z, Qin H, Hao A M. Real-time dissection of organs via hybrid coupling of geometric metaballs and physics-centric mesh-free method. The Visual Computer, 2018, 34(1): 105-116 DOI:10.1007/s00371-016-1317-x

21. Berndt I, Torchelsen R, Maciel A. Efficient surgical cutting with position-based dynamics. IEEE Computer Graphics and Applications, 2017, 37(3): 24-31 DOI:10.1109/mcg.2017.45

22. Niebe S, Erleben K. Numerical methods for linear complementarity problems in physics-based animation. Synthesis Lectures on Computer Graphics and Animation, 2015, 7(1): 1-159 DOI:10.2200/s00621ed1v01y201412cgr018

23. Wang H M, O'Brien J, Ramamoorthi R. Multi-resolution isotropic strain limiting. ACM Transactions on Graphics, 2010, 29(6): 1-10 DOI:10.1145/1882261.1866182

24. Müller M, Chentanez N, Kim T Y, Macklin M. Strain based dynamics. In: Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2014, 149-157

25. Bender J, Koschier D, Charrier P, Weber D. Position-based simulation of continuous materials. Computers & Graphics, 2014, 44: 1-10 DOI:10.1016/j.cag.2014.07.004

26. Balasubramanian R, Santos V J. The human hand as an inspiration for robot hand development. Cham: Springer International Publishing, 2014 DOI:10.1007/978-3-319-03017-3

27. L'Orsa R, MacNab C J B, Tavakoli M. Introduction to haptics for neurosurgeons. Neurosurgery, 2013, 72: A139-A153 DOI:10.1227/neu.0b013e318273a1a3

28. Kim M, Kim J, Lee Y, Lee D. On the passivity of mechanical integrators in haptic rendering. In: 2017 IEEE International Conference on Robotics and Automation (ICRA). Singapore, IEEE, 2017, 446-452 DOI:10.1109/icra.2017.7989058

29. Ang Q Z, Horan B, Najdovski Z, Nahavandi S. Grasping virtual objects with multi-point haptics. In: 2011 IEEE Virtual Reality Conference, 2011, 189-190 DOI:10.1109/vr.2011.5759462

30. Jeon S, Harders M. Haptic tumor augmentation: exploring multi-point interaction. IEEE Transactions on Haptics, 2014, 7(4): 477-485 DOI:10.1109/toh.2014.2330300

31. Chen L, Day T W, Tang W, John N W. Recent developments and future challenges in medical mixed reality. In: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Nantes, France, IEEE, 2017, 123-135 DOI:10.1109/ismar.2017.29