Use of 3-Dimensional (3D) Airway Modeling and Virtual Reality (VR) for Diagnosis, Communication, and Surgical Planning of Complex Airway Stenosis
Submitted by:
Youngjun Cho
University of South Florida Morsani College of Medicine
Presenter(s):
Youngjun Cho
Abstract
Background: In recent years, imaging technology has advanced dramatically, enabling accurate and efficient visualization of pathology within human anatomical structures. Computed tomography (CT) segmentation is a method of generating 3-dimensional (3D) models of anatomical structures of interest. Virtual reality (VR) provides an excellent environment in which clinicians can visualize and directly interact with 3D CT image volumes and segmented anatomical models. Surgical planning for complex airway stenosis poses important challenges for airway surgeons: while soft tissue stenosis can often be managed endoscopically, framework deformities may require open laryngotracheal reconstruction.
Objective: We hypothesized that a method to segment human airways from clinical cases and import them into a VR case presentation environment could be developed and used to model and visualize complex airway stenosis for efficient surgical planning.
Methods: One normal and two pathological airways were modeled from head, neck, and chest CT scans acquired at a slice thickness of 0.625 mm. A multidisciplinary team of airway surgeons, VR engineers, and radiologists collaborated to create clinically relevant, anatomically accurate VR renderings and explanatory Narrations of the three clinical cases.
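For readers who wish to approximate a comparable pipeline with open-source tooling rather than the commercial software used in this study, the following minimal Python sketch illustrates loading a CT DICOM series and checking its voxel spacing with SimpleITK. The directory path is a hypothetical placeholder and is not part of the study data.

```python
import SimpleITK as sitk

# Read the head/neck/chest CT DICOM series (directory path is hypothetical).
reader = sitk.ImageSeriesReader()
dicom_files = reader.GetGDCMSeriesFileNames("ct_neck_chest_dicom/")
reader.SetFileNames(dicom_files)
ct_volume = reader.Execute()

# Confirm acquisition geometry; the cases in this study were scanned at a
# 0.625 mm slice thickness (spacing is reported as x, y, z in millimeters).
print("Voxel spacing (mm):", ct_volume.GetSpacing())
print("Volume size (voxels):", ct_volume.GetSize())
```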
Results: Segmentation and post-processing were completed in Mimics Innovation Suite v24 (Materialise). Structures were segmented from the level of the bifurcation of the common carotid arteries to the level of the bifurcation of the main bronchi, including cartilaginous and bony airway structures, vessels, and soft tissues. The segmented laryngeal framework and surrounding anatomical structures were then post-processed into 3D image volumes and imported into syGlass, a VR software package. Direct visualization and free manipulation of these 3D airway models within the VR environment provided improved geometrical and anatomical detail compared with traditional 2-dimensional (2D) CT. This made the imaging data more readily understandable to non-radiologists, including surgeons, since 2D data alone limits the detail that can be gleaned from the dataset. Specialized presentation and active learning tools developed for scientific communication within the VR environment then permitted the creation of Narrations explaining the pathological cases. Users could visualize the segmented models superimposed on the VR-rendered scans and explore either or both by altering opacity settings and using a variety of tools, including measurement, sectioning, and resection.
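The segmentation and mesh export described above were performed interactively in Mimics and syGlass. As an illustrative approximation only, the sketch below shows how a comparable airway-lumen segmentation and surface-mesh export could be scripted with SimpleITK and scikit-image; the input file name, seed coordinates, and Hounsfield-unit thresholds are assumptions for illustration and do not reflect the study's actual parameters.

```python
import SimpleITK as sitk
from skimage import measure

# Load the CT volume (hypothetical file; see the loading sketch above).
ct_volume = sitk.ReadImage("ct_neck_chest.nii.gz")

# Region-growing segmentation of the air-filled lumen from a seed voxel
# placed in the trachea (seed index and HU thresholds are illustrative).
seed = (256, 256, 150)  # (x, y, z) voxel index inside the airway lumen
airway_mask = sitk.ConnectedThreshold(
    ct_volume, seedList=[seed], lower=-1024, upper=-800, replaceValue=1
)

# Cast to an 8-bit label image and apply light morphological closing to
# fill single-voxel gaps in the mask.
airway_mask = sitk.Cast(airway_mask, sitk.sitkUInt8)
airway_mask = sitk.BinaryMorphologicalClosing(airway_mask, [2, 2, 2])

# Convert the binary mask to a triangulated surface with marching cubes,
# honoring the physical voxel spacing, and write a Wavefront OBJ file
# that a VR viewer can import.
mask_array = sitk.GetArrayFromImage(airway_mask)   # numpy array in (z, y, x) order
spacing = ct_volume.GetSpacing()[::-1]             # reorder spacing to match
verts, faces, _, _ = measure.marching_cubes(mask_array, level=0.5, spacing=spacing)

with open("airway_model.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]:.3f} {v[1]:.3f} {v[2]:.3f}\n")
    for tri in faces:
        f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")  # OBJ indices are 1-based
```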
Conclusion: The method used in this work to segment human airways from clinical cases demonstrates the potential of 3D airway modeling and VR in clinical practice for the description and surgical planning of complex airway stenosis. Close collaboration between VR engineers, radiologists, 3D modeling experts, and airway surgeons was key to creating a clinically relevant and radiologically accurate model. Further work is needed to validate the use of these models in clinical practice and patient education.
Objectives
Objective: We hypothesized that a method to segment human airways from clinical cases and import them into a VR case presentation environment could be developed and used to model and visualize complex airway stenosis for efficient surgical planning.