Visual SLAM for bronchoscope tracking and bronchus reconstruction in bronchoscopic navigation

Wang, Cheng ✉; Oda, Masahiro; Hayashi, Yuichiro; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

English · Conference paper (Chapter in Book)
We present a new scheme for bronchoscopic navigation that exploits visual SLAM for bronchoscope tracking. A bronchoscopic navigation system guides physicians during bronchoscopic examination by providing 3D spatial information about the bronchoscope. Existing bronchoscopic navigation systems mainly track the bronchoscope using CT-video registration or a position sensor. CT-video-based tracking estimates the bronchoscope pose by registering real bronchoscope images to virtual images generated from computed tomography (CT) volumes, which is time-consuming. Sensor-based tracking calculates the bronchoscope pose from sensor measurements, which are easily disturbed by examination tools. We improve bronchoscope tracking by using visual simultaneous localization and mapping (VSLAM), which overcomes both of these shortcomings. VSLAM estimates the camera pose while reconstructing the structure surrounding the camera (called the map). We use adjacent frames to increase the number of points available for tracking, and apply VSLAM to bronchoscope tracking. Tracking performance of VSLAM was evaluated on phantom and in-vivo videos. Reconstruction performance was evaluated by the root mean square (RMS) error, computed between the aligned reconstructed points and the bronchus segmented from pre-operative CT volumes. Experimental results showed that the proposed method successfully tracked more than 700 additional frames compared with the original ORB-SLAM over six cases. In the phantom case, the average RMS error between the bronchus estimated by SLAM and the segmented bronchus shape was 2.55 mm.
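The reconstruction evaluation described above (an RMS error between the aligned reconstructed points and the segmented bronchus) can be sketched as a nearest-neighbor point-to-point RMS. This is a simplified, hypothetical illustration, not the paper's implementation: the function name is invented, the actual metric may use point-to-surface distances against the segmented bronchus mesh, and the alignment of the SLAM point cloud to the CT coordinate frame is assumed to have been performed already.

```python
import math

def rms_to_reference(reconstructed, reference):
    """RMS of nearest-neighbor distances from each reconstructed 3D point
    to a reference point set (e.g. points sampled from the segmented
    bronchus). Both point sets are assumed to be in the same, already
    aligned coordinate frame. Brute-force O(N*M); fine for a sketch."""
    squared_sum = 0.0
    for p in reconstructed:
        # distance from p to its closest reference point
        nearest = min(math.dist(p, q) for q in reference)
        squared_sum += nearest ** 2
    return math.sqrt(squared_sum / len(reconstructed))
```

In practice one would replace the brute-force nearest-neighbor search with a k-d tree (e.g. `scipy.spatial.cKDTree`) and sample the reference points densely from the segmented bronchus surface, so the point-to-point distance approximates a point-to-surface distance.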