Abstract: | Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a well-established technique that combines bronchoscopy, ultrasound scanning of the airway and surrounding structures, and fine-needle aspiration of detected lesions for pathological examination. Compared with mediastinoscopy, computed tomography-guided needle biopsy, or thoracoscopic surgery, EBUS-TBNA is an important minimally invasive means of obtaining mediastinal tissue and achieves a high diagnostic accuracy. It is rapid, performed in real time, radiation-free, carries low rates of pneumothorax and hemothorax, and does not require general anesthesia, providing a faster, simpler, safer, and more accurate way to obtain tumor specimens for diagnosis. However, the procedure must be performed by an experienced bronchoscopist, who locates the tumor and lymph nodes on the ultrasound image before puncturing and aspirating the specimen. When a lesion is smaller than one centimeter or lies close to blood vessels, targeting the lesion becomes more difficult and the risk of bleeding from adjacent large vessels increases. This study therefore aimed to train a convolutional neural network (CNN) capable of automatically interpreting malignant ultrasound images.

This is a retrospective study in which transfer learning was used for model training and parameter tuning, and supervised learning was used to train deep convolutional neural networks. EBUS-TBNA ultrasound images and the corresponding post-biopsy pathology reports of 205 screened patients, collected at Taipei Medical University Hospital between May 2019 and December 2021, were reviewed. Model training was performed on two datasets: 311 region-of-interest (ROI) images and 309 full fan-shaped ultrasound images (full images). In each dataset, 80% of the images were used as the training set and 20% as the test set, and data augmentation was applied to the training set. Three pre-trained models, InceptionV3, ResNet101, and VGG19, were trained with 5-fold cross-validation, and performance was finally evaluated on the test set.

On the training set, the best accuracies of InceptionV3, ResNet101, and VGG19 for the ROI group were 77.2%, 78.1%, and 77.5% before augmentation, and 82.7%, 83.5%, and 90.1% after augmentation. For the full-image group, the three models reached 76.4%, 77.3%, and 74.8% before augmentation, and 81.1%, 83.9%, and 82.6% after augmentation. On the test set, augmentation slightly improved the interpretation accuracy of the ROI group, from an average of 81.3% to 82.0%, and markedly improved that of the full-image group, from 64.3% to 71.3%.

These deep-learning results show the potential to accurately identify malignancy in processed ROI or full EBUS images of lung lesions, mediastinal masses, and lymph nodes. The trained convolutional neural network can automatically interpret malignant ultrasound images and guide clinicians to the correct lesion location, so that needle aspiration can target the most suspicious area. This improves the diagnostic accuracy of tumor and lymph-node biopsy and helps avoid or reduce bleeding. |
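The data protocol described in the abstract, an 80/20 train/test split followed by 5-fold cross-validation on the training set, can be sketched in plain Python. This is a minimal illustration of the splitting logic only; the function names and the fixed random seed are illustrative assumptions, not details from the study:

```python
import random

def split_train_test(items, test_frac=0.2, seed=42):
    """Shuffle a list of image IDs and split it 80/20 into train/test sets.

    The seed is fixed here only so the sketch is reproducible; the study
    does not specify how its split was randomized.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

def kfold_indices(n, k=5):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation.

    Each of the k folds serves once as the validation set while the
    remaining k-1 folds form the training portion.
    """
    indices = list(range(n))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        val = folds[i]
        train = [idx for j in range(k) if j != i for idx in folds[j]]
        yield train, val

# Example with the ROI dataset size from the abstract (311 images):
train_ids, test_ids = split_train_test(list(range(311)))
folds = list(kfold_indices(len(train_ids), k=5))
```

With 311 ROI images, this split yields 249 training and 62 test images, and each cross-validation fold holds out roughly 50 of the 249 training images for validation.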
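The abstract reports that data augmentation improved training and test accuracy but does not specify which transforms were applied. As a hedged illustration only, simple label-preserving geometric transforms such as flips and rotations are a common choice for image augmentation; the transforms below are assumptions for demonstration, not the study's actual pipeline:

```python
def hflip(img):
    """Horizontal flip: mirror each row of a 2D pixel grid."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the order of the rows."""
    return [row[:] for row in img[::-1]]

def rot90(img):
    """Rotate 90 degrees clockwise: reverse rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus three label-preserving variants,
    quadrupling the effective training-set size for this image."""
    return [img, hflip(img), vflip(img), rot90(img)]

# A 2x2 toy "image" makes the transforms easy to verify by hand:
variants = augment([[1, 2],
                    [3, 4]])
```

Each variant keeps the same malignant/benign label as the source image, which is what lets augmentation enlarge the training set without new annotation work.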