Abstract: Many life-supporting devices are used in the Intensive Care Unit (ICU) for patient treatment. Mechanical ventilation is an important treatment modality in critical care for patients with respiratory failure. A connecting device is necessary to connect the patient to the ventilator when using mechanical ventilation. In clinical practice, the endotracheal tube (ETT) is currently the most commonly used device. For patients intubated for ventilator use, confirming the endotracheal tube's position is crucial. The endotracheal tube must be located inside the trachea and at a proper depth. A malpositioned tube, for example one placed too low or too high, can compromise patient care or even cause patient injury.
At present, chest X-ray (CXR) remains the first-line choice to confirm endotracheal tube position. CXR is a fast, convenient, and readily available tool in many clinical settings. However, it takes expertise and experience for a clinician to determine the endotracheal tube's location precisely. Clinicians may not check the CXR result in a timely manner due to an overwhelming clinical workload, which may lead to preventable patient injuries. Artificial intelligence (AI) may provide a solution to this clinical dilemma. AI is an emerging field with many promising applications. AI-assisted CXR reading is one of the most actively developed research topics in recent years and already has many clinical applications. Therefore, our study aims to develop an AI algorithm for endotracheal tube position determination on the CXR. CXR images were obtained from ICU patients admitted to Taipei Medical University Hospital (TMUH) between 2019/01 and 2020/06, and 4,293 CXR JPEG files were retrieved. The CXR images were reviewed by three senior intensivists and labeled "CORRECT (CO)" or "INCORRECT (INCO)" according to the position of the endotracheal tube; these labels served as the ground truth in our study. The responsible intensivist also cropped out a region of interest (ROI) containing four essential landmarks: the head of the right clavicle, the head of the left clavicle, the carina, and the tip of the endotracheal tube. We wrote a Python program to perform transfer learning with pre-trained models (VGG16, InceptionV3, ResNet, DenseNet169), training the AI model by supervised learning to classify the endotracheal tube's position on the CXR. After model fitting, we evaluated each model's performance using several methods, including test accuracy, the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUROC), and the confusion matrix. Most models performed better on ROI images, since these images contain less noise.
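The ROI-cropping step described above can be sketched as follows. This is a hypothetical illustration, not the study's actual annotation tool: the `crop_roi` helper, the landmark coordinates, and the margin are all made-up assumptions; in practice the intensivist marked the landmarks manually on each CXR.

```python
# Hypothetical sketch of ROI cropping: given pixel coordinates of the four
# landmarks (both clavicle heads, the carina, and the ETT tip), crop the
# smallest box containing them plus a safety margin.
from PIL import Image


def crop_roi(image, landmarks, margin=40):
    """landmarks: iterable of (x, y) pixel coordinates."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    left = max(min(xs) - margin, 0)
    top = max(min(ys) - margin, 0)
    right = min(max(xs) + margin, image.width)
    bottom = min(max(ys) + margin, image.height)
    return image.crop((left, top, right, bottom))


# Example with a blank placeholder image and made-up landmark positions
# (right clavicle head, left clavicle head, carina, ETT tip).
cxr = Image.new("L", (1024, 1024))
roi = crop_roi(cxr, [(380, 300), (620, 300), (500, 520), (505, 430)])
```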
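The transfer-learning setup can be sketched as below, using VGG16 as a frozen feature extractor with a small binary classification head (CORRECT vs. INCORRECT). The input size, head layers, and optimizer settings are illustrative assumptions, not the exact configuration used in the study.

```python
# Sketch of transfer learning with a pre-trained VGG16 backbone for
# binary ETT-position classification (supervised learning).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def build_ett_classifier(weights=None):
    # Pass weights="imagenet" to load pre-trained weights for actual
    # transfer learning (left as None here to avoid the download).
    base = VGG16(weights=weights, include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the convolutional base; train only the head
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # P(ETT position is CORRECT)
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy", tf.keras.metrics.AUC(name="auroc")],
    )
    return model


model = build_ett_classifier()
```

The same head can be attached to InceptionV3, ResNet, or DenseNet169 from `tensorflow.keras.applications` by swapping the `base` model.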
However, most models showed severe overfitting and poor performance, with AUROC of about 50-60%. The pre-trained VGG16 with a Tensor Projection Layer (TPL) and the plain pre-trained VGG16, both using ROI images, yielded the best results, with AUROC of 92% and 82%, respectively. For the unsatisfactory results, we consider substantial image noise and inadequate hyperparameter tuning to be the primary causes of the poor performance in this study. The TPL provided a dimension-reduction effect and delivered excellent performance in our study. This study demonstrates the feasibility of using transfer learning to develop a computer-aided diagnosis (CAD) system for ETT position assessment on chest X-rays, as well as its potential for other types of medical image interpretation.
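The evaluation step can be sketched as follows: given a model's predicted probabilities on a held-out test set, compute test accuracy, AUROC, and the confusion matrix. The labels and scores below are toy values for illustration, not data from the study.

```python
# Minimal sketch of the model-evaluation step with scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])  # 1 = CORRECT, 0 = INCORRECT
y_score = np.array([0.9, 0.8, 0.3, 0.4, 0.2, 0.7, 0.6, 0.1])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)  # threshold at 0.5

auroc = roc_auc_score(y_true, y_score)  # area under the ROC curve
acc = accuracy_score(y_true, y_pred)  # test accuracy
cm = confusion_matrix(y_true, y_pred)  # rows: true class, cols: predicted class
print(f"AUROC = {auroc:.3f}, accuracy = {acc:.3f}")
print(cm)
```

With these toy values the script reports an AUROC of 0.875, an accuracy of 0.75, and a confusion matrix of `[[3, 1], [1, 3]]`.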