A contextual detector of surgical tools in laparoscopic videos using deep learning.
Ganesh Sankaranarayanan, Ph.D.
Namazi, B., Sankaranarayanan, G. and Devarajan, V. (2021). “A contextual detector of surgical tools in laparoscopic videos using deep learning.” Surg Endosc Feb 8. [Epub ahead of print].
BACKGROUND: The complexity of laparoscopy requires special training and assessment. Analyzing the streaming videos during surgery can potentially improve surgical education. The tedium and cost of such an analysis can be dramatically reduced by, among other things, an automated tool detection system. We propose a new multilabel classifier, called LapTool-Net, to detect the presence of surgical tools in each frame of a laparoscopic video.

METHODS: The novelty of LapTool-Net is its exploitation of the correlations among the usage of different tools, and between the tools and tasks, i.e., the context of the tools' usage. Toward this goal, the pattern in the co-occurrence of the tools is utilized to design a decision policy for the multilabel classifier based on a Recurrent Convolutional Neural Network (RCNN), which is trained in an end-to-end manner. In a post-processing step, the predictions are corrected by modeling the long-term order of the tasks with an RNN.

RESULTS: LapTool-Net was trained using publicly available datasets of laparoscopic cholecystectomy, viz., M2CAI16 and Cholec80. For M2CAI16, our exact-match accuracies (when all the tools in one frame are predicted correctly) in online and offline modes were 80.95% and 81.84%, with per-class F1-scores of 88.29% and 90.53%. For Cholec80, the accuracies were 85.77% and 91.92%, with F1-scores of 93.10% and 96.11% for online and offline modes, respectively.

CONCLUSIONS: The results show that LapTool-Net significantly outperformed state-of-the-art methods, even while using fewer training samples and a shallower architecture. Our context-aware model does not require experts' domain-specific knowledge, and its simple architecture can potentially improve all existing methods.
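The co-occurrence-based decision policy described in the abstract can be illustrated with a minimal sketch: instead of thresholding each tool's probability independently, the classifier is restricted to the tool combinations actually observed in the training data, and the combination with the highest joint likelihood is chosen. The function name, the example probabilities, and the set of valid combinations below are all hypothetical, purely for illustration; the paper's actual policy is learned end-to-end within the RCNN.

```python
import math

def predict_combination(tool_probs, valid_combos):
    """Pick the tool combination with the highest joint log-likelihood.

    tool_probs:   dict mapping tool name -> per-frame presence probability
                  (e.g., from a per-tool sigmoid output).
    valid_combos: iterable of frozensets of tool names; only combinations
                  that co-occurred in the training set are allowed.
    """
    def log_likelihood(combo):
        # Assume per-tool independence given the frame; sum log-probs of
        # each tool being present (in combo) or absent (not in combo).
        return sum(
            math.log(p) if tool in combo else math.log(1.0 - p)
            for tool, p in tool_probs.items()
        )

    return max(valid_combos, key=log_likelihood)

# Hypothetical per-frame probabilities for three tools:
probs = {"grasper": 0.9, "hook": 0.8, "clipper": 0.1}

# Hypothetical combinations observed during training; note that
# {"hook", "clipper"} never co-occurs, so it can never be predicted.
valid = [
    frozenset(),
    frozenset({"grasper"}),
    frozenset({"grasper", "hook"}),
    frozenset({"clipper"}),
]

print(sorted(predict_combination(probs, valid)))  # → ['grasper', 'hook']
```

Ruling out combinations that never co-occur is what lets the multilabel classifier benefit from context: an individually plausible tool can be suppressed when it is inconsistent with the other tools detected in the same frame.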