| Type | Semester project |
| Split | 50% implementation, 30% theory, 20% literature review |
| Knowledge | Programming skills: Python. Experience in deep learning libraries and action recognition is a plus. |
| Subjects | Feature extraction, workflow segmentation, time series analysis. |
| Supervision | Luyin Hu, Soheil Gholami |

In surgical training, mastering anastomosis, a critical and widely practiced procedure, is a key indicator of a trainee’s proficiency. Quantitative assessment of this intricate task, however, remains a significant challenge. Action recognition in microscopic videos offers a novel and objective way to evaluate the learning outcomes of surgical skill training. By analyzing the detailed movements captured during anastomosis, such a system provides a quantitative framework for measuring skill acquisition and deeper insight into how surgical expertise develops.
Approach
- Reviewing spatial and temporal feature extraction methods for video analysis, e.g., action triplet recognition [1]; a feature-extraction sketch follows this list.
- Developing a framework for automatic action recognition and workflow segmentation for the customized surgical video dataset (see the segmentation sketch below).
- Assessing the framework’s efficacy on public surgical action recognition datasets (see the evaluation sketch below).
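As a starting point, the sketch below shows per-clip spatio-temporal feature extraction with a Kinetics-pretrained 3D CNN (torchvision’s r3d_18). The backbone, clip length, and input resolution are assumptions for illustration; the project may settle on a different architecture.

```python
# Minimal sketch: per-clip spatio-temporal features from a pretrained 3D CNN.
# Backbone (r3d_18), clip length, and 112x112 resolution are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

def build_feature_extractor() -> nn.Module:
    """Kinetics-pretrained R3D-18 with the classifier head removed,
    so each clip maps to a 512-d feature vector."""
    model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
    model.fc = nn.Identity()  # drop the 400-way Kinetics classifier
    return model.eval()

@torch.no_grad()
def extract_clip_features(model: nn.Module, video: torch.Tensor,
                          clip_len: int = 16) -> torch.Tensor:
    """video: (C, T, H, W) float tensor, already normalized.
    Returns (num_clips, 512) features from non-overlapping clips."""
    _, t, _, _ = video.shape
    clips = [video[:, s:s + clip_len]
             for s in range(0, t - clip_len + 1, clip_len)]
    batch = torch.stack(clips)  # (num_clips, C, clip_len, H, W)
    return model(batch)

# Example on dummy data: 80 frames (~5 s at 16 fps) at 112x112.
model = build_feature_extractor()
features = extract_clip_features(model, torch.randn(3, 80, 112, 112))
print(features.shape)  # torch.Size([5, 512])
```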
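For the workflow-segmentation step, one simple baseline, sketched below under the assumption that the clip features above are available, is a bidirectional GRU that emits a phase label per clip, followed by a mode filter to suppress single-clip flicker. The phase count and hidden size are placeholders; temporal convolutional networks (e.g., MS-TCN-style models) are a common alternative.

```python
# Minimal sketch: per-clip phase labeling with a bidirectional GRU plus
# mode-filter smoothing. Phase count and hidden size are placeholders;
# the model is untrained here and serves only to illustrate the shapes.
import torch
import torch.nn as nn

class PhaseSegmenter(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128,
                 num_phases: int = 5):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_phases)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """feats: (B, T, feat_dim) -> (B, T, num_phases) logits."""
        out, _ = self.gru(feats)
        return self.head(out)

def smooth_labels(labels: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Sliding mode filter (k odd) to remove isolated label flips."""
    pad = k // 2
    padded = torch.cat([labels[:pad].flip(0), labels, labels[-pad:].flip(0)])
    windows = padded.unfold(0, k, 1)  # (T, k)
    return windows.mode(dim=1).values

segmenter = PhaseSegmenter()
feats = torch.randn(1, 40, 512)                  # e.g., 40 clips of one video
labels = segmenter(feats).argmax(-1).squeeze(0)  # (40,) phase ids per clip
print(smooth_labels(labels))
```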
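For benchmarking on public datasets, frame-wise accuracy alone tends to reward long phases, so temporal action segmentation work usually also reports a segmental F1 score at an IoU threshold. The sketch below implements both from scratch; the 0.5 threshold and integer label encoding are assumptions.

```python
# Minimal sketch: frame-wise accuracy and segmental F1@IoU for a
# predicted phase sequence. Threshold and label encoding are assumptions.
from itertools import groupby

def segments(labels):
    """Run-length encode a label sequence into (label, start, end) segments."""
    out, i = [], 0
    for lab, run in groupby(labels):
        n = len(list(run))
        out.append((lab, i, i + n))
        i += n
    return out

def f1_at_iou(pred, gt, thresh=0.5):
    """A predicted segment is a true positive if it overlaps an unmatched
    ground-truth segment of the same label with IoU >= thresh."""
    p_segs, g_segs = segments(pred), segments(gt)
    matched = [False] * len(g_segs)
    tp = 0
    for lab, s, e in p_segs:
        best, best_j = 0.0, -1
        for j, (glab, gs, ge) in enumerate(g_segs):
            if glab != lab or matched[j]:
                continue
            inter = max(0, min(e, ge) - max(s, gs))
            union = max(e, ge) - min(s, gs)
            if inter / union > best:
                best, best_j = inter / union, j
        if best >= thresh and best_j >= 0:
            tp += 1
            matched[best_j] = True
    fp, fn = len(p_segs) - tp, len(g_segs) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Toy example: one frame is mislabeled but every phase segment is recovered.
pred = [0, 0, 1, 1, 1, 2, 2, 0, 0, 0]
gt   = [0, 0, 0, 1, 1, 2, 2, 0, 0, 0]
acc = sum(p == g for p, g in zip(pred, gt)) / len(gt)
print(f"frame-wise accuracy={acc:.2f}, F1@0.5={f1_at_iou(pred, gt):.2f}")
```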
Expectations
- The student is comfortable analyzing microsurgery videos that contain blood and tissue.
- The student knows Python and is willing to learn deep learning methods for action recognition (prior experience is a plus).
References
[1] Nwoye, C. I., et al., 2020. Recognition of instrument-tissue interactions in endoscopic videos via action triplets. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part III (pp. 364–374). Springer International Publishing.