Use of deep learning to objectively assess robotic surgical performance
Kyle Lam, Clinical Research Fellow
Principal Investigator: Mr Sanjay Purkayastha, Clinical Senior Lecturer in Bariatric Surgery
Department of Surgery and Cancer, St Mary’s Campus, Imperial College London
I am currently looking for 1 student to help with this study. You will be actively involved in recruitment, the day-to-day running of the experiment and, if you would like, the write-up of the study. Substantial contributions to this project will result in authorship on any output arising from this work.
• Able to spend time at the St Mary’s Campus
• Interest in surgery
• Any knowledge of machine learning, computer vision or coding experience
• Previous involvement in recruitment for experiments
• Previous research experience
Study Background (REC: 20IC6136):
With the arrival of new robotic platforms and the increasing uptake of robotic surgery, there is a pressing need to be able to rapidly and objectively assess surgeons. Acquisition of surgical trainee competencies has historically been based on case volume and assessment by supervising colleagues. More recently, there has been a move to validated objective rating scales such as the Objective Structured Assessment of Technical Skill (OSATS). OSATS has also been adapted to minimally invasive platforms, e.g. laparoscopic surgery (GOALS) and robotic surgery (GEARS, R-OSATS). These scales all share the same issues: they require considerable resources, both necessitating assessment by expert surgeons and being time-consuming to apply. Rating can also be highly subjective and prone to limited inter-rater reliability.
A potential solution to these problems could be to automate this process using machine learning. Machine learning offers an approach that would not necessitate expert surgeon assessment, would be rapid, and would be limited only by computing power. One machine learning approach explored in the literature is to use automated performance metrics. During robotic surgery, kinematic tracing data (e.g. instrument travelling distance, moving velocity, acceleration and deceleration), system events data (e.g. camera movement, third instrument swap, energy application, master clutch use) and instrument grip force can be recorded with devices such as dVLogger (Intuitive Surgical, USA).
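As an illustration only (the study's own metrics and data format may differ), kinematic metrics of the kind described above can be derived from sampled instrument-tip positions by finite differences; the function name and data layout below are assumptions, not part of the dVLogger interface:

```python
import math

def kinematic_metrics(positions, dt):
    """Sketch of simple automated performance metrics from instrument-tip
    positions sampled at a fixed interval dt (seconds).
    positions: list of (x, y, z) tuples (hypothetical layout)."""
    # Distance travelled between consecutive samples.
    steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    path_length = sum(steps)  # total instrument travelling distance
    # Instantaneous speed and acceleration by finite differences.
    speeds = [s / dt for s in steps]
    accels = [(v2 - v1) / dt for v1, v2 in zip(speeds, speeds[1:])]
    return {
        "path_length": path_length,
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "peak_accel": max(accels, default=0.0),
    }
```

In practice such per-task summaries, rather than the raw traces, are what a classifier of expertise would typically consume.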
An alternative approach is to use a computer vision-based model. Video clips of live surgery or benchtop surgical procedures have been successfully processed using deep learning models to classify the type of procedure and the level of expertise of the surgeon. The majority of this work has been performed on an existing dataset, the Johns Hopkins University–Intuitive Surgical Gesture and Skill Assessment Working Set (JIGSAWS), created prior to 2015. This study aims to develop a new, larger video dataset that builds upon previous work by increasing the number of benchtop procedures, the number of surgeons and the range of expertise, and by examining longitudinal surgical performance. Through this, we aim to develop and validate more accurate deep learning models for the assessment of surgical performance.
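One standard precaution when validating models on such a dataset (not stated in the source, and sketched here under an assumed data layout) is a surgeon-wise split, so that clips from the same surgeon never appear in both training and test sets and the model is evaluated on unseen operators:

```python
import random

def split_by_surgeon(clips, test_fraction=0.2, seed=0):
    """Group-wise train/test split: all clips from a given surgeon go to
    the same partition. clips: list of (surgeon_id, clip_path) pairs
    (hypothetical layout)."""
    surgeons = sorted({sid for sid, _ in clips})
    rng = random.Random(seed)          # seeded for reproducibility
    rng.shuffle(surgeons)
    n_test = max(1, round(len(surgeons) * test_fraction))
    test_ids = set(surgeons[:n_test])
    train = [c for c in clips if c[0] not in test_ids]
    test = [c for c in clips if c[0] in test_ids]
    return train, test
```

A random clip-level split would leak surgeon-specific style cues between partitions and inflate the apparent accuracy of an expertise classifier.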
Aim:
To determine whether deep learning can be used to accurately predict the level of surgical expertise and performance of surgeons performing robotic benchtop tasks.
Objectives:
• To establish a database of videos which can act as a platform for the development and validation of deep learning models to predict surgical expertise and performance in robotic benchtop tasks
• To determine differences in robotic skill acquisition between surgical novices and laparoscopic experts