[Figure: Schematic of process for classifier design]

Guest blog: 21st century surgery is digital

Ronan Cahill, Digital Surgery Unit, Mater Misericordiae University Hospital, Dublin, Ireland and UCD Centre for Precision Surgery, Dublin, Ireland.

Niall Hardy, UCD Centre for Precision Surgery, Dublin, Ireland.

Pol MacAonghusa, IBM Research, Dublin, Ireland.

Twitter @matersurgery Email: ronan.cahill@ucd.ie

Cancerous tissue behaves differently from non-cancerous tissue. Every academic oncology paper ever written tells us this. The appearances of any cancer primary (or indeed secondary lesion) result from the biological and molecular processes that are the hallmarks of malignancy, including dysregulated cell function and composition, host-cancer stromal and inflammatory responses, and angiogenesis. However, we surgeons haven't yet really been able to exploit this knowledge during surgery in a way that helps us perform a better operation. Instead, our learning and research about oncological cellular processes have predominantly advanced through basic science geared more towards perioperative prognostication and/or adjuvant therapy stratification. Wouldn't it be great if insight into these cancer microprocesses could usefully inform decision-making intraoperatively?

We’ve just published an initial report in the BJS showing this very thing: that it is indeed possible to ‘see’ cancer by its behaviour in real time intraoperatively. We used Artificial Intelligence (AI) methods in combination with near-infrared fluorescence laparoendoscopy to judge and classify neoplastic tissue by observing differential dye diffusion through the region of interest compared with that happening in the normal tissue viewed alongside it. Through our understanding of the biophysics (flow parameters and light/dye interaction properties), a great deal of information can be drawn out over short periods of time via advanced computer vision methods. With surgical video recorded at around 30 frames per second, large volumes of data accrue within a few minutes. While the gross signal shifts are discernible even without AI, smart machine learning capabilities make their interrogation genuinely usable, providing classification data within moments. What’s more, while we have focused initially on colorectal cancer, the processes we are exploiting seem common to other solid cancers and other camera-based imaging systems. By combining them with the considerable knowledge we have already accrued regarding tissue biology, chemistry and physics as they relate to surgery, our AI methods give explainable and, more importantly, interpretable recommendations with confidence, using a smaller dataset than that demanded by deep learning methodologies.
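The general idea of classifying tissue from how dye intensity changes over time can be sketched in code. The following is a minimal, purely illustrative Python example: the two-phase curve model, the features (time-to-peak and washout slope) and the thresholds are all assumptions made for demonstration, not the published classifier or its parameters.

```python
import numpy as np

def intensity_curve(t, t_peak, inflow, washout):
    """Toy two-phase model of ICG fluorescence over time:
    exponential inflow up to t_peak, then exponential washout."""
    rise = 1.0 - np.exp(-inflow * np.minimum(t, t_peak))
    decay = np.exp(-washout * np.maximum(t - t_peak, 0.0))
    return rise * decay

def extract_features(t, curve):
    """Summarise a time-intensity curve by time-to-peak and mean washout slope."""
    i_peak = int(np.argmax(curve))
    t_peak = t[i_peak]
    tail = curve[i_peak:]
    # mean slope of the washout phase (more negative = faster dye clearance)
    washout_slope = (tail[-1] - tail[0]) / (t[-1] - t_peak + 1e-9)
    return t_peak, washout_slope

def classify(t_peak, washout_slope, t_peak_cut=20.0, slope_cut=-0.005):
    """Illustrative rule: slow time-to-peak plus slow washout -> flag as suspicious."""
    if t_peak > t_peak_cut and washout_slope > slope_cut:
        return "suspicious"
    return "benign-pattern"

# ~30 frames per second over 2 minutes, as described for surgical video
t = np.arange(0, 120, 1 / 30)

# Synthetic curves: the lesion here is given slower inflow and washout
normal = intensity_curve(t, t_peak=12.0, inflow=0.4, washout=0.02)
lesion = intensity_curve(t, t_peak=35.0, inflow=0.1, washout=0.004)

print(classify(*extract_features(t, normal)))  # benign-pattern
print(classify(*extract_features(t, lesion)))  # suspicious
```

In practice the intensity series would come from tracked regions of interest in the fluorescence video rather than a synthetic model, and the classifier would be learned from annotated cases, but the pipeline shape (per-frame signal, curve features, classification) is the same.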

This, though, is just an early exemplar of what is becoming possible through ‘Digital Surgery’, a concept that seems far more likely to transform contemporary surgical practice than our current general surgery “robotic” systems: hulking electromechanical tools entirely dependent on the user, a rather 20th century concept! Indeed, there is sophisticated technology everywhere in today’s operating theatres; surgeons certainly don’t lack technical capability. Yet, often despite soaring costs, the advance in real, value-based outcomes over the last two decades has been disappointingly marginal by comparison. The key to evolved surgery will instead be assisting surgeons to make the best decision possible for each individual patient by providing useful, discerning information about the surgery happening right now, and somehow plugging each case’s circumstances directly into the broad knowledge bank of expertise we have accrued as a profession (rather than depending on any single surgeon’s own experience).

To do this we need to recognise the importance of visualisation in surgical procedures relative to manual dexterity. All surgery is performed through the visual interpretation of tissue appearances and proceeds via the perception-action cycle (‘sense, predict, act, adjust’). This is most evident during minimally invasive operations, where a camera displays internal images on a screen, but it applies to open procedures as well. As all intraoperative decisions are made by the surgeon, the entire purpose of surgical imaging has been to present the best (‘most visually appealing’) picture to the surgeon for this purpose. Experiential surgical training serves to develop the ‘surgical eye’: learning how to make qualitative intraoperative judgements reliably to a reasonable standard. We have not, however, got the most out of the computer attached to the camera beyond image processing, where we have concerned ourselves with display resolutions and widths.

Imagine instead if some useful added interpretations of images could be made without adding extra cognitive burden on the surgeon, perhaps with straightforward on-screen prompts to better personalise decisions. This would be particularly exciting if these data were not otherwise easily obtainable by human cognition alone and could be immediately and directly relevant to the person undergoing the operation. Every operation is in effect a unique undertaking, informed by probabilities accrued through individual and collective prior experience, certainly, but a new thing in and of itself, whose outcome at the time of its performance is unknown. How this individual patient differs from others, and most especially how an adverse outcome might be avoided, is crucial to flag before any irreversible surgical step that commits the patient to an inevitable future.

Right now, we are in a golden age of imaging, intricately linked to advances in computer processing and data-sharing power along with AI methods. We can now harvest great additional information from the natural world around us, across scales from the enormous (radio waves spanning the universe) to the tiny (high-resolution atomic imaging), and apply methods that help crystallise what it means to the observer. While much AI effort is being directed at the easier and safer areas of standard patient cohort datasets, it is increasingly possible to apply computer intelligence to the data-rich surgical video feeds generated routinely during operations and present insights to the surgeon. Early first steps at the moment concern rather bread-and-butter applications such as instrument or lesion recognition and tracking, digital subtraction of smoke, and anonymisation protocols to prevent inadvertent capture of operating room teams when the camera is outside the patient; soon, the capability to parse, segment and foretell the likely best next operative steps will be possible at scale.

At present, the biggest limitation is that surgery lacks the large warehoused archives of annotated imagery found in specialities such as radiology, pathology and ophthalmology, because operative video is a more complex dataset to scrutinise than those narrower image datasets. Thanks to advances in computing, this is changing. Surgical video aggregation to build representative cohorts is increasingly possible and, by combining it with metadata and surgical insights, its full value can begin to be realised. GDPR frameworks provide structure, and surgeons increasingly understand the value of collaborating in research, education and practice development. However, while certain siloed sites focused on specific industry projects are already appearing, the key area for greatest general advance lies in the surgical community combining broadly to construct appropriately developed, secured and curated video banks of procedures that can then be made accessible to entities ranging from regulators and standards bodies to academia and, indeed, corporations capable of advancing surgery. This offers by far the greatest chance of the best of surgical traditions carrying through the 21st century while our weak spots are fortified for better surgery in the public interest.

Further reading: 
Artificial intelligence indocyanine green (ICG) perfusion for colorectal cancer intra-operative tissue classification. Cahill RA, O’Shea DF, Khan MF, Khokhar HA, Epperlein JP, Mac Aonghusa PG, Nair R, Zhuk SM. Br J Surg. 2021 Jan 27;108(1):5-9. https://doi.org/10.1093/bjs/znaa004 PMID: 33640921

The age of surgical operative video big data – My bicycle or our park? Cahill RA, MacAonghusa P, Mortensen N. The Surgeon. 2021. Epub ahead of print. https://doi.org/10.1016/j.surge.2021.03.006

Ways of seeing – it’s all in the image. Cahill RA. Colorectal Dis. 2018 Jun;20(6):467-468. https://doi.org/10.1111/codi.14265 PMID: 29864253
