In October 2020, for a previous editorial in this journal, the term ‘artificial intelligence’ was searched in the National Library of Medicine via PubMed. At that time, the search yielded 110,855 publications.1 It is remarkable that performing the same search only a year later yields almost 28,500 further citations, the total now – as of 2 October 2021 – being 139,304 publications. The potential impact of artificial intelligence (AI) is now acknowledged even by its staunchest critics, and the possibility of a wonder thinking machine, ‘the Master Algorithm’ as computer scientist Pedro Domingos called it, is no longer just science fiction.2
Despite the recent explosion of interest in AI, the reality is that progress has been under way for almost 180 years. The first algorithm specifically tailored for implementation on a computing machine was published in 1843 by Ada Lovelace – the disciple of Charles Babbage and daughter of the great philhellene Lord Byron. Today, philosophers and futurists such as Nick Bostrom and Ray Kurzweil speak not of AI but of superintelligence, far exceeding the capacity and might of the human brain.3 After all, Garry Kasparov did lose at chess to IBM’s Deep Blue.
For clinicians – in particular electrophysiologists and arrhythmia experts – the power of AI has become equally apparent. There is now emerging evidence that AI may support diagnostics in electrophysiology by automating common clinical tasks or aiding complex ones, using deep neural networks that outperform currently implemented computerised algorithms.4 Soon, AI simulations of the circuit of monomorphic ventricular tachycardia may be used to guide catheter ablation, or even stereotactic radioablation, for a vast number of patients.5 Combining data obtained from several diagnostic modalities using AI might elucidate the pathophysiological mechanisms of new, rare or idiopathic cardiac diseases, aid the early detection or targeted treatment of cardiovascular diseases, or allow screening for disorders currently not associated with the ECG.4
Is all this the future, or just wishful thinking? Computer scientist and inventor Erik Larson has seriously challenged the notion of any supercomputer exceeding the human brain.6 This is not only a problem of Aristotelian deduction versus induction versus abductive inference: it seems to be a matter of logistics too. Rebecca Goldin, writing for the Genetic Literacy Project in response to President Obama’s 2013 announcement of a broad new research initiative to understand the human brain, provides perspective:7
“The human brain is estimated to have approximately 86 billion neurons (8.6 × 10¹⁰), each neuron with possibly tens of thousands of synaptic connections; these little conversation sites are where neurons exchange information. In total, there are likely to be more than a hundred trillion neuronal synapses – so a computer recording a simple binary piece of information about synapses, such as whether it fired in a time window or not, would require 100 terabytes. The amount of storage needed to store even this very simple information every second over the course of one day for one person would be more than 100,000 terabytes, or 100 petabytes. Supercomputers these days hold about 10 petabytes. And this quick calculation doesn’t account for the changes in connectivity and positioning of these synapses occurring over time. Counting how these connections change just after a good night’s sleep or a class in mathematics amounts to a whopping figure (and many more bytes than the estimated 10⁸⁰ atoms in the universe). The wiring problem seems intractable in its magnitude.”
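The quoted figures can be checked with a few lines of arithmetic. The sketch below (in Python, purely for illustration) reproduces them under the quote’s implicit assumptions – one byte recorded per synapse per time window, sampled once per second – assumptions that are not stated explicitly in the quote itself.

```python
# Back-of-envelope reproduction of the storage arithmetic quoted above.
# Assumption (not explicit in the quote): one byte per synapse per
# snapshot, with one snapshot taken every second.

SYNAPSES = 1e14            # ~100 trillion synaptic connections
BYTES_PER_SYNAPSE = 1      # one byte per synapse per snapshot (assumption)
SECONDS_PER_DAY = 86_400

TB = 1e12                  # bytes in a terabyte

snapshot = SYNAPSES * BYTES_PER_SYNAPSE   # one time window of the whole brain
day = snapshot * SECONDS_PER_DAY          # one snapshot per second, all day

print(f"One snapshot:    {snapshot / TB:,.0f} TB")   # 100 TB
print(f"One day at 1 Hz: {day / TB:,.0f} TB")        # 8,640,000 TB
```

A full day of such recording in fact comes to some 8.6 million terabytes, comfortably exceeding the quoted lower bound of 100,000 terabytes and only reinforcing the point about logistics.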
The evolution of quantum computers promises exponentially more computing power. Nevertheless, the fundamental problem remains: how can we imitate – let alone supersede – the human brain when we do not actually know how it works in all its complex, ever-changing functions in the time domain? The concept of the singularity, with its connotations of the Big Bang and Stephen Hawking’s quest for the unified theory of physics, has always been appealing to the human intellect. Whether this motivation results in a future human-made super-intelligent machine, and what the consequences of such an endeavour would be, remain to be seen. Ian McEwan’s most recent book reminds us of this dilemma.8 The importance of human complexity is perhaps illustrated, in its simplest form, by the case of the residents of Scunthorpe in North Lincolnshire, England, who, in the late 1990s, could not create AOL accounts owing to the company’s profanity algorithm detecting a certain ‘inappropriate’ word within the town’s name! It is hard to teach an algorithm to contextualise the way the human brain can.9
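The failure mode is easy to demonstrate. The sketch below is an illustration only – the word list and function names are hypothetical, and the internals of the actual AOL filter were never published – using an innocuous blocked string to show how naive substring matching flags innocent words, while even a crude word-boundary check (let alone genuine contextual understanding) does not.

```python
import re

# The 'Scunthorpe problem': a naive substring filter misfires on innocent
# words that merely contain a blocked string. An innocuous example word
# is used here in place of any actual profanity.
BLOCKLIST = ["ass"]

def naive_filter(text: str) -> bool:
    """Flag text if any blocked string appears anywhere inside it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches: one small step towards context."""
    return any(re.search(rf"\b{re.escape(bad)}\b", text.lower())
               for bad in BLOCKLIST)

print(naive_filter("Massachusetts"))          # True  (false positive)
print(word_boundary_filter("Massachusetts"))  # False (correctly passes)
```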
In contemporary medicine – and especially clinical electrophysiology and arrhythmia management – AI is promising, but only in the context of continuous validation of its diagnostic and predictive accuracy. As stated in an elegant review in Arrhythmia & Electrophysiology Review, trust in AI algorithms must be established before they are implemented in clinical practice.4 Perhaps one day machines and supercomputers will exceed human intelligence. Until then, electrophysiologists must rely on the best computer known today: their own brain. This, after all, is what creates superintelligence.
Demosthenes G Katritsis
Editor-in-Chief, Arrhythmia & Electrophysiology Review
Hygeia Hospital, Athens, Greece
Johns Hopkins School of Medicine, Baltimore, MD, US