Future of Radiology: AI and Deep Learning

The purpose of this write-up is to briefly educate the reader about artificial intelligence in radiology and its current and predicted future applications, and to answer the often-asked question, “Will AI replace radiologists?”


“The question of whether Machines Can Think is about as relevant as the question of whether Submarines Can Swim.” - Edsger Dijkstra, 1984


Why do we need AI in Radiology?

Radiologists interpret images using visual acuity, search patterns, and pattern recognition, and these skills improve with training and experience. However, not all of the information available in the images may be viewed, findings may be missed, and findings may be misinterpreted. Radiologists also face distractions in daily practice, and experience varies from one radiologist to another. There is evidence in the literature on the rates of missed breast cancers on mammograms and missed lung cancer nodules on radiographs and CT scans.


What is AI?

Artificial Intelligence (AI) has been around for about 70 years, and Clinical Decision Support Systems (CDS) have been in use since the 1970s. Enthusiasm for and commitment to AI are growing: smart companies are betting their future on AI, and funding for AI startups is expected to keep increasing, driving global revenue from AI in the years ahead.


At the time of this write-up, a PubMed search using the words ‘Artificial Intelligence’ returns 95,541 published articles, while a search for ‘Artificial Intelligence in Radiology’ returns 6,490.


Artificial Intelligence (AI):

The term Artificial Intelligence was coined by John McCarthy in 1956 and essentially means the capacity of machines or technologies to imitate, or improve upon, intelligent human behavior based on a specific set of instructions. An example is the Deep Blue computer of the 1990s, whose famous opponent was the chess grandmaster Garry Kasparov.


Machine Learning (ML): 

ML enables computers to learn, and even to ‘learn how to learn’, using algorithms (sets of rules a computer follows in its calculations) to parse data, learn from the data, and then make a determination or prediction about something in the world.


So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is ‘trained’ using large amounts of data and algorithms that give it the ability to learn how to perform the task. An example is spam call and spam email alerts: based on the initial input given by the user, the computer analyses whether subsequent emails or phone calls are spam or not.
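
As a rough illustration of how this “train rather than hand-code” idea works, here is a minimal sketch of a learned spam filter, assuming the scikit-learn library is available; the example messages and labels are purely hypothetical.

    # Minimal sketch of a learned spam filter: no hand-written spam rules,
    # the model infers word patterns from a few labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Initial input given by the user: labeled examples (1 = spam, 0 = not spam)
    emails = [
        "You have won a free prize, click now",
        "Lowest rates on loans, limited offer",
        "Team meeting moved to 3 pm tomorrow",
        "Please find the radiology report attached",
    ]
    labels = [1, 1, 0, 0]

    # Train: the model learns which word patterns distinguish spam
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    # The trained model then judges whether subsequent messages are spam
    print(model.predict(["Free offer, claim your prize now"]))  # expected: [1]
    print(model.predict(["The MRI report is attached"]))        # expected: [0]

The key point is that no spam-specific rules are written by hand; the same code would learn a different task if fed differently labeled data.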


There are four types of ML, based on how much information is fed to the machine. In supervised ML, the data are fully labeled / annotated before being fed to the machine; in unsupervised ML, raw unlabeled data are fed to the computer, which must find structure in them on its own. Semi-supervised learning uses a mix of labeled and unlabeled data, while in reinforcement learning the computer learns by trial and error, receiving rewards or penalties as feedback for its actions.
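
To make the distinction concrete, here is a small sketch contrasting supervised and unsupervised learning, again assuming scikit-learn; the feature values and the ‘cyst’ / ‘solid’ labels are hypothetical and only show where labels are, and are not, required.

    # Supervised vs. unsupervised learning on the same toy data.
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.cluster import KMeans

    features = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]

    # Supervised: every example comes with a label supplied by a human expert
    labels = ["cyst", "cyst", "solid", "solid"]
    clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
    print(clf.predict([[1.05, 1.0]]))   # -> ['cyst']

    # Unsupervised: only raw data are given; the algorithm finds groups itself
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    print(km.labels_)                   # two clusters discovered without labels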


Ground Truth: this is the cornerstone of ML. The computer bases all of its calculations on what we tell it, i.e. the label we attach to a particular structure, e.g. normal brain, abnormal pancreas, subdural bleed, etc. The computer stores the structure and its label in its database, and any subsequent data fed to it that resemble the ground truth will be correctly identified by the program. In radiology this requires a large number of well-labeled images, which is labor intensive (a CT scan takes about 4 hours to label and feed to the machine) and expensive.
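
As a small illustration of what such labeling produces, the sketch below (in Python, with hypothetical file names and labels) writes a simple ground-truth table pairing each study with its expert annotation; real radiology annotation is usually far richer, often including pixel-level outlines of the finding.

    # Minimal ground-truth table: each study is paired with its expert label.
    import csv

    ground_truth = [
        {"study": "ct_head_0001.nii", "label": "normal brain"},
        {"study": "ct_head_0002.nii", "label": "subdural bleed"},
        {"study": "ct_abd_0003.nii",  "label": "abnormal pancreas"},
    ]

    # The label file becomes the reference the model is trained and scored against
    with open("labels.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["study", "label"])
        writer.writeheader()
        writer.writerows(ground_truth)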


Deep Learning (DL): 

Deep Learning is a technique for implementing Machine Learning. It is inspired by the neural networks in our brain: just as the brain’s basic circuit has an afferent neuron, a synapse, and an efferent neuron, the AI counterpart is the Convolutional Neural Network (CNN), a large neural network arranged in layers, with an input layer, hidden layers, and an output layer arranged in three dimensions.

The artificial neural network must be ‘trained’ using training data sets from which the network ‘learns’. The deeper the network, i.e. the more layers it has, the better its performance, and deep learning CNNs keep learning over time as more data are provided.
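
A minimal sketch of such a network is shown below, assuming the Keras API of TensorFlow; the image size and the two output classes (e.g. normal vs. abnormal) are illustrative choices, not a recommended architecture.

    # A small CNN with an input layer, hidden (convolutional) layers,
    # and an output layer.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),        # input layer: a grayscale image
        layers.Conv2D(16, 3, activation="relu"),  # hidden layer: learns local patterns
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),  # deeper layer: more abstract features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),    # output layer: e.g. normal vs. abnormal
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()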


ImageNet:

Interest in AI has grown since 2012, when image recognition using AI, trained on ImageNet (a repository of millions of images of everyday objects), began to surpass humans; by 2016 the error rate in identifying everyday objects had fallen below that of humans, as shown in Figure 1.


So the way we train a deep learning system is by feeding it images as input. Say, for example, we feed it millions of images of penguins. We then test it with a new image of a penguin and see whether it can identify the penguin (output), or test it with an image of a dog and see whether it correctly identifies that it is not a penguin. What happens if the penguin wears a sweater, if there is a car close to the penguin, or if we show images of a penguin from behind? Initially the algorithm would not be able to identify it; however, it will learn over time as more such data are provided (reinforced learning). Similarly, if we feed images of simple renal cysts to the CNN and then add varying levels of complexity, i.e. hemorrhage, septa, calcifications, or a solid nodule, the CNN will learn to recognize these patterns over time.
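
The sketch below continues the hypothetical Keras example above to show this train-then-test workflow; random arrays stand in for the labeled images, which in practice would come from the annotated ground-truth data set.

    # Train the CNN defined in the earlier sketch, then test it on unseen data.
    import numpy as np

    # Hypothetical training data: 200 labeled 128x128 grayscale images
    x_train = np.random.rand(200, 128, 128, 1).astype("float32")
    y_train = np.random.randint(0, 2, size=(200,))   # 0 = simple cyst, 1 = complex lesion

    model.fit(x_train, y_train, epochs=3, batch_size=16)   # the network "learns"

    # A new, unseen image: predictions improve as more varied examples
    # (septa, hemorrhage, calcifications, ...) are added to the training data
    new_image = np.random.rand(1, 128, 128, 1).astype("float32")
    print(model.predict(new_image))                  # class probabilities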


What are the applications of AI in Radiology?

  1. Lesion detection and characterization.
  2. Assessment of response to treatment.
  3. Radiomics / texture analysis to predict prognosis and response to treatment.
  4. Imaging biobanks: imaging biomarkers to predict risk of disease and treatment response, and infrastructure to train AI models.
  5. Helping radiologists learn to protect patients from the weaknesses of AI.
  6. An incredible asset to underserved regions, and a valued assistant to the subspecialty radiologist.


A few of the current applications include:

  • Differentiation between pulmonary infections and fibrosis using CAD on CT, and using this for risk stratification and prognostication.
  • Discrimination of breast cancers with microcalcifications on mammography by deep learning.
  • Detection of high-grade small bowel obstruction on conventional radiographs with CNNs.
  • Fully automated deep learning systems for bone age assessment.
  • Deep learning for automated classification of pulmonary tuberculosis using CNNs.
  • Automated identification of critical test findings (hemorrhage, mass effect, hydrocephalus, and suspected acute infarct) on non-contrast CT of the head.
  • AI to detect pneumonia on chest X-rays.
  • Research underway at Johns Hopkins (Felix Projects I and II) to develop and apply deep learning algorithms to screen for pancreatic neoplasms using CT and MRI; the goal is a program that runs through all abdominal CTs and alerts the radiologist to an abnormal pancreas and a suspected mass.


Will AI Replace Radiologists?

With the initial hype around AI in 2015-17, there were numerous articles in which scientists and scholars predicted that AI would displace much of the work of radiologists and anatomical pathologists and improve diagnostic accuracy, while others said radiologists would be replaced by AI sooner than their executive assistants. One AI pioneer and Turing Award winner remarked, “We should stop training radiologists now.” However, as happens with any new technology in the “Hype Cycle”, the initial peak of inflated expectations was followed by a trough of disillusionment.


Some of the limitations of AI include:

1. Data sets and training
- Data set annotation is time consuming.
- Validation of the ground truth diagnosis MUST be robust.

2. Medico-legal responsibility
- Who is responsible for the diagnosis if radiologists are no longer the primary agents of interpretation?
- If radiologists validate AI interpretations, do they carry ultimate responsibility, even when they do not understand, and cannot interrogate, the precise means by which the diagnosis was determined?

3. Regulatory issues
- At present we rely on the ethical behavior of software developers and their collaborators.

4. Need for access to large volumes of data
- Anonymisation of data must be guaranteed.
- Intellectual property rights must be respected.

5. Data mining and radiomics
- The “data first, theory later” fallacy: algorithms may not always make sense or be relevant.

6. Ethical issues
- AI tools are mathematical constructs; intrinsically they are amoral.
- The challenge for humans is to anticipate how AI systems can go wrong or be abused, and to build in protections against this happening.


So what is the way forward?

AI should be incorporated into radiology training programs, so that trainees learn to integrate AI into practice.

Radiologists cannot, and should not, wish AI research away, but should rather embrace it and integrate it as much as possible into daily work to ensure maximum clinical benefit to patients from new developments.

It is the responsibility of radiologists to educate themselves and the managers of hospitals and healthcare systems, so as to maintain an appropriate and safe balance that protects patients while implementing the best of the new developments.


Future Predictions

  • Next 5 years 


AI will help flag exams with critical findings.
AI will auto-measure everything.
AI will start to identify some normal studies (normal mammogram, chest X-ray, head CT?).
AI will start to detect some pathology (breast cancer, intracranial blood, free air).
AI will help us get relevant data from the EMR.
AI will help us communicate better.


  • Next 5-10 years


When a radiologist opens a chest CT, AI will already have reviewed the patient’s EMR and images and will point out potential findings.

It will classify studies as normal vs. abnormal.

Radiologists will focus more on working up abnormal results, biopsies, treatment, and advances in technology.

Alexa-style digital assistants.


  • Next 5-10 years


Intelligent equipment:

Pre-acquisition
- Tube performance
- Scheduling
- Protocols
- Positioning

During acquisition
- Automatic dose modulation
- Adjustment for movement and breathing

Post-acquisition
- Reconstructions
- Segmentations
- 3D rendering


  • Next 10-20 years


Combining the patient’s biome and genome with biosensor, imaging, social, and home data to create a single holistic digital map of a person, which can be used to predict and prognosticate future disease.


  • Next 50-100 years


Smart scanners that identify and treat disease at the same time.

A memory chip for Alzheimer’s disease: if a person with AD gets lost, at the press of a button the implanted memory chip brings pre-fed memories into the hippocampal area of the brain so that they remember their name, home, and so on.

Biologic, genetic, and digital immortality. Humans may have digital avatars. Memories could be sent on a laser beam to the Moon or Mars; instead of transporting humans, their avatars at those locations could download the memories.




So, in conclusion:

As we are lifted by the latest AI bubble, “Will AI replace radiologists?” is the wrong question. The right answer is: Radiologists who use AI will replace Radiologists who don’t. 

- Curtis Langlotz, 2019

Radiology: Artificial Intelligence 2019; 1(3):e190058



