
Why is radiology AI adoption so slow?

Christoph Haarburger
4 min read

For several years, the radiology community has been surrounded by hype around artificial intelligence (AI), and the number of publications in this subject area is steadily increasing.

Even Turing Award winner and backpropagation co-inventor Geoffrey Hinton said in 2016 that we should stop training radiologists because AI would take over their work. Four years have passed since that statement, yet the everyday work of radiologists has hardly changed. In this post, I present five reasons for this discrepancy.

1 - Current AI tools can only solve one distinct problem each

Current machine learning algorithms are good at solving very precisely defined tasks, provided they can learn from large amounts of training data. The specific task for an AI application must be precisely defined a priori. For example, an algorithm trained to detect intracranial bleeding cannot be used to detect fractures; that would require a new training dataset with different annotations and a different optimization criterion.

For a broad application of radiology AI, as of today, a new model would need to be developed for each individual clinical question, which is extremely time-consuming and expensive.

2 - Datasets are rather small and of questionable quality

The major breakthroughs of AI outside of medicine have been achieved using very large datasets. AI vendors with B2C business models can usually draw on very large datasets of millions or billions of user records to train their algorithms. Netflix, for example, can use its own user data to train ML models that suggest movies its users are likely to enjoy.

In order for AI applications in radiology to fully exploit their potential, datasets would need to comprise tens of thousands of examinations, which is very rarely the case at the moment. Keep in mind that this applies to each clinical use case, as explained in point 1. In addition, the mere availability of images is not enough: for supervised learning, currently the best-performing paradigm in AI, training requires not only radiological image data but also the associated diagnoses on which the models are to be trained. These diagnoses, however, are often "hidden" in the prose of written reports, i.e. not machine-readable. As a result, they must be extracted from the reports in time-consuming manual work and stored in a machine-readable format.
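To make this concrete, here is a deliberately naive sketch of turning report prose into a machine-readable label. The function name, keyword patterns, and negation heuristic are all hypothetical illustrations; real label-extraction pipelines must handle negation, uncertainty, synonyms, and multilingual reports far more carefully, which is exactly why the work is so time-consuming.

```python
import re

def extract_hemorrhage_label(report_text: str) -> int:
    """Return 1 if the report asserts an intracranial hemorrhage, else 0.

    Naive keyword matching for illustration only; not clinically usable.
    """
    patterns = [
        r"intracranial h(a)?emorrhage",
        r"subdural h(a)?ematoma",
        r"intraparenchymal bleed",
    ]
    text = report_text.lower()
    for pattern in patterns:
        for match in re.finditer(pattern, text):
            # Crude negation check: ignore matches preceded by "no"/"without"
            # within the same sentence fragment.
            prefix = text[max(0, match.start() - 20):match.start()]
            if not re.search(r"\b(no|without)\b[^.]*$", prefix):
                return 1
    return 0

print(extract_hemorrhage_label("Findings: acute intracranial hemorrhage."))  # 1
print(extract_hemorrhage_label("No intracranial hemorrhage identified."))    # 0
```

Even this toy version shows why the problem is hard: every new phrasing, abbreviation, or negation pattern in the reports requires another rule, so in practice labels are curated manually or with dedicated NLP models.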

As a result of this time-consuming workflow, the datasets currently in use are often very small, comprising hundreds to a few thousand examinations. At this scale, it is difficult or even impossible for AI algorithms to realize their full potential.

3 - Limited reproducibility and generalization

AI systems are currently not very robust to slight changes in the input data. For example, an algorithm trained to recognize faces in passport photos from China might not provide reliable predictions for group photos from Europe, even though they also show human faces. If the data distribution differs only slightly, the behavior of the algorithm can differ a lot. This problem carries over to AI applications in radiology: the visual appearance of magnetic resonance images differs depending on magnetic field strength, the sequences used, and reconstruction algorithms. For AI applications to provide reliable value, they must be robust to these image acquisition parameters. For most methods in use, we do not know to what extent this is the case; I think this is currently the most under-researched question in radiology AI.
Another important factor for reproducible accuracy lies in the patient population. Most current radiology AI applications have not been evaluated extensively enough on a diverse patient population.

4 - Benefit for radiologists and patients unclear

For AI applications to offer radiologists added value, they must make their work more efficient and improve the quality of reports. A highly efficient integration into PACS/RIS systems and dictation software is therefore essential. However, few AI providers are also providers of PACS/RIS systems, so an efficient integration into the radiological workflow requires several software vendors to work closely together and integrate their respective products. This is already being tackled by companies like Osimis, but it will take time until highly integrated solutions are available at scale.

5 - Regulatory approval and R&D cost

Regulatory approval (CE marking or FDA clearance) is time-consuming and expensive. Extensive prospective studies with large patient populations are required to prove benefit and performance, not only for formal approval but also to convince potential customers of the soundness of the solution. The product development process is therefore more comparable to drug development in the pharmaceutical industry than to classical software development, which affects both R&D cost and time to market.


This post is by no means a rejection of AI applications in radiology. On the contrary, I am convinced that AI will play a major role in everyday radiology in the future. However, we are still at the beginning of this process. Questions about data quality, reproducibility, bias, workflow integration, and approval will play just as important a role as the pure accuracy percentages that are currently fueling the hype. AI will not make radiologists obsolete in the near future. Rather, as an "augmented intelligence," it will be another tool in a radiologist's repertoire, enabling them to work more efficiently and improve the quality of their reports.

This post was originally published in German on