Will it be possible to use AI to generate the scientific and clinical evidence that is currently produced through traditional randomized controlled trials?
There are many questions surrounding the development, limitations, and validity of AI and machine learning applied to healthcare solutions and products. Despite challenges such as dataset shift and automation bias, one can envision a future where AI-generated data is used to document clinical safety and efficacy, thereby replacing randomized controlled trials as we know them today. This would lead to a massive acceleration in the development of new healthcare solutions and products. However, there is still a long way to go. Another, perhaps more realistic scenario is broader use of AI-supported clinical research and drug development, where AI and machine learning become integrated tools for faster and less expensive development of safer and more efficacious healthcare solutions and products. This could include accurate and comprehensive prediction of desired and unwanted effects, individual disease progression, risk profiles, and prognosis, as well as identification of the patients who will benefit most from new therapies and their allocation to the optimal treatment option.
As a tool for use in the clinic, AI must meet clinical evidence requirements before implementation, just like other products and solutions for diagnosis or treatment.
Clinical evidence is generated by demonstrating safety and efficacy in controlled trials compared to an established standard.
Paradoxically, in the future it may be possible to use AI and machine learning to synthesize scientific and clinical evidence that is currently demonstrated via traditional randomized controlled trials.
However, there are still many questions surrounding the development, limitations, and validity of AI and machine learning for future healthcare solutions and products.
AI as a co-pilot in healthcare
AI has already found its place in healthcare, particularly in medical imaging, where it serves as a “co-pilot” supporting the traditional assessment of screening examinations and allows more efficient use of human resources.
For example, in randomized controlled trials, AI-supported mammography has shown the same cancer detection rate as mammography without AI assistance, but with only half the manual screen-reading time.
Other obvious potential benefits of AI include faster diagnoses, lower error rates, higher and more consistent treatment quality, and greater equity in healthcare.
However, there are also areas where AI’s possibilities are more speculative, and challenges and risks associated with its development, implementation, and use may be difficult to foresee.
Answers to well-defined questions
AI seems particularly useful for answering well-defined questions and solving well-defined tasks based on homogeneous datasets.
AI can identify and notify or warn of deviations from the norm, but this requires that the norm is known and incorporated into the algorithm.
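The requirement that the norm be explicitly known and built into the algorithm can be illustrated with a minimal sketch. The variable names and reference ranges below are hypothetical placeholders for illustration only, not clinical thresholds, and would in practice have to be defined and validated by clinicians:

```python
# Minimal sketch: flagging deviations from a known, explicitly encoded norm.
# The reference ranges below are illustrative placeholders, not clinical
# recommendations; the "norm" must be defined and validated by clinicians
# before being incorporated into the algorithm.

REFERENCE_RANGES = {
    "heart_rate_bpm": (60, 100),
    "systolic_bp_mmhg": (90, 140),
}

def flag_deviations(measurements: dict) -> list[str]:
    """Return a warning for every value outside the encoded norm."""
    warnings = []
    for name, value in measurements.items():
        low, high = REFERENCE_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            warnings.append(f"{name}={value} outside expected range [{low}, {high}]")
    return warnings

print(flag_deviations({"heart_rate_bpm": 128, "systolic_bp_mmhg": 118}))
```

The point of the sketch is simply that such a system can only warn about deviations from a norm it has been given; it cannot discover what "normal" should be.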
With machine learning, where the model learns from examples rather than being programmed with explicit rules, it is also possible to find patterns and relationships in historical data, which can provide insights into possible causal relationships that are difficult to obtain with traditional data analysis methods today.
However, there may be discrepancies between historical training datasets and prospective clinical data, making it difficult to feed AI and machine learning models with representative “real-life” values and examples that reflect the clinical questions of interest.
Challenges for AI
One of the problems with AI systems is dataset shift, where there is a mismatch between the data on which an AI model was developed and the data on which it is used, e.g. due to changes in the population, epidemiological conditions, underlying circumstances, and/or patient behavior.
This can lead to poor performance of AI models in practice despite good results in tests.
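One practical safeguard is to monitor whether the data a deployed model sees still resembles its training data. The sketch below is one simple illustration of such monitoring, not a complete shift-detection strategy; the age distributions are synthetic and the significance threshold is an arbitrary assumption:

```python
# Minimal sketch: checking for a possible dataset shift by comparing the
# distribution of one input feature (here, patient age) between historical
# training data and newly collected deployment data.
# The data are synthetic and the 0.05 threshold is an illustrative choice.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_age = rng.normal(55, 12, size=5000)   # population the model was trained on
deploy_age = rng.normal(63, 10, size=1200)  # older population seen in practice

statistic, p_value = ks_2samp(train_age, deploy_age)
if p_value < 0.05:
    print(f"Possible dataset shift (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant shift detected in this feature")
```

In a real deployment, such checks would cover many features and outcomes over time, and a detected shift would trigger review or retraining rather than an automatic conclusion.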
Another challenge is automation bias, where high expectations and excessive trust in AI and machine learning model results overshadow conflicting “manual” signals and information, leading to incorrect conclusions and decisions.
This can be particularly pronounced if the models are not designed to alert users to possible limitations or uncertainties in their output and/or if the underlying algorithms and data are opaque.
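A simple mitigation is to make uncertainty explicit in the output, so that borderline cases are handed back to the clinician rather than presented as definitive. The sketch below uses an illustrative logistic-regression model on synthetic data and a hypothetical confidence threshold of 0.8; the specific model, data, and threshold are assumptions for demonstration only:

```python
# Minimal sketch: routing low-confidence predictions to manual review
# instead of presenting every model output as definitive.
# Model, synthetic data, and the 0.8 threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(5, 4))
probabilities = model.predict_proba(X_new)[:, 1]

for i, p in enumerate(probabilities):
    confidence = max(p, 1 - p)
    if confidence < 0.8:
        print(f"Case {i}: uncertain (p={p:.2f}) -> refer for manual review")
    else:
        print(f"Case {i}: prediction={int(p >= 0.5)} (confidence={confidence:.2f})")
```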
Similarly, a lack of information about the underlying conditions and assumptions in model development and validation can create doubt and uncertainty about generalizability and usability.
The more complex the models become, the harder they are to understand and the more they may appear as a “black box” to users. Increased transparency should therefore be a prerequisite for the implementation and integration of AI and machine learning in clinical research.
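Such transparency can be operationalized in simple ways, for example by shipping a model together with a structured description of how it was developed and validated, in the spirit of published "model card" proposals. The sketch below is a hypothetical example of such a fact sheet; all field values are invented for illustration:

```python
# Minimal sketch: documenting the conditions and assumptions behind a model
# in a structured form so users can judge generalizability and usability.
# All field values are hypothetical examples, not a description of any real model.

model_fact_sheet = {
    "intended_use": "Decision support for mammography screen reading",
    "training_data": {
        "period": "2015-2020",
        "population": "Screening cohort, ages 50-69",
        "number_of_sites": 3,
    },
    "validation": {
        "design": "Retrospective external validation",
        "primary_metric": "Sensitivity at fixed specificity",
    },
    "known_limitations": [
        "Not validated for patients with breast implants",
        "Performance not assessed on mobile screening units",
    ],
}

for key, value in model_fact_sheet.items():
    print(f"{key}: {value}")
```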
Replacement for randomized trials
Despite the various challenges and limitations, one can envision a future where AI-generated data can be used as evidence of clinical safety and efficacy, potentially replacing randomized controlled clinical trials as we know them today.
This would lead to a massive acceleration in the development of new healthcare solutions and products. However, there is still a long way to go.
Another, more realistic scenario is broader use of AI-supported clinical research and drug development, where AI and machine learning do not eliminate the need for prospective clinical studies but instead become integrated tools for faster and cheaper development of safer and more efficacious solutions and products.
This could include accurate and comprehensive prediction of desired and unwanted effects, individual disease progression, risk profiles, and prognosis, as well as identification of the patients who will benefit most from new therapeutic approaches and their allocation to the optimal treatment option.
Maximizing the use of AI in clinical research will likely require modifications to commonly used statistical methods, as well as adjustments to regulatory and legal frameworks and structures.
Regardless of the fate of the classic randomized clinical trial, there is little doubt that AI and machine learning will play a significant role in the development of new and innovative healthcare solutions and drugs in the future.
And the better we understand the challenges and limitations of the technology and its application, the better we can harness its potential and avoid negative consequences such as dataset shift and automation bias.