Artificial intelligence is rapidly transforming healthcare, from assisting in complex surgeries to enhancing diagnostics. But a recent study in The Lancet Gastroenterology & Hepatology reveals a potential drawback: doctors who become too reliant on AI may lose critical skills. The research suggests that doctors who routinely use AI during certain procedures could see their diagnostic performance decline by as much as 20% when AI is unavailable.
For the first time, real-world data shows that consistent use of AI in diagnostic colonoscopies may lead to a measurable loss of skill among physicians. The study, which analyzed over 1,400 colonoscopies, found that experienced professionals who had grown used to AI support detected roughly 20% fewer pre-cancerous growths when performing procedures without it than they had before adopting AI.
While earlier studies celebrated AI's ability to boost the detection of pre-cancerous growths, this new research highlights a significant trade-off between efficiency and skill retention.
The findings raise concerns that extend beyond colonoscopies. According to Dr. Catherine Menon, a principal lecturer at the University of Hertfordshire's Department of Computer Science, this "de-skilling effect" could apply to other areas of medicine as well.
Dr. Menon warned that healthcare professionals who rely heavily on AI might perform worse than they did before adopting it if the AI suddenly becomes unavailable, for instance due to a cyberattack or system failure. In that scenario, over-reliance could leave patients worse off than before AI was introduced.
This study provides clinical evidence for what was once considered a theoretical risk. It adds weight to a long-standing concern in the medical community that technology, while powerful, can dull human expertise if used without careful consideration. The stakes are now clearer:
Patient safety risks: If AI tools fail unexpectedly, doctors may not be equipped to perform at their previous level.
Cybersecurity vulnerabilities: A compromised AI system could leave professionals unable to effectively step in.
Wider medical implications: The de-skilling observed in colonoscopies could also occur in fields like radiology, pathology, and even surgery.
The Lancet study doesn't suggest that AI in medicine is a bad thing. In fact, AI has been shown to improve detection rates in colonoscopy trials and other specialties. However, the findings underscore the urgent need for a balanced approach:
Ongoing training: Doctors should periodically perform diagnostics without AI to maintain their expertise.
Hybrid workflows: AI should be a tool that augments, not replaces, a doctor’s clinical judgment.
More research: We need more studies to understand the long-term effects of AI use across various medical disciplines.
In conclusion, the findings from The Lancet mark a pivotal moment in the discussion about AI in healthcare. While the technology offers undeniable benefits, unchecked reliance could undermine the very professionals it's meant to support. AI must be a partner, not a crutch. As medicine moves into the AI era, the challenge will be to embrace innovation while ensuring that human skills, the foundation of patient care, remain sharp, resilient, and irreplaceable.