In-context: September 4, 2025

Here’s a quick wrap of the four papers we found interesting over the last few weeks, with some take-home points.

  • 00:30 - Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study

  • 06:05 - Automation Bias in Large Language Model Assisted Diagnostic Reasoning Among AI-Trained Physicians

  • 10:15 - Emerging algorithmic bias: fairness drift as the next dimension of model maintenance and sustainability 

  • 15:20 - Evaluating Large Language Model Diagnostic Performance on JAMA Clinical Challenges via a Multi-Agent Conversational Framework

Some resources and papers we discuss:

Budzyn K, et al. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. The Lancet Gastroenterology & Hepatology. Published August 12, 2025. doi: 10.1016/S2468-1253(25)00133-5

Quazi AI, et al. Automation Bias in Large Language Model Assisted Diagnostic Reasoning Among AI-Trained Physicians. medRxiv 2025.08.23.25334280. doi: https://doi.org/10.1101/2025.08.23.25334280

Davis SE. Emerging algorithmic bias: fairness drift as the next dimension of model maintenance and sustainability. Journal of the American Medical Informatics Association. 2025;32(5):845–854. doi: https://doi.org/10.1093/jamia/ocaf039

Sangwon KL, et al. Evaluating Large Language Model Diagnostic Performance on JAMA Clinical Challenges via a Multi-Agent Conversational Framework. medRxiv 2025.08.20.25334087. doi: https://doi.org/10.1101/2025.08.20.25334087
