Practical Guide to Artificial Intelligence, Chatbots, and Large Language Models in Conducting and Reporting Research

This dossier presents a set of practical guides on artificial intelligence and large-scale data sources in surgical research

JAMA Surgery, in press, 2025, open-access article

Abstract (in English)

In 1966, Joseph Weizenbaum described the first chatbot: a computer program using natural language processing to decompose user sentences, identify key words in context, and generate appropriate responses. Weizenbaum anticipated and criticized the notion that his chatbot was intelligent: “…once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures.” Subsequently, decades of research enhanced natural language processing capabilities, most notably via large language models (LLMs, deep learning models that process and generate text, usually by leveraging transformer architectures), which underlie contemporary chatbots. Even LLMs—heralded for their potential to transform science, industry, and society—are mere collections of procedures. Yet already, an LLM trained on both clinical and general English language text can generate clinical notes that pass the Turing Test (ie, physicians could not consistently distinguish between human- and LLM-generated text) while maintaining high linguistic readability and clinical relevance. In surgery, LLMs may be particularly useful in extracting surgical risk factors or outcomes from clinical notes and operative reports, learning from text inputs when performing prognostication and decision-support tasks, and serving as educational tools for surgical concept summarization, active knowledge retrieval practice, and virtual surgical simulation. The purpose of this article is to summarize the limitations and opportunities in applying artificial intelligence (AI), chatbots, and LLMs in conducting and reporting surgical research (Box).