July 7, 2026: Doctoral Consortium, Tutorials, Welcome Reception
July 8-9, 2026: AIME Main Conference
July 8, 2026: Conference Gala Dinner
July 10, 2026: Workshops


Tutorial Program

Location: CRX building

T1. Appraise and Stress-Test Clinical AI: Calibration, Dataset Shift, and Post-Deployment Monitoring

Fares Alahdab, Randi Foraker and Anwar Chahal.

Half day.

Clinical prediction models and machine learning systems are being deployed in health systems, yet many adoption decisions still rely on headline performance metrics and incomplete reporting. This tutorial teaches a practical, method-focused workflow for deciding whether a model is ready for local use and what to monitor after deployment. Using a synthetic cohort and precomputed model outputs, participants will: (1) distinguish discrimination from calibration and quantify calibration error, (2) choose decision thresholds by linking model outputs to clinical consequences, (3) evaluate transportability under dataset shift (prevalence, measurement, and missingness changes), and (4) design a lightweight monitoring plan with explicit review and rollback triggers. The tutorial is hands-on and laptop-based, with a no-code workbook (Excel/Google Sheets) and an optional companion notebook for those who prefer code. Small groups conclude with an “adoption decision memo” (Go / No-Go / Go-with-conditions) plus a monitoring plan, mirroring real institutional governance deliverables. Participants leave with reusable templates for appraisal, decision documentation, and monitoring that can be adapted to new clinical domains.
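
One ingredient of the appraisal workflow above, quantifying calibration error, can be sketched in a few lines. This is an illustrative sketch on synthetic data, not the tutorial's workbook; the equal-width binning scheme and the synthetic cohort are assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by predicted probability and average the
    gap between mean predicted risk and observed event rate."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # last bin is closed on the right so probability 1.0 is counted
        mask = (probs >= lo) & (probs <= hi) if hi == 1.0 else (probs >= lo) & (probs < hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap
    return ece

# Synthetic cohort: the "overconfident" model ranks patients identically
# (same discrimination) but inflates every predicted risk (worse calibration).
rng = np.random.default_rng(0)
true_risk = rng.uniform(0.05, 0.6, size=5000)
outcomes = rng.binomial(1, true_risk)
overconfident = np.clip(true_risk * 1.5, 0.0, 1.0)

print(round(expected_calibration_error(true_risk, outcomes), 3))      # small
print(round(expected_calibration_error(overconfident, outcomes), 3))  # much larger
```

The point of the example is the tutorial's first objective in miniature: both models discriminate equally well, yet only one is usable for threshold-based decisions.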

T2. Interpretable AI and Evolving Knowledge in Medicine: From Probabilistic Reasoning and Graphs to Dynamic, Explainable Clinical Systems

Amit Rohila and Nidhi Malik.

Full day.

This tutorial gives participants a clear path to understanding self-evolving knowledge representations grounded in interpretable AI, with a focus on medicine and healthcare: how AI systems represent knowledge, how they handle uncertainty, and how they revise their knowledge as new evidence and guidelines emerge. The aim is to support clinical decision support, medical knowledge bases, and evidence-based practice by introducing ideas from probabilistic inference, graph-based intelligence, and dynamic knowledge representation, and connecting them to medical applications. We will proceed step by step, first establishing what AI in medicine actually entails, starting from a simple case and building up to the notions of probability and uncertainty. We will then turn to graphs and knowledge graphs, and show how these relate to medical coding systems such as SNOMED and ICD. Finally, we will discuss medical knowledge as dynamic, and the ways in which AI can be made more interpretable and adaptable in clinical environments through time-sensitive and geometry-informed models.
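
The tutorial's probability-and-uncertainty thread is well illustrated by the classic diagnostic-test posterior. A minimal sketch follows; the prevalence, sensitivity, and specificity values are assumed for illustration only, not taken from any guideline.

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """Bayes' rule: P(disease | positive test)."""
    p_pos_disease = sensitivity * prevalence          # true positives
    p_pos_healthy = (1 - specificity) * (1 - prevalence)  # false positives
    return p_pos_disease / (p_pos_disease + p_pos_healthy)

# Assumed illustrative numbers: 1% prevalence, 90% sensitivity, 95% specificity.
post = posterior_given_positive(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(round(post, 3))  # 0.154: a positive test still leaves ~85% chance of no disease
```

At low prevalence the posterior stays surprisingly small, which is exactly the kind of counterintuitive result the tutorial builds from.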

T3. Deployable AI in Healthcare: From Digital Health Operating Systems to Lifecycle Evaluation

Quynh Pham, Pedro Velmovitsky and Joseph Cafazzo.

Full day.

Despite rapid advances in AI for healthcare, most models never reach clinical deployment. The gap between technical performance and real-world impact persists due to misalignment with clinical workflows, insufficient evaluation across the AI lifecycle, and weak integration into care delivery infrastructure. This full-day tutorial bridges that gap through two complementary halves. Part I (Cafazzo & Pham) introduces AI as a digital health operating system—reframing AI from isolated tools to an orchestration layer for clinical workflows—covering deployable model selection, clinician-facing interface design, and hands-on prototyping with generative AI platforms. Part II (Velmovitsky & Pham) addresses full-lifecycle evaluation of clinical AI: from validation study design and regulatory pathways to post-deployment monitoring, bias detection, and real-world performance assessment. Participants leave with practical frameworks for building, evaluating, and sustaining AI systems that actually work in clinical settings.

T4. From LLMs to DNA: A Practical Guide to Genomic Foundation Models in Healthcare

Pablo Arozarena Donelli, Simone Rancati, Giovanna Nicora, Riccardo Bellazzi, Enea Parimbelli and Luigi Portinale.

Half day.

As genomic sequencing becomes increasingly integrated into clinical practice, the primary challenge is shifting from data generation to functional interpretation. Recent advances in artificial intelligence have introduced genomic foundation models, deep learning architectures trained on large collections of biological sequences that learn contextual representations of DNA. This tutorial provides a practical introduction to genomic foundation models and their emerging applications in medicine and epidemiology. Participants will explore the conceptual transition from natural language processing models to genomic sequence models, focusing on architectural adaptations required for biological data, including long-range dependencies and large genomic contexts. The tutorial will present recent developments in the field through selected case studies, including large-scale models such as Evo2. Participants will learn how these models can support biomedical tasks such as variant effect prediction, genomic representation learning, and pathogen genome analysis. Learning objectives include: (i) understanding the key architectural differences between NLP language models and genomic foundation models, (ii) evaluating their potential applications in clinical genomics and epidemiology, and (iii) exploring practical workflows for inference, adaptation, and evaluation of genomic AI models. The tutorial combines conceptual lectures with practical demonstrations to provide participants with both theoretical foundations and hands-on exposure to emerging genomic AI methods.

T5. From Radiomics to Dosiomics: Patterns, Tools and Challenges

Roberto Gatta, Unai Pérez Goya, Amaia Gastearena Irigoyen and Paola Jablonska.

Half day.

The need for diagnostic and prognostic predictive models based on radiological and nuclear medicine imaging provided fertile ground for the emergence of the field known as Radiomics. Radiomics has evolved while continuously addressing several critical aspects, including: (a) the protocols adopted in radiology for storing medical images, (b) signal preprocessing, (c) research on machine learning techniques to improve performance, and (d) issues related to model validation, reproducibility, and replicability of results. In addition, Radiomics is expanding beyond traditional radiology departments toward technologically related domains such as radiation therapy. Here, the integration of Radiomics with Theragnomics appears to provide a solid foundation for the emergence of a new discipline: Dosiomics. This tutorial introduces the main concepts and challenges behind Radiomics and Dosiomics, with a hands-on example.
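
As a flavor of what radiomic feature extraction involves, the sketch below computes a few first-order features (mean, standard deviation, histogram entropy) over a region of interest. It is a toy NumPy illustration on a synthetic array standing in for a DICOM slice; production radiomics pipelines use dedicated, standardized toolkits.

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """First-order radiomic features over a region of interest (ROI):
    mean intensity, standard deviation, and intensity-histogram entropy."""
    roi = np.asarray(image, dtype=float)[np.asarray(mask, dtype=bool)]
    hist, _ = np.histogram(roi, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    entropy = -np.sum(p * np.log2(p))  # bounded by log2(n_bins) = 5 bits here
    return {"mean": roi.mean(), "std": roi.std(), "entropy": entropy}

# Toy 2D "image" with a square ROI, standing in for a segmented lesion.
rng = np.random.default_rng(1)
img = rng.normal(100, 15, size=(64, 64))
msk = np.zeros((64, 64), dtype=bool)
msk[16:48, 16:48] = True
feats = first_order_features(img, msk)
print({k: round(float(v), 2) for k, v in feats.items()})
```

Higher-order (texture) features and the preprocessing choices that precede them are where the reproducibility challenges mentioned above become acute.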

T6. Responsible AI-Assisted Qualitative Data Analysis in Health Research: A Hands-On CFIR Tutorial

Zack Van Allen, L. Jayne Beselt, Douglas Archibald, Jerry Maniate and Arun Radhakrishnan. 

Half day.

Large language models are rapidly entering health research workflows, and understanding what it means to use AI for qualitative analysis (when these tools help, where they fail, and how to use them responsibly) has become important. This half-day hands-on tutorial introduces a human-in-the-loop workflow for AI-assisted qualitative data analysis using the Consolidated Framework for Implementation Research (CFIR). Participants will (1) apply CFIR to structure deductive coding of de-identified transcripts, (2) compare manual coding with outputs from free and higher-capability large language models, (3) refine prompts to improve transparency and reproducibility, (4) adjudicate disagreements and recognize predictable failure modes such as missed codes, over-assignment, and shallow rationales, and (5) identify ethical safeguards for privacy, bias, documentation, and human verification. Through live demonstrations and guided exercises, participants will leave with a reusable prompt scaffold, a practical framework for when AI helps or harms, and an ethics checklist they can adapt to their own projects.
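
To make the "prompt scaffold" idea concrete, here is a hypothetical sketch of one: a function that assembles a deductive-coding prompt from a codebook, and a parser that turns the model's delimited reply into records a human coder can adjudicate. The CFIR construct names and the output format are illustrative assumptions, not the tutorial's actual templates.

```python
# Hypothetical CFIR construct names for illustration; a real codebook
# would come from the published CFIR and the study's own definitions.
CFIR_CODES = ["Innovation Complexity", "Inner Setting: Culture", "Outer Setting: Policies"]

def build_coding_prompt(transcript_excerpt, codes=CFIR_CODES):
    """Assemble a deductive-coding prompt with an explicit, parseable format."""
    codebook = "\n".join(f"- {c}" for c in codes)
    return (
        "You are assisting with deductive qualitative coding using the "
        "Consolidated Framework for Implementation Research (CFIR).\n"
        f"Codebook:\n{codebook}\n\n"
        "For the excerpt below, list each applicable code on its own line as\n"
        "CODE: <name> | RATIONALE: <one sentence quoting the excerpt>.\n"
        "If no code applies, answer NONE.\n\n"
        f'Excerpt: "{transcript_excerpt}"'
    )

def parse_coding_response(text):
    """Parse 'CODE: ... | RATIONALE: ...' lines for human adjudication."""
    out = []
    for line in text.splitlines():
        if line.startswith("CODE:") and "| RATIONALE:" in line:
            code, rationale = line[len("CODE:"):].split("| RATIONALE:", 1)
            out.append({"code": code.strip(), "rationale": rationale.strip()})
    return out

reply = "CODE: Inner Setting: Culture | RATIONALE: The nurse describes unit norms."
print(parse_coding_response(reply))
```

Pinning the output format in the prompt is what makes model-versus-human comparison and disagreement adjudication tractable.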

T7. Artificial Intelligence Pipelines for Medical Imaging: From Preprocessing to Deep Learning in fMRI and CT

Chiara Pullega and Matteo Dallera. 

Half day.

Artificial intelligence (AI) methods are increasingly used to analyze medical imaging data for disease detection, quantitative analysis, and clinical research. However, applying AI to medical imaging requires carefully designed computational pipelines that transform raw imaging data into reliable machine learning inputs through preprocessing, feature extraction, and model development.
This tutorial provides a structured overview of end-to-end pipelines for AI-driven medical imaging analysis. Two complementary case studies are presented: functional magnetic resonance imaging (fMRI) for the analysis of brain activity and computed tomography (CT) for thoracic imaging. These examples illustrate how different imaging modalities require specific preprocessing strategies while sharing common methodological principles for AI pipeline design.

The tutorial introduces preprocessing tools and workflows, feature extraction strategies, and machine learning approaches commonly used in medical imaging analysis. Particular attention is given to challenges such as high data dimensionality, reproducibility, and generalization.

The goal is to provide participants with practical guidance for designing robust and reproducible AI pipelines for medical imaging applications.
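
As a tiny taste of the modality-specific preprocessing discussed above, the sketch below applies intensity windowing to a CT slice: clipping Hounsfield units to a window and rescaling to [0, 1] before feeding a network. The lung-window center and width used here are common illustrative defaults, not a recommendation.

```python
import numpy as np

def window_ct(hu, center=-600.0, width=1500.0):
    """Clip a CT image (Hounsfield units) to an intensity window and
    rescale to [0, 1]; a typical step before deep learning on CT."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(np.asarray(hu, dtype=float), lo, hi) - lo) / (hi - lo)

# Toy slice spanning air (-1000 HU) to bone (+700 HU).
slice_hu = np.array([[-1000.0, -600.0], [0.0, 700.0]])
print(window_ct(slice_hu))  # bone saturates to 1.0 in a lung window
```

fMRI preprocessing follows a very different recipe (motion correction, normalization, confound regression), which is exactly why the two case studies are instructive side by side.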

T8. Pragmatic Evaluation of Large Language Models in Healthcare

Hojjat Salmasian, Abdul Tariq, Ashley Oliver, Dhineshvikram Krishnamurthy and Jim Urick. 

Half day.

Large Language Models (LLMs) and LLM-based tools are becoming ubiquitous in healthcare. However, there is a dearth of analyses evaluating the efficacy of these tools. This tutorial has three learning objectives for participants:
(1) gain a rigorous conceptual understanding of how LLMs are built;
(2) become familiar with the various frameworks available in the literature for LLM evaluation; and
(3) observe three real-world case studies that demonstrate the use of these frameworks (and other open-source tools) to evaluate LLM deployments of increasing complexity.
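
A minimal evaluation-harness sketch gives a sense of what objective (2) involves at its simplest: scoring model answers against a labeled reference set under more than one matching criterion. The `model` below is a canned stand-in callable for illustration; a real deployment would wrap an LLM API client, and serious frameworks add far richer metrics.

```python
def evaluate(model, dataset):
    """Score a model on a labeled set with strict and lenient matching."""
    exact = lenient = 0
    for item in dataset:
        answer = model(item["prompt"]).strip().lower()
        ref = item["reference"].strip().lower()
        exact += answer == ref      # strict: exact string match
        lenient += ref in answer    # lenient: reference contained in answer
    n = len(dataset)
    return {"exact_match": exact / n, "containment": lenient / n}

# Canned answers standing in for an LLM, for illustration only.
canned = {
    "Which drug is typically first-line for type 2 diabetes?": "Metformin is typically first-line.",
    "What does HbA1c reflect?": "average blood glucose",
}
model = lambda prompt: canned.get(prompt, "")

dataset = [
    {"prompt": "Which drug is typically first-line for type 2 diabetes?", "reference": "metformin"},
    {"prompt": "What does HbA1c reflect?", "reference": "average blood glucose"},
]
print(evaluate(model, dataset))  # {'exact_match': 0.5, 'containment': 1.0}
```

The gap between the two scores shows why metric choice matters: free-text clinical answers rarely match references verbatim even when correct.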

T9. CARE-AI Framework: Contextual, Accountable, and Responsible Ethics for Artificial Intelligence in Healthcare

Lyn Sonnenberg, Jerry Maniate and Dan McEwen. 

Half day.

Artificial intelligence (AI) is rapidly transforming health professions education, research, and care. While enthusiasm is high, many educators and leaders struggle with how to implement AI responsibly and equitably. Existing ethical frameworks often remain abstract and difficult to apply in the day-to-day realities of academic medicine. To address this challenge, we developed and validated the Health CARE-AI Framework, a set of nine principles for responsible AI in health contexts, alongside a practical implementation toolkit that supports their integration into teaching, clinical decision-making, research oversight, and administrative leadership. This tutorial provides participants with a structured opportunity to engage with these principles, explore case-based applications, and consider how to adapt the implementation guide and toolkit to their own institutional and professional settings. By the end of this session, participants will be able to: describe the nine CARE-AI principles and their underlying rationale; apply these principles to analyze AI-related scenarios in health professions education, research, and care; identify ethical and equity considerations in their own contexts where AI is being introduced or scaled; and adapt components of the CARE-AI Toolkit to support responsible AI integration within their professional roles and institutions.