Intelligent clinical decision support with Markov decision processes

Dan Coroian, Kris Hauser

Collaborators: Sriraam Natarajan (IU), Shaun Grannis (IU School of Medicine and Regenstrief Institute)


Decision support tools promise to give clinicians a better understanding of treatment outcomes and costs by integrating billions of electronic health record (EHR) datapoints with expert recommendations and best practices. This project aims to advance the state of the art in artificial intelligence for decision support that can reason about patient-specific, temporally-extended treatment plans. In doing so, it will set forth a unified computational foundation for recommending optimal action plans - including both diagnosis and intervention actions - for treating chronic disease, multi-step and adaptive treatments, and long-term health habits. It will study two promising techniques from machine learning and artificial intelligence that reason both probabilistically, to address noisy and incomplete observations, and temporally, across multiple points in time: 1) statistical relational learning (SRL) for mining probabilistic, temporal patterns in massive EHRs, and 2) partially-observable Markov decision processes (POMDPs) that use these temporal probabilistic models to optimize sequential treatment plans.
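At the core of the (PO)MDP approach is the Bellman optimality backup, which trades off immediate treatment costs against expected long-term outcomes. The following is a minimal sketch of value iteration on a hypothetical three-state treatment MDP; the states, actions, transition probabilities, and rewards are illustrative placeholders, not values learned from the project's EHR data (a full POMDP would additionally maintain a belief distribution over states to handle partial observability).

```python
import numpy as np

# Hypothetical toy treatment MDP. All numbers below are illustrative only,
# not derived from clinical data.
states = ["sick", "recovering", "healthy"]
actions = ["treat", "wait"]

# P[a][s] = probability distribution over next states after action a in state s.
P = {
    "treat": np.array([[0.2, 0.6, 0.2],
                       [0.0, 0.3, 0.7],
                       [0.0, 0.0, 1.0]]),
    "wait":  np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.5, 0.4],
                       [0.0, 0.1, 0.9]]),
}

# R[a][s] = immediate reward: health utility minus treatment cost.
R = {
    "treat": np.array([-1.0, -0.5, 0.0]),
    "wait":  np.array([-0.2, -0.1, 0.5]),
}

def value_iteration(gamma=0.95, tol=1e-9):
    """Iterate the Bellman optimality backup to convergence."""
    V = np.zeros(len(states))
    while True:
        # Q[a, s] = expected discounted return of taking a in s, then acting optimally.
        Q = np.array([R[a] + gamma * P[a] @ V for a in actions])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, [actions[i] for i in Q.argmax(axis=0)]
        V = V_new

V, policy = value_iteration()
print(dict(zip(states, policy)))  # optimal action for each state
```

Unlike the "greedy" one-step lookahead discussed below, value iteration accounts for the discounted value of all future states, which is what allows an MDP planner to accept a short-term cost for a better long-term outcome.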

  • D. Coroian and K. Hauser. Learning Stroke Treatment Progression Models for an MDP Clinical Decision Support System. Submitted to SIAM Conf. on Data Mining, April 2015.
  • S. Yang, T. Khot, K. Kersting, G. Kunapuli, K. Hauser, and S. Natarajan. Learning from Imbalanced Data in Relational Domains: A Soft Margin Approach. To appear in IEEE Intl. Conference on Data Mining (ICDM), December 2014.
  • C. Bennett and K. Hauser. Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach. In Artificial Intelligence in Medicine, 2012. doi: 10.1016/j.artmed.2012.12.003.

Simulation results from POMDP-optimized treatments on the International Stroke Trial dataset suggest that an MDP approach may reduce the death rate from 20% (left) to 16% (right) and increase the full recovery rate from 19% to 30%. This figure highlights the need for MDP approaches: a "greedily" optimized one-step lookahead (center) leads to 6% more deaths than human doctors.

Simulation results from POMDP-optimized treatments on a clinical depression dataset indicate the potential for cost reductions of over 50% while improving outcomes by over 40%.