A Study of Explainable Decision Support for Longitudinal Sequential Decision Making
Description
Decision support systems aid the human-in-the-loop by enhancing the quality of decisions and the ease of making them in complex decision-making scenarios. In recent years, such systems have been empowered with automated techniques for sequential decision making, or planning, to effectively assist and cooperate with the human-in-the-loop. This has received significant recognition in both the automated planning and human-computer interaction (HCI) communities, as such systems connect key elements of automated planning in decision support to principles of naturalistic decision making studied in the HCI community. In addition to providing planning support, a decision support system must be able to offer its end users intuitive explanations for proposed decisions in response to specific user queries. Motivated by this, I consider scenarios where the user questions the system's suggestion by proposing alternatives (referred to as foils). In response, I empower existing decision support technologies to engage in an interactive explanatory dialogue with the user and to provide contrastive explanations based on the user-specified foils, with the aim of reaching consensus on proposed decisions. Furthermore, the foils specified by the user can be indicative of the user's latent preferences. I use this interpretation to equip existing decision support technologies with three interaction strategies that utilize the foil to provide revised plan suggestions. Finally, as part of my Master's thesis, I present RADAR-X, an extension of the proactive decision support system RADAR that showcases the above-mentioned capabilities. I also present a user-study evaluation that emphasizes the need for contrastive explanations, and a computational evaluation of the proposed interaction strategies.
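
To make the foil-driven dialogue concrete, the following is a minimal Python sketch of one round of such an interaction. It is an illustrative assumption, not the RADAR-X implementation: all names (Plan, is_executable, contrastive_explanation, dialogue_step) are hypothetical, and the simple validity and cost checks stand in for the planning machinery described in the thesis.

from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class Plan:
    actions: List[str]
    cost: float

def is_executable(plan: Plan, known_actions: Set[str]) -> bool:
    # Stand-in validity check; a real system would simulate the foil
    # against the planning model's preconditions and goal conditions.
    return all(a in known_actions for a in plan.actions)

def contrastive_explanation(suggestion: Plan, foil: Plan) -> str:
    # Explain why the system's suggestion is preferred over the foil.
    if foil.cost > suggestion.cost:
        return (f"The suggested plan costs {suggestion.cost}, while the "
                f"alternative costs {foil.cost}.")
    return "The alternative violates the planning model's constraints."

def dialogue_step(suggestion: Plan, foil: Plan,
                  known_actions: Set[str]) -> Tuple[Plan, Optional[str]]:
    # One round of the dialogue: if the foil is valid and no worse than the
    # suggestion, treat it as a latent-preference signal and adopt it as a
    # revised suggestion; otherwise, answer with a contrastive explanation.
    if is_executable(foil, known_actions) and foil.cost <= suggestion.cost:
        return foil, None
    return suggestion, contrastive_explanation(suggestion, foil)

if __name__ == "__main__":
    actions = {"load", "drive", "unload"}
    suggested = Plan(["load", "drive", "unload"], cost=3.0)
    user_foil = Plan(["load", "fly", "unload"], cost=2.0)
    plan, explanation = dialogue_step(suggested, user_foil, actions)
    print(explanation or f"Revised suggestion: {plan.actions}")

In this sketch, the two branches of dialogue_step mirror the two response modes described above: justifying the current suggestion contrastively against the foil, or revising the suggestion toward the foil when it reveals a preference the system had missed.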