Most health care diagnostics that use artificial intelligence (AI) function as black boxes: their results include no explanation of why the machine concluded that a patient has a particular disease or disorder. Although these AI technologies are extraordinarily powerful, their adoption in health care has been slow because doctors and regulators cannot verify their results. However, a newer class of approaches known as “explainable AI” (XAI) produces results that humans can readily understand.
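To make the contrast concrete, here is a minimal sketch, not taken from the article, of what an explainable prediction can look like. It uses a simple additive risk model whose output comes with per-feature contributions, next to a black-box version that returns only a bare score. All feature names, weights, and patient values are hypothetical and purely illustrative, not clinical.

```python
import math

# Hypothetical weights for a toy risk model (illustrative only, not clinical).
WEIGHTS = {"age": 0.04, "blood_pressure": 0.03, "cholesterol": 0.02}
BIAS = -7.0

def black_box_predict(patient):
    """Black box: returns only a risk score, with no explanation."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in patient.items())
    return 1 / (1 + math.exp(-z))

def explainable_predict(patient):
    """XAI-style: returns the score plus each feature's contribution,
    so a reader can see which inputs drove the prediction."""
    contributions = {f: WEIGHTS[f] * v for f, v in patient.items()}
    z = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-z))
    return risk, contributions

patient = {"age": 68, "blood_pressure": 150, "cholesterol": 240}
risk, why = explainable_predict(patient)
print(f"risk = {risk:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

In this toy setup, a doctor reviewing the explainable output sees not just a risk score but also that, say, blood pressure contributed most to it, which is the kind of verifiable reasoning the black-box version withholds.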