Explainable AI (XAI) is artificial intelligence that is designed to “explain” why it reached a given decision. The purpose of XAI is to lend transparency and credibility to the results generated by machine learning systems, especially deep learning models, whose internal reasoning is otherwise opaque. Explainable AI increases program accountability by recording and disclosing a program’s strengths, weaknesses, decision criteria, data sources and more. As AI models become more complex and their training datasets grow exponentially larger, there is a growing demand for XAI as a safeguard against misinformation.
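One common model-agnostic explanation technique is permutation importance: shuffle one input feature at a time and measure how much the model's error grows, attributing the model's decisions to the features that matter most. The sketch below is a minimal, hypothetical illustration using a toy linear model and synthetic data (none of it comes from a real system), not a definitive XAI implementation:

```python
import numpy as np

# Hypothetical illustration of one XAI technique: permutation importance.
# The model and data are synthetic toys, assumed for demonstration only.

rng = np.random.default_rng(0)

# Toy dataset: 3 features, but only the first two actually drive the target.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1]

# Stand-in "black box": a least-squares linear fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Shuffle each feature in turn; the bigger the error increase,
# the more the model relied on that feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)

for j, imp in enumerate(importances):
    print(f"feature {j}: importance {imp:.2f}")
```

Running this, feature 0 should show the largest importance and the irrelevant feature 2 an importance near zero, turning an opaque prediction into a ranked account of which inputs drove it.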