AI Explainability

AI Explainability in Contracting

Explainability instills trust. When done well, explainable AI (XAI) leads to a trusted system. This means providing an understanding of the system: being transparent about the system itself and about how it was developed, including the decisions made in pursuit of fairness, reliability, and accountability.

Explainability isn't just for end users. That is a narrow definition of explainability we have held to for too long. It is an important one, but when we look at the broader landscape of an AI system, many more stakeholders require explainability.

This reference guide broadens the view of explainability and offers critical insights into key XAI considerations during procurement and when contracting with AI providers. 

Key Stakeholders

AI explainability runs on a continuum, from highly technical explanations to clear, plain-language ones.

Each stakeholder has unique explainability needs.

Auditors & Assessors

Integrators & Security Staff

Procurement Professionals

Administrative Users

Impacted End Users

What matters most when it comes to explainability?

Models Matter

Simply put, some models are more transparent and explainable than others. Model choice matters.

Context Matters

Explainability is a matter of human dignity when the output results in a life-impacting decision.

Words Matter

AI systems are complex and highly technical. Only some stakeholders speak geek.

Timing Matters

Some stakeholders need the information all at once; others need it delivered in pieces throughout the system's lifecycle.

When it Matters Most

Explainability is a fundamental matter of human dignity when it comes to high-risk systems operated in the following domains:

Financial Services

Public Services / Benefits

Utilities / Infrastructure

Law Enforcement

Justice & Legal Services

Biometrics & Geo-Tracking

Product Safety Components

About the Authors

Dr. Cari L. Miller

Founder, Principal, and Lead Researcher at The Center for Inclusive Change. She is recognized globally as one of 100 Brilliant Women in AI Ethics and serves on the board of ForHumanity, an international non-profit organization developing AI audit criteria. She is also Vice Chair of the IEEE P3119 Working Group, which is drafting an international consensus-based standard for AI procurement. She holds a doctorate from Wilmington University with a research focus on AI governance and ethics.

Dr. Gisele Waters

Founder and Lead Researcher at Engineering Hearts®, exploring the nature of building human-centered artificial intelligence. She is recognized globally as one of 100 Brilliant Women in AI Ethics and serves as Chair of the IEEE P3119 Working Group, drafting an international consensus-based standard for AI procurement. A human-centered design researcher and healthcare service developer, she also advises digital health start-ups on building human-centered data science using AI-enabled analytics and remote patient monitoring.

Download a copy of AI Procurement: Explainability Best Practices.