Understanding Explainable AI: Getting AI to Comply Series, Part IX

This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.
In this week’s post, Quantiply’s Vamsi Koduru addresses the fear of AI and how greater transparency can assuage it.
Understanding Explainable AI
For the longest time, the public perception of AI has been linked to visions of the apocalypse: AI is Skynet, and we should be afraid of it. You can see that fear in the reactions to the Uber self-driving car tragedy. Despite the fact that people cause tens of thousands of automobile deaths per year, it strikes a nerve when even a single accident involves AI. This fear belies something very important about the technical infrastructure of the modern world: AI is already thoroughly baked in.
That’s not to say that there aren’t reasons to get skittish about our increasing reliance on AI technology. The black box problem is one such justified reason for hesitation.
Inside the Black Box Problem
The existing methods for understanding the results of machine learning models all have critical limitations. Ad hoc application of machine learning tools or visualizations may yield insights about the original model, but they do not tell us when to apply different methods, or when or why we expect these methods to mislead or fail. Approaches that are tailored to specific use cases, such as visualization and text generation, fail to provide the general principles that guide the programmatic development of explainable AI. Finally, any approach that relies on human technical experts to explain results to human domain experts faces bottlenecks due to the limited number and high cost of such experts.
While a challenge in almost any deployment scenario, in highly regulated industries, like finance or healthcare, the black box problem is a particularly daunting barrier. There is a natural reluctance to introduce anything new into the delicate dance of technology and compliance that exists in these spaces—and that reluctance is only intensified when that new thing is, more or less, unexplainable.
Getting to Explainable AI
Despite the fact that humans often struggle to precisely explain how they came to their decisions, what they do provide is, at the very least, intuitive: we can always refer to some driving sentiment or set of emotions that went into a given choice. That justification, however imprecise, is still closer to acceptable than the mystery of a black box.
So the question we’re facing is simple: How can we achieve that intuitive access with AI? How transparent can we get AI to be?
When I mention transparency, I’m not speaking at the mathematical level. Mathematical explainability underpins the soundness of the machinery, but explaining all the component minutiae is just too difficult, too zoomed in, and almost guarantees the audience will miss the forest for the trees. Instead, I’m speaking about context, the level where the major developments in a decision chain result in a given output. The numbers need a narrative around them. The business needs this information so it can internalize it and transfer it to future problems, even after the underlying concepts have drifted.
Fundamentally, addressing an AI system at the process level means visualizing the model as a pipeline: Break the system down into segmented processes and describe each step taken on the journey from data to decision. I believe, for example, that many applications can be illustrated as the following 9-step process:
- Raw Data. The raw data source (database access) or files
- Data Views. Views on the problem defined as queries or flat files
- Data Partitions. Splitting of data views into cross-validation folds, train/test folds, and any other folds needed to evaluate models and make predictions.
- Analysis. Summary of a data view using descriptive statistics and plots.
- Models. Machine learning algorithm and configuration that together with input data are used to construct a model just-in-time.
- Model Outputs. The raw results for models on all data partitions for all data views.
- Ensembles. Ensemble algorithms and configurations designed to create blends of model outputs for all data partitions on all data views.
- Scorecard. A local scoreboard that describes all completed runs and their scores, sorted and summarized.
- Predictions. Final predictions for deploying into production.
If an AI system is treated as a flow, rather than a single jump from data to prediction, and we flesh out the key phases that happen in between, then it becomes much easier to provide line of sight, understandability, and general explainability into how a model gets from A to Z. A minimal sketch of that flow follows.
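Here is one way the nine-step flow could look in code, using scikit-learn on a synthetic tabular problem. The stage names mirror the list above; the dataset, the model choice, and the scoring are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of the nine-stage flow, on synthetic data.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

# 1. Raw Data - in practice a database or flat files; here, synthetic data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# 2. Data Views - a query-like view over the raw data.
view = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(10)])
view["label"] = y

# 3. Data Partitions - train/test split plus cross-validation folds.
train, test = train_test_split(view, test_size=0.2, random_state=42)

# 4. Analysis - descriptive statistics summarizing the view.
print(train.describe().loc[["mean", "std"]])

# 5. Models - algorithm plus configuration, built just-in-time.
model = LogisticRegression(max_iter=1000)

# 6. Model Outputs - raw results on every partition.
cv_scores = cross_val_score(model, train.drop(columns="label"), train["label"], cv=5)

# 7. Ensembles - blends of model outputs (a single model here, for brevity).

# 8. Scorecard - all completed runs and their scores, summarized.
print("CV accuracy per fold:", cv_scores.round(3), "mean:", cv_scores.mean().round(3))

# 9. Predictions - final predictions ready for deployment.
model.fit(train.drop(columns="label"), train["label"])
predictions = model.predict(test.drop(columns="label"))
```

Each stage produces an artifact (a view, a partition, a score) that can be inspected on its own, which is exactly the line of sight the narrative needs.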
Explainable AI Is Not Just for Compliance
The ability to explain the rationale behind one’s decisions to other people is an important aspect of human intelligence. It matters in social interactions (a person who never reveals their intentions and thoughts will most probably be regarded as a “strange fellow”), and it is crucial in an educational context, where students try to comprehend the reasoning of their teachers. Furthermore, explaining one’s decisions is often a prerequisite for establishing trust between people, e.g., when a doctor explains a therapy decision to a patient. Although these social aspects matter less for technical AI systems, there are many arguments in favor of explainability in artificial intelligence. Here are the most important ones:
- Verification of the system: As mentioned before, in many applications one must not trust a black box system by default. In financial institutions, for instance, the use of models that can be interpreted and verified by auditors, examiners, and regulators is an absolute necessity in order to correct any false conclusions a model might have drawn. A model can learn that certain financial activities have a much lower probability of suspicious activity than others, but an AML expert can quickly determine whether the inputs and weights provided to the model were correct to begin with. AI models make inferences solely from the data, whereas an AML expert would immediately recognize that a particular result cannot be true because the inputs and weights given to the system were incorrect.
- Improvement of the system: The first step towards improving an AI system is to understand its weaknesses and limitations. That grasp is obviously harder to obtain with a black box system than with a more transparent one. Detecting biases in a model or dataset is easier if one understands what the model is doing and why it arrives at its conclusions. Furthermore, model interpretability can be helpful when comparing different models or architectures. For instance, two models may produce similar results but differ greatly in the features on which they base their decisions. The better we understand what our models are doing (and why they sometimes fail), the easier it becomes to improve them.
- Learning from the system: Because today’s AI systems are trained with millions of examples, they may observe patterns or insights in the data which are not readily apparent to humans. The AI system can consequently identify new typologies or scenarios which can be adopted by institutions. These otherwise hidden insights can be crucial to improving model performance and reducing risk exposure. Systems that are able to explain how they discovered these new insights will add the most value, because that discovery process becomes repeatable and verifiable.
- Compliance with legislation: AI systems are affecting more and more areas of our daily life. Many legal concerns have recently received increased attention, such as the assignment of responsibility when systems make wrong decisions. Explainability of AI system decisions allows companies to better understand the entire reasoning process and builds trust with AI implementations, which can help businesses, the workforce, and customers better embrace AI.
Methodology for Achieving Explainable AI
As AI gets more and more complex there is an increasing need to understand the how and why behind every prediction a model makes. By keeping in mind the wants and concerns of our audiences, we’ve developed a Consistent, Repeatable, Auditable, Fair and Transparent (CRAFT) approach to XAI. Let’s dive in to understand why each attribute is important:
- Consistent: AI models always look for patterns in the data. It is important that a model not only recognizes a pattern but also makes the same prediction every time it recognizes that pattern. Through consistency, we can ensure that the model is learning the right patterns. For example, suppose we show a young child three red spheres and teach them that these objects are spheres; the child may latch onto the color rather than the shape and decide that anything red is a sphere. Learning the wrong patterns can be catastrophic and undermines consistency. Hence, we take great measures to understand the quality of our predictions and tie it back to the patterns the model is learning.
- Repeatable: Many artificial intelligence models are designed with some randomness, which stems from the mathematical techniques used to build them. Whether a prediction is made on day one or day 100, the model must return the same prediction for the same data. If different predictions are made, the user immediately loses trust in the model. Repeatability is therefore a key attribute to keep in mind when developing models (see the sketch after this list).
- Auditable: Being human, we know the importance of being held accountable for the mistakes we make. Accountability motivates us to learn from our mistakes and do better in the future. Similarly, it is important to hold models accountable for false predictions. Based on the severity of a false prediction, we penalize the model so that it learns immediately and doesn’t repeat similar mistakes. But a steady diet of negative feedback is not healthy, so there must be a balance: we also reward our models when they make correct predictions. By carefully defining reward-and-penalty policies, we can ensure that our models are always held accountable.
- Fair: Just like humans, models operate with biases. It is our responsibility to ensure that models don’t make decisions with extreme biases. The source of bias could be either the data or the mathematical design of the model, but it is very important that we understand the source if we want our models to be fair. In anti-money laundering, it is important not only to make accurate predictions but also to make fair decisions. Humans make mistakes too, yet if the decision-making process seems fair, we are more accepting of the decision made. We hold our models to the same standard.
- Transparent: Even if a model makes fair, consistent, and repeatable predictions, none of this can be verified unless there is transparency. We ensure that we are fully transparent: we are able to share all of the logic and decision-making processes taking place under the hood of our analytics platform.
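To make the first two attributes concrete, here is a small sketch of how they might be tested, assuming a scikit-learn model on synthetic data; the specific estimator and dataset are illustrative only.

```python
# Checking the Repeatable and Consistent attributes on a toy model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

def train_model(seed):
    # Pinning the seed removes the randomness baked into the algorithm,
    # so retraining on the same data gives the same model.
    return RandomForestClassifier(n_estimators=50, random_state=seed).fit(X, y)

# Repeatable: the same data and the same seed must give the same predictions,
# whether the model is trained on day one or day 100.
day_1 = train_model(seed=42).predict(X)
day_100 = train_model(seed=42).predict(X)
assert np.array_equal(day_1, day_100), "model is not repeatable"

# Consistent: identical inputs must always map to identical outputs.
model = train_model(seed=42)
assert np.array_equal(model.predict(X[:10]), model.predict(X[:10])), "model is not consistent"
```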
Working with Industry and Regulators
For firms to accept new, complex AI technology into their process, the top levels have to feel reassured. You need to demonstrate that the solution is transparent; you need to walk execs through each of the solution’s different component processes; you need to get buy-in from the decision-makers.
And all this needs to be done in small increments. The “rip and replace” pitch is a perilous approach. Instead, the focus should stay on augmenting a company’s existing investments, delivering value in the short term to establish trust (through a 30 to 90 day proof of concept), and expanding integration over a longer period of time. With the fear around AI today, small, value-centered steps are key to helping companies feel comfortable with a given solution.
To convince regulatory authorities, vendors and their customers have to be able to account for three specific aspects of an AI system’s behavior, which is determined by the underlying machine learning model. These traits, illustrated in the sketch after the list, are as follows:
- Explainability: the ability to understand the reasoning behind each individual prediction
- Transparency: the ability to fully understand the model upon which the AI decision-making is based
- Provability: the level of mathematical certainty behind predictions
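As a rough illustration of the three traits, consider an intentionally simple, interpretable model; the logistic-regression setup and synthetic data below are assumptions for the sake of the example, not a statement about any particular production system.

```python
# Explainability, transparency, and provability on an interpretable model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
case = X[0]

# Explainability: the reasoning behind one individual prediction -
# each feature's contribution to the score for this specific case.
contributions = model.coef_[0] * case
for i, c in enumerate(contributions):
    print(f"feature_{i} pushed the score by {c:+.3f}")

# Transparency: the full model is visible - its weights and intercept
# completely determine every decision it will make.
print("weights:", model.coef_[0].round(3), "intercept:", model.intercept_.round(3))

# Provability: the mathematical certainty behind the prediction,
# expressed as a probability rather than a bare label.
print("probability of the positive class:", model.predict_proba(case.reshape(1, -1))[0, 1].round(3))
```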
To lay the groundwork for approval, it is critical—especially for vendors who often forget this step—to actively pursue relationships with regulators. Getting sophisticated AI to compliance is a journey, and you’ll want to start that process with firm footing. Work very closely with the regulatory bodies to identify what can produce high quality and trustworthy results.
Neural Nets and the Future of Explainable AI
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input—like the intensity of a pixel in an image—and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. On top of this, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce the desired output.
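For readers who want to see those mechanics, here is a toy forward pass through a two-layer network in plain NumPy; the layer sizes and random weights are purely illustrative.

```python
# A toy forward pass through a two-layer network.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(784)  # input layer: e.g. pixel intensities of a 28x28 image

W1, b1 = rng.normal(size=(64, 784)), np.zeros(64)
W2, b2 = rng.normal(size=(10, 64)), np.zeros(10)

# Each neuron computes a weighted sum of its inputs and applies a nonlinearity,
# then its output feeds every neuron in the next layer.
hidden = np.maximum(0, W1 @ pixels + b1)  # first layer of simulated neurons
scores = W2 @ hidden + b2                 # overall output, one score per class

# Back-propagation (not shown) would compare these scores to the desired
# output and nudge W1 and W2 so the network learns to produce it.
print("predicted class:", int(np.argmax(scores)))
```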
The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes, and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does, especially as these systems become more and more complex. If somebody gives you a reasonable-sounding explanation for his or her actions, it is probably incomplete; the same could very well be true for AI. It might just be the nature of intelligence that only a piece of it can be exposed to rational explanation. Some of it is just instinctual, subconscious, or inscrutable.
If that’s so, then at some stage we may have to simply trust AI’s judgment or do without it. Until that time (if it ever arrives), explainable AI will be a critical piece of innovation in highly regulated industries.
About the Author
Vamsi Koduru is a driven and passionate Product Management Executive who aims to deliver high-quality solutions that make a profound impact on the world. He co-founded Quantiply in 2014 and has served as VP of Products since, spearheading the development of Quantiply’s platform and applications while also managing remote development teams. Vamsi has a deep passion for technology and an insatiable drive to solve complex problems that make an impact on enterprises, consumers, and society at large. A UCI grad and Bay Area native, Vamsi loves to foster new relationships, experience the many beautiful sights the world has to offer, explore different cuisines, and discover new music.
About Quantiply
Founded in 2014, Quantiply fights financial crime by delivering a suite of automated, artificial intelligence (AI)-powered risk and compliance software that addresses Know Your Customer (KYC) and Anti-Money Laundering (AML) requirements.
With Quantiply, financial institutions are able to identify suspicious actors, interactions, and activities to address financial crime more successfully than ever before. Not only are they more efficient; they can also mitigate the risk of damage to reputation, client trust, and market share. Find out more or request a demo at quantiply.com.

Unlock the “Black Box”
The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head-on.
The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.