FinTech

Transparency Isn’t Enough: Getting AI to Comply Series, Part X

This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.

In this week’s post, Ayasdi’s Gurjeet Singh provides insight on the role of human intuition in cracking the black box problem.

Transparency Isn’t Enough

AI integration has become a prime business directive, and the race to operationalize every credible AI application has truly begun. While there is no shortage of obstacles to the successful deployment of these technologies, one in particular, the AI “black box problem,” is by far the most imposing. If an operator cannot understand how the system they use works, their ability to improve the system’s results—or even explain them—is undermined. In highly regulated industries, this lack of explainability can kill an integration project: compliance is difficult if explanations aren’t forthcoming.

While many point to transparency as the answer to the black box problem, the reality is more complex. The good news: transparency is easy. The bad news: it’s not enough.

Transparency is pulling back the curtain on a system and pointing to the model, the math that drives it, and the specifics of the data it uses. It’s a purely mechanical reveal of the system. It does not provide human intuition as to what the machine is doing or, more importantly, why. This is not to suggest that transparency is without value; however, it does not go very far towards demystifying sophisticated AI systems.

Understanding the Challenge

The problem with transparency is that even the clearest, mathematically precise mapping from data to decisions fails to give a human enough intuition to understand the system’s behavior. Regulators want intuitive access to what a given system is doing, and thousands of pages of equations or data descriptions are about the farthest thing from intuitive.

Right now, firms are going to extreme measures to meet these regulatory expectations. In the financial services space, internal model review groups are commonly used to assess a model’s performance and meet compliance standards. Composed of armies of people, these groups are given the model’s inputs and outputs and asked to produce clear justifications for the model’s behavior. Already pushing the limits of feasibility, manual strategies like these will reach their breaking point very soon, as input data quantities and model sophistication are only growing.

So the challenge facing highly regulated industries on this topic is actually a set of challenges:

  1. Transparency isn’t sufficient to meet regulatory needs.
  2. Justification is needed, and this is often currently done through resource-heavy manual processes.
  3. These manual solutions are reaching their limits and will soon be overmatched by the accelerating complexity of AI applications and the exponential growth of the data they ingest.

In short, a better means of producing justification is desperately needed.

Solving for Justification: Regimes

AI’s monolithic complexity is the heart of the black box problem, so simplifying the technology would seem to be the obvious first step towards a solution. But complexity is also a necessary attribute of systems capable of solving the kinds of problems we’re asking AI to solve. In many useful instances, machine learning applications can’t be effective and simple at the same time.

But there are ways of creating a rougher, simpler, but more intuitive understanding of a model’s activity. This strategy involves separating the model’s dataset into regimes (major collections of data that have similar attributes or values). Once divided this way, the model’s behavior under different data regimes can be observed, recorded, and analyzed. As one might expect, patterns emerge, allowing the system’s operators to achieve a general understanding of what the model is doing: given a certain regime, the model will perform in these certain ways.
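To make the idea concrete, here is a minimal sketch in Python, assuming scikit-learn is available. The toy data, the trained model, and the use of k-means clustering to define regimes are all illustrative stand-ins for however regimes would actually be defined in a real deployment.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data and model standing in for a production system (hypothetical).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 1: split the data into "regimes" -- here, simple k-means clusters.
regimes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_test)

# Step 2: observe and record the model's behavior within each regime.
for r in np.unique(regimes):
    mask = regimes == r
    preds = model.predict(X_test[mask])
    accuracy = (preds == y_test[mask]).mean()
    positive_rate = preds.mean()
    print(f"Regime {r}: n={mask.sum():4d}  "
          f"positive-rate={positive_rate:.2f}  accuracy={accuracy:.2f}")
```

The per-regime summary is the point: rather than reasoning about every individual prediction, an operator can reason about how the model behaves on each broad class of inputs.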

A practical example can be found in clinical variation management in the healthcare realm. When looking at the entire patient population, it is difficult to discern what the appropriate treatment path would be as there are simply too many variables. In this case, grouping the patients by similar outcomes facilitates understanding of what constitutes good care and what constitutes bad care.

Solving for Justification: Students and Teachers

Another approach to achieving the benefits of both simplicity and complexity is through a technique known as student-teacher learning. In this approach, a sophisticated AI application is used to decode a dataset. It produces the key insights and relationships; it creates the essential mapping of inputs to outputs. Once this phase is complete, another, simpler application is deployed. But, instead of learning from the data, this application learns from the other model, producing a streamlined version of the input/output relationship the other model created. In other words, the simpler model extracts the “rules” discovered by the other model—this is called a “rule extraction”—and it is this far more intelligible, justifiable model that actually goes into the production technology.
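The pattern can be sketched in a few lines of Python, assuming scikit-learn. A gradient-boosting model plays the teacher and a shallow decision tree plays the student; the specific model choices and the fidelity check are illustrative, not a description of any particular vendor’s implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Teacher: a complex model that learns the input/output mapping from the data.
teacher = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Student: a simple, inspectable model trained on the teacher's predictions
# rather than the original labels.
student = DecisionTreeClassifier(max_depth=4, random_state=1)
student.fit(X_train, teacher.predict(X_train))

# Fidelity: how closely the student reproduces the teacher's behavior.
fidelity = (student.predict(X_test) == teacher.predict(X_test)).mean()
accuracy = (student.predict(X_test) == y_test).mean()
print(f"student/teacher agreement: {fidelity:.2f}, student accuracy: {accuracy:.2f}")
```

The student gives up some raw performance, but its small, explicit structure is something a reviewer can actually read and justify.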

An intelligent anti-money laundering application can be used as an example. Banks invest millions of dollars in transaction monitoring systems (TMS) and associated investigation teams to follow up on suspicious account activity. A TMS is, put simply, a collection of rules, and generating the right rules is a core challenge these departments face. They apply business logic, domain expertise, and tribal knowledge to construct a set of rules they hope will catch suspicious transactions—and only suspicious transactions.

Here’s where the student-teacher model comes in. An unsupervised machine learning application can be used to derive the rules from the relevant transactional and incident data, and then a rule extraction system—the student model—can extract the rules from the unsupervised machine learning application—the teacher model.
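A hedged sketch of that AML flavor, again assuming scikit-learn: an unsupervised anomaly detector stands in for the teacher, and a shallow decision tree extracts readable rules from its flags. The transaction features, thresholds, and contamination rate are invented for illustration and are not real TMS rules.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
# Hypothetical transaction features: amount, transfers per day, cross-border share.
X = np.column_stack([
    rng.lognormal(mean=6.0, sigma=1.0, size=5000),   # transaction amount
    rng.poisson(lam=3, size=5000),                   # transfers per day
    rng.uniform(0, 1, size=5000),                    # fraction sent cross-border
])
features = ["amount", "transfers_per_day", "cross_border_share"]

# Teacher: unsupervised model flags unusual transactions (-1 = anomalous).
teacher = IsolationForest(contamination=0.02, random_state=2).fit(X)
flags = (teacher.predict(X) == -1).astype(int)

# Student: shallow tree learns the teacher's flags and yields explicit rules
# that could be reviewed, vetted, and loaded into a rule-based TMS.
student = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, flags)
print(export_text(student, feature_names=features))
```

The printed tree reads as a set of if/then thresholds over familiar transaction attributes, which is exactly the form a rule-based TMS and its reviewers already work with.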

These more sophisticated rules can then be vetted and approved by regulators, and integrated into the rule-based TMSs without altering the workflow, essentially upgrading the TMS to an intelligent application.

Solving for Justification: Atomicity

Atomicity is another complementary approach to getting closer to justification. This is a term straight from the computer science lexicon, and, in this context, it describes the process of breaking down a system into a series of operations and providing a rationale for each operation in the chain. From a machine intelligence perspective, atomicity means that one understands, at the atomic and most granular level, why the machine did what it did—in terms that a human can comprehend.
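One way to picture atomicity in code, sketched in Python: a decision is decomposed into named operations, and each operation returns both its result and a plain-language rationale that can be surfaced to an operator. The steps, weights, and wording below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    name: str
    value: float
    rationale: str  # human-readable reason for this operation's output

def score_transaction(amount: float, daily_count: int) -> list[StepResult]:
    """Break a toy risk score into atomic, individually explained operations."""
    steps = []

    amount_risk = min(amount / 10_000, 1.0)
    steps.append(StepResult(
        "amount_risk", amount_risk,
        f"Amount {amount:,.0f} scaled against a 10,000 reference ceiling."))

    velocity_risk = min(daily_count / 20, 1.0)
    steps.append(StepResult(
        "velocity_risk", velocity_risk,
        f"{daily_count} transfers today scaled against a 20-per-day ceiling."))

    total = 0.6 * amount_risk + 0.4 * velocity_risk
    steps.append(StepResult(
        "combined_score", total,
        "Weighted sum: 60% amount risk, 40% velocity risk."))
    return steps

for step in score_transaction(amount=8_500, daily_count=14):
    print(f"{step.name}: {step.value:.2f} -- {step.rationale}")
```

Each entry in the chain carries its own justification, so the overall decision can be audited operation by operation rather than as a single opaque score.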

It’s important to note that these explanations vary by application: it is a mistake to assume that a single UI framework, document, or output of any kind will suffice for all industries and circumstances. What counts as an explanation means very different things to physicians, fraud investigators, and quantitative traders. Consequently, the solution to this problem lies at the intersection of AI and UX design.

The student-teacher approach and atomicity can be combined to deliver the highest level of justification. Given that any truly sophisticated AI process is, more or less, a chain of complex input/output relationships, only by applying both strategies can a complex, opaque process be broken down into its complex, opaque parts. Simpler models learn the input/output mapping of each part and are used in the production system. Finally, context-specific explanations are created for each new, simplified component operation in the chain and built into the operator’s UI view.

The net effect is that, for every action that the machine takes, there is an explanation.

No Silver Bullet

As you have likely picked up on, there’s a meta-strategy at work here. Solving the black box problem is going to look different depending on the specifics of the deployment and industry, and firms are going to need to be fluent in the range of techniques used to solve for the justification their situation requires. This kind of sophistication is going to be a must. AI is only going to get more complex, and industry cannot expect the regulatory community to want less insight into the increasingly consequential decisions AI applications will help drive.


About the Author

Gurjeet Singh is Ayasdi’s CEO and co-founder. He leads a technology movement that emphasizes the importance of extracting insight from data, not just storing and organizing it. Gurjeet developed key mathematical and machine learning algorithms for Topological Data Analysis (TDA) and their applications during his tenure as a graduate student in Stanford’s Mathematics Department where he was advised by Ayasdi co-founder Prof. Gunnar Carlsson. Gurjeet is the author of numerous patents and has published in a variety of top mathematics and computer science journals. Before starting Ayasdi, he worked at Google and Texas Instruments. Dr. Singh serves on the Technology Advisory Board at HSBC and on the U.S. Commodity Futures Trading Commission’s Technology Advisory Committee. He was named to the Silicon Valley Business Journal’s “40 Under 40” list in 2015.

Gurjeet holds a B.Tech. from Delhi University, and a Ph.D. in computational mathematics from Stanford University. He lives in Palo Alto with his wife and two children and develops multi-legged robots in his spare time.

About Ayasdi

Ayasdi is a pioneer in the creation and deployment of enterprise-class intelligent applications for the financial services, healthcare, and public sectors. Ayasdi’s award-winning artificial intelligence platform, developed by Stanford computational mathematicians, has already solved some of mankind’s most difficult challenges in areas as diverse as cancer, diabetes, financial crimes, and predictive maintenance. The company’s accomplishments have earned it recognition as one of the world’s most innovative companies from both Fast Company and the World Economic Forum.

Based in Menlo Park, CA, Ayasdi is backed by Kleiner Perkins Caufield & Byers, IVP, Khosla, Centerview Technology Partners, Draper Nexus, Citi Ventures, GE Ventures, and Floodgate Capital.


Unlock the “Black Box”

The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head on.

The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.

Download Now