AI’s Biggest Integration Hurdles: Getting AI to Comply Series, Part VI

This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.
In this week’s post, FinTech Innovation Lab’s Jonah Crane explains how regulators and innovators can work together to overcome AI’s “black box” problem.
AI’s Biggest Integration Hurdles
Ignore the rumors: Compliance is not the obstacle to AI technology integration in financial services and other highly regulated industries. There are many significant challenges to AI integration before you even get to compliance questions—identifying appropriate use cases, building software that nontechnical users can navigate, adequately training the algorithms, turning analysis into business decisions, etc. The good news is that addressing those challenges can, in turn, go a long way towards achieving compliance.
That said, it’s hard—and it’s only going to get harder—for firms to get their cutting-edge AI applications clear of the high bar regulators are setting. To help industry players better understand the activity and perspective of the regulatory community on these issues, this piece will highlight the primary regulatory and compliance obstacles that AI will have to navigate on the path to deployment in financial services.
Solution/Problem Fit: AI’s Key Challenge
The first challenge is making sure that an artificial intelligence or machine learning tool is the right solution for the problem you are trying to solve—this is the B2B version of product-market fit, and it’s absolutely critical to successful proof-of-concept and, ultimately, deployment. So, start with problem identification and then ask whether one of the varieties of AI-driven tools is really, to paraphrase Star Wars, “the droids you’re looking for.”
Regulatory guidance typically focuses on policies, procedures, and governance more than outcomes. For example, “poor fit” and “incorrect use” are two of the principal model risks identified by regulators. So, if you’ve gone about problem identification and solution targeting in the right way, and documented that decision-making process, you’re probably a third of the way to compliance.
The rest comes down to testing and monitoring the solutions, and this is where regulators have had the most trepidation. Why? They’re concerned about the so-called “black box” problem.
The Biggest Compliance Hurdle
The black box problem refers to the difficulty of understanding and explaining the reasons for the outcomes of an AI-driven analysis. If you can’t explain how a model produced the outcomes it produced, how do you know you’ve properly trained it? How do you test it? How can you comply, for example, with FCRA requirements to notify customers who are denied credit and explain the basis of their denial? How do you monitor for disparate impact that could give rise to fair lending violations? How do you explain to regulators why a particular transaction was flagged as potentially fraudulent or criminal, or why a particular series of internal communications was flagged as potential misconduct?
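To make the explainability challenge concrete, here is a minimal sketch that assumes a deliberately simple, interpretable credit model; the feature names and data are hypothetical, and the example illustrates why linear models are easier to explain rather than recommending any particular technique. A linear model’s risk score decomposes into per-feature contributions that can feed an adverse-action explanation, whereas a complex neural network offers no comparably direct decomposition, which is the crux of the black box problem.

```python
# Minimal, hypothetical sketch: per-feature "reason codes" from an interpretable
# credit model. Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_utilization", "late_payments", "income", "account_age"]

# Synthetic data standing in for a real underwriting dataset (label 1 = default).
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one denied applicant, a linear score decomposes into per-feature
# contributions (coefficient times feature value), which can support an
# FCRA-style explanation of the principal reasons for the decision.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(features, contributions), key=lambda t: -t[1]):
    print(f"{name:20s} contribution to risk score: {value:+.3f}")
```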
Regulatory guidance governing the use of models places significant weight on the process firms use to develop and validate their models, requiring an understanding of the factors that drive outcomes. To paraphrase official guidance,1 firms are expected to understand and document the theory, design, and assumptions underlying their models. This expectation can present a real challenge, especially for more adaptive AI tools and more complex neural network tools, because the model’s assumptions and—to some extent—design may be dynamic. In these cases, it is particularly difficult to link inputs and outputs or to explain the assumptions underlying a model.
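As a rough illustration of what documenting a model’s theory, design, and assumptions might look like in practice, here is a hypothetical sketch of a structured documentation record. The fields and example values are my own and are not drawn from any regulatory template.

```python
# Hypothetical sketch of a structured model-documentation record. The fields
# loosely echo what guidance asks firms to capture (theory, design, assumptions,
# known limitations); the structure and example values are invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    business_problem: str          # the use case the model was selected for
    theory: str                    # why this approach should work
    design: str                    # architecture, features, training data
    assumptions: list[str]         # conditions under which outputs are valid
    known_limitations: list[str]   # documented weaknesses, for independent validators
    last_validated: date
    validation_findings: list[str] = field(default_factory=list)

record = ModelRecord(
    name="transaction-anomaly-v2",
    business_problem="Prioritize AML alerts for analyst review",
    theory="Unusual transaction patterns correlate with known laundering typologies",
    design="Gradient-boosted trees over 40 engineered transaction features",
    assumptions=["Historical alert labels are broadly reliable",
                 "Customer behavior is stable over the training window"],
    known_limitations=["Performance degrades for newly onboarded customers"],
    last_validated=date(2018, 6, 30),
)
print(record.name, "-", len(record.assumptions), "documented assumptions")
```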
In financial services, the black box problem is compounded by a related problem: the baseline problem. This concept is most clearly exemplified in the case of autonomous vehicles, which are new—and therefore scary. Despite the fact that over a hundred people are killed in car accidents every day in the United States, every accident involving autonomous vehicles is national news. The novelty of certain applications of AI has created perhaps excessive scrutiny of their shortcomings and limitations, measuring performance against perfection rather than against existing alternatives. Which is not to say firms should overlook the limitations of their models! Quite the contrary. A realistic assessment of an algorithm’s limitations, especially by independent validators, will help convince regulators that your process is rigorous and may help to establish more realistic expectations.
The financial services industry—and other highly regulated industries—have to seriously contend with this hurdle. Until AI can be demystified for regulators, deployment in these sectors will continue to receive extra scrutiny.
Stepping Up
The regulatory community is actively working to overcome the challenges related to AI integration. They’re starting with education: learning about the various types of machine learning tools and how they’re being deployed. There’s been increasing adoption of innovation offices2 and other ways to engage formally or informally with market participants. Regulators are going to conferences and talking to outside experts. Each of the past two years, for example, the entire FinTech Innovation Lab class came to Washington for meetings with a small army of financial regulators. Each time we had a truly productive and engaged dialogue. Without a doubt, there’s an earnest effort in this community to understand this technology and its applications.3
Interestingly, at the meetings in 2018, there was a shift in tone from regulators. While still focusing on testing and model validation, regulators were discussing ideas and potential solutions rather than just asking questions. It was very clear that they’d already taken a really hard look at their model risk management framework, and thought hard about how it might apply in the context of AI. I don’t think we’re at a point where we have a ton of answers, but we are making substantial progress.
One challenge I haven’t seen the regulatory community do enough to tackle, at least so far, is the human capital dimension—bringing serious expertise in-house. This challenge applies well beyond AI—to FinTech as a whole—but may be especially acute in areas like AI or blockchain that are so esoteric and specialized. Regulators have not significantly shifted their hiring priorities to bring in more expertise, something that would undoubtedly help them address all of the substantive issues more directly and effectively.
Regulators & Innovators Coming Together
Bridging the gap between regulators and innovators is often framed as one or both sides needing to “compromise.” While progress will ultimately require a mindset shift on both sides, I hesitate to frame it that way, because “compromise” implies zero-sum outcomes. In many cases where AI is being implemented in financial services, firms and regulators generally share the same objectives. We all want more accurate underwriting. We all want deeper, more contextualized analysis, more accurate models, and more effective surveillance and fraud prevention. These things are good for customers, and they’re good for safety and soundness. In short, there’s a broad set of shared objectives that make this more of a positive-sum than a zero-sum game.
That said, a mindset shift is necessary. Regulators have taken the first step in making that mindset shift. They’ve crossed the most important bridge: They now see the potential benefits from AI across a wide number of use cases. This step has provided motivation to push through that fear of the unknown and increased their appetite to work with industry to explore solutions. This change is clear in the level of engagement we’re seeing from the regulatory community. Regulators aren’t just sitting back and asking questions anymore: They’re sincerely looking for answers.
While that is true of senior regulators and innovation teams in Washington, practitioners will tell you that openness to new approaches can often take a while to filter down to examiners in the field. As a result, firms can be caught flat-footed when policymakers in Washington appear comfortable with new solutions but their examiner pulls out last year’s checklist and finds that the boxes no longer line up.
On the industry side, there’s often a failure to appreciate the importance regulators place on the process. Industry is naturally concerned with outcomes. Regulators want to make sure businesses have the right policies, procedures, and governance in place. They want to make sure that firms are exercising good judgment, but they don’t want to substitute their own judgment for that of business leaders.
To enable regulators to make their own judgment around what is and is not acceptable, building the appropriate controls and governance framework around model development, implementation, and use is critical. When it comes to AI, monitoring outcomes is a key component of that control framework, but it’s not the only—or even the most important—thing. It’s important to step back and understand the way regulators will view the risks associated with AI. Try to put yourself in their shoes. Independent review of AI solutions by people familiar with the business needs—but not involved in the development of the solution—can help provide some objectivity.
In the longer run, I’m also not sure industry appreciates how much the bar for effective compliance will be raised as more advanced technologies are adopted. Today’s cutting-edge technology is tomorrow’s baseline expectation, so firms should be preparing now for those heightened expectations.
Cheaper vs. Better
While regulators generally do not second-guess business judgments, when it comes to RegTech—technology solutions used in regulatory and compliance functions—they will certainly focus on effectiveness (catching more bad actors). This focus can create a disconnect, because efficiency (fewer false positives) is, understandably, a big part of the appeal of RegTech and a big part of the value proposition of AI in particular; regulators, however, will be more interested in effectiveness.
In transaction monitoring for BSA/AML compliance, for example, banks are very interested in reducing the notoriously high rate of false positives, which has been north of 90% in many cases, leading to thousands of staff hours spent processing those alerts. Regulators, on the other hand, are far more focused on increasing the total number of true positives. In a world where less than 1% of money laundering is caught, catching more bad actors is a higher priority for regulators—even if it costs more—than catching fewer bad actors more efficiently.
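To make the efficiency-versus-effectiveness distinction concrete, the short illustration below treats precision as the efficiency measure (how many alerts are worth the work) and recall as the effectiveness measure (how much of the underlying activity is actually caught). All of the figures are invented.

```python
# Illustrative arithmetic only: efficiency (precision) and effectiveness (recall)
# can move independently in AML alert triage. All numbers below are invented.
def triage_metrics(alerts: int, true_positives: int, total_bad_actors: int):
    false_positives = alerts - true_positives
    precision = true_positives / alerts           # efficiency: share of alerts worth working
    recall = true_positives / total_bad_actors    # effectiveness: share of bad actors caught
    return false_positives, round(precision, 3), round(recall, 3)

# A legacy rules engine: huge alert volume, >90% false positives, very little caught.
print(triage_metrics(alerts=10_000, true_positives=500, total_bad_actors=60_000))

# A tuned model that simply cuts alert volume is more efficient for the bank...
print(triage_metrics(alerts=2_000, true_positives=400, total_bad_actors=60_000))

# ...but recall actually falls. Regulators care about raising this second number,
# which is why the goal is to bend the curve: more caught, and more efficiently.
print(triage_metrics(alerts=6_000, true_positives=1_500, total_bad_actors=60_000))
```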
The recent large fines against U.S. Bank over its suspicious activity reporting are a good example. Essentially, U.S. Bank was calibrating its “suspicious activity” reporting based on available resources, and regulators responded with a clear message: The bank needs to have an effective framework in place and dedicate whatever resources are required to keep it effective.
Fortunately, AI holds out the promise of bending the “risk-cost” curve—that is, in areas like BSA/AML compliance, AI seems likely to be capable of achieving better outcomes and doing so more efficiently.
Parting Words for Industry
It might sound corny, but if you really want to stay compliant while you build, test, and deploy AI, focus on doing the right thing. AI applications can have a huge impact on people’s lives, and those who build them have an obligation to do so conscientiously.
Part of the appeal of AI is that it replicates tasks that we once thought only humans could do but without the bias of human decision-makers. There are, however, many accounts of ML applications taking on human biases. Early sentiment algorithms on Instagram, for instance, associated the word “Mexican” with the word “illegal” in initial testing because of the frequency of their association on parts of the internet.
The regulatory community is well aware of these risks and expects industry to address them. And this isn’t limited to the United States. The incoming Chair of the FCA recently highlighted the risk of the misuse of data—for example, algorithms that produce results that undermine confidence in the fairness of the financial system—as his number one priority.
The Treasury Department’s recent 222-page report on FinTech included some astute observations on the responsible use of AI. They were made almost in passing and have received little attention, but I believe they point the way forward. Treasury called for “an appropriate emphasis on human primacy in decision making for higher-value use-cases relative to lower-value use-cases” and stressed the importance of “accountability of human beings.” Algorithms are built by human beings, and those human beings should be responsible for the outcomes.
Practically, this means you have to be really thoughtful when you’re training algorithms. You have to think hard about the data that’s being used and how you’re testing. You have to test repeatedly and rigorously and obsessively monitor outcomes against both business and regulatory expectations.
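As one concrete example of monitoring outcomes against regulatory expectations, the sketch below applies the “four-fifths rule” sometimes used as a first-pass screen for disparate impact in approval rates. It is a rough screen rather than a legal test, and the group names and figures are hypothetical.

```python
# Hypothetical sketch: a four-fifths-rule screen on approval rates, one simple
# outcome-monitoring check for potential disparate impact. Not a legal test.
def adverse_impact_ratios(approvals_by_group: dict[str, tuple[int, int]]) -> dict[str, float]:
    """approvals_by_group maps group -> (approved, applicants)."""
    rates = {g: approved / applicants
             for g, (approved, applicants) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented counts: group_b's approval rate is well below 80% of group_a's,
# which would flag the model's outcomes for closer review.
ratios = adverse_impact_ratios({"group_a": (450, 1000), "group_b": (300, 1000)})
for group, ratio in ratios.items():
    status = "flag for review" if ratio < 0.8 else "within screen"
    print(f"{group}: selection ratio {ratio:.2f} ({status})")
```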
End Notes
1. For more information, visit www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf.
2. For more information, visit www.cftc.gov/LabCFTC/Overview/index.htm.
3. Some regulators have been using AI for their own purposes for a couple of years. The SEC, for example, has been using AI in its market surveillance operations for a while. So they’re not all at the start of the learning curve.
About the Author
Jonah Crane is an advisor to financial technology startups, helping them navigate the complex U.S. regulatory landscape and stay ahead of regulatory change as they scale. Jonah is also Regulator in Residence at the FinTech Innovation Lab in New York, and Executive Director of RegTech Lab in Washington, D.C., where he advises regulators and policymakers on facilitating innovation. Until January 2017, Jonah served as Deputy Assistant Secretary for the Financial Stability Oversight Council and Senior Advisor at the United States Department of the Treasury, and prior to that as a policy advisor to Senator Chuck Schumer on the Dodd-Frank Act, the JOBS Act, and other economic policy matters. Prior to joining Senator Schumer’s staff, Mr. Crane was a corporate attorney focusing on mergers and acquisitions at Milbank, Tweed, Hadley & McCloy LLP in New York. Mr. Crane received a J.D. from New York University School of Law.

Unlock the “Black Box”
The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head-on.
The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.