AI’s Biggest Compliance Problems: Getting AI to Comply Series, Part V

This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.
In this week’s post, Prattle’s Evan Schnidman breaks down the “black box” problem and explains why the “cyborg strategy” might be just the solution industry is looking for.
How Innovation Causes (& Potentially Solves) AI’s Biggest Compliance Problems
In highly regulated industries like finance and healthcare, AI faces at least three key compliance issues. First, these industries are built on a high degree of human expertise, so social pressure remains to simply hire another person to fill a role rather than using technology to make existing personnel more productive. Second, this human-centric ethos is codified through asset management exams (Series 7 and 63), board licensure (in the case of medical practitioners), and credentials like the MBA, CFA, MD, and NP. Machines cannot “pass” such exams or complete such training, so the personnel managing the machines are liable for their performance and therefore hesitant to adopt new technologies.
The last barrier is the “black box” problem, which is arguably the single most significant hurdle to widespread AI adoption in regulated industries. In short, the “black box” problem is the reluctance of decision-makers to adopt new technology that cannot be easily explained.
This last hurdle will be the focus of this piece.
Understanding the Problem
Much of modern AI relies on deep learning models that are almost impossible to explain to the average layperson. In many cases, this fact has resulted in the perception of these systems as “black boxes”: technological monoliths that simply cannot be understood without data science or developer expertise.
For example, deep learning models like Long Short-Term Memory networks (LSTMs), a form of recurrent neural network, are built to “remember” information over time intervals that are essentially arbitrary, chosen simply because they make the models perform best. The arbitrary nature of these choices makes classically trained statisticians nervous about over-fitting and prevents non-technical personnel from explaining how or why the models work the way they do.
Because LSTMs and other deep learning models are so difficult to explain, they propagate the image of AI applications as inscrutable, mysterious, and, unfortunately, untrustworthy.
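To make the point about arbitrary intervals concrete, here is a minimal, purely illustrative sketch (in PyTorch, which the article does not reference) of the kind of LSTM time-series model described above. The lookback window and hidden size are assumed values; in practice they are chosen by trial-and-error tuning rather than by any rule a regulator could audit.

```python
# Illustrative only -- not Prattle's model. A tiny LSTM forecaster whose key
# settings (LOOKBACK, HIDDEN) are "arbitrary" in the sense discussed above:
# picked because they improve accuracy, not because theory demands them.
import torch
import torch.nn as nn

LOOKBACK = 30   # how many past time steps the model "remembers" (tuned, not derived)
HIDDEN = 64     # size of the hidden state (also tuned, not derived)

class TinyLSTM(nn.Module):
    def __init__(self, n_features: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, x):                 # x: (batch, LOOKBACK, n_features)
        out, _ = self.lstm(x)             # hidden states for every step in the window
        return self.head(out[:, -1])      # forecast from the final hidden state

model = TinyLSTM()
fake_batch = torch.randn(8, LOOKBACK, 1)  # eight random input windows
print(model(fake_batch).shape)            # torch.Size([8, 1])
```

Nothing in the fitted weights says why a 30-step window should beat a 20- or 60-step one; that gap between “it works” and “here is why it works” is the black box.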
The Regulatory Response
Regulators have pursued two methods simultaneously for overcoming these AI-related compliance hurdles:
- Force a higher degree of disclosure.
- Hire data scientists to more effectively and efficiently evaluate regulatory infractions.
The first method shifts the regulatory burden onto those being regulated and slows the adoption of AI in regulated industries. No doctor, for instance, wants to lose their medical license over an AI-generated diagnosis that proves to be wrong. This fear creates a scenario in which AI will not be adopted unless it is proven 100% accurate, a standard that humans themselves cannot meet and that is not cost-effective in most applications.
The second method stems in part from cost pressures and the need to streamline enforcement. Underfunded regulators like the SEC have begun to take a more data-intensive approach to identifying regulatory infractions. This approach has the added benefit of pushing regulators to gain a better grasp of modern technology, along with the ability to evaluate it more capably.
Industry’s Responsibility
For the time being, the burden still falls on the industry to meet the regulators. The regulators are taking baby steps into the modern economy by hiring more technical personnel, but their institutional and budgetary pressures can slow their rate of change to a crawl.
This situation poses a significant challenge to those working in highly regulated industries that want to adopt AI. Since technology is largely still being regulated the same way humans are, those adopting the technology are pressured to utilize only “explainable AI.” While this narrows the field of technology options available and may result in suboptimal outcomes, it also helps both regulators and human experts better understand how technology is being utilized.
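As a rough, hypothetical illustration of what “explainable AI” buys in this setting, the sketch below (using scikit-learn; the feature names and data are invented, not drawn from any real compliance system) contrasts a linear model whose behavior can be summarized coefficient by coefficient with a boosted-tree ensemble that offers no comparably compact explanation.

```python
# Hypothetical example of the explainability trade-off. Feature names and data
# are invented; this is not drawn from any real compliance system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # pretend columns: leverage, volatility, turnover
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

explainable = LogisticRegression().fit(X, y)
black_box = GradientBoostingClassifier().fit(X, y)

# The linear model can be summarized in one line per feature...
for name, coef in zip(["leverage", "volatility", "turnover"], explainable.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# ...while the ensemble is made up of many trees with no equally compact summary.
print("trees in the black-box model:", black_box.estimators_.size)
```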
The norm in most highly specialized industries is for regulators to single out an individual human being for an infraction, but the addition of more technical personnel is making this difficult. For example, hedge funds are increasingly being run by data scientists who have little to no financial services expertise and are simply looking for ways to optimize models.
These technical folks tend to struggle to understand the regulatory perspective, in large part because they are data or methodological experts rather than industry experts. As technical personnel play an increasingly prominent role in regulated industries, it is no surprise that they are likely to miss the underlying rationale behind regulations that have been built based on decades of historical context.
The Cyborg Strategy
The marriage between human- and machine-driven strategies is more important than ever. In chess, we have seen that “cyborg” or “centaur” teams continue to beat human grandmasters as well as pure computer opponents because humans still possess intuitive knowledge that machines cannot replicate.
By combining the best qualities of humans and machines, those in regulated industries can both optimize outcomes and make their strategies easier to explain to regulators. Over time, technology will likely improve to the point where AI can outperform cyborg teams, but by then regulators will hopefully have technical personnel capable of understanding and explaining the role AI played in the process.
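One simple way to picture a cyborg workflow is a human-in-the-loop rule: the model proposes, and anything below an agreed confidence threshold is escalated to a human expert, with the routing recorded for regulators. The sketch below is a hypothetical illustration; the threshold, labels, and Decision record are assumptions, not a description of any real system.

```python
# Hypothetical human-in-the-loop routing for a "cyborg" workflow. The threshold
# and field names are illustrative assumptions, not a real compliance policy.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str         # the final call
    confidence: float  # the model's reported confidence in its proposal
    decided_by: str    # "model" or "human" -- kept as an audit trail

CONFIDENCE_FLOOR = 0.90  # assumed policy threshold set by compliance, not by the model

def cyborg_decision(proposal: str, confidence: float,
                    human_review: Callable[[str, float], str]) -> Decision:
    """Auto-accept only high-confidence model proposals; escalate the rest."""
    if confidence >= CONFIDENCE_FLOOR:
        return Decision(proposal, confidence, decided_by="model")
    reviewed = human_review(proposal, confidence)   # the human may accept or override
    return Decision(reviewed, confidence, decided_by="human")

# Example: a low-confidence proposal is escalated and overridden by the reviewer.
print(cyborg_decision("approve", 0.72, human_review=lambda p, c: "hold for review"))
```

The point is not this specific rule but that the division of labor, and every escalation, is recorded in a form both human experts and regulators can inspect.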
Technology as a Bridge
For generations, those in highly regulated industries have butted heads with the regulators charged with maintaining systemic safety in those industries. As both these industries and their regulatory bodies shift from human-centric to technology-centric, they may be able to remove human personalities from the equation a bit, thereby allowing regulators to evaluate computational models rather than mental processes.
This shift means regulators may no longer have to worry about determining intent; instead, they can examine the model itself and verify that the regulated entity was operating within legal boundaries. In short, technology can help regulators and those being regulated remove emotion from the equation and simply look at the facts. While this is still just a possibility, it has the potential to improve on the historically contentious relationship between regulators and those working in regulated industries.
About Evan Schnidman
Before starting Prattle, Evan taught at Brown University and Harvard University and co-authored How the Fed Moves Markets. Evan is also widely published in macroeconomics, political economy, and finance publications. Along with being a respected academic, author, and researcher, Evan is also an experienced consultant for large corporations and financial institutions.
About Prattle
Prattle is a research automation firm that quantifies market-moving language using proprietary machine learning and natural language processing technology. Prattle provides predictive central bank and equities analytics that give clarity to investors struggling with a flood of unstructured information. Founded by experts in textual analytics and economic forecasting, Prattle is backed by top-tier Wall Street and Silicon Valley investors and has drawn interest from leading investment banks and asset managers. Prattle produces its data using Portend, a proprietary data science software platform. For more information, visit www.prattle.co.

Unlock the “Black Box”
The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head-on.
The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.