
Three Ways to Tackle the Black Box Problem: Getting AI to Comply Series, Part XI

This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.

In this week’s post, Nadav Ellinson of Intelligo explains how humans and AI can (and should) work together to create innovative, thorough, and explainable solutions.

Three Ways to Tackle the Black Box Problem & Four Questions to Ask AI Vendors to Stay Compliant

This isn’t just about compliance, either. While the regulatory barriers posed by AI’s opaque nature are serious, that opacity also raises basic questions of service quality. In our space, background check services, the businesses we serve rely on our reports to make key investment decisions, and that’s not something that should be left to a process no one can explain. For these reasons and many more, AI can’t just be a means to a better end; it also needs to be a means to an intelligible end.

For those currently grappling with this challenge (and those who soon will be), I’ll review three primary ways of dealing with it: quality assurance, user experience, and human review.

AI Needs QA

Step one in the journey towards compliant and trustworthy AI is quality assurance, and this process starts with the data we use, from the sources and databases our algorithm finds to the information it receives from data providers. We carefully select our sources, and thoroughly vet datasets for accuracy. Even after integration, we regularly audit all provisioned data to ensure our information standards are met. We review our data query protocols, set up monitoring systems, and prepare contingency plans.
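
To give a flavor of what such an audit might look like in code, here’s a minimal sketch; the record format, field names, and freshness threshold are hypothetical illustrations, not a description of our production pipeline.

```python
from datetime import datetime, timedelta, timezone

# Illustrative recurring data-quality audit. Records are assumed to be dicts
# with a timezone-aware "retrieved_at" timestamp; all names are hypothetical.
REQUIRED_FIELDS = {"subject_id", "source", "retrieved_at"}
MAX_STALENESS = timedelta(days=7)

def audit_records(records):
    """Flag records that fail basic completeness and freshness checks."""
    problems = []
    now = datetime.now(timezone.utc)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            problems.append((rec.get("subject_id"), f"missing fields: {missing}"))
            continue
        if now - rec["retrieved_at"] > MAX_STALENESS:
            problems.append((rec["subject_id"], "stale data"))
    return problems
```

In practice, a check like this would run on a schedule, with failures feeding the monitoring systems and contingency plans mentioned above.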

But it’s not only ingested data that requires these measures. Complex AI systems go through multi-step reasoning processes, and at every point in that chain the accuracy of the data has to be validated. Every product release also merits this sort of scrutiny, as the accuracy of each new version of your model should be reassessed. New behavior and performance baselines need to be established as a fundamental and critical point of reference, and questions need to be answered: have any of the system’s new features or innovations unexpectedly impacted any aspect of the automation?
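
As a sketch of what re-baselining might look like, here’s a hypothetical release gate that re-runs a fixed evaluation set against the new model version and compares the results with the stored baseline; the metrics and tolerance are illustrative, not our actual thresholds.

```python
# Hypothetical release gate: compare a new model version's metrics on a
# fixed evaluation set against the stored baseline. Values are illustrative.
BASELINE = {"precision": 0.97, "recall": 0.94}
TOLERANCE = 0.01  # allowed regression per metric

def passes_release_gate(new_metrics: dict) -> bool:
    """Reject the release if any metric drops more than TOLERANCE below baseline."""
    for metric, baseline_value in BASELINE.items():
        if new_metrics.get(metric, 0.0) < baseline_value - TOLERANCE:
            print(f"Regression on {metric}: {new_metrics.get(metric)} vs {baseline_value}")
            return False
    return True
```

Once a release passes, its measured metrics become the baseline for the next version, so every change is judged against the system’s most recent known-good behavior.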

I’ve covered accuracy and uptime, but quality assurance also means comprehensive information coverage. For us, the proceedings of 20,000 US courts, thousands of news outlets, and hundreds of regulatory watchlists are critical intelligence, so this is an enormous challenge. We have to take extreme steps to ensure all the relevant data is brought to bear on our system’s decision process and is presented to the client in a user-friendly manner. One large pain point we’ve identified for our customers is the need for continually updated critical information. We’ve addressed it with a new product called Ongoing Monitoring, which continuously analyzes critical databases and alerts clients to the behavior of their subject even after we’ve published the initial report. In this way, AI sifts through critical datasets and extracts meaningful information so that our clients receive immediate alerts.
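
To make the continuous-monitoring idea concrete, here’s a minimal sketch of a polling-and-alerting loop; `query_watchlists` and `notify_client` are hypothetical stand-ins for real data-provider and notification integrations, not Intelligo’s actual Ongoing Monitoring internals.

```python
import time

def monitor_subject(subject_id, query_watchlists, notify_client,
                    seen_ids, interval_s=3600):
    """Poll critical databases and alert the client on any new record.

    query_watchlists and notify_client are injected placeholders for
    real data-source and alerting integrations.
    """
    while True:
        for record in query_watchlists(subject_id):
            if record["record_id"] not in seen_ids:
                seen_ids.add(record["record_id"])
                notify_client(subject_id, record)  # immediate alert on new findings
        time.sleep(interval_s)
```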

By taking quality assurance seriously, by frequently (even obsessively) reviewing and assessing every aspect of the model’s mechanics, and by ensuring data is always up to date and available through live alerts, you lay the foundation for a compliant AI system.

AI Needs UX

The second part goes to the crux of the explainable AI problem: building the tools and the interface necessary to justify the machine’s decisions to its human operators.

While this is the subject of ongoing academic research, there are ways of tackling this challenge today. This is particularly true in supervised learning systems, where the model’s features are known and can be measured. Let me give you an example, drawn from our space, that might help illustrate how something like this can be achieved.

Imagine you’re doing background research on Peter Jones, and you know a few simple facts about him:

  1. His middle initial is D
  2. He was born in 1973
  3. He lives in New York
  4. He works for a big soda company

Now imagine you’ve found a court case that appears to mention Peter, but the information it contains is sparse at best. The Peter mentioned in the case has the same job, birth year, and middle initial as your Peter. The court case, however, is from California…and your Peter lives in New York. If you decided that this was a match, despite the potential conflict, and presented it without any explanation to a supervisor, then you would run into some real problems.

Your supervisor is going to spot the discrepancy and might entirely disregard your recommendation. In this story, the investigator is in the position of an inscrutable AI system, and the supervisor is the human operator left to evaluate its output.
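
To see how an explainable interface can defuse that standoff, consider a minimal sketch assuming a simple weighted feature-match model (the weights and field names are purely illustrative): instead of returning a bare match decision, it reports which facts support the match and which conflict.

```python
# Illustrative weighted feature match with a built-in explanation.
# Weights and field names are hypothetical, not a production model.
WEIGHTS = {"middle_initial": 0.15, "birth_year": 0.30,
           "employer": 0.30, "state": 0.25}

def explain_match(subject: dict, candidate: dict) -> dict:
    """Score a candidate record and report supporting and conflicting facts."""
    score, supporting, conflicting = 0.0, [], []
    for feature, weight in WEIGHTS.items():
        if subject.get(feature) == candidate.get(feature):
            score += weight
            supporting.append(feature)
        else:
            conflicting.append(feature)
    return {"score": round(score, 2),
            "supporting": supporting, "conflicting": conflicting}

peter = {"middle_initial": "D", "birth_year": 1973,
         "employer": "BigSoda", "state": "NY"}
case = {"middle_initial": "D", "birth_year": 1973,
        "employer": "BigSoda", "state": "CA"}
print(explain_match(peter, case))
# {'score': 0.75, 'supporting': ['middle_initial', 'birth_year', 'employer'],
#  'conflicting': ['state']}
```

Presented this way, the supervisor can see exactly why the match was proposed and can weigh the conflicting residence against the three supporting facts, rather than dismissing the recommendation outright.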

Think back to your high-level goals for developing the technology in the first place. For us, it was to democratize trust in the business world by giving investors advanced tools to run background checks. We’ve been able to achieve those goals not only by providing a comprehensive product but by giving clients insight into the decisions that led to our final reports. Technically, we do so by highlighting the facts that helped us draw conclusions, making it transparent how connections were made and how the analysis was conducted. These additions give vital insight into the decision-making process, making it possible for a client to assess the output provided.

In short, meeting the explainable UX challenge means walking the user through the decision-making process: providing a conclusion along with the relevant broad strokes of the story and a general sense of confidence.

AI Needs Human Eyes

Finally, for many applications, you should seriously consider the use of human quality assurance as the final review of any important algorithmic output. For many of our premium reports, we use senior analysts to validate key data output, adding a layer of assessment that is certainly appreciated by our clients.
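
As one way such a workflow might be operationalized, here’s a hypothetical sketch of threshold-based routing, where low-confidence findings are queued for an analyst instead of going straight into the report; the threshold and field names are illustrative.

```python
# Hypothetical routing of findings: high-confidence results pass through,
# the rest wait for a senior analyst. Threshold is illustrative.
REVIEW_THRESHOLD = 0.85

def route_finding(finding: dict, auto_queue: list, analyst_queue: list) -> None:
    """Send high-confidence findings through; queue the rest for human review."""
    if finding["confidence"] >= REVIEW_THRESHOLD:
        auto_queue.append(finding)
    else:
        analyst_queue.append(finding)  # human validates before publication
```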

People trust people, and an additional layer of quality assurance can up the confidence of those relying on your system.

Four Questions for AI Vendors

Instead of ending with a summary of what’s been covered, I thought I’d add a useful set of questions for tech buyers to ask tech vendors to make sure they’re not getting an opaque system they’ll never operationalize.

    1. What do you mean by “AI”? AI is a vague term. It’s a buzzword. So, before you let vendors put their system under the magic AI umbrella, make sure you get a clear breakdown of the product’s mechanics. What decisions are made via machine learning? What decisions are made using rule-based algorithms or other techniques? Why? What are the implications? Demand that vendors demystify their terminology and technology.
    2. What do you mean by “explainable”? Take them to task on how their UI delivers credible, compliance-ready justification…or, at the very least, how you can access the data that underlies a given decision. If a vendor gets squirrely around either of these requests, it’s probably best to take your business elsewhere.
    3. What are your QA procedures? For the reasons discussed above, you need to know how they choose their data sources, how they process their data, and how they can assure continued accuracy over time.
    4. How do humans come into play?

My last piece of advice revolves around the human element. You need to know exactly how the operators will be interacting with the system. What are their core responsibilities? What will they be reviewing? What aspects of the process do they influence?


About the Author

Nadav brings years of experience in product management and development, having completed significant web app and web design projects for clients in Australia and Israel. Nadav holds a BSc and BA from Monash University, Australia and an MBA in Technology Entrepreneurship from Tel Aviv University.

About Intelligo

Intelligo is on a mission to democratize trust by giving businesses in the investment space advanced capabilities to run comprehensive background checks. The first of its kind, our automated SaaS platform leverages AI and machine learning to tackle the complexities that otherwise define the industry.

A pioneer in comprehensive background checks, Intelligo has clients across the financial sector, including Fortune 500 companies, investment banks, private equity firms, investment consultants, hedge funds, allocators, and more.

Find out how Intelligo can work for your business at www.intelligo.ai.


Unlock the “Black Box”

The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head-on.

The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.

Download Now