An Honest Guy on AI, Part II: AI, Hype, and Risk Technology

If you have a low tolerance for buzzwords, you’re in the right place. This interview with Basis Technology VP of Product Chris Mack gives an insider’s perspective on AI and its impact on industry without utopian visions, apocalyptic scenarios, or vague terminology. (For those in risk technology, make sure to check out the last two questions).
Missed our previous “Honest Guy on AI” post from COO Steve Cohen? Check it out then come back here.
Could you tell us a little about yourself and your background in AI?
In 1995 I started Exec.Net—a small ISP my team built by convincing businesses (through door-to-door sales) that the internet was going to be important. I got lucky. The industry boomed and after five years, I sold Exec.Net and transitioned from internet services to big data.
I’ve been in big data for 20 years, and machine learning has been part of that journey from the beginning. During this period, I worked for Lattice Engines, a B2B predictive analytics provider. Lattice Engines fed all the enterprise data we could collect—CRM, transactional, customer history, and third-party data—to a machine learning algorithm that used the aggregated information to recommend the next best action for B2B sales reps. These recommendations improved the effectiveness of our clients’ sales teams by an average of 20%—a result that awakened me to the real-world impact AI technology can have.
I’ve been at Basis Technology for five years now. Basis Technology specializes in solving human language problems for businesses and governments using AI. Our approach is incredibly grounded. From deep learning to text embeddings, we’re on the cutting edge, but we only integrate innovative technology where and when it makes a substantive performance difference. That approach is something I very much appreciate, as I’m a true believer in the business value AI can create.
How would you describe AI?
Artificial intelligence is what isn’t technologically possible at a given moment—a label for the machine capabilities that are just out of reach. We just keep moving the goalposts. What was “AI” just a few years ago now goes by other, less sci-fi monikers.
But, when we’re talking about AI in practical terms (which is what I care most about), what we really are referring to is a collection of machine learning techniques that we can use to make people and systems more effective.
What do you think about the technology, where it’s been and where it’s going?
As excited as I am about AI technology, I’m even more excited about the data. Algorithms will continue to improve year-over-year, leading to incremental breakthroughs in performance and accuracy. However, I strongly believe that advances in the collection, cleansing, and accessibility of data will ultimately cause the biggest gains in what is possible for AI systems.
In games like chess and go, AI systems can outstrip even the best human players. These leaps in machine capability have come through improved leveraging of training data. As we look at more difficult applications of AI, it will also be the availability and curation of data making the difference.
Current natural language processing (NLP) and computer vision technology has been a real game-changer in terms of what we consider data. Traditionally, data has been a term used to describe information that is numerical in nature or has been, to some degree or another, organized for machine consumption. NLP and computer vision have opened the door to a whole new definition, as everything from blog posts to Instagram photos can now be ingested and understood (to some degree) by machines. A huge percentage of available information—anywhere from 60% to 80%—falls under this category, and it’s incredible to consider how much AI applications will improve as the automated processing of this data improves.
Along with data, human feedback—known as “human-in-the-loop”—will be another key to the successful deployment of AI. For most AI applications, true optimization will require the guidance of a human operator. These subject-matter experts make fine-grained distinctions and correct errant conclusions. This process is complementary to data-based training, allowing AI systems to learn outside the boundaries of their training data.
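One way to picture that cycle is a system that auto-accepts confident predictions and escalates the rest to an expert, whose corrections feed back in. This is a minimal sketch under assumed details—the toy classifier, confidence threshold, and simulated expert answer below are all hypothetical, not any real product’s design:

```python
def toy_model(text):
    """Hypothetical classifier returning (label, confidence)."""
    if "refund" in text:
        return "service", 0.95
    return "sales", 0.55  # low confidence: the model is unsure

corrections = {}  # expert feedback accumulated over time

def classify(text, threshold=0.8):
    """Auto-accept confident predictions; escalate the rest to a human."""
    if text in corrections:              # a prior human correction wins
        return corrections[text], "human"
    label, confidence = toy_model(text)
    if confidence >= threshold:
        return label, "auto"
    expert_label = "service"             # simulated subject-matter expert
    corrections[text] = expert_label     # feed the correction back
    return expert_label, "human"
```

The point of the sketch is the feedback path: once the expert has ruled on an input, the system answers from the correction store rather than the model, letting it improve beyond what its training data covered.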
In my opinion, data and human guidance are the foundation for future AI success stories.
Where’s the hype? What’s being oversold?
Honestly, hype is everywhere. One of the biggest offenses is the packaging of an AI application as a panacea, and chatbots are a great example of this. Only a handful of months ago, chatbots were the hottest thing in AI. Firms were rushing to integrate the software into their websites and call centers. But sky-high expectations were quickly tempered by the limitations of the technology. Chatbots could not provide the level of targeted interaction needed, and the enthusiasm quickly died.
The best bots today use far less AI than advertised—and are usually just a small component of what is essentially a deterministic message workflow. For example, AI would be used to infer whether a user’s question was sales or service related and then direct the conversation down the appropriate branch. Each branch would be composed of questions that were pre-programmed by a human, not generated by some magic AI in the cloud.
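That pattern can be sketched in a few lines. Everything here is illustrative: a keyword lookup stands in for the AI intent classifier, and the scripts are invented, but the structure—AI picks the branch, humans wrote the branch—is the one described above:

```python
def intent(message):
    """Stand-in for the AI piece: classify a message as sales- or service-related."""
    service_words = {"broken", "refund", "error", "help"}
    return "service" if service_words & set(message.lower().split()) else "sales"

# The rest is a deterministic workflow: human-written scripts, no AI involved.
SCRIPTS = {
    "sales": ["What product are you interested in?", "What is your budget?"],
    "service": ["Which product is giving you trouble?", "When did the issue start?"],
}

def next_questions(message):
    """Use the classifier only to pick a branch; the questions are pre-programmed."""
    return SCRIPTS[intent(message)]
```

Note how little of the system is actually “AI”: one classification call gates entry into an otherwise fixed decision tree.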
Basis Tech recently produced a booklet entitled The Honest Guide to AI for Risk. Why does risk need AI? What are some major challenges facing this sector and what applications might AI have to them?
There are a number of reasons why AI is a good fit for risk technology. There’s actually a line in The Honest Guide that really captures a fundamental concept that I think sets up my response here well: “Managing risk is about knowing.” I’ll explain why knowing is becoming increasingly difficult in just a moment, but the short answer is that AI helps organizations know. That’s why risk needs AI.
As we touched on earlier, data quantities are increasing exponentially and a sizeable portion of that data is unstructured. This data is not only wild in organization, it’s also incredibly diverse in nature. This data comes in all manner of forms, and, perhaps most importantly, an incredible array of languages. While technological advances have helped produce this predicament, the diversity and quantity of the data organizations must now be fluent in is also integrally linked to the continued rise of global trade.
While one might hope the regulatory burden would decrease as material difficulties increase, I’m afraid the two are positively correlated. Financial institutions have it particularly rough, as the list of people and organizations they serve is only growing in diversity and size and the penalties for compliance failures are at historic highs.
Only AI applications can process data at this scale with the requisite speed and rigor. But even more than efficiency and accuracy, unsupervised AI has the ability to approach and analyze data free of bias, opening up a world of insight human bias had shuttered.
What are some of the major roadblocks to AI integration in this space and how can they be overcome?
The biggest roadblock to integration is known as the “black box” problem. During the training process, machine learning systems build “models”—generalizable mappings from inputs to outputs. As you might imagine, models become incredibly complex. This is increasingly true as the systems being trained can create more nuanced, layered distinctions, as with deep learning. Applications built on such sophisticated AI technology can be remarkably accurate, but the models that power them are often far too intricate and dense to understand. They become black boxes: information enters and predictions come out, but the gears of calculation are inscrutable.
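A toy example makes the problem concrete. Suppose training has produced the weights below (invented numbers, standing in for the millions a deep network would have). The model happily outputs a class, but inspecting any individual weight tells a human nothing about why:

```python
# Weights as they might come out of training: useful in aggregate,
# but individually meaningless to a human reader.
WEIGHTS = [
    [0.73, -1.20, 0.05],   # class 0
    [-0.40, 0.90, 1.80],   # class 1
]

def predict(features):
    """Score each class and return the winner, with no stated reason."""
    scores = [sum(w * x for w, x in zip(row, features)) for row in WEIGHTS]
    return scores.index(max(scores))
```

Scale this from two rows of three numbers to a deep network’s many layers and the inscrutability described above follows: the prediction is reproducible, but the reasoning is not recoverable.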
This fact introduces at least two major problems. One is optimization. If you don’t know why a system made the judgment it did—and that judgment is wrong—how does one go about fixing it? The other is compliance. If you don’t know why a system made the judgment it did, how does one go about explaining that to a regulator? In this space, regulators expect a defensible, repeatable process, and defense is hard to come by without understanding. Recently, there’s been a fair amount of research into this problem, but, for now, it is certainly a barrier for AI technology.