An Honest Guy on AI: Limitations and Applications of AI for Risk

If you have a low tolerance for buzzwords, you’re in the right place. This interview with Basis Technology COO Steve Cohen gives an insider’s perspective on AI and its impact on industry without utopian visions, apocalyptic scenarios, or vague terminology. (For those in risk technology, make sure to check out the last three questions).
Could you tell us a little about yourself and your background in AI?
One of my first jobs out of college was with Cognex, a firm specializing in machine vision. Cognex used some of the most advanced techniques of the time for measurement, machine control, and manufacturing feedback. While no one used the term “AI,” Cognex was definitely an AI technology company, and my time there thoroughly impressed upon me AI’s business-problem-solving potential.
After Cognex, I partnered with Carl Hoffman to found Basis Technology. We were both passionate about human language and computers, and we saw this company as an opportunity to unite these interests in one endeavor. I consider myself very fortunate to see that initial impulse evolve into a modern AI technology company dedicated to solving real-world problems through innovation.
How would you describe AI?
How do I define AI? I sometimes say it’s “the thing that you thought a computer couldn’t do but now it can.” In other words, the definition of AI is a moving target. “AI” is whatever human capabilities we aren’t yet used to computers having. Naturally, this process forces us to rethink what we mean when we say “human capabilities.”
But that doesn’t make definitions useless. Right now, the line between what is and isn’t AI is proactivity:
- Not AI: explicitly programming a computer model to behave a certain way
- AI: a computer model that comes to conclusions and takes actions based on example outcomes it’s been given
Modern AI is all about training—not programming—an approach that’s almost frighteningly similar to how you teach a child. How does a parent teach their child, for instance, to distinguish between a cat and a dog? The adult may occasionally give a child explicit definitions of each animal, but that’s not how kids really learn concepts. Instead, for the most part, it’s from inferences the child draws through trial and error. When a child sees a cat for the first time, he or she might point and ask, “dog?” The parent would say in response, “no, that’s a cat.” The parent might also supply a few differences, “you see, dogs don’t have an [x] like that,” but the bulk of the characteristics of both animals are something the child’s mind builds on its own, through repeated comparison. This is essentially how we train modern AI models.
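To make the contrast concrete, here's a minimal sketch in Python. The toy features, data, and choice of scikit-learn are illustrative assumptions, not a description of any particular system:

```python
# A minimal sketch of the "programming vs. training" contrast.
# Toy features and data are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Not AI: an explicitly programmed rule the developer wrote by hand.
def classify_by_rule(weight_kg: float, pointed_ears: int) -> str:
    return "cat" if weight_kg < 6 and pointed_ears else "dog"

# AI: a model that infers its own distinctions from labeled examples,
# much like a child learning from repeated "no, that's a cat" feedback.
examples = [
    [4.0, 1], [5.0, 1], [3.5, 1],    # cats: light, pointed ears
    [20.0, 0], [30.0, 0], [8.0, 0],  # dogs: heavier, floppy ears
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier(random_state=0).fit(examples, labels)

print(classify_by_rule(4.5, 1))              # -> cat (hand-written rule)
print(model.predict([[4.5, 1], [25.0, 0]]))  # -> ['cat' 'dog'] (learned rule)
```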
What do you think about the technology, where it’s been, and where it’s going?
When I look back on my personal experience with AI in the early 90s, I can see that it was during that period that the technology gained true traction. From machine vision to early voice recognition technology, AI made real headway in industry through targeted applications that fit the limitations of the period’s hardware and software. It was these niche solutions that rescued the technology from the last so-called “AI winter.”
From that point until now, we have seen a slow burn explode. Because of rapid gains in hardware and software, AI innovations that have been around for decades are rushing into our businesses and homes. And for me, it’s amazing to see something that was a successful experiment become a daily reality. We can talk to our cars. We can ask our computers, phones, and homes to order dinner.
I mean, I am dictating this interview right now into a system that’s doing automatic voice recognition and transcription. And it works. It doesn’t work perfectly, but it works well enough to be useful. And it’s only getting better.
What are the most important developments in the AI industry? What are you most excited about now?
There are your front-page innovations, like self-driving cars or life-saving, AI-powered border security applications, but there are also the innovations that work behind the scenes. While I am obviously excited about the former, the latter are just as important.
Chief among these unsung heroes are the incredible improvements in machine translation. Google Translate and tools like it are, I daresay, infinitely better than they were a decade ago. They have unlocked a world of information once trapped behind language barriers. Translation tools are now always at work under the hood of modern applications, tirelessly pushing the boundaries of accessibility.
The area that we work in—using AI to connect data and draw insights from human-generated text—isn’t always considered front-page material, but we believe it also makes a sizeable impact on society. These applications are helping corporations and governments manage risk, bolster security, and save lives every day.
Where’s the hype? What’s being oversold?
Pretty much anything that gets reported on the front page of a newspaper is overhyped to some degree, but I’ll highlight two areas that I believe are particularly buzzy.
The first is self-driving cars. While I’m a huge fan of the technology and firmly believe that it’s the future, I’m also pretty scared of what’s on the road right now. We’ll likely see production vehicles in five to seven years, but they’re going to have to pass quite a few more layers of safety testing to get in our garages. At the moment, I think people should view the current technology with a healthy dose of skepticism.
The second is the job-takeover panic. There are valid concerns here, but the fact of the matter is, AI is nowhere near where it has to be to start displacing knowledge workers. In addition, technology always changes the landscape of work, and what first seems like doomsday is eventually seen as an opportunity. Farm tech booted many people out of the fields, but they landed in offices.
AI has gone through several hype cycles…are we on the edge of another?
The AI summer/winter cycle has been a staple of the last six decades, but I don’t think we’ll ever see another dark winter. Siri and Alexa are obviously AI, actually useful, and too ensconced in our everyday lives to allow for the serious downturns we’ve seen in the past. This technology is in phones and on bedside tables, serving as everyday reminders that AI is real and here to stay.
Basis Technology recently produced a booklet entitled The Honest Guide to AI for Risk. Why does risk need AI?
The simple answer: because everything can use AI.
The more specific answer is that AI is really good for comprehensive, thorough pattern identification, and this capability is essential to people working in risk. Analysts and investigators in this space are trying to navigate a landscape of billions of interactions and transactions, and it’s simply impossible for a person, or even an army of people, to review that volume of activity and monitor such a sea of information. But it’s a perfect application for computers.
AI is going to revolutionize transaction monitoring, pattern analysis for fraud and risk, and the identification and resolution of people against known databases. Humans will always be there for final analysis and decision making, but it’s only a matter of time before AI becomes integral to these processes.
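For a sense of what this looks like in practice, here is a minimal sketch of anomaly-based transaction monitoring. The synthetic data and the choice of scikit-learn's IsolationForest are illustrative assumptions, not a description of any production system:

```python
# A minimal sketch of AI-assisted transaction monitoring.
# Synthetic amounts and the model choice are illustrative only;
# real systems use far richer features than a single amount.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
normal = rng.normal(loc=100.0, scale=20.0, size=(1000, 1))  # routine activity
suspicious = np.array([[5000.0], [7500.0], [9000.0]])       # planted outliers
transactions = np.vstack([normal, suspicious])

# The model learns what "usual" looks like from the data itself,
# then flags the unusual remainder for a human analyst's final call.
detector = IsolationForest(contamination=0.005, random_state=0)
flags = detector.fit_predict(transactions)  # -1 marks an anomaly

flagged = transactions[flags == -1]
print(f"{len(flagged)} of {len(transactions)} transactions flagged for review")
```

The division of labor matches the point above: the model narrows thousands of events down to a handful, and a human makes the final call.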
What are some of the major roadblocks to AI integration in this space and how can they be overcome?
The first major roadblock I see is making the technology work well enough to be actually useful. We stare down this challenge daily.
The second major roadblock is making systems that can be evaluated and understood. Explainability is important because of regulations. Regulators want to understand how a given piece of technology works so that they can evaluate whether it meets their standards, and, with AI, that can be very challenging. Many of the most advanced and powerful AI technologies are inherently opaque. Deep learning models, for example, are basically inscrutable. This fact is, understandably, a hard pill for regulators to swallow.
These challenges mean there are always two tests for AI to pass. First, we need to constantly ask ourselves how our project solves a defined, specific real-world problem. We need to put prototypes in the hands of practitioners and make sure that they actually make lives easier.
For the second, we need to work hand-in-hand with regulators during the development and integration of AI systems, setting realistic expectations along the way. In the end, people just need to trust AI to do its job, and the best way to get there is to not gloss over the rough patches in the road. There is also quite a bit of work being done now to understand how deep learning models actually operate, and the results of this research will likely play a role in getting regulators more comfortable with the technology.
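One way to see the transparency gap he describes: some models can print their learned logic in a form a regulator can read, while a deep network offers nothing comparably direct. A minimal sketch, with invented data and scikit-learn chosen purely for illustration:

```python
# A minimal sketch of model explainability. Data is invented;
# the point is that some models expose human-readable logic.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy risk features: [transaction_amount, foreign_counterparty (1/0)]
X = [[100, 0], [150, 0], [9000, 1], [8000, 1], [120, 1], [8500, 0]]
y = ["clear", "clear", "review", "review", "clear", "review"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A regulator can read the learned rules directly; a deep neural
# network has no comparably direct, human-readable form.
print(export_text(tree, feature_names=["amount", "foreign"]))
```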
Are there some rules of thumb you follow for separating the wheat from the chaff when it comes to AI technology vendors?
Make sure the vendor is being candid about the limits of their product. There is no panacea, and you should be very wary of vendors that advertise their product in such a fashion. In fact, it is exactly this kind of unbridled enthusiasm that led to AI winters past. Quality vendors will present the strengths and weaknesses of their solution, and give you a plan for mitigating those weaknesses. Make sure to press them on this point.
Secondly, make sure you reach an agreement with the vendor on how to evaluate the software. Measuring accuracy and performance is not easy for technologies that are fundamentally statistical, so a critical part of this process is having a realistic data set for testing. In our assessments, we use a human-created and fully validated truth set to evaluate how the machine functions.
As a side note, it can be an interesting exercise to take the human process that you’re looking to replace and run it through the same analysis. More often than not, you’ll be shocked by how error-prone humans are.
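Here is a minimal sketch of that kind of truth-set evaluation. The document IDs and labels are invented, and the same scoring applies whether the predictions come from the machine or from the human process it would replace:

```python
# A minimal sketch of scoring output against a human-validated
# truth set. Document IDs and labels are invented for illustration.
truth  = {"doc1": "match", "doc2": "no-match", "doc3": "match", "doc4": "match"}
system = {"doc1": "match", "doc2": "match",    "doc3": "match", "doc4": "no-match"}

true_pos   = sum(1 for d, t in truth.items() if t == "match" and system[d] == "match")
pred_pos   = sum(1 for v in system.values() if v == "match")
actual_pos = sum(1 for v in truth.values() if v == "match")

precision = true_pos / pred_pos    # share of flagged matches that were right
recall    = true_pos / actual_pos  # share of true matches the system found
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.67 recall=0.67
```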
Lastly, you need to set yourself up for the long game when you work with a vendor on implementation. Your solution should work for now and for the years to come. The great thing about machine learning applications is that they, by design, improve over time. Make sure you’ve got a plan in place for taking advantage of this feature.