The AI Hype Machine: Part II

This spring, we’re sharing a series of blog posts exploring where AI technology delivers on its promises and where it fails to live up to the hype, especially as it applies to the risk industry. Today we continue our brief overview of the rise and fall of AI enthusiasm over the past few decades.
Missed last week’s update? Check out The AI Hype Machine: Part I, then come back.
Teaching the Machine to Learn
“Every time I fire a linguist, the performance of the speech recognizer goes up” – Fred Jelinek, famed IBM researcher and academic
Machine translation had always been seen as a milestone on the road to real AI. The ability to translate implies linguistic understanding, and linguistic understanding is the foundation for a system that could pass pioneering mathematician Alan Turing’s famous test of AI viability: can a machine use language well enough to trick a person into believing that it’s not a machine?1 To build a better translator, researchers had to look beyond experts and rules and return to AI’s roots: they had to build a machine that learns like a person.
Human concepts are complex. Unthinkably complex. The problem of defining a chair—a Philosophy 101 staple—illustrates this principle with startling force. Before attempting it, one would think defining such an everyday item would be trivial. Nope. Not by a long shot.
Attempt 1: “Chairs are something you sit in.”
Counter: “So are couches.”
Attempt 2: “Chairs are something you sit in that only holds one person.”
Counter: “Stools are something that you sit in that only holds one person, but stools are not chairs.”
Attempt 3: “Fair enough! Chairs are something you sit in that only holds one person and that has a back and arms.”
Counter: “Some stools have a back, and some chairs have no arms…or legs. A bean bag chair doesn’t really have any of those, but it’s still a chair.”
Attempt 4: [silence]
In short, a definition of a chair that doesn’t exclude things that are chairs or include things that aren’t is simply not forthcoming.
There are many lessons embedded in the chair problem, but one of the biggest is that complete, formal definitions for the concepts we effortlessly and accurately employ every day might just be impossible. Humans can’t comprehend the magic at work in their own communication, so the machines built using that flawed understanding necessarily fall short.
This leads to an important question: if it’s so hard to properly define concepts, how do we ever learn them?
Answer: through iterative comparison.
When children begin to learn language, they point at things and ask what they are. A child might point at a small, short-haired dog and ask, “dog?” and their parent would likely answer, “yes.” Then that same child might point at a very different dog, large and long-haired, and ask, “dog?” and their parent would likely also answer “yes.” While the parent might supply some broad strokes to help their child understand what makes a dog a dog, those are hardly sufficient to actually identify dogs. Instead, through the process of repeatedly matching concept to concrete examples, the child’s mind builds an intricate set of features that qualify creatures as “dogs.”
Many of these features, if not most, are entirely unknown to the child.
And this is precisely how machine learning (ML) works. Instead of building a set of rules for the machine to operate on, ML—specifically supervised ML—works by supplying a system with a multitude of input/output pairs. This process is known as “training.” To continue with the dog example, a series of pictures of dogs would be supplied as inputs, each paired with “dog” as the output. Through iterative comparison, the system builds a set of features that define “dog,” the precise nature of which remains mysterious even to the system’s builders. Neural network technology adds further layers of distinction, allowing AI systems to construct ever more nuanced and accurate conceptual models.
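To make the idea concrete, here is a minimal sketch of supervised training in Python. The library (scikit-learn), the toy measurements, and the labels are all invented for illustration; a real image classifier would train on pixel data and many thousands of examples.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each animal is described by a few numeric
# features (weight in kg, ear length in cm, shoulder height in cm).
inputs = [
    [4.0, 6.0, 25.0],    # small, short-haired dog
    [35.0, 12.0, 60.0],  # large, long-haired dog
    [4.5, 5.0, 23.0],    # house cat
    [5.0, 7.0, 24.0],    # another small dog
    [3.8, 4.5, 22.0],    # another cat
]
outputs = ["dog", "dog", "cat", "dog", "cat"]  # the labels a "parent" supplies

# Training: the model builds its own internal notion of what separates
# "dog" from "cat"; nobody writes those rules by hand.
model = RandomForestClassifier(random_state=0)
model.fit(inputs, outputs)

# A new, unseen animal is classified using the learned features.
print(model.predict([[30.0, 11.0, 55.0]]))  # most likely ['dog']
```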
As a relevant side note, in unsupervised learning the system is not provided with labeled input/output pairs at all. Instead, it’s left to its own devices to infer the structure and groupings hidden within the input data.
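By contrast, here is a minimal sketch of the unsupervised case, again with invented numbers and with scikit-learn’s k-means clustering standing in for the algorithm: the system groups similar examples on its own, but it is never told what any group is called.

```python
from sklearn.cluster import KMeans

# The same kind of animal measurements, but with no labels attached.
unlabeled = [
    [4.0, 6.0, 25.0],
    [35.0, 12.0, 60.0],
    [4.5, 5.0, 23.0],
    [30.0, 11.0, 55.0],
]

# The algorithm groups the examples by similarity on its own; it finds
# two clusters, but never learns the words "dog" or "cat".
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unlabeled)
print(clusters)  # e.g. [0 1 0 1]
```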
It was these developments that enabled machines to begin accurately translating human text and that redeemed AI from its second bout in purgatory.
Winter Is Coming
While many factors laid the groundwork for AI’s troubled history, there are at least two clear culprits.
The first is the simple fact that the term “AI” is a nebulous mass. AI is an academic discipline, an industry, and an idealized potential technology. While they are three distinct things, all three have been inextricably linked from birth.
As an academic discipline, AI has grown steadily over the past 60-plus years, and the same is true of the AI-technology industry. But this remarkable upward trend has always been overshadowed and subsumed by the AI that could be. In the sixties, the AI that was helped scientists better understand the human mind, but the AI that could be would be a marvelous, mechanical replica of that mind. In the eighties, the AI that was helped companies in limited yet meaningful ways, but the AI that could be would revolutionize business. Because the concept is monolithic, the same ideal AI that captured imaginations and fueled funding dollars would drag everything down with it when the bubble burst.
The second culprit? Shoddy trend analysis. It’s easy, but almost always wrong, to project from limited data—no matter how convincingly it seems to curve toward the heavens.
From 1977 to 2005, the number of American Elvis impersonators grew from 150 to 85,000, and, if that trend continues at the same rate, one in three people worldwide will be regularly moonlighting as the King by 2019.2
“If that trend continues at the same rate” is a deceptively plausible justification for what is little more than speculation, because, as Richard Tomkins points out, “The thing about trends, however, is that they seldom do continue.” And when investors are relying on things that seldom continue continuing, bad stuff happens.
Because AI is still a catchall, undifferentiated concept, and journalists are still launching lofty projections off of partial data, a correction is to be expected at the macro level. This latest deflation may not cause popular sentiment to entirely implode, but in a world where every startup is built on blockchain technology, “best of breed” AI, or both, the likelihood that the inevitable losses will temper the current mania is high.
Learn more
How will current advancements in AI filter into industry, particularly, risk technology? Check back next week, or download the complete Honest Guide to AI for Risk now.
1 Alan Turing, “Computing Machinery and Intelligence”; Gideon Lewis-Kraus, “The Great AI Awakening”
2 Richard Tomkins, “A theory on trends”