What Is AI? Complex Technology in Plain English: Getting AI to Comply Series, Part I
This fall, we’re sharing a series of blog posts exploring AI’s impact on highly regulated industries and the major compliance barrier that stands in the way: the “black box” problem.
Setting up the rest of the series, this week’s blog post provides clear explanations of core AI technology and terminology.
Artificial Intelligence, Explained
There’s a quote from the classic science fiction writer Arthur C. Clarke that perfectly captures the modern AI phenomenon: “Any sufficiently advanced technology is indistinguishable from magic.” AI is about as sufficiently advanced as technology gets in the 21st century, so it should be no surprise when its advertised capabilities don’t seem to have clear limitations.
But it’s not just its inherent complexity that gives AI the appearance of unlimited potential. It’s also how poorly it’s explained.
So, to help demystify humanity’s latest magical machine, I’ve compiled a set of clear definitions and intuitive explanations of AI’s most important terms. As a solid grasp of AI is becoming more and more critical to everyone in the business world—let alone anyone reading this blog—I believe you’ll find these descriptions extremely useful.
artificial intelligence (AI)
(ärdə’fiSHəl in’teləjəns) noun
AI is what we use to build algorithms that solve complex problems. What’s a complex problem? Good question. It’s a problem where a simple relationship between inputs and outputs is difficult to create. Voice to text is a perfect example of such a problem.
Imagine you’re designing a voice to text solution. The first challenge is calibrating your system to analyze the input. The input isn’t text: It’s a signal, a wave—a time series. And each instance will be unique. Even the same person repeating the same word doesn’t produce a perfectly identical, repeating pattern. As a consequence, you cannot create clean definitions for inputs or write “if/then” rules: If I see X and Y wave pattern, then I know that it’s the word “Z.” Even if you’re only considering inputs, you’re looking at a complex problem.
To solve your dilemma, you need a more sophisticated means of mapping inputs to outputs: a flexible methodology for learning the key attributes that allow your system to match a signal pattern to a given word. Here’s where AI comes in. Using machine learning, you can teach your system to recognize signal-to-word matches by letting it build a model of how they relate.
First, you create (or buy) a dataset composed of input/output matches. We typically call it a “gold standard,” as it provides the true, human-curated mapping of inputs to outputs. In this dataset, there would be a variety of pronunciations of a particular spoken word coupled with the appropriate written word. For example, numerous audio files of the word “helicopter” would be labeled as inputs and matched to “helicopter” as a labeled text output. These input/output matches are then fed to a machine learning (ML) algorithm, which, through this training process, develops an understanding of the core features of the signal patterns that map to particular words. This understanding is known as a model, and it allows you to build voice to text applications that can deal with input variety.
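To make the training idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the “recordings” are made-up three-number feature vectors rather than real audio signals, and a simple nearest-neighbor lookup stands in for a genuine ML algorithm. The point is only to show a gold standard of labeled input/output pairs letting a system handle input variety.

```python
# A toy sketch of the "gold standard" idea: labeled input/output pairs
# produce a model that can handle unseen inputs. Real voice-to-text works
# on audio signals; each "recording" here is an invented feature vector.

def train(gold_standard):
    """'Training' here just stores the labeled examples (1-nearest-neighbor)."""
    return list(gold_standard)

def predict(model, features):
    """Label a new input with the word of its closest training example."""
    def distance(example):
        stored_features, _word = example
        return sum((a - b) ** 2 for a, b in zip(stored_features, features))
    _, word = min(model, key=distance)
    return word

# Several slightly different "pronunciations" of each word, with labels.
gold_standard = [
    ([0.9, 0.1, 0.2], "helicopter"),
    ([1.0, 0.2, 0.1], "helicopter"),
    ([0.1, 0.9, 0.8], "hello"),
    ([0.2, 1.0, 0.9], "hello"),
]

model = train(gold_standard)
print(predict(model, [0.95, 0.15, 0.15]))  # a new, unseen "pronunciation"
```

Even this toy version shows the shape of the process: the curated pairs go in, a mapping comes out, and the mapping is then used on inputs that never appeared in the training data.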
What I’ve just described to you is classic ML, and it is through ML that we build applications for hard problems. While there are different types of ML (which we will get to later), this is the key approach that defines modern AI.
machine learning (ML)
(mə’∫in ‘l3rnIŋ) noun
Machine learning is the training process we described above, but the term is often used to describe what an application does after training is complete: It predicts. Training produces an understanding of the relationship between inputs and outputs, but this understanding (the model) isn’t perfect. Because of this, when an application produces outputs on the basis of that model, it’s making a prediction: Given what it has seen, it predicts that input “X” should be matched with output “Y.”
This terminology might throw some people off, as we rarely think of our own judgment process in such a manner, but that isn’t because we aren’t making predictions. Anytime our brain attempts to recognize anything, it’s trying to predict what that thing is. It’s trying to match our sensory data to pre-existing conceptual patterns, and, when it finds a perceived match, it produces an interpretation: a prediction of what something is. Sometimes we know we’re only guessing—memory fails, or we aren’t terribly familiar with a given pattern—but sure or uncertain, every interpretation our brain makes from sensory data is (more or less) a prediction.
Given this fact, the similarity between modern AI (ML) and our own organic intelligence should now be a little more apparent.
model
(‘mɑdl) noun
As already mentioned, a model is the output of the training process. It is the mapping between inputs and outputs that the ML algorithm has made based on the data it was given. Instead of more unfamiliar definitions, let me give you a relatable, intuitive explanation.
When you study for an exam, you could look at previous exams and memorize the question/answer pairs. This method works wonderfully…if the exam is only composed of questions that have appeared on previous tests. But, if you get asked a new question, one that didn’t show up before, you’re out of luck.
The other, better way of studying is to look at the previous exams and create concepts from which you can generalize. Using this approach, you can handle the questions you’ve seen before and those you haven’t.
This is a model: a general concept of how Xs relate to Ys.
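The exam analogy can be sketched in a few lines of Python. The question/answer numbers below are invented: strategy 1 memorizes the pairs, while strategy 2 fits a least-squares line, a bare-bones “model” that generalizes to a question it has never seen.

```python
# Memorizing vs. generalizing, in miniature. Suppose past "exam questions"
# are numbers x and the correct "answers" happen to follow y = 2x + 1.
past_exams = {1: 3, 2: 5, 3: 7}

# Strategy 1: memorize question/answer pairs. Fails on anything new.
memorized_answer = past_exams.get(4)  # a question that never appeared

# Strategy 2: build a model -- a general concept of how Xs relate to Ys.
# Here, fit a line y = a*x + b by least squares over the same pairs.
xs, ys = list(past_exams), list(past_exams.values())
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x
model_answer = a * 4 + b  # handles the unseen question

print(memorized_answer)  # None -- memorization is out of luck
print(model_answer)      # 9.0 -- the model generalizes
```

The dictionary lookup returns nothing for the new question; the fitted line answers it correctly because it captured the underlying relationship rather than the individual pairs.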
natural language processing (NLP)
(‘næt∫(ə)rəl ‘læŋgwIdʒ ‘prasesIŋ) noun
AI is divided into a number of subcategories that deal with different kinds of data, and NLP is the subcategory that focuses on computationally handling problems related to human languages.
There is a large variety of NLP applications. Some are used to extract quantities, entities, or insights from large datasets: NLP helps search engines understand language and deliver meaningful search results, and it helps financial institutions sift through large numbers of financial documents to find key phrases or concepts. Others are used to transform raw information into prose: Some major newspapers now use natural language generation to produce short articles. These are just a few examples of how this AI technology is used.
NLP is particularly important given how much human information is stored in language…a challenge that has grown exponentially since the arrival of the internet. There is a veritable (and ever-growing) ocean of data relevant to the everyday lives of individuals…as well as the operations of commercial and government organizations. Unfortunately, the size of the opportunity goes hand in hand with the difficulty of seizing it: Sure, there’s a mountain of important data, but separating the relevant from the refuse is the Everest of data science challenges.
structured & unstructured data
(‘str^kt∫ərd ənd ^n’str^kt∫ərd ‘dætə) noun
Structured data has some sort of known, unambiguous order that can be easily understood. Tables, spreadsheets, taxonomies, and protocols are all examples of structured data.
Unstructured data does not have a formal, clear order that can be easily understood. Unstructured doesn’t mean “no rules”: Prose (hopefully) follows the rules of grammar while being unstructured. Instead, unstructured data describes information whose format is difficult for a computer to interpret—and who didn’t find grammar a pain?
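A small Python sketch makes the contrast concrete. The invoice line and the regex below are hypothetical; the point is that structured data can be read directly by column name, while unstructured data forces a program to guess at patterns.

```python
import csv
import io
import re

# Structured data: a known, unambiguous order. Column names tell a
# program exactly where each value lives.
structured = io.StringIO("name,amount,date\nAcme Corp,1200,2019-06-01\n")
row = next(csv.DictReader(structured))
print(row["amount"])  # '1200', read directly by column name

# Unstructured data: the same facts buried in prose. The grammar has
# rules, but there's no fixed format a program can rely on, so we fall
# back on fragile heuristics like this (hypothetical) regex.
unstructured = "On June 1st, 2019, Acme Corp was invoiced $1,200."
match = re.search(r"\$([\d,]+)", unstructured)
print(match.group(1).replace(",", ""))  # '1200', if the pattern holds
```

The structured version will keep working for any well-formed row; the unstructured version breaks the moment someone writes “twelve hundred dollars” instead.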
supervised & unsupervised learning
(‘süpər vīzd ənd ən’süpər vīzd ‘l3rnIŋ) noun
Supervised learning is the classic way ML algorithms build their models during training, and it is the process we’ve described in every ML example so far. By feeding an ML algorithm a dataset that has labeled inputs and outputs (i.e., annotated data or “gold standard”), the algorithm infers the connections between inputs and outputs, allowing it to produce a model.
Unsupervised learning is also about algorithms interpreting data to produce models, but there is a twist—the data the machine trains on are not annotated. Instead, the ML algorithm identifies patterns in the data without any assistance.
This approach is very good for finding and identifying new, useful patterns. For example, imagine you’re an e-commerce giant, and you have a list of all the people that visit your website. Let’s say you want to segment them so you can send each group a targeted newsletter. You could use conventional categories, like students, parents, etc. But what if you don’t think those are the best possible groupings? To find optimal divisions, you could use unsupervised learning to sort the data into groups, known as clusters, based on the inherent features of that data (e.g., demographics, buying habits, previous purchases), and that data alone. This method produces an interpretation free from pre-existing ideas or biases and could help you target your newsletters to the best audience possible.
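The clustering idea can be sketched with a minimal k-means implementation. The visitor data below is invented (each point is a hypothetical age and monthly purchase count), and real segmentation would use many more features, but the mechanics are the same: no labels go in, and groups come out based on the data alone.

```python
import random

# A minimal k-means sketch of unsupervised learning: no labels, just
# unannotated data sorted into clusters by its inherent features.
# Each (hypothetical) visitor is a point: (age, monthly purchases).
visitors = [(19, 2), (21, 3), (20, 2), (45, 12), (47, 11), (44, 13)]

def kmeans(points, k, iterations=10, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # start from k random points
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [
            tuple(sum(coords) / len(c) for coords in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

groups = kmeans(visitors, k=2)
for g in groups:
    print(g)
```

With this toy data, the algorithm discovers the two natural segments (younger, low-purchase visitors and older, high-purchase visitors) without ever being told they exist.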
deep learning (DL)
(dip ‘l3rnIŋ) noun
Deep learning is another subcategory of ML. DL models are multi-layered neural networks, where each layer captures a particular nuance of an input/output relationship. For example, if you were to use a deep learning approach to matching photos of dogs to the word “dog,” each neural net layer would add complexity (or resolution) to the features captured by the layer preceding it.
The first layer would capture the most general features of a photo of a dog: light, shadow, curve, etc. The next layer would add more detail to each of those general characteristics: curves become more discernible shapes; light and shadow begin to define an object. The next layer would do the same with this higher-resolution interpretation, until eventually the entire complexity of the dog is identified. In DL, each layer creates a more refined interpretation of the features handed down by the layer preceding it, and these layers can become incredibly dense—or deep—hence the name.
neural nets
(‘nʊrəl nets) noun
The architecture of neural nets is inspired by the design of the human brain and is the technological backbone of the DL approach. It’s basically a network of neurons, where the output of each individual neuron is then used as one of the inputs for the next neuron. By allowing for such sophisticated models, this design has enabled ML to tackle far more difficult tasks than was possible via traditional methods.
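Here is a tiny forward pass through such a network, sketched in Python. The weights and inputs are made up (a real network learns its weights during training); the sketch only shows the defining structure: each neuron combines its inputs, and its output feeds the neurons of the next layer.

```python
import math

# A tiny feed-forward sketch: each neuron computes a weighted sum of its
# inputs plus a bias, squashed by an activation function, and its output
# becomes an input to the neurons in the next layer. All weights here
# are invented; real networks learn them during training.

def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def layer(inputs, neurons):
    return [neuron(inputs, w, b) for w, b in neurons]

inputs = [0.5, 0.8]

# Layer 1: two neurons, each reading both raw inputs.
hidden = layer(inputs, [([0.4, -0.6], 0.1), ([0.9, 0.2], -0.3)])

# Layer 2: one neuron reading the *outputs* of layer 1.
output = layer(hidden, [([1.2, -0.7], 0.05)])

print(output)
```

Stacking more such layers, with many more neurons per layer, is what turns this toy into the deep networks described above.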
We live in an exciting time. Thanks to the accessibility of large datasets, enormous computational power, and innovative algorithms, we are finally able to train and run deep neural networks. Containing millions of interconnected neurons, these models are capable of solving problems that would have been unthinkable for AI systems just a decade ago. In this sense, the AI revolution is happening right now.
Because of the universal impact this revolution is having, everyone needs to understand AI basics—and they shouldn’t have to be a data scientist to do it. It’s my hope that this introduction to these buzzy—but vital—terms provided you with the beginning of that foundation. Armed with this knowledge, you’ll not only be able to navigate the rest of this blog series with ease…you’ll also have ample material to be more than impressive at the rest of this year’s dinner parties.
About the Author
Dr. Kfir Bar is the Chief Scientist of the BasisTech text analytics team. He has spent many years working in a wide range of natural language processing disciplines, including statistical machine translation, machine learning, ontologies, and language generation. Kfir joined Intuview in 2005 as CTO, supporting national security and counter-terrorism missions by deducing authorship, sentiment, intent, and other contextual information. In 2013, he co-founded Comprendi, which transforms big data into actionable marketing insights. Kfir lectures at three different universities in Israel where he teaches courses in computer science, digital humanities, machine translation, algorithms, and NLP. Kfir holds a Ph.D. in computer science from Tel Aviv University for a thesis on Semantics and Machine Translation.
Unlock the “Black Box”
The only way AI’s going to make a real impact in finance, healthcare, and other highly regulated industries is if the “black box” problem is tackled head-on.
The Amazing, Anti-Jargon, Insight-Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries does exactly that. Featuring in-depth pieces from almost a dozen subject-matter experts, this handbook provides a comprehensive breakdown of the problem… and detailed strategies to help you create a solution.