Artificial Intelligence Terminology 101

By Katherine Paulson


This week’s blog is an introduction to some basic Artificial Intelligence (AI) terms and their definitions. As someone new to AI but a veteran of the financial services industry, I made an effort to learn the basics quickly when I arrived here at AlphaTrAI. Once I understood these terms, a new world of asset management, and all the exciting possibilities that came with it, opened up to me.


Although it may seem like AI terminology only recently started appearing online and on social media, it’s not new; it has been around for decades. In fact, the term “Artificial Intelligence” was coined in 1956 at Dartmouth College. AI’s incredible growth over the last decade comes down to a few key developments:


  • Computational Power
  • Access to Algorithms and Libraries
  • New Research
  • Access to Data


The growth in AI investing started in 2011 and continued its upward trajectory, with a big acceleration from 2015 through mid-2019, when VCs invested $60B in AI-related startups.

The Basics


When thinking about AI terminology, let’s begin at the top: Artificial Intelligence, Machine Learning (ML), and Deep Learning (DL). Artificial Intelligence encompasses the entire scope, Machine Learning is a subset within AI, and Deep Learning is a subset of Machine Learning: picture three nested circles.

In the simplest terms:


Artificial Intelligence = The computer is doing something human-like

Machine Learning = The computer is learning

Deep Learning = The computer is learning in a specific way
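
To show how broad that first definition can be, here is a toy “AI” sketched in Python from nothing but if-then rules: no learning at all, yet it mimics a human exchange. The bot and its rules are entirely made up for illustration:

    # A toy "AI" made only of if-then rules: no Machine Learning involved,
    # yet it recreates a small slice of human behavior. Purely illustrative.
    def greet_bot(message: str) -> str:
        text = message.lower()
        if "hello" in text or "hi" in text:
            return "Hello! How can I help you today?"
        if "price" in text:
            return "You can find details on our pricing page."
        if "bye" in text:
            return "Goodbye, have a great day!"
        return "Sorry, I didn't understand that."

    print(greet_bot("Hi there"))  # -> "Hello! How can I help you today?"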


Two types of Machine Learning


Now, let’s move on to Supervised Learning vs. Unsupervised Learning. The main thing to remember about Supervised Learning is that it is all about labeled data. Imagine a manager who oversees all of the data and tells the computer exactly what each data point is. The computer is fed examples, shown what is correct and incorrect, and taught the differences between them, with the end goal of training it to identify features. Features are the attributes you know to be important for making a prediction. Then, when new data the computer has never seen before is introduced, it uses the model it built (think of a model as a shortcut to a pattern the computer saw) to produce an output with a prediction of what that new data is. An example output might be expressed as “65% confident”.
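
To make this concrete, here is a minimal sketch of Supervised Learning in Python. It assumes the scikit-learn library, and the fruit measurements and labels are made up for illustration:

    # A minimal Supervised Learning sketch using scikit-learn (an assumed
    # library choice). The data and labels below are hypothetical.
    from sklearn.linear_model import LogisticRegression

    # Labeled examples: each row is [weight_grams, smoothness_score] (the
    # "features"), and each label is the "manager's" answer for that row.
    X_train = [[150, 0.9], [170, 0.8], [130, 0.3], [120, 0.2]]
    y_train = ["apple", "apple", "orange", "orange"]

    model = LogisticRegression()
    model.fit(X_train, y_train)  # training: learn a pattern from labeled data

    # New data the model has never seen before
    X_new = [[140, 0.7]]
    prediction = model.predict(X_new)[0]
    confidence = model.predict_proba(X_new).max()
    print(f"{confidence:.0%} confident this is an {prediction}")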


In Unsupervised Learning, you guessed it, it is the opposite: there are no labels, no manager, and no direction. The computer or system is given a data set and organizes the data in its own way. For example, a popular class of Unsupervised Learning algorithms is “clustering”. Clustering is the task of dividing the data points into groups such that points in the same group are more similar to one another than to points in other groups. Clustering algorithms are popular for customer segmentation.
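
To sketch what that looks like in practice, here is a tiny clustering example in Python, again assuming scikit-learn; the customer figures are invented:

    # A minimal Unsupervised Learning sketch: k-means clustering with
    # scikit-learn (an assumed library choice). The data is made up.
    from sklearn.cluster import KMeans

    # Unlabeled data: each row is [annual_spend_usd, visits_per_month].
    # No one tells the computer what these customers "are".
    customers = [[200, 1], [220, 2], [5000, 12], [4800, 10], [1500, 5], [1600, 6]]

    # Ask for 3 groups; the algorithm decides the grouping on its own.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    segments = kmeans.fit_predict(customers)
    print(segments)  # e.g. [0 0 1 1 2 2]: low, high, and mid spenders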


Going back to the basic definitions of AI, ML, and DL: remember that AI can be described simply as a computer doing anything human-like, which can range from autonomous cars to a very sophisticated set of if-then statements that recreate human behavior. Within AI sits ML, which is most of what we read and hear about. So, to differentiate ML from DL, we need to pinpoint what sets DL apart.


What is Deep Learning?


Deep Learning is defined by a structure that mimics the human brain, called a Neural Network. Neural Networks are set up to model how humans process information, and they are generally more computationally intensive than traditional ML. An Artificial Neural Network (ANN) can be as simple as one input layer, one hidden layer, and one output layer. Deep neural networks, however, have more than one hidden layer; that is what separates them. Each hidden layer performs a specialized function. These multiple layers, or depth, allow the upper layers to extract much more abstract features, which improves the network’s ability to classify something, an image for example.
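
As a sketch, here is how a small deep network with multiple hidden layers might be declared in Python using PyTorch (an assumed framework choice; the layer sizes are arbitrary):

    # A minimal deep neural network sketch in PyTorch (an assumption; any
    # DL framework would do). Layer sizes are arbitrary illustrations.
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 128),  # input: e.g. a 28x28 image flattened to 784 pixels
        nn.ReLU(),
        nn.Linear(128, 64),   # hidden layer 1: low-level features (edges, blobs)
        nn.ReLU(),
        nn.Linear(64, 32),    # hidden layer 2: higher-level features ("ears", "paws")
        nn.ReLU(),
        nn.Linear(32, 2),     # output: e.g. "cat" vs. "not cat"
    )
    print(model)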

To describe this in a simple way, let’s use an analogy. A person with very poor vision and a prescription of -15 looks at an image and sees a complete blur.

Then, someone hands them a pair of glasses that improves their vision to -7. Now, they start to see some clusters of pixels in the image.

Next, someone hands them a pair of glasses that improves their vision to -3, and they can make out ears, a body, and paws in the image.

Finally, they are handed another pair of glasses that gives them 20/20 vision, and now they can clearly see that the clusters form an image of their favorite pet.

Deep Learning = Multiple hidden layers that “build” on each other to “extract” higher-level features like “paws”.


What is incredible about DL is the ability to apply this layered learning to the abstract. The business applications are boundless.


I hope this brief overview gives you a better understanding of AI’s basic terminology, so that the next time it comes up in conversation, you won’t miss a beat.