
Overview: Artificial Intelligence in 100 Words
AI is Learning
The basic premise behind artificial intelligence is to create algorithms (computer programs) that can learn by viewing unknown data and comparing it to data they are already familiar with or have learned to recognize. (We’ll discuss the difference between “already familiar” and “has learned” further in this article.) Let’s start by looking at a simple example.
Image by Susann Mielke from Pixabay. Text by SMS.
In the Mind of AI
Is this a fork or a spoon? Well, they both have handles, but this one has spikes. Let me look up what pieces of information I have in my database that look like this item. Oh, I have a piece that resembles this spike property, so it must be a fork!
These programs are written to scan new data and break it up into the different characteristics that compose the item. The program then matches those characteristics to other data it already knows and makes a determination.
The more information (characteristics) it can compare the new item to, the more precise its determination will be.
AI compares new data to old data in order to determine what the new data is. This is called data analytics.
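To make this concrete, here is a minimal sketch in Python (our illustration, not a real AI system) of comparing a new item’s characteristics to items the program already knows. The items and characteristics are invented for this example:
```python
# A minimal sketch (not a real AI system): identify an unknown utensil
# by counting how many known characteristics it shares with each item
# we are "already familiar" with. All names and features are made up.

known_items = {
    "fork":  {"handle", "spikes", "metal"},
    "spoon": {"handle", "bowl", "metal"},
}

def identify(new_item_features):
    # Pick the known item that shares the most characteristics.
    return max(known_items,
               key=lambda name: len(known_items[name] & new_item_features))

print(identify({"handle", "spikes"}))  # -> "fork"
```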
Now, let’s go a bit deeper into how a computer program is written.
Writing the Computer Program

We spoke about how computers are programmed using instructions in our bits and bytes article, but as a refresher, let’s recap!
Computer programs, called algorithms, tell the computer what to do by following instructions that a human programmer has entered. One of our examples was a program that dispenses funds to an ATM user. It was programmed to dispense the funds if there is enough money in the person’s account and to refuse if there isn’t.
But THIS IS NOT AI, since the instructions are specific and there is no variation beyond deciding “if this then that”.
In other words, the same situation will occur over and over with only two possible results. There is no determination that there may be other issues, such as possible fraudulent activity.
Bottom line – There is no learning involved.
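Here is roughly what that non-learning ATM logic looks like as code. This is a simplified illustration; the account values are made up:
```python
# The ATM logic from the bits-and-bytes recap, written out as plain
# "if this then that" instructions. Two fixed outcomes, one rule --
# no learning involved.

def withdraw(balance, amount):
    if balance >= amount:
        return "Dispense funds"
    else:
        return "Insufficient funds"

print(withdraw(500, 200))  # Dispense funds
print(withdraw(100, 200))  # Insufficient funds
```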
Writing a Learning Program
The ATM example above is limited to two options, but AI is far more extensive than that. It is used to scan thousands of items of data in order to reach a conclusion.
How Netflix Does It
Did you ever wonder how Netflix shows you movies or TV shows that are tuned to your interests? It does this by determining what your preferences are based on your previous viewings.
The algorithm analyzes large amounts of data, including user preferences, viewing history, ratings, and other relevant information to make personalized recommendations for each user.
It employs machine learning (we will be discussing this more later in this article) to predict which movies or TV shows the user is likely to enjoy.
It identifies patterns and similarities between users with similar tastes and suggests content that has been positively received by those users but hasn’t been watched by the current user.
For instance, if a user has watched and enjoyed science fiction movies, the recommendation might be to suggest other sci-fi films or TV shows that are popular among users with similar preferences.
The program will learn and adapt as the user continues to interact with the platform, incorporating feedback from their ratings and viewings in order to refine future recommendations.
By leveraging machine learning, streaming platforms like Netflix can significantly enhance the user experience by providing tailored recommendations, increasing user engagement, and improving customer satisfaction.
This can’t be done using the non-learning ‘if-else’ program we previously spoke about in the ATM example.
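To give a feel for the idea (a toy sketch, not Netflix’s actual algorithm), the snippet below finds the user with the most similar ratings and suggests titles that user enjoyed but the current user hasn’t watched. All users, titles, and ratings are invented:
```python
# Toy recommendation sketch: find the most similar user by comparing
# ratings on shared titles, then suggest what they liked that the
# current user hasn't seen. Data is invented for illustration.

ratings = {
    "alice": {"Dune": 5, "Interstellar": 5, "The Office": 2},
    "bob":   {"Dune": 4, "Interstellar": 5, "Arrival": 5},
    "carol": {"The Office": 5, "Parks and Rec": 4},
}

def similarity(a, b):
    # Similarity = fraction of shared titles rated within 1 star.
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    return sum(1 for t in shared if abs(ratings[a][t] - ratings[b][t]) <= 1) / len(shared)

def recommend(user):
    others = [u for u in ratings if u != user]
    closest = max(others, key=lambda u: similarity(user, u))
    # Suggest what the similar user enjoyed but this user hasn't seen.
    return [t for t, score in ratings[closest].items()
            if score >= 4 and t not in ratings[user]]

print(recommend("alice"))  # -> ['Arrival']
```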
A Gmail AI Example
As you type your email, Google reads it and offers words to complete the sentence, suggesting what you are about to type before you have even typed it.
This is called language modeling and is another method by which AI is used.
In language modeling, the algorithm uses probability to predict the most likely next word in a sentence based on the words that came before it.
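Here is a bare-bones sketch of that idea: a tiny “bigram” model that predicts the next word from the previous one. The training sentence is a stand-in; real systems like Gmail’s learn from vastly more data:
```python
# A bare-bones bigram language model: count which word most often
# follows each previous word, then predict accordingly. The training
# text is a tiny stand-in for illustration.
from collections import Counter, defaultdict

text = "i am going to the store i am going to the office"
words = text.split()

# Count how often each word follows each previous word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("going"))  # -> "to"
```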
A Vocabulary Update
Before we continue, let’s get a bit more technical. The word ‘characteristics’ has been used here for simplicity, but the actual term for the points of a subject that the computer looks at is “patterns”. And pattern recognition is the process of identifying unique points in the data.
AI algorithms feed on data to learn new things. The more data that exists, the easier it will be for the algorithm to identify the characteristics or patterns of an entity.
AI: How it All Works
There are three main types of machine learning: supervised learning, unsupervised learning and reinforcement learning.
Supervised Learning
This is the most common type of machine learning. It involves feeding a computer a large amount of data, with the aim of enabling it to recognize patterns and make predictions when confronted with new data. In other words, supervised learning consists of training a computer program to learn from labeled data (data that has already been identified).
How the Machine Thinks with Supervised Learning

Show and Tell: A human shows the program what a bridge is. Then he shows the program what a building is. The program takes note of the characteristics of each, technically called patterns. If computer instructions were written in plain English, this is what it would say:
This is a bridge. Look at the patterns that make up the bridge. And this is a building. Look at the patterns that make up the building.
When the program runs it would think in these terms.
I can see distinguishable characteristics between the two. Let me match them against what my human showed me and then decide whether this is a bridge or a building.
Supervised learning is used in many applications such as image recognition, speech recognition, natural language processing and recommendation systems.
It’s Raining Cats and Dogs
A supervised learning algorithm could be trained using a set of images of cats and dogs, with each cat and dog labeled as such.
The program would be designed to learn the difference between the animals by using pattern recognition as it scans each image.
A computer instruction (in simplified terms) might be “If you see a pattern of thin lines from the face (whiskers), then this is a cat”.
The end result would be that the program would be able to determine whether a new image it is presented with is that of a cat or a dog!
This type of learning involves two categories – cats and dogs. When only two classifications are involved, it is called Binary Classification.
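As a rough sketch of how this could look in code (using the scikit-learn library, with made-up stand-in features in place of real image patterns):
```python
# A hedged sketch of supervised binary classification. In practice the
# features would come from image processing; here they are invented
# stand-ins. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_whisker_pattern (0/1), snout_length_cm]
X_train = [[1, 3], [1, 4], [0, 10], [0, 12]]
y_train = ["cat", "cat", "dog", "dog"]   # the labels -- this is the "supervision"

model = DecisionTreeClassifier()
model.fit(X_train, y_train)              # learn patterns from labeled data

print(model.predict([[1, 5]]))           # -> ['cat']
```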
Supervised Learning Using Multi-Class Classification
An Example

Suppose you are studying insects and you want to separate flying insects from crawling ones. Well, that’s easy. You take a bug that you found in your backyard and compare it to the ant and fly you have already pinned to your insect board. In AI terms, this is supervised binary classification.
You immediately know, based on the insect’s pattern configuration, which classification it belongs to – the crawlers or the fliers. Now you grab more flies and put them in the fly category and do the same with the creepy crawlers for their category.
Let’s say you want to go deeper into the fly classification and find out what type of fly it is (e.g. house fly, horse fly, fruit fly, horn fly, etc.), but you only have two classifications to compare them to – flies and crawlers. So what do you do? You create more classifications for the fly class.
This is multi-class classification, which provides additional labeled classes for the algorithm to compare the new data to.
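Sketching the same idea in code, the only change from the binary example is that the labels now span more than two classes. The insect measurements are invented for illustration:
```python
# Multi-class classification sketch: same supervised setup as before,
# but the fly category is split into finer labels. Features and values
# are invented. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each row: [body_length_mm, wing_span_mm]
X_train = [[7, 13], [15, 25], [3, 5], [4, 7]]
y_train = ["house fly", "horse fly", "fruit fly", "horn fly"]  # more than two labels

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

print(model.predict([[6, 12]]))  # -> ['house fly']
```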
We will delve more into multi-class classification and how it works in our next article, but for now, just know what binary classification and multi-class classification are.
Unsupervised Learning

Unsupervised learning involves training a computer program without providing any labels or markings to the data. The aim is to enable the program to find (learn) patterns and relationships on its own.
It does this by reading millions of pieces of information and grouping them into categories based on their characteristics or patterns, then deciding what a new entity is by matching it to one of the categories it created.
In other words, it finds matching patterns from the groups in the dataset completely on its own and then labels them without human intervention. This is called clustering.
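A common clustering algorithm is k-means. Here is a minimal sketch (using scikit-learn, with invented data points) in which the program groups unlabeled points entirely on its own:
```python
# A small clustering sketch: the algorithm groups unlabeled points on
# its own, with no labels supplied by a human. Data points are invented.
# Requires scikit-learn.
from sklearn.cluster import KMeans

# Unlabeled data -- two loose groups of 2-D points.
points = [[1, 2], [1, 1], [2, 2], [8, 9], [9, 8], [8, 8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(points)      # the program invents the groups itself

print(labels)                            # e.g. [0 0 0 1 1 1]
```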
Unsupervised learning is also used for anomaly detection – the task of identifying data points that are unusual or different from the rest of the data. This can be useful for tasks such as fraud detection and quality control.
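As a simple illustration, one very basic approach to anomaly detection is a statistical rule that flags values far from the average. Real systems use richer models; the transaction amounts below are made up:
```python
# A minimal anomaly-detection sketch: flag values more than two
# standard deviations from the mean. Thresholds and data are
# illustrative stand-ins.
import statistics

amounts = [20, 25, 22, 19, 24, 500]      # one suspicious transaction

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

anomalies = [a for a in amounts if abs(a - mean) > 2 * stdev]
print(anomalies)  # -> [500]
```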
Reinforcement Learning
Reinforcement learning (RL) is a type of machine learning that focuses on training agents to make decisions based on experience in an environment. RL algorithms learn by trial and error, receiving feedback in the form of rewards or penalties for their actions.
The goal of RL is to maximize cumulative rewards over time by finding an optimal policy, or a sequence of actions, that leads to the highest possible reward. The agent explores the environment by taking actions and receiving feedback, which it uses to update its policy and improve its performance.
One of the defining features of RL is the use of a feedback loop in which the agent’s actions influence the state of the environment, which in turn affects the rewards received by the agent. This feedback loop allows the agent to learn from experience and adjust its behavior accordingly.
RL has been applied to a wide range of problems, such as games, robotics and autonomous driving. It is particularly useful in scenarios where the optimal action may not be immediately clear and where exploration is necessary to find the best solution.
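To make the feedback loop tangible, here is a compact sketch of Q-learning (one classic RL algorithm) on a made-up five-cell corridor, where the agent learns by trial and error that moving right earns the reward. All parameters are illustrative choices:
```python
# Q-learning sketch on an invented 5-cell corridor: the agent starts
# at cell 0 and earns a reward only at cell 4. Through trial and
# error it learns that moving right maximizes cumulative reward.
import random

n_states, actions = 5, [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                     # episodes of trial and error
    state = 0
    while state != n_states - 1:
        # Explore sometimes, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Feedback loop: update the action values from the reward received.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Best action per state after training: typically [1, 1, 1, 1] (always move right).
print([max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)])
```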
Overall, these AI techniques are widely used across industries and applications, and we continue to see growth and development as artificial intelligence technology advances.
What are the advances or dangers that AI can bring in the future? Read our article on the Pros and Cons of AI to find out.