
Artificial Intelligence 102


AI robot
iStock

In our Artificial Intelligence 101 article, we spoke about binary classification with supervised learning using the fly example. Then we discussed the limitation of this type of classification: it has only two classes of labeled data to compare the unknown data against.

In the case of the fly example, we are only able to determine whether it is a flying or a crawling insect. If we want to get more precise, such as determining what type of fly it is, we need to acquire more categories of labeled data. This is called multiclass classification.

As we proceed with multiclass classification, we are also going to delve into the types of models that are used for this process. But before we begin, let’s clarify a couple of AI terms so that everything is clear, starting with data points, which we scratched the surface of in our AI 101 article.

What is a Data Point?

Colorful illustration of AI unsupervised clustering
Photo by Google DeepMind on Unsplash

A data point is a specific attribute that is input into the machine learning algorithm (AKA the model). It is a component that is part of a complete unit. The more data points there are, the more precise the model will be in its conclusion.

What is a Dataset?

A dataset is a collection of data points. A dataset can contain any number of data points, from a few to billions.
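
To make these two terms concrete, here’s a minimal Python sketch. The fly measurements and feature names below are invented for illustration.

```python
# One data point: a set of attributes describing a single insect.
# (The measurements and labels here are made up for illustration.)
data_point = {"wing_length_mm": 6.5, "body_length_mm": 7.1, "label": "house fly"}

# A dataset: a collection of data points.
dataset = [
    {"wing_length_mm": 6.5, "body_length_mm": 7.1, "label": "house fly"},
    {"wing_length_mm": 9.8, "body_length_mm": 14.0, "label": "horse fly"},
    {"wing_length_mm": 2.0, "body_length_mm": 3.0, "label": "fruit fly"},
]

print(f"The dataset holds {len(dataset)} data points.")
```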

Data Point and Dataset Usages

Our fly example is a simple illustration of AI data points and datasets, but in the real world, they are put to work in a large variety of settings. Below are just a few of them.

  • Financial predictions
  • Self-driving cars
  • Facial recognition
  • Medical diagnosis
  • Agriculture
  • Sales forecasting
  • Fraud detection
  • Customer service chatbots

Putting this together, the algorithm reads the unknown data points that are given to it and compares those data points to the labeled data in the model. The more data points that are supplied, the more accurate the model will be.

Now, let’s look at the AI models that are available. 

Honor Thy Neighbor! The K-Nearest Neighbor Model

One of these models is called K-Nearest Neighbors (KNN). This algorithm looks at the unknown piece of data and compares it to the labeled data. This is nothing new; we learned about it in our previous lesson on supervised learning. But now the comparisons will be matched against more than two classes.

Close up picture of a fly
Image by Erik Karits from Pixabay

In our fly example, let’s create classes for four types of flies: house fly, horse fly, fruit fly, and horn fly. Each one of these flies has specific characteristics, or patterns of data points, that distinguish it from the other classes.

Example 1: Imagine you have a big puzzle with different pieces. Each piece of the puzzle represents a data point. Just like how each puzzle piece is unique and contributes to the overall picture, a data point is a single piece of information or observation that helps us understand or solve a problem.

Example 2: Let’s say we want to know the favorite color of each student in a class. Each student’s favorite color is a data point. We can collect all these data points to find patterns or make conclusions about the class’s preferences.

In simpler terms, a data point is like a puzzle piece that provides a small part of the whole picture we are trying to understand. It is a single piece of information or example from a larger group or class of things. By putting all the data points together, we can learn more about a situation, solve problems, or make decisions based on the available information.

The k-nearest neighbors (KNN) algorithm uses data points of specific marked classes to compare to the unknown (given) data. The more data points of a specific class, the more likely the unknown data will match that class.

The algorithm will scan the data points of the unknown fly and ask itself: which known fly category looks to be the closest neighbor to the unknown fly? Technically speaking, which set of data points of a specific class is the closest match to the set of data points of the unknown data? And looking at it in reverse, which class is the most distant match?

This is the KNN process, which finds the known data points whose patterns lie closest to the unknown data. Each of those nearest neighbors casts a vote for its class, and the class that collects the most votes is the best match; that class is the unknown’s closest neighbor.

Another way of explaining KNN: once the K nearest neighbors are identified, the unknown data point is assigned the class label that is most prevalent among its neighbors. This means that the majority class among the K nearest neighbors determines the classification of the unknown data point.
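
Here’s a minimal sketch of that idea using scikit-learn’s KNeighborsClassifier. The features (wing length and body length, in millimeters) and their values are invented for illustration.

```python
# A minimal KNN sketch. The measurements below are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Labeled data points: [wing_length_mm, body_length_mm]
X_train = [[6.5, 7.1], [6.8, 7.4], [9.8, 14.0], [10.1, 13.5],
           [2.0, 3.0], [2.2, 3.4], [4.0, 4.5], [3.8, 4.2]]
y_train = ["house fly", "house fly", "horse fly", "horse fly",
           "fruit fly", "fruit fly", "horn fly", "horn fly"]

# K = 3: the unknown fly is classified by the majority vote of its
# 3 nearest labeled neighbors.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

unknown_fly = [[6.6, 7.0]]
print(model.predict(unknown_fly))  # -> ['house fly']
```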

But How Do We Measure These Distances?

Do the Math

Man using ruler on notebook
Photo by Tamarcus Brown on Unsplash

Math is used (don’t worry, it is simply high school math) to determine which neighbors are the closest in proximity to the unknown data, and the number of neighbors considered is designated by the letter K.

The math that is used is the distance between two points. If you don’t remember how to calculate the distance between two points, you can go to this refresher course. This measure is called the Euclidean distance, and the computer instructions are based upon this concept.

So each labeled data point is given a number that represents its distance to the unknown entity. The lower the number, the more closely that data point resembles the unknown, and the closest data points are the ones that get to vote.

To relate Euclidean distance to our fly example, we are asking which fly category has the line with the least distance to the unknown fly.

The KNN algorithm is based on the concept that similar things exist in close proximity, so the best matches are those where the lines in the graph are the shortest.
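
To tie the math to the algorithm, here’s a from-scratch sketch of the two KNN steps described above: measure the Euclidean distance, then let the K closest neighbors vote. The fly measurements are, again, invented.

```python
import math
from collections import Counter

def euclidean_distance(a, b):
    """High school math: the straight-line distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(labeled_points, unknown, k=3):
    # Sort every labeled point by its distance to the unknown point...
    neighbors = sorted(labeled_points,
                       key=lambda p: euclidean_distance(p[0], unknown))
    # ...then count the class labels of the k closest ones (the "votes").
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Hypothetical [wing_length_mm, body_length_mm] measurements.
labeled = [([6.5, 7.1], "house fly"), ([6.8, 7.4], "house fly"),
           ([9.8, 14.0], "horse fly"), ([2.0, 3.0], "fruit fly")]
print(knn_classify(labeled, [6.6, 7.0]))  # -> house fly
```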

What is a Predictor?

A predictor is the output that an algorithm produces from a learned dataset, which it then uses to make further predictions.

The Regression Model

This algorithm is a supervised learning model used when future predictions are required. It takes the input data, also known here as the independent variables, and makes predictions based on the patterns it learned from the dataset. In other words, regression models are trained on a dataset of historical data. The model learns the relationship between the independent and dependent variables from the data, and it can then be used to predict the value of the dependent variable for new data points.
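
Here’s a minimal regression sketch using scikit-learn’s LinearRegression. The historical numbers (monthly ad spend versus sales) are invented to stand in for a real dataset.

```python
from sklearn.linear_model import LinearRegression

# Independent variable: monthly ad spend (in $1000s).
# Dependent variable: units sold. Both are made up for illustration.
X_history = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y_history = [12.0, 19.5, 31.0, 40.2, 50.1]

model = LinearRegression()
model.fit(X_history, y_history)   # learn the relationship from history

# Predict the dependent variable for a new data point.
print(model.predict([[6.0]]))     # -> roughly 60
```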

Conclusion

  1. A major advantage of AI lies in its ability to improve efficiency. Similar to the Industrial Revolution, AI is streamlining the manufacturing process, increasing productivity and reducing human error.
  2. Artificial Intelligence enhances decision-making through data analysis and predictive capabilities. In healthcare, AI can analyze vast amounts of medical data, aiding doctors in diagnosing diseases and suggesting treatment plans. Financial institutions rely on AI for fraud detection, increasing security and efficiency, and governments use machine learning to predict criminal activities and allocate resources for improved public safety.
  3. Machine learning algorithms can generate art, compose music, and write literature. In design and engineering, machine learning assists in creating more efficient and aesthetically pleasing products.
  4. AI is expediting scientific research by rapidly analyzing extensive datasets, accelerating discoveries in genomics, drug development, and climate science.
  5. This technology also holds promise in addressing global challenges such as in agriculture, where it can enhance crop yields. Disaster prediction and response are also improved through AI analytics.
  6. Natural Language Processing (NLP) gives us voice recognition that enables better interaction with digital devices, especially for people with disabilities.

As AI continues to advance, the potential to reshape industries and improve the quality of life for people around the world is extremely promising. But we must ensure that the utilization of machine learning does not fall into the wrong hands. Ethical considerations and responsible development must remain at the forefront so that the benefits of artificial intelligence are harnessed responsibly and equitably throughout the world!


AI 101 – How Does Artificial Intelligence Work?

Illustration of computer chips on a wall with a woman in front
Image by Gerd Altmann from Pixabay

Overview

You are a robot, but like the Scarecrow in The Wizard of Oz, you have no brain. John the human wants to change that, so he fills your brain with a model of a fire engine.

But John also wants you to identify the fire engine by knowing the components that comprise it, so he provides you with this knowledge.

In addition, he provides you with information as to other variations of the fire engine vehicle, meaning that if the parts do not entirely match that of a fire engine, the components may be more closely matched to that of an ambulance or possibly some other type of vehicle.

Photo of a fire engine
Photo by John Torcasio on Unsplash

Now you have the data necessary to identify a fire engine and know the parts that comprise it. You can use this knowledge to compare the model to other objects and determine whether any of those objects are fire engines, or decide that an object is something else entirely, and if so, what else it could be.

Congratulations! You are now a machine that can differentiate between objects. More specifically, you are artificial intelligence!

OK, we admit this scenario is quite simplified, but the idea is to convey the concept of artificial intelligence. So now, let’s delve into the details of exactly how this works. Before we continue, here are a few technical terms that you should familiarize yourself with. We will be discussing them in more detail further into this article.

Data point = A component that makes up the model (a part of the fire engine).

Dataset = The combination of all the components that make up the model (the vehicle as a whole unit).

Supervised learning = The ability to look at a particular object and compare it to the object (model) that you have in your possession.

AI is Learning

The basic premise behind AI is to create algorithms (computer programs) that can scan unknown data and compare it to data that it is already familiar with. So let’s start by looking at another example.

Image of a fork
Image by Susann Mielke from Pixabay. Text by SMS.

The AI Mindset

Is this a fork or a spoon? Or is it a knife? Well, they all have handles, but this one has spikes. Let me look up what pieces of information I have in my database that look like this item. Oh, I have a piece that resembles this spike pattern, so it must be a fork!

AI algorithms scan the unknown data’s characteristics, called patterns, and match those patterns to data they have already recognized, a process called pattern recognition. The data the algorithm recognizes is called labeled data or training data, and the complete set of this labeled data is called the dataset. The result is a decision about what that unknown item is.

The patterns within the dataset are called data points, also called input points. This whole process of scanning, comparing, and determining is called machine learning. (There are seven steps involved in machine learning and we will touch upon those steps in our Artificial Intelligence 102 article).

For example, if you are going to write a computer program that will allow you to draw a kitchen on the screen, you would need a dataset that contains data points making up the different items in the kitchen, such as a stove, fridge, sink, and utensils, to name a few; hence our analysis of the fork in the image above.

Note: The more information (data points) that is input into the dataset, the more precise its algorithm’s determination will be.

Now, let’s go a bit deeper into how a computer program is written.

Writing the Computer Program

Computer Program Instructions
Photo: iStock

We spoke about how computers are programmed using instructions in our bits and bytes article, but as a refresher, let’s recap!

Computer programs, called algorithms, tell the computer what to do by reading instructions that a human programmer has entered. One of our examples was a program that distributes funds to an ATM customer. It was programmed to distribute the funds if there was enough money in the person’s account, and to refuse if there wasn’t.

But THIS IS NOT AI since the instructions are specific and there are no variations to decide anything other than “if this, then that”.

In other words, the same situation will occur over and over with only two results. There is no determination that there may be other issues, such as the potential for fraudulent activity.

Bottom line – There is no learning involved.
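
For contrast, here’s roughly what that fixed ATM rule looks like as code, a minimal sketch:

```python
def atm_withdraw(balance, amount):
    """A fixed rule: same inputs, same answer, every time. No learning."""
    if balance >= amount:      # "if this..."
        balance -= amount
        print(f"Dispensing ${amount}. New balance: ${balance}")
    else:                      # "...then that"
        print("Insufficient funds.")
    return balance

atm_withdraw(100, 40)   # Dispensing $40. New balance: $60
atm_withdraw(100, 200)  # Insufficient funds.
```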

Writing a Learning Program

The ATM example is limited to two options, but AI is much more intensive than that. It is used to scan thousands of data items to reach a conclusion.

How Netflix Does It

Did you ever wonder how Netflix shows you movies or TV shows that are tuned to your interests? It does this by examining what your preferences are based on your previous viewings.

The algorithm analyzes large amounts of data, including user preferences, viewing history, ratings, and other relevant information to make personalized recommendations for each user.

It employs machine learning to predict which movies or TV shows the user is likely to enjoy.

It identifies patterns and similarities between users with similar tastes and suggests content that has been positively received by those users but hasn’t been watched by the current user.

For example, if a user has watched science fiction movies, the recommendation might be to suggest other sci-fi films or TV shows that are popular among those users with similar preferences.

The program will learn and adapt as the user continues to interact with the platform, incorporating feedback from their ratings and viewings to refine future recommendations.

By leveraging machine learning, streaming platforms like Netflix can significantly enhance the user experience by providing tailored recommendations, increasing user engagement, and improving customer satisfaction.
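
Here’s a toy sketch of the "similar users" idea described above. The users and titles are invented, and real recommendation systems are far more sophisticated, but the principle is the same: recommend what similar users liked that you haven’t watched.

```python
# Each user's viewing history (all invented for illustration).
viewing_history = {
    "you":    {"Alien", "Blade Runner", "Dune"},
    "user_b": {"Alien", "Blade Runner", "Dune", "The Matrix"},
    "user_c": {"The Notebook", "Titanic"},
}

def recommend(target, histories):
    seen = histories[target]
    scores = {}
    for user, titles in histories.items():
        if user == target:
            continue
        overlap = len(seen & titles)      # shared viewings = similar taste
        for title in titles - seen:       # what they watched that you haven't
            scores[title] = scores.get(title, 0) + overlap
    # Highest-scoring titles come from the most similar users.
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", viewing_history))  # -> ['The Matrix', ...]
```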

This can’t be done using the non-learning ‘if-else’ program we previously spoke about in the ATM example.

A Gmail AI Example

As you type your email, Google reads it and then offers words to accompany the sentence that would coincide with what you are about to type before you have even typed it.

This is called language modeling, which falls under Natural Language Processing (NLP).

In NLP, the algorithm uses probability to predict the most likely next word in a sentence based on the words that came before it.
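
A real language model is vastly more sophisticated, but here’s a toy sketch of the probability idea: count which word most often follows the previous one in some training text, then suggest it.

```python
from collections import Counter, defaultdict

training_text = "thank you for your help thank you for your time thank you again"

# Count how often each word follows each previous word (bigram counts).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def suggest_next(word):
    """Suggest the most probable next word seen in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest_next("thank"))  # -> 'you'
print(suggest_next("for"))    # -> 'your'
```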

AI algorithms feed on data to learn new things.
The more data (data points) that exist, the easier it will be for the model to identify the patterns of an unknown entity.

AI: How it All Works

There are three main types of machine learning: supervised learning, unsupervised learning, and reinforcement learning.



Supervised Learning

This is the most common type of machine learning. It involves feeding a computer a large amount of data to enable it to recognize patterns from the labeled dataset and make predictions when confronted with new data.

In other words, supervised learning consists of training a computer program to read from a data sample (dataset) to identify what the unknown data is. 

How the Machine Thinks with Supervised Learning

Poyab Bridge under construction, Freiburg, Switzerland
Photo: iStock

Show and Tell: A human labels a dataset with data points that identify the sample set to be a building.

Then the human does the same thing to identify a bridge. This is another classification different from the building classification and is identified with specific patterns that make up a bridge.

The program takes note of the patterns of each classification. If computer instructions were written in plain English, this is what it would say:

This is a bridge. Look at the patterns that make up the bridge. And this is a building. Look at the patterns that make up the building. I can see distinguishable differences in the characteristics between the two. Let me match them up to the unknown data and make a decision on whether this new data is a bridge or a building.

Supervised learning is used in many applications such as image recognition, speech recognition, and natural language processing.

Supervised learning uses a data sample to compare unknown data against. The data sample is called a data model.

It’s Raining Cats and Dogs

A supervised learning algorithm could be trained using a set of images labeled “cats” and “dogs”, where each image is labeled with the data points that distinguish the two animals.

The program would be designed to learn the difference between the animals by using pattern recognition as it scans each image. 

A computer instruction (in simplified terms) might be “If you see a pattern of thin lines from the face (whiskers), then this is a cat”.

The result is that the program would be able to determine whether the new image it is presented with is that of a cat or a dog!

This type of learning involves two categories – cats and dogs. When only two classifications are involved, it is called Binary Classification.
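
Here’s a minimal sketch of that binary classification idea using scikit-learn’s decision tree. The two features (a whisker score and an ear-pointiness score) are invented stand-ins for the patterns a real system would extract from images.

```python
from sklearn.tree import DecisionTreeClassifier

# Labeled training images, reduced to two made-up features each:
# [whisker_score, ear_pointiness]
X_train = [[0.90, 0.80], [0.80, 0.90], [0.85, 0.70],   # cats
           [0.10, 0.30], [0.20, 0.20], [0.15, 0.40]]   # dogs
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)   # learn the patterns that separate the classes

# A new, unlabeled image with a strong whisker pattern -> probably a cat.
print(model.predict([[0.88, 0.75]]))  # -> ['cat']
```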

Supervised Learning Using Multiclass Classification

An Example

Illustration of a fruit fly
Image by Mostafa Elturkey from Pixabay

Suppose you are studying insects and you want to separate flying insects from crawling ones. Well, that’s easy. You take a bug that you found in your backyard and compare it to the ant and fly you already stored on your insect board. In AI terms, this is supervised binary classification.

You immediately know, based on the pattern configuration of the insect, which classification it belongs to – the crawlers or the flies. Now you grab more flies and put them in the fly category, and do the same with the creepy crawlers for their category.

Let’s say you want to go deeper into the fly classification and find out what type of fly it is (e.g., house fly, horse fly, fruit fly, horn fly, etc.). But you only have two classifications to compare them to – flies and crawlers – so what do you do? You create more classifications for the fly class.

This is multi-classification, or more technically, multiclass classification, which provides additional labeled classes for the algorithm to compare the new data to.

We will delve more into multiclass classification and how it works in our next article, but for now, just know what binary classification and multiclass classification are.

Unsupervised Learning

Colorful illustration of AI unsupervised clustering
Photo by Google DeepMind on Unsplash

Unsupervised learning involves training a computer program without providing any labels or markings to the data. The aim is to enable the program to find (learn) patterns and relationships on its own.

It does this by reading millions of pieces of information and grouping them into categories based on their characteristics or patterns, then making decisions on what the new entity is by matching it up to one of those categories.

In other words, it matches patterns of the unknown data to the groups it created and then labels them without human intervention. This is called clustering.
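
Here’s a minimal clustering sketch using scikit-learn’s KMeans. Notice that no labels are supplied; the algorithm groups the (invented) points on its own.

```python
from sklearn.cluster import KMeans

# Unlabeled 2-D data points forming two obvious groups (invented values).
X = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
     [8.0, 8.2], [8.3, 7.9], [7.9, 8.1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# e.g. [0 0 0 1 1 1]: the algorithm invented the groups itself.
print(labels)
```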

Anomaly detection is the task of identifying data points that are unusual or different from the rest of the data. This can be useful for tasks such as fraud detection and quality control.
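
A toy sketch of the anomaly detection idea: flag any data point that sits unusually far (here, more than two standard deviations) from the average of the data.

```python
import statistics

# Hypothetical transaction amounts; one of them looks suspicious.
transactions = [20.0, 22.5, 19.8, 21.2, 20.7, 250.0]

mean = statistics.mean(transactions)
stdev = statistics.stdev(transactions)

anomalies = [t for t in transactions if abs(t - mean) > 2 * stdev]
print(anomalies)  # -> [250.0]
```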

Reinforcement Learning

Reinforcement learning (RL) algorithms learn by trial and error, receiving feedback in the form of rewards or penalties for their actions. Any negative number that gets assigned means the action is punished.

The more negative the number, the more strongly the algorithm learns not to pursue that particular action, and it will keep trying until positive numbers, called rewards, are assigned. It will continue this process until it is properly rewarded. The goal of RL is to maximize rewards over time by finding the sequence of actions that leads to the highest possible reward.

One of the defining features of RL is the use of a feedback loop between the agent and its environment (an agent is the decision-making unit that is responsible for choosing actions in the environment that was provided to it). The loop permits the agent to learn from experience and adjust its behavior accordingly.

The feedback loop works as follows:

  1. The agent takes an action in its environment.
  2. The environment provides the agent with feedback about the action, such as a reward or punishment.
  3. The agent then updates its policy based on the feedback.
  4. The agent will repeat steps 1-3 until it learns to take actions that lead to desired outcomes (rewards).
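
Here’s a toy sketch of that loop. The two-action environment is invented: one action usually pays a reward (+1), the other usually a punishment (-1), and the agent gradually learns to prefer the action that pays off.

```python
import random

values = {"left": 0.0, "right": 0.0}   # the agent's current value estimates

def environment(action):
    """Hypothetical environment: 'right' usually rewards, 'left' usually punishes."""
    if action == "right":
        return 1 if random.random() < 0.8 else -1
    return 1 if random.random() < 0.2 else -1

for step in range(500):
    # 1. The agent takes an action (best guess, with occasional exploration).
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # 2. The environment provides feedback: a reward or a punishment.
    reward = environment(action)
    # 3. The agent updates its policy based on the feedback.
    values[action] += 0.1 * (reward - values[action])
    # 4. Repeat.

print(values)  # 'right' ends up with the higher value: the learned behavior
```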

RL has been applied to a wide range of problems, such as games, robotics, and autonomous driving. It is particularly useful in scenarios where the best action may not be immediately clear and where exploration is necessary to find the best solution.

Conclusion

Overall, these AI methods are widely used in various industries and applications. We will continue to see growth and development as artificial intelligence technology advances.

What are the advances or dangers that AI can bring to the future? Read our article on the Pros and Cons of AI to find out.

Machine Language Terms to Know

  • Computer Instruction
  • Computer Program
  • Algorithm
  • Data Points
  • Patterns
  • Labeled Data
  • Dataset
  • Data Model
  • Pattern Recognition
  • Machine Learning
  • Binary Classification
  • Multiclass Classification
  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning

Artificial Intelligence: The Pros and Cons

Human hand touching a brain and AI hand touching a brain
Image by Gerd Altmann from Pixabay

The Quandary of AI

Are you afraid of what AI can do, or are you looking forward to the benefits it can provide? Part of your answer depends on whether your personality sees the glass as half full or half empty, but there are always consequences to technological advancements, whether for the good of humankind or for those looking to gain an upper hand in a nefarious manner. The development of the atom bomb was the result of Einstein’s theory of relativity, even though the scientist had no idea of the negative consequences his theory would bring.

Let’s take a look at both the positives and the negatives of artificial intelligence and what it can potentially hold for us, and then you can make a decision.

AI Overview

Artificial intelligence (AI) is a rapidly growing field that has the potential to transform our world in countless ways. From healthcare to finance, education to transportation, AI can benefit mankind in a myriad of ways. But not everyone is on board, as we will see in this article on the good and the bad of the advancements of artificial intelligence.

The Benefits

Advancements in Healthcare

Doctor at a laptop
Photo: iStock

One of the most significant benefits of AI is its potential to revolutionize healthcare. AI can analyze vast amounts of medical data, including patient records, lab results, and imaging studies.

With this information, AI algorithms can detect patterns and make predictions that could help doctors diagnose and treat diseases more accurately and quickly. It can also help identify high-risk patients, allowing doctors to intervene early and prevent diseases from progressing.

Transportation

Cars in traffic
Photo: iStock

Another area where artificial intelligence can benefit us is in the field of transportation. Self-driving cars, buses, and trains have the potential to significantly reduce accidents, traffic congestion, and pollution. By removing the human element from driving, these vehicles can make our roads safer and more efficient.

Additionally, AI can be used to optimize traffic flow, reducing congestion and travel times. This can save time and money for individuals and businesses alike.

Education

AI can also be used to improve education. AI-powered tutoring systems can provide personalized, adaptive learning experiences for students of all ages and abilities. By analyzing a student’s learning style, strengths, and weaknesses, these systems can create customized lesson plans that help them learn more effectively. This can lead to improved academic outcomes and greater educational equity, as students who may struggle with traditional teaching methods can receive tailored instruction that meets their needs.

Finance

Graph of gold on the rise
Photo: GraphicStock

Detecting fraud, managing risk, and optimizing investments are just three of the ways AI is being used to advance the financial sector. By analyzing financial data, algorithms can detect patterns that may indicate fraudulent activity, alerting financial institutions to potential threats before they cause significant damage.

Additionally, AI can help them manage risk more effectively by predicting market fluctuations and identifying potential investments that offer high returns with low risk.

AI can also benefit society by improving public safety. AI-powered surveillance systems can detect potential threats in public spaces, alerting law enforcement and allowing them to respond more quickly. AI can also be used to analyze crime data, helping law enforcement identify patterns and allocate resources more effectively.

The Environment

Illustration of the effects of climate change, showing grass and then barren ground
Photo: iStock

Finally, AI can benefit mankind by helping us protect the environment. By analyzing environmental data, AI can help us understand the impacts of human activity on the planet and develop strategies to mitigate them. For example, AI can help us optimize energy consumption, reduce waste, and improve recycling efforts. Additionally, AI can help us predict and respond to natural disasters, reducing their impact on human lives and property.

The Benefits of AI – A Summary

AI has the potential to benefit mankind in countless ways. From healthcare to education, finance to public safety, and the environment, it can help us solve some of the biggest challenges facing our society. However, we must approach AI development with caution and foresight, taking steps to mitigate risks and ensure that it is used in ways that prioritize human welfare and respect for human rights. With careful planning and collaboration, we can harness the power of machine learning to create a better future for all.

Potential Dangers

Unknown person in black surrounded by binary code
Photo: Pixabay

Artificial intelligence can pose significant dangers that need to be addressed. Similar to the potential dangers of quantum computers, many of the same threats are associated with AI.

One concern is the potential for it to be used in ways that violate privacy or human rights. Additionally, the use of AI in decision-making processes could result in biases or discrimination if the algorithms are not carefully designed and monitored. Finally, there is the risk that it could become too powerful, leading to unintended consequences or even threatening human existence.

The Labor Question

As AI technology advances, it becomes increasingly capable of performing tasks that were once done by humans, leading to job loss and economic disruption. For example, self-driving cars have the potential to replace human drivers, which would lead to unemployment in the transportation sector. This could result in a significant reduction in the workforce and an increase in social inequality.

AI and Bias

Another danger of AI is its ability to perpetuate biases and discrimination. AI algorithms are designed to learn from data, and if the data used is biased, the AI will also be biased. This can result in unfair decisions being made by AI systems, such as in hiring, lending, or criminal justice. This can have significant negative impacts on individuals and communities.

Global Security

Furthermore, AI could pose a significant threat to global security. With advancements in AI technology, it is becoming increasingly possible for AI systems to be used in cyber-attacks or even to control weapons systems. This could lead to significant risks and damages, such as loss of life or damage to critical infrastructure.

Nefarious Exploitation

Finally, the development of AI could also pose ethical and moral dilemmas. As machine learning systems become more intelligent, questions arise about their autonomy and decision-making capabilities. If an AI system makes a decision that is morally or ethically questionable, who is held accountable? What happens if it is programmed to harm humans or perform unethical tasks?

In a Nutshell

Artificial Intelligence Illustration AI
Image by Tumisu from Pixabay

While AI has the potential to bring significant benefits, it is important to be cautious in its development and use. The dangers of AI should be taken seriously and addressed through proper regulation and oversight. It is important to ensure that AI systems are developed and used responsibly and ethically to minimize the potential risks and maximize the benefits of this technology.

To mitigate these risks, we must approach AI with caution and foresight. We must ensure that AI is developed and used in ways that prioritize human welfare and respect human rights. This requires ongoing dialogue and collaboration between technologists, policymakers, and the public.

With that said, we do have the opportunity to live better in all aspects of our lives, and that is well worth looking forward to!