AI 101 – How Does Artificial Intelligence Work?

Illustration of computer chips on a wall with a woman in front
Image by Gerd Altmann from Pixabay

Overview: Artificial Intelligence in 100 Words

AI is Learning

The basic premise behind artificial intelligence is to create algorithms (computer programs) that can learn by examining unknown data and comparing it to data they are already familiar with or have learned to recognize. (We’ll discuss the difference between “already familiar with” and “has learned” later in this article.) Let’s start by looking at a simple example.

Image of a fork

Image by Susann Mielke from Pixabay. Text by SMS.

In the Mind of AI

Is this a fork or a spoon? Well, they both have handles, but this one has spikes. Let me look up what pieces of information I have in my database that look like this item. Oh, I have a piece that resembles this spike property, so it must be a fork!

These programs are written to scan new data and break it up into the different characteristics that compose the item. The program then matches those characteristics against data it already knows and makes a determination.

The more information (characteristics) it can compare the new item to, the more precise its determination will be.

AI compares new data to old data in order to make a determination of what the new data is. This is called data analytics.

Now, let’s go a bit deeper into how a computer program is written.

Writing the Computer Program

Computer Program Instructions
Photo: iStock

We spoke about how computers are programmed using instructions in our bits and bytes article, but as a refresher, let’s recap!

Computer programs, called algorithms, tell the computer what to do through instructions that a human programmer has written. One of our examples was a program that dispenses funds to an ATM customer: it was programmed to dispense the money if there is enough in the person’s account and to refuse if there isn’t.

But THIS IS NOT AI, since the instructions are fixed and there is nothing to decide beyond “if this, then that”.

In other words, the same situation plays out over and over with only two possible results. The program cannot determine that there may be other issues, such as possible fraudulent activity.

Bottom line – There is no learning involved.
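
To make this concrete, here is a minimal sketch in Python of the kind of fixed-rule program described above; the function name and dollar amounts are invented for illustration:

```python
# A minimal sketch of the fixed-rule ATM logic described above.
# The balance figures and function name are made up for illustration.

def dispense_cash(balance: float, amount_requested: float) -> str:
    """Follow one hard-coded rule: pay out only if the balance covers the request."""
    if amount_requested <= balance:
        return f"Dispensing ${amount_requested:.2f}"
    else:
        return "Insufficient funds"

print(dispense_cash(balance=500.00, amount_requested=200.00))  # Dispensing $200.00
print(dispense_cash(balance=500.00, amount_requested=800.00))  # Insufficient funds
```

No matter how many times this runs, it can only ever produce those two outcomes; it never learns anything new from the transactions it sees.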

Writing a Learning Program

The ATM example above is limited to two options, but AI goes far beyond that. It can scan thousands of pieces of data in order to reach a conclusion.

How Netflix Does It

Did you ever wonder how Netflix shows you movies or TV shows that are tuned to your interests? It does this by determining what your preferences are based on your previous viewings.

The algorithm analyzes large amounts of data, including user preferences, viewing history, ratings, and other relevant information to make personalized recommendations for each user.

It employs machine learning (we will be discussing this more later in this article) to predict which movies or TV shows the user is likely to enjoy.

It identifies patterns and similarities between users with similar tastes and suggests content that has been positively received by those users but hasn’t been watched by the current user.

For instance, if a user has watched and enjoyed science fiction movies, the recommendation might be to suggest other sci-fi films or TV shows that are popular among users with similar preferences.

The program will learn and adapt as the user continues to interact with the platform, incorporating feedback from their ratings and viewings in order to refine future recommendations.

By leveraging machine learning, streaming platforms like Netflix can significantly enhance the user experience by providing tailored recommendations, increasing user engagement, and improving customer satisfaction.

This can’t be done using the non-learning ‘if-else’ program we previously spoke about in the ATM example.
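
To illustrate the idea (this is not Netflix’s actual algorithm), here is a toy Python sketch of recommending by user similarity; the viewers, titles and ratings are all made up:

```python
# A toy sketch of the "users with similar tastes" idea described above.
# The ratings below are invented; a real system would use far more data and features.

ratings = {
    "alice": {"Sci-Fi Movie A": 5, "Sci-Fi Show B": 4, "Comedy C": 2},
    "bob":   {"Sci-Fi Movie A": 5, "Sci-Fi Show B": 5, "Drama D": 4},
    "carol": {"Comedy C": 5, "Drama D": 3},
}

def similarity(user_a, user_b):
    """Count titles both users rated 4 or higher (a crude stand-in for taste overlap)."""
    liked_a = {t for t, r in ratings[user_a].items() if r >= 4}
    liked_b = {t for t, r in ratings[user_b].items() if r >= 4}
    return len(liked_a & liked_b)

def recommend(user):
    """Suggest unwatched titles that the most similar user rated highly."""
    others = [u for u in ratings if u != user]
    most_similar = max(others, key=lambda u: similarity(user, u))
    watched = set(ratings[user])
    return [t for t, r in ratings[most_similar].items() if r >= 4 and t not in watched]

print(recommend("alice"))  # ['Drama D'] -- liked by Bob, who shares Alice's sci-fi tastes
```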

A Gmail AI Example

As you type your email, Google reads it and offers words to complete the sentence, anticipating what you are about to type before you have even typed it.

This is called language modeling and is another method by which AI is used.

In language modeling, the algorithm uses probability to predict the most likely next word in a sentence based on the words that came before it.
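
As a rough illustration of the idea (real language models are vastly more sophisticated), here is a tiny Python sketch that predicts the next word from counted word pairs in a small invented sample of text:

```python
# A bare-bones sketch of next-word prediction from counted word pairs.
# The tiny corpus is invented; real language models learn from enormous amounts of text.
from collections import Counter, defaultdict

corpus = "thank you for your help . thank you for your time . thank you very much ."
words = corpus.split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent word observed after `word` in the corpus."""
    candidates = next_word_counts[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("thank"))  # 'you'
print(predict_next("your"))   # 'help' or 'time' (the counts are tied in this tiny corpus)
```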

A Vocabulary Update

Before we continue, let’s get a bit more technical. The word ‘characteristics’ has been used here for simplicity, but the actual term for the points of a subject that the computer looks at is “patterns”. Pattern recognition is the process of identifying distinctive points in the data.

AI algorithms feed on data to learn new things. The more data that exists, the easier it will be for the algorithm to identify the characteristics or patterns of an entity.

AI: How it All Works

There are three main types of machine learning: supervised learning, unsupervised learning and reinforcement learning.

Supervised Learning

This is the most common type of machine learning. It involves feeding a computer a large amount of data, called training data or a data sample, with the aim of enabling it to recognize patterns and make predictions when confronted with new data. In other words, supervised learning consists of training a computer program on a data sample in which each item is identified (called marked or labeled data); the algorithm then compares the unknown data against this sample to see what it matches.

How the Machine Thinks with Supervised Learning

Poyab Bridge under construction, Freiburg, Switzerland
Photo: iStock

Show and Tell: A human labels a piece of data as a building and identifies it with the specific characteristics, called patterns, that distinguish it as a building.

Then the human does the same thing to identify a bridge. This is a separate classification from the building classification, identified with the specific patterns that make up a bridge.

The program takes note of the characteristics of each classification. If computer instructions were written in plain English, this is what they would say:

This is a bridge. Look at the patterns that make up the bridge. And this is a building. Look at the patterns that make up the building. I can see distinguishable characteristics between the two. Let me match them up to the unknown data and make a decision on whether this new data is a bridge or a building.

Supervised learning is used in many applications such as image recognition, speech recognition and natural language processing.

Supervised learning compares unknown data against a labeled data sample, which is also called a data model.

It’s Raining Cats and Dogs

A supervised learning algorithm could be trained using a set of images of cats and dogs, with each image labeled as such.

The program would be designed to learn the difference between the animals by using pattern recognition as it scans each image. 

A computer instruction (in simplified terms) might be “If you see a pattern of thin lines from the face (whiskers), then this is a cat”.

The end result would be a program able to determine whether a new image it is presented with is that of a cat or a dog!

This type of learning involves two categories – cats and dogs. When only two classifications are involved, it is called Binary Classification.
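
As a hedged illustration, here is a small Python sketch of binary classification, assuming the scikit-learn library is available; the numeric “patterns” (whisker prominence, snout length) are invented stand-ins for the features a real image model would extract from labeled photos:

```python
# A hedged sketch of supervised binary classification, assuming scikit-learn is installed.
# The numeric "patterns" below are invented stand-ins for features extracted from images.
from sklearn.tree import DecisionTreeClassifier

# Labeled training sample: each row is [whisker_prominence, snout_length]
X_train = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # cats
           [0.1, 0.8], [0.2, 0.9], [0.3, 0.7]]   # dogs
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)            # the "show and tell" step with labeled data

new_animal = [[0.85, 0.25]]            # unknown image, reduced to the same two patterns
print(model.predict(new_animal))       # ['cat']
```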

Supervised Learning Using Multiple Classifications

An Example

Illustration of a fruit fly
Image by Mostafa Elturkey from Pixabay

Suppose you are studying insects and want to separate flying insects from crawling ones. Well, that’s easy. You take a bug you found in your backyard and compare it to the ant and the fly you already have stored on your insect board. In AI terms, this is supervised binary classification.

You immediately know, based on the pattern configuration of the insect, which classification it belongs to – the crawlers or the flies. Now you grab more flies and put them in the fly category, and do the same with the creepy crawlers for their category.

Let’s say you want to go deeper into the fly classification and find out what type of fly it is (e.g. house fly, horse fly, fruit fly, horn fly, etc.); but you only have two classifications to compare it to – flies and crawlers. So what do you do? You create more classifications for the fly class.

This is multi-class classification, which provides additional labeled classes for the algorithm to compare the new data to.

We will delve more into multi-class classification and how it works in our next article, but for now, just know what binary classification and multi-class classification are.
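
Extending the same idea, here is a hedged Python sketch of multi-class classification, again assuming scikit-learn; the insect measurements and labels are invented for illustration:

```python
# A hedged sketch of multi-class classification with invented insect measurements,
# again assuming scikit-learn. Each label is one of several fly classes rather than two.
from sklearn.neighbors import KNeighborsClassifier

# Labeled sample: [body_length_mm, wing_span_mm]
X_train = [[6, 13], [7, 14],        # house fly
           [15, 30], [17, 32],      # horse fly
           [3, 6],  [3, 5]]         # fruit fly
y_train = ["house fly", "house fly",
           "horse fly", "horse fly",
           "fruit fly", "fruit fly"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

unknown_fly = [[4, 6]]              # a new bug, measured the same way
print(model.predict(unknown_fly))   # ['fruit fly']
```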

Unsupervised Learning

Colorful illustration of AI unsupervised clustering
Photo by Google DeepMind on Unsplash

Unsupervised learning involves training a computer program without providing any labels or markings to the data. The aim is to enable the program to find (learn) patterns and relationships on its own.

It does this by reading millions of pieces of information and grouping them into categories based on their characteristics or patterns, then deciding what a new entity is by matching it to one of the categories it created.

In other words, it finds matching patterns from the groups in the dataset completely on its own and then labels them without human intervention. This is called clustering.

Anomaly detection is the task of identifying data points that are unusual or different from the rest of the data. This can be useful for tasks such as fraud detection and quality control.
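
Here is a minimal Python sketch of clustering, assuming scikit-learn; the data points are invented and carry no labels, so the algorithm has to group them on its own:

```python
# A minimal clustering sketch (assuming scikit-learn): the data carries no labels,
# and the algorithm groups the points purely by how close their "patterns" are.
from sklearn.cluster import KMeans

# Unlabeled measurements -- two natural groups, but the program is never told that.
data = [[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],
        [8.0, 8.2], [7.9, 8.5], [8.3, 7.9]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(data)

print(kmeans.labels_)                # e.g. [0 0 0 1 1 1] -- the clusters it found on its own
print(kmeans.predict([[8.1, 8.0]]))  # a new point is assigned to the nearby cluster
```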

Reinforcement Learning

Reinforcement learning (RL) is a type of machine learning that focuses on training agents to make decisions based on experience in an environment. RL algorithms learn by trial and error, receiving feedback in the form of rewards or penalties for their actions.

The goal of RL is to maximize cumulative rewards over time by finding an optimal policy, or a sequence of actions, that leads to the highest possible reward. The agent explores the environment by taking actions and receiving feedback, which it uses to update its policy and improve its performance.

One of the defining features of RL is the use of a feedback loop in which the agent’s actions influence the state of the environment, which in turn affects the rewards received by the agent. This feedback loop allows the agent to learn from experience and adjust its behavior accordingly.

RL has been applied to a wide range of problems, such as games, robotics and autonomous driving. It is particularly useful in scenarios where the optimal action may not be immediately clear and where exploration is necessary to find the best solution.
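
For a concrete feel of the reward-and-feedback loop, here is a tiny Q-learning sketch in plain Python; the five-cell “corridor” environment is invented purely for illustration:

```python
# A tiny Q-learning sketch in plain Python: an agent on a 5-cell corridor learns,
# by trial, error and reward, that moving right toward the goal cell pays off.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:         # an episode ends at the goal cell
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Feedback loop: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy prefers "move right" in every cell.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])  # [1, 1, 1, 1]
```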

Overall, these AI approaches are widely used across industries and applications, and we continue to see growth and development as artificial intelligence technology advances.

What are the advances or dangers that AI can bring in the future? Read our article on the Pros and Cons of AI to find out.

Machine Learning Terms to Know

Data Sample
Data Model
Data Pattern
Binary Classification
Multiclass Classification
Supervised Learning
Unsupervised Learning
Reinforcement Learning

How to Optimize for Voice Search in 2023

Illustration of voice search with man at microphone
Image: iStock

Voice Search Overview

Voice search is here to stay and will only gain momentum as we proceed into the future. For those in marketing or SEO, it is important to stay up to date with these features and optimize accordingly.

The processes behind voice and text search are quite different. Voice search queries may be longer and more complex, as people tend to ask questions in a conversational style, while text queries are typically shorter and more direct.

Another difference is in the way search results are presented. In text search, results are typically displayed on a search engine results page (SERP), with a list of links and brief descriptions. In contrast, voice search typically provides only the most relevant result, read aloud by a virtual assistant or smart speaker such as Apple Siri, Amazon Alexa, Google Assistant or Microsoft Cortana. This means that optimizing for voice search requires a different approach from traditional SEO, with an emphasis on providing clear, concise answers to common spoken questions.

Searching by sound is an SEO component that cannot be overlooked and with the accelerating advancements in artificial intelligence, it is imperative that web developers and SEOs keep a watchful eye on this evolving technology.

The Statistics

Laptop computer showing statistics
Photo: iStock

As of the writing of this article, 32% of people between the ages of 18 and 64 use a voice search medium (Alexa, Siri, Cortana, etc.), and that number will only grow as we move into the future.

Entering standard text search queries on mobile devices is commonplace, with over 60% of cell phone users text searching and 57% of mobile users taking advantage of voice search. 

It should be no surprise that Google is the most successful interpreter of audio searches, with 95% accuracy.

In a study in 2021, 66.3 million households in the US were forecasted to own a smart speaker and that forecast has become a reality as of 2023.

Voice technology stretches beyond search queries as 44% of homeowners use voice assistants to turn on TVs and lights, as well as an array of other smart home devices currently on the market. 

With statistics like these, speaking to robotic assistants is here to stay and will only grow with new technologies as we proceed through the 2020s and beyond.

How Does Voice Search Work?

Woman speaking into a mobile phone
Photo: iStock

The Physics Behind It

If you just need to know that there is an analog-to-digital conversion and are not interested in the specifics of how it’s done, you can skip this part and go to the next section, which is Where Does the Data Come From?

We will summarize how the sound of human speech is converted into machine language, a process with two stages: filtering and digitizing.

Filtering: Smart speakers and voice assistants are designed to recognize the human voice over background noise and other sounds; they filter out the unwanted sounds so that they hear only our voices.

Digitizing: All sound naturally exists as analog waveforms (sine waves). Computers cannot process analog signals directly; the sound must be converted into the computer language of binary code.

Below are the details of how an analog signal is converted to digital. 

The Analog Conversion Process

Illustration of a sine wave
Image by Gerd Altmann from Pixabay

In order to make this conversion, an Analog-to-Digital Converter (ADC) is required. The ADC works by sampling the analog signal at regular intervals and converting each sample into a digital value.

The steps are as follows:

    1. Sampling: The first step is to measure the amplitude (or voltage) of the analog signal at fixed intervals. The rate at which these measurements are taken, known as the sampling rate or sampling frequency, determines the level of detail captured in the digital signal. The Nyquist-Shannon sampling theorem states that the sampling rate must be at least twice the highest frequency of interest in the signal. Sampling is what converts continuous analog sound waves into a series of values that digital systems such as computers or microcontrollers can store, transmit and process.
    2. Quantization: Once the analog signal is sampled, the next step involves assigning a digital value to each sample based on its amplitude. The resolution of the quantization process is determined by the number of bits used to represent each sample. The higher the number of bits, the greater the resolution of the digital signal.
    3. Encoding: The final step is to encode the quantized samples into a digital format. This can be done using various encoding techniques such as pulse code modulation (PCM) or delta modulation.

Overall, the main process of converting analog to digital frequencies involves sampling, quantization, and encoding. The resulting digital signal can then be processed using digital signal processing techniques.
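
Here is a simplified Python sketch of the sampling and quantization arithmetic, using an invented 440 Hz tone; a real ADC does this in hardware:

```python
# A simplified sketch of the sampling and quantization steps, using a pure 440 Hz tone.
# Real ADCs work in hardware; the numbers here only illustrate the arithmetic.
import math

signal_freq = 440          # Hz (an "A" note)
sample_rate = 8000         # Hz -- comfortably above the Nyquist minimum of 2 * 440
bits = 8                   # quantization resolution: 2**8 = 256 possible levels
levels = 2 ** bits

samples = []
for n in range(16):                                   # take the first 16 samples
    t = n / sample_rate                               # 1. sampling: measure at fixed intervals
    amplitude = math.sin(2 * math.pi * signal_freq * t)
    code = round((amplitude + 1) / 2 * (levels - 1))  # 2. quantization: map [-1, 1] to 0..255
    samples.append(code)                              # 3. encoding: store as integer codes (PCM-like)

print(samples)
```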

In summary: Smart speakers and voice assistants take in the audio from a person’s speech and convert it to machine language.

Where Does the Data Come From?

Outline of a computer screen with a cloud behind it
Image by Gerd Altmann from Pixabay

Smart speakers and voice assistants pull their data from an aggregate of sources.

If you want your business to grow, you must be attentive to where content for voice search is collected so that you can make intelligent decisions regarding how to optimize for these devices. 

Amazon Alexa

When Alexa responds to a query, it relies on Microsoft’s Bing search engine for the answer. Why? Because Amazon and Microsoft are both in direct competition with Google, even though Google has the most popular search engine in the world.

Amazon’s refusal to use Google for audio responses is not something to be concerned about. After all, Bing’s search algorithms are very similar to Google’s.

With that said, if a person speaks to Alexa with a specific request (e.g. “What’s the weather today?”), Alexa can pull that information from a database associated with that request. In this case, Alexa will connect to AccuWeather. The device can also access Wikipedia and Yelp if it needs to.

Apple Siri

Initially, Apple used Bing as its default search engine, but in 2017, Apple partnered with Google. Now, when you say “Hey Siri”, you can expect Siri to access the immense data repository from Google and supply the answer. This applies to the Safari browser for text searches as well.

There is a caveat though. When it comes to local business searches, Siri will call on Apple Maps data and will use Yelp for review information.

Microsoft Cortana

This one is probably the most straightforward of all, as Cortana relies on what else but Microsoft Bing for its information.

Google Assistant

OK, this one’s a no-brainer. Google can currently index trillions of pages to retrieve information, and since this also applies to Apple’s Siri, this section is especially important if you want to optimize voice search for these voice assistants.

In most cases, Google and Siri will read from Google’s featured snippet.

So What is a Featured Snippet?

Screenshot of a Google featured snippet
Image: © SMS

A featured snippet is what you see after you run a Google search query: a paragraph that appears at the top of the page and summarizes the answer to a question.

The information that Google uses for the snippet is gathered from what Google determines to be the most reliable source (website) for that information.

How Does Google Determine a Featured Snippet?

For a snippet to be posted by Google, Google needs to know that the source is trustworthy via its domain authority, link juice and high-quality content, to name three important organic factors that any SEO would already know. In addition to these factors, Google will most often defer to “HowTo” and FAQ pages to pull in the snippet.

Is Structured Data Needed?

Structured data is extra code that helps Google better understand what the page or parts of the page are about.

One might wonder if structured data has to be used in order to earn the featured snippet. The answer is no. As per Google, as long as the web page is optimized properly and contains the questions that match the user’s query (or voice search in this case), structured data is not necessary; however, it wouldn’t hurt to put it in, as we are all aware that nothing is static in the SEO world and this rule can easily change in the future.

The reason why Google focuses on “HowTo” and FAQ pages is that their content reflects that of human speech. For example, an FAQ page on EV cars may have the question “How long do EV batteries last?” – That is exactly how a person would ask a voice assistant that same question!
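
As a hedged example, here is what FAQ structured data in schema.org’s JSON-LD format might look like for that EV question, built here as a Python dictionary and serialized with json.dumps (the answer text is only a placeholder):

```python
# A hedged sketch of FAQ structured data in schema.org's JSON-LD format,
# built as a Python dictionary and serialized to JSON. The answer text is a placeholder.
import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How long do EV batteries last?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Most EV batteries are designed to last many years of normal driving."
        }
    }]
}

# The resulting JSON would be placed in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_markup, indent=2))
```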

For Google Assistant, an ‘Action’ (the equivalent of an Alexa Skill) is created, and Google will read the snippet back to the user to answer the question they asked.

Summarizing Optimization for Voice Search

Alexa

Bing: If you have not already done so, bring Bing into your scope of work for SEO and start optimizing for this search engine.

Yelp: We all know that reviews are of the utmost importance, so check out Yelp for your or your client’s business and build on those reviews! Legitimately of course.

Siri

Google SEO: If you are already optimizing for Google’s search, just keep up the good work.  

Apple Factors: Where you might not be fully optimized is with Apple Maps, so get going. Start by registering with Apple Business Connect.

Yelp: And now Yelp is back in the picture!

Cortana

Bing: As mentioned, become a Bing SEO expert and you are ready to ask Cortana anything.

Google

Besides the standard organic optimization, focus on schema markup for HowTo and FAQ articles for voice search, which, if you’re lucky, will be shown on the SERP as a featured snippet.

There you have it. How to optimize for voice search. Let’s get these robots configured so that our businesses will be the first thing you hear from your voice assistant!


The Pros and Cons of AI

Human hand touching a brain and AI hand touching a brain
Image by Gerd Altmann from Pixabay

Are you afraid of what AI can do, or are you looking forward to the benefits it can provide? Part of your answer depends on whether you see the glass as half full or half empty, but the reality is that there are always consequences to technological advancements, whether for the good of all or for those individuals looking to gain an upper hand at the expense of the rest of us. The development of the atom bomb grew out of Einstein’s theory of relativity, even though the scientist had no idea of the frightening consequences his work would bring.

Overview

Artificial intelligence (AI) is a rapidly growing field that has the potential to transform our world in countless ways. From healthcare and finance to education and transportation, AI can benefit mankind in a myriad of ways, but not everyone is on board, as we will see in this article covering both the good and the bad of these advancements in artificial intelligence.

Regardless, artificial intelligence is advancing at an exceptional rate.

So now, let’s take a look at both the positives and negatives of artificial intelligence and what it can potentially mean for us, and then you can make your own decision.

The Benefits

Advancements in Healthcare

Medical Technology
Photo: Pixabay

One of the most significant benefits of AI is its potential to revolutionize healthcare. AI can analyze vast amounts of medical data, including patient records, lab results and imaging studies.

With this information, its algorithms can detect patterns and make predictions that could help doctors diagnose and treat diseases more accurately and quickly. It can also help identify high-risk patients, allowing doctors to intervene early and prevent diseases from progressing.

Transportation

Photo of traffic
Photo: Free Images

Artificial intelligence can be used to optimize traffic flow and reduce congestion and subsequently, travel time for busy commuters.

Moving not too far into the future are autonomous vehicles – cars that drive themselves. Some are being tested now by companies such as Tesla and Google, and Tesla already has vehicles with autonomous features on the market, but a driver must remain behind the wheel.

When it does become mainstream, self-driving cars, buses and trains have the potential to significantly reduce accidents, traffic congestion, and pollution. By removing the human element from driving, these vehicles can make our roads safer and more efficient.

Education

A young man with long hair working on a laptop, hands close up
Photo: iStock

Artificial intelligence can also be used to improve education. AI-powered tutoring systems can provide personalized, adaptive learning experiences for students of all ages and abilities.

By analyzing a student’s learning style, strengths and weaknesses, these systems can create customized lesson plans that help them learn more effectively. This can lead to improved academic outcomes and greater educational equity, as students who may struggle with traditional teaching methods can receive tailored instruction that meets their needs.

One caveat is the temptation for students to cheat by using apps such as ChatGPT, but alert teachers should be able to tell the difference by noticing whether a student’s writing style has changed. With that said, this will still be a challenge for educators.

Finance

AI can be used to detect fraud, manage risk and optimize investments. By analyzing financial data, machine learning algorithms can detect patterns that may indicate fraudulent activity, alerting financial institutions to potential threats before they cause significant damage.

Additionally, it can help financial institutions manage risk more effectively by predicting market fluctuations and identifying potential investments that offer high returns with low risk.

Law Enforcement

AI-powered surveillance systems can detect potential threats in public spaces, alerting law enforcement and allowing them to respond more quickly.

It can also be used to analyze crime data, helping law enforcement identify patterns and allocate resources more effectively. Indeed, New York City Mayor Eric Adams introduced crime-fighting robots to the Times Square area and if they prove productive, they will be placed all over the city.

The Environment

Illustration of the effects of climate change, showing grass and then barren ground
Photo: iStock

By analyzing environmental data, AI can help us understand the impacts of human activity on the planet and develop strategies to mitigate them. For example, it can help us optimize energy consumption, reduce waste and improve recycling efforts. Additionally, AI can help us predict and respond to natural disasters, reducing their impact on human lives and property.

Of course, as with any powerful technology, AI also poses some risks and challenges. One concern is the potential for it to be used in ways that violate privacy or human rights.

Additionally, the use of AI in decision-making processes could result in biases or discrimination if the algorithms are not carefully designed and monitored. Finally, there is the risk that AI could become too powerful, leading to unintended consequences or even threatening human existence.

To mitigate these risks, we must approach AI development with caution and foresight. We must ensure that AI is developed and used in ways that prioritize human welfare and respect human rights. This requires ongoing dialogue and collaboration between technologists, policymakers and the public.

Potential Dangers

Unknown person in black surrounded by binary code
Photo: Pixabay

Artificial intelligence can pose significant dangers that need to be addressed. Many of the same threats associated with the potential misuse of quantum computers apply to AI as well.

The Labor Question

No doubt, unemployment due to artificial intelligence is a major concern. As this technology advances, it becomes increasingly capable of performing tasks that were once done by humans, leading to job loss and economic disruption.

For example, self-driving cars have the potential to replace human drivers, which would lead to unemployment in the transportation sector. This could result in a significant reduction in the workforce and an increase in social inequality.

Discrimination

Another danger is its ability to perpetuate biases and discrimination. Algorithms are designed to learn from data, and if the data used is biased, the AI will also be biased. This can result in unfair decisions being made, such as in hiring, lending, or criminal justice. It can have significant negative impacts on individuals and communities.

The Military

Furthermore, AI could pose a significant threat to global security. With technological advancements accelerating in this arena, it is becoming increasingly possible for computers to be used in cyber-attacks or even to control weapons systems. This could lead to significant risks and damages, such as loss of life or damage to critical infrastructure.

Malicious Financial Behavior

Woman gesturing in awe while looking at a laptop
Photo: iStock

The financial markets would most likely be the most affected by artificial intelligence, both for good and for bad. We have already discussed the good, but the bad is a growing concern. There could be serious consequences for banks and the stock market as nefarious individuals try to override the algorithms with corrupt data and computer instructions. The expression “What’s in your wallet?” will take on a much greater significance should malicious AI alter your bank accounts.

A Question of Morals

Finally, the development of AI could also pose ethical and moral dilemmas. As these algorithms become more intelligent, questions arise about their autonomy and decision-making capabilities. If an AI system makes a decision that is morally or ethically questionable, who is held accountable? What happens if an AI system is programmed to harm humans or perform unethical tasks?

AI in a Nutshell

Artificial intelligence has the potential to benefit us in countless ways. From healthcare to education, finance to public safety, and the environment, it can help us solve some of the biggest challenges facing our society.

However, we must approach AI development with caution and foresight, taking steps to mitigate risks and ensure that this technology is used in ways that prioritize humanity and respect human rights. With careful planning and collaboration, we can harness the power of AI to create a better future for all!