An Introduction to AI: Some Cool and Not-So-Cool Stuff

Let’s talk about something super cool called Artificial Intelligence, or AI for short. You actually interact with AI a lot more than you might realize! It’s showing up everywhere these days—at work, in government buildings, hospitals, and even schools. It’s even part of your daily scrolling and TV watching, from social media feeds to, yes, YouTube!

Imagine AI as a super-smart helper. It can handle tricky tasks for grown-ups, like figuring out who needs help at a government office and how to get it to them. Doctors use it to spot diseases like cancer or diabetes, and teachers can even get help from AI to check homework and suggest ways for students to get even better at what they’re learning. There’s an app for just about all of it; all you have to do is look around.

While AI is amazing at making things faster and better, sometimes it can also make things less efficient or less fair. But don’t worry, we’re going to explore what AI really is and how it’s being used. Now, the term “artificial intelligence” was first thought up way back in 1955 by some smart folks who wanted to do a research project. One of them, Professor John McCarthy, later said that AI is all about “the science and engineering of making intelligent machines.”

As this field grew, AI came to mean a big part of computer science that focuses on making computer programs and machines that can do thinking tasks and make decisions all by themselves. AI brains come in different “sizes.” We have “narrow” or “weak” AI, which can only handle specific jobs. Think of Apple’s Siri—she’s a narrow AI that’s great at answering questions and doing basic math. Then there’s “general” or “strong” AI, which is more like a human brain and can handle all sorts of different tasks. You usually see this kind of super-smart AI in science fiction movies, like “Avengers: Age of Ultron,” “The Terminator,” and even “WALL-E.”

The most common type of AI we use today is called “machine learning,” which is a kind of narrow AI. Machine learning uses special math tricks and computer rules to learn from patterns in information. This learning helps it guess what should happen next or what decision to make. For example, Netflix uses machine learning to suggest movies you might like, and self-driving cars use it to adjust their speed to the cars around them.

There are two main ways machine learning gets the information it needs: supervised and unsupervised.

Supervised learning is like teaching a computer with flashcards. You show it lots of labeled examples of what something looks like. For instance, you could show an AI system many pictures of furry, four-legged creatures with wagging tails and tell it, “This is a dog!” After seeing enough examples, the AI learns to recognize a dog. As another example, you could show an AI system many drawings of three-sided figures and tell it, “This is a triangle!” After seeing enough examples, the AI learns to recognize a triangle.
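To make the flashcard idea concrete, here is a tiny hypothetical sketch in Python. All the animals and feature numbers below are invented for illustration; the program “learns” simply by storing labeled examples, then labels a new animal by copying the label of the most similar flashcard (a bare-bones nearest-neighbor approach, not any particular real system).

```python
# A tiny "flashcard" (supervised) learner: 1-nearest-neighbor in plain Python.
# All animals and feature numbers here are made-up illustrative data.

def distance(a, b):
    # How different two feature lists are (squared Euclidean distance).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, new_features):
    # Find the labeled flashcard most similar to the new animal, copy its label.
    best = min(examples, key=lambda ex: distance(ex[0], new_features))
    return best[1]

# "Flashcards": (features, label). Features: [furriness 0-1, legs, tail-wagging 0-1]
training = [
    ([0.9, 4, 1.0], "dog"),
    ([0.8, 4, 0.1], "cat"),
    ([0.1, 2, 0.0], "bird"),
]

print(predict(training, [0.85, 4, 0.9]))  # closest to the dog flashcard -> "dog"
```

The more flashcards you add, the finer the distinctions the learner can make—which is exactly why real systems are trained on millions of labeled examples.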

Unsupervised learning is a bit different because it doesn’t use those labels. Instead, the computer tries to find its own patterns and telltale features of a dog. Maybe it notices the little nose, the types of ears, or the particular shape and size that make something a dog.
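Here is a minimal sketch of the no-labels idea in Python, using a tiny one-dimensional k-means clustering routine. The “ear length” measurements are invented; the point is that nobody tells the program which group is which—it splits the numbers into two natural clusters on its own.

```python
# Unsupervised learning sketch: grouping unlabeled measurements with a tiny
# 1-D k-means. The "ear length" numbers are hypothetical illustration data.

def kmeans_1d(values, iterations=10):
    centers = [min(values), max(values)]  # start the two group centers far apart
    for _ in range(iterations):
        groups = [[], []]
        for v in values:
            # Put each value in the group whose center is closest.
            nearest = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[nearest].append(v)
        # Move each center to the average of its group.
        centers = [sum(g) / len(g) for g in groups]
    return groups

ears = [3.1, 2.9, 3.3, 9.8, 10.2, 9.9]  # maybe two hidden kinds of animal?
print(kmeans_1d(ears))  # -> [[3.1, 2.9, 3.3], [9.8, 10.2, 9.9]]
```

The program never heard the words “big-eared” or “small-eared”—it simply discovered that the measurements fall into two clumps, which is the heart of unsupervised learning.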

Even though most of the AI we use now is narrow AI, researchers are working hard toward general AI. Every day, AI is getting smarter and becoming more a part of our lives. This is happening because we’re collecting tons more information, the computer rules are getting better, and computers are getting super fast. But even with all these cool advancements, today’s AI isn’t anything like the super-intelligent machines you see in sci-fi books and movies.

In the next part, we’ll talk about some common myths and things people misunderstand about AI. This will help us see why we need to be careful and think about both the good things and the not-so-good things about artificial intelligence.

AI is made by people, and the main goal is to help individuals and society. While AI has a lot of amazing potential, it also has some important limitations. As AI tools become more common in our world, it’s really important for us to understand what they can’t do and to find a good balance between their benefits and their risks.

Remember how we talked about training an AI to spot a dog—a furry, four-legged creature with a wagging tail? Well, let’s think about that a bit more. We know that not all dogs wag their tails or even have long tails, right? So, if our AI was only trained on dogs with wagging tails, it might look at a dog with a short, still tail and say, “Nope, that’s not a dog!” This kind of mistake is called a “false negative.”
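A false negative fits in a few lines of Python. This is a deliberately over-narrow, hypothetical rule (not how real image classifiers work): because it insists on a wagging tail, it rejects a perfectly real dog that doesn’t have one.

```python
# A "false negative" in miniature: a rule learned only from wagging-tail
# examples rejects a real dog that happens not to wag a long tail.

def is_dog(animal):
    # Over-narrow rule: every dog in the training data wagged a tail,
    # so the rule wrongly treats a wagging tail as required.
    return animal["furry"] and animal["legs"] == 4 and animal["wagging_tail"]

bulldog = {"furry": True, "legs": 4, "wagging_tail": False}  # short, still tail
print(is_dog(bulldog))  # False -> a real dog gets rejected: a false negative
```

The opposite mistake—calling a wagging-tailed cat a dog—would be a “false positive”; good AI training tries to keep both kinds of error low.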

Here’s another example. Imagine we’re teaching a machine learning computer program to help us hire new employees by showing examples of awesome employees we’ve had in the past. We’d show them things like their work experience, schooling, and special skills. Now, if our company mostly hires a certain type of person, the computer program will learn to look for those same types of people as good employees. But then, if someone different comes along, the program might say that person isn’t a good fit for the company. This is an example of something called “machine learning bias,” and we’ll dig into this more later.

Lately, AI has become more involved in our decision making. That means AI is now helping to make big decisions in places like hospitals, courthouses, and hiring offices—like deciding who gets insurance, who gets out of jail early, and who gets hired. People often misunderstand things about AI when it comes to its accuracy, fairness (objectivity), who’s responsible for it, and whether it can actually “feel” things.

Accuracy: AI systems are super good at following rules and steps they’ve learned. But if the information they learned from wasn’t good or carefully chosen, then the AI system will produce bad or unreliable results.

Objectivity (Fairness): Like I said earlier, humans create AI. So, AI systems naturally reflect human ideas and ways of doing things. This means they can sometimes be unfair or biased. The data AI uses needs to be validated, refreshed, and inspected to make sure it’s not what is sometimes called GIGO: garbage in, garbage out.

Responsibility: Research has shown that people tend to trust computer-made decisions too much, sometimes even ignoring information that shows the computer is wrong! This is called “automation bias,” and it can get in the way of using AI responsibly. It’s important for people to understand that AI systems don’t have feelings or a sense of responsibility. So, we need good rules and people watching over AI to make sure it’s developed and used safely and wisely.

Sentience (Feeling): Remember how we talked about AI having different “sizes,” from narrow to general? Just to remind you, narrow AI does specific tasks, like finding patterns based on what it learned. General AI can do lots of different tasks, just like a human. AI sentience, or “artificial general intelligence,” means a machine that can think for itself, know it exists, and even feel emotions! Even though science fiction movies often show AI that can feel, like the movies we referenced earlier, truly feeling AI is still a long way off.

There are many fantastic ways AI can help people! Some ways are for certain groups of people, and other ways are for everyone. I’m going to share a couple of examples of how AI can make people’s everyday lives better.

When Mark was in his early twenties, a sudden illness caused him to lose a lot of his hearing. He loved going to school and spending time with his friends, but he started to worry about being able to understand conversations and keep up in class. He looked for cool technologies that could help him stay connected to the world around him. One amazing tool he discovered was a smart app that uses AI to turn spoken words into text, right on his phone screen.

This AI-powered app helped Mark in so many ways! It could transcribe lectures in real-time, letting him read along as his teachers spoke. When he was out with friends, it could show him what they were saying, even in noisy places like a restaurant. Because of this app, Mark could participate in discussions, follow along in movies, and even have heart-to-heart talks with his family, all by reading the words as they were spoken. For Mark, this smart hearing app gave him back a kind of independence that was absolutely priceless. From this story, we can clearly see how much AI can lend a helping hand to individuals.

Now, let’s think more about how AI can generally make things more efficient, effective, and fair.

AI is super good at doing jobs that are repeated over and over, and it can do huge amounts of these tasks much faster than humans. Take credit card fraud, for example. Between 2019 and 2022, the number of credit card fraud reports went up by almost 40%! A big reason for this is that credit card companies started using AI that can keep an eye on tons of things that might mean fraud, like how often and where you make purchases. In the past, this kind of fraud was only fixed after the fake purchase happened. But today, AI is being used more and more to quickly check transactions and make billions of decisions right away, stopping a fraudulent purchase before it even hits your card!
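A toy version of that kind of instant fraud check might look like the Python sketch below. Every threshold, field name, and rule here is invented for illustration—real systems learn their risk signals from enormous amounts of transaction data rather than using hand-written rules.

```python
# A toy fraud check: flag a transaction when several risk signals line up.
# All fields and thresholds are hypothetical, chosen only for illustration.

def looks_fraudulent(tx, usual_country, usual_max_amount):
    signals = 0
    if tx["country"] != usual_country:
        signals += 1                      # purchase far from home
    if tx["amount"] > usual_max_amount:
        signals += 1                      # much bigger than usual spending
    if tx["minutes_since_last"] < 2:
        signals += 1                      # rapid-fire purchases
    return signals >= 2                   # flag only when signals pile up

suspicious = {"country": "FR", "amount": 2500, "minutes_since_last": 1}
print(looks_fraudulent(suspicious, usual_country="US", usual_max_amount=300))
```

Because each check is just arithmetic and comparisons, a computer can run billions of them per day—that speed is what lets fraud get blocked before the purchase goes through.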

While being super-fast is great, it’s also important for AI to be effective. Going back to our credit card example, if real purchases are accidentally flagged as fraudulent, it’s bad for both the credit card companies and the people who own the cards. With better information and better computer programs, companies can do a much better job of finding and stopping fake transactions while protecting customers from having their accounts frozen for no reason.

Besides making things more efficient, AI can also help make some things fairer. Let’s think about credit reports and scores, which are used to decide if someone can get a credit card. Old ways of doing credit reports and scores have been found to be unfair. But new AI-powered ways of judging whether someone is good with money are being created to find and fix hidden unfairness, like when how much debt someone has or where they live ends up standing in for things like their race, gender, or age. Advances in AI can help uncover these hidden biases and help more people get fair access to credit.

Imagine Sarah applying for her dream job. An AI system is used to quickly look through thousands of applications to find the best candidates. This AI was taught by looking at lots of resumes from people who got hired for similar jobs in the past. But here’s the problem: if most of those past successful hires came from a very specific type of company or had a very particular kind of training that isn’t common anymore, the AI might unfairly ignore someone like Sarah. Even if Sarah has amazing new skills and fresh ideas that are perfect for the job now, the AI might accidentally toss her application aside because it only “knows” to look for the old-fashioned way of doing things. This means a super talented person could miss out on a great opportunity, and a company could miss out on a great candidate, not because they aren’t good enough, but because the AI learned from old information.

Another example: police departments are using more and more AI technologies, like facial recognition systems, to try to make their work faster. Facial recognition systems use machine learning to identify, collect, store, and check facial features so they can be matched to photos of people in a huge database. But these systems often have problems because of bias.

NIST (the National Institute of Standards and Technology) looked at how accurate these facial recognition systems are. Their research found that facial recognition systems often aren’t good at recognizing faces that aren’t Caucasian, making more mistakes on people of color and on women.

In this section, we’ll explore three of the biggest risks of AI: bias and discrimination, transparency and accountability, and privacy and security.

Let’s start with bias and discrimination. We just talked about an example with facial recognition, but what actually causes this to happen? “Algorithmic bias” means that a computer system makes mistakes that are unfair, and these mistakes happen because the computer program made wrong assumptions when it was learning. In our example of the facial recognition system making a mistake, algorithmic bias may lead to an innocent person being arrested.

Here’s another way bias can sneak into AI. Imagine a popular online video sharing platform where you upload your cool videos. The platform uses AI to decide which videos to show to more people and which ones to hide, maybe because they seem like spam. But sometimes, this AI gets a little confused. If the AI was mostly taught using videos made in one specific way, or using certain words or sounds, it might accidentally think that videos from people who speak differently or share stories from a different culture are ‘less interesting’ or ‘not as good.’ This means that awesome videos from talented creators might not get seen by as many people, not because they aren’t great, but because the AI’s training made it accidentally unfair to certain ways of sharing.

Now let’s talk about transparency and accountability. Machine learning programs can be very different in how complicated they are. Simple programs are usually easy to understand: we can see what things the program considered and how it used them to make its guesses. But complex programs are so complicated that even the people who made them don’t fully understand how the programs reach their decisions, which is why they’re often called “black boxes.” With a black box, it might be impossible to know what the program considered and how it used those things to make its predictions.

When we can’t see inside these black boxes, it’s harder to make sure the AI is doing its job properly and that people are responsible for its actions. For instance, think about a self-driving car that uses image recognition to spot and avoid things. It’s important to know what information was used and how the program was taught to make sure the self-driving car performs safely.

Finally, let’s look at the dangers to privacy and security. Many of the AI systems you use every day rely on your personal or sensitive information. For example, when you search for something on your phone, the results can be based on many personal things like what you’ve looked at before, where you are, and what you do on other apps. AI systems can be attacked in ways that mess with their data, like “adversarial machine learning.” This includes tricks like “data corruption” and “poisoning,” where a bad person puts in harmful information to make the AI program guess wrong. If these attacks aren’t stopped, they can cause serious harm.

Going back to our self-driving car example, imagine a mischievous person wants to make the self-driving car mess up. They know the car has learned to recognize a stop sign and come to a complete stop. By cleverly spray painting over the word “stop” on the sign, they might trick the AI into not seeing it as a stop sign anymore. The car would then keep going and drive right past the stop sign! This is obviously super dangerous for the driver and everyone else around them.
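Here is the spray-paint trick as a tiny hypothetical Python sketch. Real sign recognizers look at pixels rather than words, but this over-simple, made-up rule shows the core weakness: a small change to the input flips the answer even though a human would still see a stop sign.

```python
# Adversarial tampering in miniature: a brittle sign "recognizer" keyed on
# the text painted on the sign. This rule is invented for illustration only.

def recognize_sign(shape, text):
    # Over-reliant rule: it needs BOTH the octagon shape and the exact word.
    if shape == "octagon" and text == "STOP":
        return "stop sign"
    return "unknown"

print(recognize_sign("octagon", "STOP"))  # recognized: "stop sign"
print(recognize_sign("octagon", "ST0P"))  # paint-altered text -> "unknown"
```

A human driver shrugs off the altered letter; the brittle program does not. Defending against these tricks means training on messy, tampered-with examples too, not just perfect ones.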

We’ve taken a super fun ride through the wild and wonderful world of AI, or Artificial Intelligence. We’ve seen how AI is like a super-smart helper, making everything from following conversations with hearing loss to sorting job applications a little bit easier (and sometimes, a little bit trickier!). We’ve peeked behind the curtain to understand what AI really is, busted some myths, and even spotted a few of the sneaky ways bias can creep into these clever computer brains.

But here’s the really cool part: AI and Machine Learning are still like little puppies, learning new tricks every single day. They’re not here to take over the world (unless it’s the world of doing boring chores faster!). Instead, they’re powerful tools that we, as humans, get to design and use.

So, here’s my challenge to you: The next time you use a phone assistant, scroll through your favorite video app, or even just look at recommended shows, remember that AI is working behind the scenes. And then, I want you to think: Where in your world could a little bit of AI magic make things even better? Maybe it’s organizing your toys, helping your pet learn a new trick, or even making your homework a little less daunting.

Just remember our golden rules: always double-check the data, keep an eagle eye out for any unfairness, and never forget that humans are the brilliant brains in charge. With smart thinking and careful checking, we can make sure AI continues to be an amazing force for good, helping us build a smarter, more helpful future, one clever algorithm at a time!
