Your AI Professor Just Said The Moon Is Cheese: How To Spot Its Savvy Scams

Imagine a brilliant college professor, the kind who seems to know absolutely everything about every subject. They stand at the front of the class, speaking with such authority, such unwavering confidence, that you hang on their every word. They never hesitate, never pause for thought. Every answer is delivered with a flourish, a definitive tone that makes you believe it without question. Now, imagine that same professor, mid-lecture, confidently stating that the moon is made of blue cheese. Or that gravity is really pronounced “mavity,” or that it was invented last Tuesday. They’re still just as confident, just as unwavering, even though what they’re saying is completely, utterly wrong. You’d be pretty confused, right? You’d start to wonder if you could trust anything they said.

Well, in a strange way, that’s a bit like dealing with Artificial Intelligence, or AI, today. These powerful computer programs, like ChatGPT, Gemini, Grok, and many others, can seem incredibly smart. They can write essays, answer complex questions, and even create images. But sometimes, they’ll tell you something with the exact same confident tone, and it will be totally made up. We call these made-up answers “hallucinations.” It’s not that the AI is trying to trick you; it’s simply a quirk in how it works. And understanding this quirk is key to using AI wisely.

When we humans lie, there’s usually a reason. We want to avoid trouble, get something we want, or perhaps play a prank. We know the truth, and we deliberately say something different. But AI doesn’t “lie” in that way. It doesn’t have feelings, intentions, or a sense of right and wrong.

So, what exactly is a “hallucination” when we’re talking about AI? Think of it this way: AI, especially the kind that generates text, has learned from vast amounts of information – basically, most of the internet. It’s as if it had read every book in the world and listened in on every conversation. Its main job is to predict the next most likely word or phrase to complete a sentence or answer a question.

When an AI “hallucinates,” it’s like our brilliant professor, mid-thought, suddenly pulling a “fact” out of thin air to fill a gap in their knowledge. They’re not trying to deceive you; they just believe the invented fact fits perfectly into the pattern of what they “know.” The AI generates text that sounds totally convincing, grammatically correct, and logically structured, even if the actual information is pure fantasy. It just “fills in the blanks” with what seems plausible based on its training, even if it’s completely wrong in reality. It’s a bit like when you’re talking and you say, “Oh, I know this,” then make up the rest, hoping it sounds right. But for AI, there’s no hoping; it just does.

Have you ever noticed that AI rarely says, “I don’t know”? This is a big part of why it can seem so confident even when it’s wrong. AI models are built to answer. They are trained to always provide a response, to complete the prompt you give them. Imagine asking a search engine a question and getting a blank page back; that’s simply not what these tools are designed to do.

The way AI learns is by finding patterns in the huge amounts of text and data it’s fed. When you ask it a question, it’s essentially trying to come up with the most statistically probable and coherent sequence of words to form an answer. It doesn’t actually “understand” the meaning of words in the same way we do. It doesn’t have a built-in truth-meter. It just knows which words tend to go together and in what order to form a sensible-sounding sentence.
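To make that “most statistically probable next word” idea concrete, here is a deliberately tiny toy sketch in Python. It is nothing like how ChatGPT, Gemini, or Grok are actually built (those are huge neural networks trained on enormous datasets), and the little table of phrases and probabilities below is invented purely for illustration. What the sketch does show is the core habit: given the words so far, the system always produces whatever tends to come next, with no “I don’t know” option and no fact-checking step.

```python
import random

# Toy "model": for a given phrase, the words that tend to follow it.
# These probabilities are made up purely for illustration.
next_word_probs = {
    "the moon is made of": {"rock": 0.55, "cheese": 0.40, "dreams": 0.05},
    "the capital of france is": {"paris": 0.97, "lyon": 0.03},
}

def complete(prompt: str) -> str:
    """Pick a next word by sampling from the learned word patterns.

    Notice there is no "I don't know" branch and no truth-meter:
    the sketch always answers with something that fits the pattern.
    """
    options = next_word_probs.get(prompt.lower(), {"something": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

if __name__ == "__main__":
    # Usually prints "rock", but every so often it will just as
    # confidently print "cheese": a tiny, harmless hallucination.
    print("the moon is made of", complete("the moon is made of"))
```

Run it a few times and you will occasionally get the blue-cheese answer delivered with exactly the same confidence as the correct one, which is the whole problem in miniature.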

This drive to always provide a full, complete, and logically flowing answer can sometimes override factual accuracy. If it encounters a gap in its “knowledge” or a question it hasn’t been directly trained on, instead of admitting it doesn’t know, it will confidently generate what it thinks is the most plausible answer based on the patterns it has learned. It’s like our professor being asked a question they haven’t prepared for. Instead of saying “I’m not sure,” they might just invent a plausible-sounding answer on the spot, so confident in their own ability that they don’t even realize it’s incorrect. They’re just so used to having an answer ready that they create one to keep the conversation going.

It’s one thing for an AI to get a simple fact wrong in a private chat. It’s another when its confident, incorrect answers make headlines. We’ve seen this happen with some of the biggest names in AI, showing that even the most advanced models aren’t immune to these “hallucinations.”

Remember when Google’s Gemini AI was released and faced criticism for generating historically inaccurate images? For example, when asked to create images of historical figures, it sometimes produced diverse representations that didn’t match the actual historical context. This wasn’t because the AI was trying to make a political statement; it was a result of its training trying to ensure variety and fairness, even when it led to factual errors in specific historical prompts. The AI was confident in the images it produced, but the output was clearly wrong in a historical sense.

Then there’s Grok, the chatbot from Elon Musk’s xAI, which has also had its moments. Designed to be witty and provide real-time information, it has sometimes confidently relayed incorrect details about current events or individuals. In one instance, it confidently stated a false fact about a news story, demonstrating that its quest for a quick, engaging answer can sometimes lead it astray from the truth. Then there was the SpaceX secret cargo it spoke about, but we will save that for another day.

Even ChatGPT, arguably the most well-known AI, has a history of confidently fabricating information. Early on, users often found it would invent sources for its answers, complete with fake URLs and author names. You’d ask it for a list of academic papers on a topic, and it would churn out impressive-looking titles and authors that didn’t exist anywhere in the real world. More recently, it has been known to get factual details wrong about people, places, and events, all delivered with its characteristic assured tone.

These aren’t minor glitches. When AI confidently presents misinformation, especially on platforms used by millions, it can quickly spread false narratives and cause genuine confusion. It shows us that even with all the incredible advancements, these tools are still works in progress, and their confidence isn’t always a sign of accuracy.

So, if one AI tells you something with supreme confidence, and you’re feeling a bit unsure, should you ask another AI? It’s an interesting thought, like getting a second opinion from another brilliant professor. And sometimes, it can actually be a smart move.

Think about it like this: if you ask our imaginary history professor a question, and then you go ask the economics professor the exact same question, their answers might be different because they come from different fields of knowledge, different perspectives. In the world of AI, different models (like ChatGPT versus Gemini) are often trained on different sets of data, or they might have slightly different internal ways of processing information. Because of this, one AI might catch a mistake or a gap that another one missed. If both AIs give you the same answer, especially if it’s a complex one, it might increase your confidence in that answer. It’s like two separate experts agreeing on something.

However, there’s a big “but” here. If these AIs are relying on similar underlying information, or if they share certain biases in their training, then asking a second AI might just confirm the first one’s mistake. It’s like asking two professors from the same department, who studied under the same mentor, the same question. They might both confidently give you the wrong answer because they share the same blind spots or incorrect information. So, while cross-referencing with another AI can sometimes be helpful, it’s not a foolproof solution. It’s certainly not the ultimate judge of truth.

Since we can’t always rely on AI’s confidence, and asking a second AI isn’t a guarantee, how can you tell if the AI professor is making things up? The good news is, you already have the most important tool: your own critical thinking. Here’s how you can build your personal “bullshit detector” when using AI:

First and foremost, verify with reliable human sources. This is the golden rule. If the AI gives you a fact, especially an important one, always, always check it against trusted websites, reputable news organizations, academic papers, and well-known encyclopedias (or, if you must, check with Joe Rogan). The internet is a vast place, and there’s still a lot of accurate human-created information out there. Don’t just take AI’s word for it.

Next, if the AI does provide citations or sources, don’t just glance at them. Click through and actually check them. This is where many AI hallucinations get exposed. Often, the AI will confidently list sources that don’t exist, or it will attribute information to a source that doesn’t actually contain that information. It’s like our professor confidently pointing to a textbook page that, upon inspection, is completely blank on the topic.

Also, try to ask follow-up questions. If an AI gives you an answer, gently probe it for more detail. “Can you elaborate on that point?” or “Where did you get that information?” Sometimes, a confident but incorrect answer will start to unravel when you dig a little deeper. If the AI struggles to provide more detail or starts contradicting itself, that’s a red flag.
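Going back to those citations for a moment: the very first pass of that check can even be automated. Below is a minimal Python sketch, using the widely available requests library, that asks two blunt questions about a source an AI handed you: does the URL load at all, and does the page even mention the title it supposedly contains? The URL and title in the example are placeholders rather than real citations, and a page that passes both checks still needs a human to read it and confirm it actually supports the claim.

```python
import requests

def quick_citation_check(url: str, claimed_title: str) -> str:
    """Rough first-pass check on a source an AI cited.

    Catches only the most obvious fabrications: dead links and pages
    that never mention the claimed work. A passing result is not proof;
    a human still has to read the page.
    """
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as err:
        return f"Could not reach the page at all: {err}"

    if resp.status_code != 200:
        return f"Page returned HTTP {resp.status_code}; the link may not exist."

    if claimed_title.lower() not in resp.text.lower():
        return "Page loads, but the claimed title appears nowhere on it. Be suspicious."

    return "Page exists and mentions the title. Now go read it yourself."

# Placeholder values; swap in whatever the AI actually cited.
print(quick_citation_check("https://example.com/some-paper", "A Study That May Not Exist"))
```

It is crude on purpose. The point is simply that the blank-textbook-page problem is often exposed in seconds, before you have invested any trust in the answer.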

Test its knowledge on something you do know. If you’re asking the AI about a topic you’re already familiar with, pay close attention to its answer. If it gets simple facts wrong about something you know well, then you should be very wary of its answers on topics you don’t know much about. This is a quick way to gauge its general reliability.

And perhaps most importantly, use your common sense. Does the answer feel right? Does it sound too good to be true? Does it contradict something you intuitively know to be correct? If an AI tells you that the Eiffel Tower was built in Antarctica, your common sense should immediately tell you that’s wrong, even if the AI says it with unwavering conviction. If something sounds outlandish or just plain weird, it probably is.

Remember, AI is an incredibly powerful tool for brainstorming, summarizing complex information, drafting emails or articles, and even generating creative ideas. But it is not an oracle of absolute truth. It’s a sophisticated pattern-matcher, not a wise sage or even the Wizard of Oz. It’s there to assist you, not replace your own judgment.

We’ve all been there. The first time you use AI, it feels like magic. It’s so fast, so articulate, so… smart. Then, you realize it just confidently told you something completely absurd. It’s part of the learning curve with this new technology, and it’s a very common experience.

Have you ever asked an AI a question and received a confident answer that turned out to be totally incorrect? Maybe it gave you the wrong recipe for your favorite dish, or a false historical fact, or even invented a person or a company. We’d love to hear about your experiences! It helps us all understand the quirks of AI better.

So, let’s bring it back to our brilliant but sometimes mistaken professor. AI is an amazing assistant, incredibly fast and capable of crunching through information at lightning speed. But just like that professor, it sometimes gets things wrong, and when it does, it sounds just as sure of itself as when it’s right. The “hallucination” isn’t a malicious act; it’s a byproduct of how these complex systems are built to generate responses.

The key takeaway here isn’t to be afraid of AI or to stop using it. Far from it! AI is an incredible tool that can save you time and spark creativity. The real lesson is that we remain the ultimate fact-checkers. Our critical thinking, our ability to verify information, and our common sense are more important than ever in an age where information, both true and false, can be generated instantly.

AI will continue to improve, becoming more accurate and less prone to these confident errors. But until then, and even after, the human element of discernment will always be essential. We are the ones who can truly understand context, nuance, and the difference between a plausible-sounding fabrication and genuine truth.

I hope this article helps you navigate the exciting, yet sometimes tricky, world of AI.

I’d love to hear your thoughts, comments, or recommendations on this topic! Please share them on social media by tagging @iamcezarmoreno. And for more insights and discussions like this, be sure to follow me, subscribe, or join my newsletter at https://cezarmoreno.com.
