How to Spot When AI Is Lying to You

You know that friend who never says “I don’t know”?

You ask them for the name of that Italian place on the corner, and instead of admitting they can’t remember, they give you a name, an address, and a confident recommendation — and when you get there, it’s a dry cleaner.

AI does the same thing. Except it does it with a straight face, perfect grammar, and the kind of authority that makes you feel silly for even questioning it.

This is, hands down, the most important thing to understand about using AI right now. And it’s the thing that most guides gloss over in favour of showing you clever tricks. So let’s spend some time on it — because once you see how this works, you’ll use these tools with a lot more confidence, not less.


What’s Actually Going On (And Why People Call It “Hallucination”)

The tech industry landed on the word “hallucination” to describe this — when AI produces information that sounds authoritative but is partly or entirely made up. It’s not a perfect word (the AI isn’t seeing things that aren’t there), but it’s the one that stuck, so we’ll work with it.

Here’s the simplest way I can explain what’s happening under the hood:

AI language tools work by predicting the next word in a sentence. They’ve been trained on vast amounts of text and have picked up patterns about what words tend to follow other words. When you ask a question, the tool isn’t searching a database for the answer. It’s generating a response that sounds like the kind of answer that would typically follow your question.

Think of it like the autocomplete on your phone — but for entire paragraphs. Your phone doesn’t know what you’re trying to say. It just predicts what word probably comes next based on patterns it’s seen before. AI does the same thing, just at a much bigger scale.
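If you're curious what that looks like in practice, here's a deliberately tiny sketch in Python. It learns which word tends to follow which from a few invented sentences, then "autocompletes" by always picking the most likely next word. Real AI models are enormously more sophisticated, but the core move is the same: no lookup, no fact-check, just "what usually comes next?"

```python
from collections import Counter, defaultdict

# A toy "language model": count which word tends to follow which.
training_text = (
    "the restaurant on the corner is great "
    "the restaurant on the corner is closed "
    "the dry cleaner on the corner is open"
)

next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def autocomplete(start, length=8):
    """Generate text by always picking the most frequent next word."""
    result = [start]
    for _ in range(length):
        options = next_word_counts[result[-1]]
        if not options:
            break  # no pattern learned for this word
        result.append(options.most_common(1)[0][0])
    return " ".join(result)

print(autocomplete("the"))
# On this toy data: "the corner is great the corner is great the"
```

On this toy data the output loops into fluent nonsense, which is exactly the point: fluency and truth are produced by different processes.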

This is why AI is genuinely useful for writing emails, summarising long documents, and explaining concepts in plain language. Those tasks lean on language patterns, and language patterns are what these tools are built for.

It’s also why AI can be dangerously wrong about facts. It has no sense of “I know this” versus “this sounds about right.” To the model, a true statement and a plausible-sounding fabrication are just two word sequences, and it can’t tell them apart.


Real Examples of Confident AI Nonsense

These aren’t made-up scenarios; they actually happened.

The lawyer who cited fake cases. In 2023, a New York attorney used ChatGPT to prepare a legal brief and submitted it to court. The brief cited six previous court cases as precedent. None of them existed. ChatGPT had generated realistic-sounding case names, docket numbers, and even short summaries of rulings that never happened. The attorney was sanctioned by the judge.

The invented academic sources. Ask ChatGPT to provide references for a research claim, and it will often produce author names, journal titles, volume numbers, and page ranges that look perfect in a bibliography. Some turn out to be real. Many don’t. The formatting is flawless. The source is fiction.

The confident biography. AI tools have been documented generating biographical details about real people that are completely wrong: attributing books they never wrote, degrees they never earned, positions they never held. All delivered in the same matter-of-fact tone as the accurate details sitting right next to them.

The plausible statistic. “Studies show that 73% of employees prefer hybrid work.” That sounds like something from a reputable survey report. AI will generate sentences like this freely, complete with specific percentages, and there may be no study behind the number at all.

The pattern is always the same: the output reads like a fact. It has the shape, structure, and confidence of verified information. But the source is pattern prediction, not knowledge.


Six Signs That an AI Response Might Be Made Up

Not every wrong answer is obvious. But with a little practice, certain patterns become easier to spot.

1. Suspiciously Specific Details

When AI invents something, it often over-specifies. A real person hedging might say “I think that was around 2019.” An AI making things up will say “this was first documented in a 2019 study by researchers at the University of Zurich, published in the Journal of Cognitive Science, Volume 34, Issue 2.”

The more specific and citation-like the detail, the more it’s worth checking — especially if you didn’t ask for that level of detail.

2. Unwavering Confidence With No Hedging

Real experts hedge. They say “the evidence suggests” or “in most cases” or “as far as we know.” That’s because real knowledge comes with an awareness of its limits.

AI often skips this entirely. It states things flatly. “X causes Y.” “This was proven in 2018.” When an AI response reads like an encyclopedia entry with zero uncertainty, that’s a moment to pause.

3. The Answer Comes Too Easily

Some questions are genuinely hard. If you ask something niche — a specific statistic from a specific year, a detail about a lesser-known person, the exact wording of an obscure policy — and the AI answers instantly and fluently, it may be generating a plausible-sounding response rather than retrieving an actual fact.

A useful mental check: “Would a knowledgeable person need to look this up?” If the answer is yes, the AI’s instant reply deserves extra scrutiny.

4. It Tells You Exactly What You Want to Hear

AI tools are trained on human feedback, and humans tend to prefer helpful, complete answers. This creates a pull toward giving you an answer rather than saying “I’m not sure.”

If you ask “Is X a good approach?” and the AI enthusiastically agrees, consider that it might be optimising for helpfulness rather than accuracy. Asking the opposite question — “What are the problems with X?” — sometimes reveals a very different picture.

5. The Sources Don’t Check Out

This is the most concrete test. If an AI response cites a study, names a researcher, or references a specific report, that information can be verified. A quick search for the cited paper or source often reveals whether it actually exists.

Many people skip this step because the citation looks real. Proper formatting, plausible journal names, realistic author names — they all create an impression of legitimacy. But a well-formatted citation and a real citation are not the same thing.

6. Inconsistency Across Attempts

Ask the same factual question twice in separate conversations. If you get different specific details each time (different dates, different numbers, different attributions), that’s a strong indicator that at least one response was generated rather than retrieved.

Real facts stay the same. Fabricated ones shift.
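If you like to tinker, this consistency check can even be roughed out in code. Here’s a short Python sketch (both responses are invented for illustration) that pulls the checkable specifics out of two answers and flags any that disagree:

```python
import re

# Hypothetical example: the same question asked in two fresh conversations.
response_a = ("The study was published in 2019 and surveyed 1,200 "
              "employees; 73% preferred hybrid work.")
response_b = ("The study was published in 2021 and surveyed 850 "
              "employees; 64% preferred hybrid work.")

def specific_details(text):
    """Pull out the checkable specifics: years, counts, percentages."""
    return set(re.findall(r"\d[\d,]*%?", text))

only_a = specific_details(response_a) - specific_details(response_b)
only_b = specific_details(response_b) - specific_details(response_a)

if only_a or only_b:
    # Real facts stay the same between attempts; shifting specifics
    # suggest the details were generated, not retrieved.
    print("First answer only: ", sorted(only_a))
    print("Second answer only:", sorted(only_b))
```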


How to Check What AI Tells You

Spotting the warning signs is step one. Knowing what to do next is step two. None of these checks take long, and any one of them can save you from passing along something that isn’t true.

Search for the specific claim. Copy a key phrase from the AI’s response — the statistic, the case name, the researcher — and search for it. If the claim is real, it will appear in other sources. If it doesn’t appear anywhere, or only shows up in other AI-generated content, that’s telling.

Check named sources directly. If the AI cites a journal article, search for it on Google Scholar or the journal’s own website. If it names a person, check their actual bio or publication list. This takes about 30 seconds and catches the most common fabrications.
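And if you’re comfortable with a little code, the journal-article check can be scripted. This is a minimal Python sketch against Crossref’s free public API, a real service that indexes most scholarly publications (though not all, so a miss isn’t absolute proof of fabrication); the title searched here is just an example:

```python
import json
import urllib.parse
import urllib.request

def find_paper(title):
    """Search Crossref's public index of scholarly works for a cited title."""
    url = (
        "https://api.crossref.org/works?rows=3&query.bibliographic="
        + urllib.parse.quote(title)
    )
    with urllib.request.urlopen(url) as response:
        items = json.load(response)["message"]["items"]
    if not items:
        print("No matches at all; treat the citation as unverified.")
    for item in items:
        # Print the closest candidates so you can eyeball whether
        # the cited paper actually exists.
        print(item.get("title", ["(untitled)"])[0], "->", item.get("DOI"))

find_paper("Attention Is All You Need")  # a real paper, as a sanity check
```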

Ask the AI to verify itself. This is imperfect — an AI can double down on its own fabrications — but prompting with something like “Are you certain this source exists? Can you provide a working link?” sometimes causes the tool to flag its own uncertainty. Tools have become somewhat better at this over time, though it’s not reliable enough to use as your only check.

Cross-reference with a second source. If something matters, check it with a different AI tool, a search engine, or a person who knows the subject. No single source — human or AI — deserves uncritical trust on factual claims.

Look for the primary source. AI often generates summaries of summaries. When accuracy matters, the goal is finding the original — the actual study, the actual statement, the actual data. If the AI can’t point you to one, and you can’t find it independently, treat the claim as unverified.


A Simple Way to Think About Trust

None of this means AI is useless. Far from it. It means AI is a tool with a specific weakness, and knowing that weakness actually makes it a better tool, not a scarier one.

Here’s a practical way to think about when to trust what it gives you:

High trust, low checking needed:
- Drafting emails, rewriting text, adjusting tone
- Brainstorming ideas, generating outlines
- Explaining concepts in plain language
- Summarising text you’ve provided to it

These tasks lean on language ability, which is AI’s genuine strength. The output still deserves a read-through and some editing, but the risk of harmful fabrication is low.

Medium trust, worth a quick check:
- Answering general knowledge questions (“What is compound interest?”)
- Providing overviews of well-documented topics
- Suggesting approaches to common problems

These are usually directionally correct but may contain errors in specifics. Worth a sanity check, especially for anything that’s going to be shared with others.

Low trust, always check:
- Specific statistics, dates, or numbers
- Named sources, citations, or references
- Claims about real people, organisations, or events
- Legal, medical, or financial information
- Anything where being wrong has consequences

This is where fabrications are most dangerous and most common. The output might be right. It might not. The only way to know is to check.

It comes down to one question: “What happens if this is wrong?”

If the answer is “not much” — an internal brainstorm, a first draft, a casual explanation — the risk is low. If the answer is “I’ll look foolish,” “someone makes a bad decision,” or “this goes into an official document” — verify it.


The Bigger Picture

AI fabrications aren’t a bug that’s going to be fixed in the next software update. They’re a fundamental feature of how current language tools work. The tools are getting better — newer versions fabricate less frequently, and some now include source links or flag uncertainty — but the underlying architecture means this will remain a factor for the foreseeable future.

That’s not a reason to avoid AI. It’s a reason to use it with your eyes open.

The people who get the most out of AI tools tend to share one trait: they treat AI like a very fast, very articulate colleague who occasionally makes things up. Useful for drafting, brainstorming, and explaining. Not the final word on anything factual.

That mental model — capable but unreliable on specifics — is the most practical one available right now. And it’s genuinely freeing, because once you stop expecting perfection, you can start using these tools for what they’re actually good at.


One Thing to Try in the Next 10 Minutes

Next time you’re using an AI tool and it gives you a specific fact — a date, a name, a statistic, a source — take 30 seconds to check it. Copy the claim into a search engine. See if the source exists.

That’s it. One check. One fact. It builds the habit faster than any framework, and you’ll know within seconds whether the tool got it right.


Have you caught an AI making something up? I’d love to hear about it. The best examples make the best lessons — reply and share yours.


Disclaimer: This article is for general informational purposes only. Tools and capabilities described may change. Verify important outputs independently.


Sources Cited

The sanctioning of attorney Steven Schwartz for submitting AI-generated fictitious case citations was widely reported in June 2023 (Mata v. Avianca, Inc., US District Court, Southern District of New York). No other external sources cited — the remainder of this article draws on widely documented and observed behaviours of large language models.

