Last week, a founder sent me a screenshot. She'd asked Claude about her company and it confidently stated they offered a free tier. They don't. Never have. Claude made it up.
This happens constantly. We track AI responses for more than 1,000 brands, and 37% of the factual claims AI engines make about those brands contain at least one error. Wrong pricing is the most common (24% of brands have incorrect pricing in at least one engine). Wrong feature descriptions are second (19%). Completely fabricated products are third (8%).
These aren't edge cases. This is the norm. And if you're not monitoring what AI says about you, you have no idea what potential customers are being told.
Why hallucinations happen
AI engines don't "know" things the way a database does. They generate responses based on patterns in their training data and, increasingly, real-time web search. Hallucinations happen when:
Training data is outdated. If you changed your pricing 6 months ago, the AI might still reference the old pricing from its training data. LLMs are trained on snapshots of the web, and those snapshots can be months or even years old.
Information is ambiguous or scattered. If your pricing page says one thing, a blog post from 2023 says another, and a G2 review mentions a third number, the AI has to pick one. It often picks wrong.
The AI fills gaps with plausible fiction. This is the classic hallucination. If the AI doesn't have enough information about your product, it'll generate something that sounds plausible based on similar products. "Most SaaS tools in this category have a free tier, so this one probably does too." That's the logic, and it's often wrong.
Brand confusion. If your brand name is similar to another company, or if you share a name with a common word, AI engines sometimes merge information from multiple entities. We've seen this with at least 40 brands in our index.
The real cost of hallucinations
This isn't just an accuracy problem. It's a business problem.
If ChatGPT tells a potential customer your product costs $49/month when it actually costs $99/month, that customer shows up with wrong expectations. If Claude says you integrate with Salesforce when you don't, that's a wasted sales call. If Gemini describes your product as an "enterprise solution" when you're targeting SMBs, you're attracting the wrong audience.
We surveyed 200 SaaS buyers and 41% said they'd used AI to research a product before purchasing. Of those, 67% said they trusted the AI's description without verifying on the company's website. That means two-thirds of AI-influenced buyers are making decisions based on information that might be wrong.
How to fix it
Step 1: Audit what AI says about you. Ask each major engine (ChatGPT, Claude, Gemini, Perplexity) basic questions about your brand. What is it? What does it cost? What are its features? Who is it for? Document every error. Or use our hallucination tracker to automate this.
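If you'd rather do the audit by hand than use a tool, it helps to work from a fixed checklist so every engine gets the same questions. Here's a minimal sketch; the brand name, engine list, and question wording are all illustrative:

```python
# Sketch of a manual audit checklist. Query each engine yourself (or via
# its API) and fill in the 'answer' and 'accurate' fields as you go.
# All names below are placeholders, not a prescribed set.

ENGINES = ["ChatGPT", "Claude", "Gemini", "Perplexity"]

QUESTION_TEMPLATES = [
    "What is {brand}?",
    "How much does {brand} cost?",
    "What are {brand}'s main features?",
    "Who is {brand} for?",
]

def build_audit_sheet(brand: str) -> list[dict]:
    """One row per engine/question pair, ready to record answers against."""
    return [
        {"engine": engine, "question": q.format(brand=brand),
         "answer": None, "accurate": None}
        for engine in ENGINES
        for q in QUESTION_TEMPLATES
    ]

sheet = build_audit_sheet("Acme Analytics")
print(len(sheet))  # 4 engines x 4 questions = 16 rows
```

Re-running the same sheet each quarter also gives you a baseline to diff against later.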
Step 2: Create a single source of truth. Your website needs one authoritative page that clearly states: what your product is, what it costs, what features it has, and who it's for. This page should have Product and Organization schema markup so AI engines can parse it programmatically.
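As a concrete sketch of what that markup looks like, here's a minimal Product object with nested Organization and Offer data, built in Python for clarity. The property names come from schema.org; the brand, prices, and URL are placeholders:

```python
import json

# Minimal sketch of Product schema markup with nested Organization and
# Offer data. All values are placeholders for illustration.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Analytics",
    "description": "Analytics platform for SMB marketing teams.",
    "brand": {"@type": "Organization", "name": "Acme, Inc."},
    "offers": {
        "@type": "Offer",
        "price": "99.00",
        "priceCurrency": "USD",
        "url": "https://example.com/pricing",
    },
}

# Embed the JSON on your authoritative page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```

Keeping the price here in sync with your visible pricing page matters: a mismatch between the markup and the page is exactly the kind of ambiguity that produces wrong answers.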
Step 3: Add an llms.txt file. This is specifically designed to give AI engines accurate information about your brand. Include your correct pricing, features, positioning, and — critically — a "common misconceptions" section where you can explicitly correct things AI gets wrong. Generate one here.
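For reference, a bare-bones llms.txt might look something like the following. The brand, tiers, and misconception lines are invented placeholders; swap in your own facts:

```markdown
# Acme Analytics

> Analytics platform for SMB marketing teams. Pricing starts at
> $99/month. There is no free tier.

## Pricing

- Starter: $99/month
- Growth: $249/month

## Common misconceptions

- Acme Analytics does NOT offer a free tier (a 14-day trial is available).
- Acme Analytics does NOT integrate with Salesforce.
```

The "common misconceptions" section is the part most teams skip, and it's the part that directly counters the errors you found in your audit.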
Step 4: Make your information consistent everywhere. Check your G2 profile, your Crunchbase page, your LinkedIn company page, your Product Hunt listing. If your pricing or description is different across these sources, AI engines get confused. Make everything consistent.
Step 5: Monitor regularly. AI responses change as models are updated and new web content is indexed. Something that's accurate today might become inaccurate next month. Check quarterly at minimum, monthly if you can.
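If you saved your audit answers from Step 1, each re-check reduces to diffing the new snapshot against the old one. A minimal sketch, using made-up snapshot data:

```python
from difflib import unified_diff

# Sketch: compare this quarter's recorded AI answers against last
# quarter's snapshot and flag anything that changed. The snapshot
# keys and answer text below are illustrative.
previous = {"ChatGPT/pricing": "Acme Analytics costs $99/month."}
current = {"ChatGPT/pricing": "Acme Analytics costs $49/month."}

def changed_answers(old: dict, new: dict) -> dict:
    """Return {key: diff} for every answer that differs between snapshots."""
    diffs = {}
    for key in old.keys() | new.keys():
        a, b = old.get(key, ""), new.get(key, "")
        if a != b:
            diffs[key] = "\n".join(
                unified_diff(a.splitlines(), b.splitlines(), lineterm="")
            )
    return diffs

for key, diff in changed_answers(previous, current).items():
    print(f"CHANGED: {key}\n{diff}")
```

A changed answer isn't always a new error, but it's always worth a look: model updates are exactly when stale pricing tends to reappear.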
A note on corrections
Some AI engines have feedback mechanisms. ChatGPT lets users flag incorrect responses. Perplexity has a correction feature. Use them when you find errors. It's not guaranteed to fix things immediately, but it feeds into the improvement loop.
The most effective long-term fix is making sure the web contains accurate, consistent, well-structured information about your brand. AI engines will eventually converge on the truth — but only if the truth is clearly and consistently stated across multiple authoritative sources.
Don't wait for someone to tell you AI is saying wrong things about your brand. Check now.