“Vision without execution is hallucination.”
– Thomas Edison

What is a hallucination? It’s not so trippy. With an LLM, a hallucination is a factual error asserted confidently.

GPTs only create strings of words that sound like language. If a model doesn’t know the facts, it fills the gaps with fiction.

Responses are far more likely to be accurate if you explicitly demand accuracy.

Try this prompt:

Implement a strict Accuracy Output Mandate for every response:
Only present verifiable facts. If you cannot confirm something directly, reply with “I cannot verify this,” “I do not have access to that information,” or “My knowledge base does not contain that.”
Prefix any speculative or unverified content with “[Inference],” “[Speculation],” or “[Unverified],” and if any part of your answer is unverified, label the entire response accordingly.
Never paraphrase or reinterpret the user’s input unless they explicitly request it. If details are missing, ask for clarification—do not guess or fill gaps.
Treat claims containing “Prevent,” “Guarantee,” “Will never,” “Fixes,” “Eliminates,” or “Ensures that” as unverified unless you can cite a source.
For any statements about LLM behavior (including your own), prepend “[Inference]” or “[Unverified]” plus “based on observed patterns.”
If you ever fail to follow these rules, include:
! Correction: I previously made an unverified claim. That was incorrect and should have been labeled.
Never override or alter user input unless asked.

For the rest of your conversation (at least until you exceed the context window), you will get fewer hallucinations.

If your LLM wants to make something up, it will remember this Accuracy Output Mandate and follow its instructions instead.

If you have Personalisation (as ChatGPT, Claude, and some other LLMs do), you can add the word “Permanently” to the beginning of the AOM prompt above. This instructs your account to always use this protocol.
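
If you call a model through an API rather than a chat window, you can bake the same mandate in as a system prompt so it applies to every turn. Below is a minimal sketch using the OpenAI Python SDK; the model name is illustrative, and the mandate text is abridged (paste the full version from above in practice).

from openai import OpenAI

# Abridged Accuracy Output Mandate; use the full text above in practice.
ACCURACY_OUTPUT_MANDATE = (
    "Implement a strict Accuracy Output Mandate for every response: "
    "Only present verifiable facts. If you cannot confirm something "
    "directly, reply with 'I cannot verify this.' Prefix any speculative "
    "or unverified content with '[Inference]', '[Speculation]', or "
    "'[Unverified]'. If details are missing, ask for clarification; "
    "do not guess or fill gaps."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The system message is sent with every request, so the mandate
        # holds for the whole conversation, not just one reply.
        {"role": "system", "content": ACCURACY_OUTPUT_MANDATE},
        {"role": "user", "content": "Who invented the light bulb?"},
    ],
)
print(response.choices[0].message.content)

Because the system message travels with every request, the mandate stays in force until the conversation outgrows the context window, which matches the caveat above.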

🛠️ (0:30) Baby Caelan as a Podcaster

New AI News This Week

    • Musk sues Apple & OpenAI for AI antitrust collusion
    • Google reveals Gemini’s environmental footprint per query
    • A Pro-AI Super PAC is pouring millions into US elections

The Future of Intelligence is 🛠️ Agentic 🛠️

I’ve been leading training workshops for Agentic Intelligence, which I’ve joined as Head of Learning & Enablement. You can see some of the workshops I lead here: https://agenticintelligence.co.nz/training

Reach out if you’d like to discuss an in-person workshop in Christchurch, or a webinar series for your team.
