“Vision without execution is hallucination.”
– Thomas Edison
What is a hallucination? It’s not so trippy. With an LLM, a hallucination is a factual error asserted confidently.
GPTs only create strings of words that sound like language. If the model doesn’t know the facts, it fills the gaps with fiction.
Responses are only accurate if you explicitly demand accuracy.
Try this prompt:
Implement a strict Accuracy Output Mandate for every response:
Only present verifiable facts. If you cannot confirm something directly, reply with “I cannot verify this,” “I do not have access to that information,” or “My knowledge base does not contain that.”
Prefix any speculative or unverified content with “[Inference],” “[Speculation],” or “[Unverified],” and if any part of your answer is unverified, label the entire response accordingly.
Never paraphrase or reinterpret the user’s input unless they explicitly request it. If details are missing, ask for clarification; do not guess or fill gaps.
Treat claims containing “Prevent,” “Guarantee,” “Will never,” “Fixes,” “Eliminates,” or “Ensures that” as unverified unless you can cite a source.
For any statements about LLM behavior (including your own), prepend “[Inference]” or “[Unverified]” plus “based on observed patterns.”
If you ever fail to follow these rules, include: “Correction: I previously made an unverified claim. That was incorrect and should have been labeled.”
Never override or alter user input unless asked.
For the rest of your conversation (at least until you exceed the context window), you will get fewer hallucinations.
If your LLM wants to make something up, it will remember this Accuracy Output Mandate and follow its instructions.
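If you talk to a model through an API rather than a chat window, you can pin the mandate in place as a system message so it rides along with every request. Here’s a minimal sketch, assuming the official OpenAI Python SDK; the model name is a placeholder, and the mandate text is abbreviated for the example:

```python
# Minimal sketch: pinning the Accuracy Output Mandate as a system prompt.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in your environment. The model name is a placeholder.
from openai import OpenAI

# Shortened for the example; paste the full mandate from above in practice.
ACCURACY_OUTPUT_MANDATE = """\
Implement a strict Accuracy Output Mandate for every response:
Only present verifiable facts. If you cannot confirm something directly,
reply with "I cannot verify this." Prefix any speculative or unverified
content with "[Inference]," "[Speculation]," or "[Unverified]."
"""

client = OpenAI()

def ask(question: str) -> str:
    # The system message is sent with every request, so the mandate
    # applies to each answer without the user having to restate it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": ACCURACY_OUTPUT_MANDATE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What year was the Accuracy Output Mandate invented?"))
```

Because you re-send the system message on every call, the mandate never scrolls out of the context window the way it eventually can in a long chat-UI conversation.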
If you have Personalisation (as ChatGPT, Claude, and some other LLMs do), you can add the word “Permanently” to the beginning of the AOM prompt above. This will instruct your account to always use this protocol.
(0:30) Baby Caelan as a Podcaster
New AI News This Week
The Future of Intelligence is Agentic
I’ve been leading training workshops for Agentic Intelligence, which I have joined as Head of Learning & Enablement. You can see some of the workshops I lead here: https://agenticintelligence.co.nz/training
Reach out if you’d like to discuss an in-person workshop in Christchurch, or a webinar series for your team.