Prompt engineering has moved beyond simple experimentation to become a disciplined engineering practice for designing the inputs given to generative models. Recent testing in 2025 has identified specific techniques that can reduce fabricated facts by up to 73 per cent, although they do not eliminate them entirely.
The most powerful single technique is the explicit uncertainty instruction: telling the model to state “I am uncertain” when it is not completely sure cuts hallucinations by 52 per cent. This one change is currently considered the most effective way to improve accuracy in models such as ChatGPT and Claude.
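As a rough illustration, an uncertainty instruction can simply be supplied as a system message. The sketch below uses the OpenAI Python SDK; the model name and the exact wording of the instruction are illustrative assumptions, not a tested recipe.

```python
# A minimal sketch of an explicit uncertainty instruction.
# The model name ("gpt-4o") and instruction wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

UNCERTAINTY_INSTRUCTION = (
    "If you are not completely sure that a claim is correct, "
    "say 'I am uncertain' about that claim instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": UNCERTAINTY_INSTRUCTION},
        {"role": "user", "content": "Summarise the evidence on intermittent fasting."},
    ],
)
print(response.choices[0].message.content)
```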
Another high-impact strategy is requesting source attribution. Instead of asking a general question, users should require the model to specify the type of source behind each claim, such as research studies, theoretical frameworks, or common practices. This forces the system to consider where its information comes from rather than merely generating plausible-sounding text, and it reduces fabricated facts by approximately 43 per cent.

Chain-of-thought verification is equally essential. This structure requires the model to think step by step about whether a claim is true, what evidence supports it, and what might contradict it. Testing reveals that this method catches 58 per cent of the false claims that simple, direct queries usually miss.
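One way to combine source attribution with chain-of-thought verification is a reusable prompt template. The sketch below is a plain Python helper; the checklist wording is an assumption about what such a prompt might look like, not a validated formula.

```python
# A hypothetical template combining source attribution with
# chain-of-thought verification. The checklist wording is an assumption.
VERIFY_TEMPLATE = """Answer the question below. For each factual claim:
1. Label the type of source it rests on (research study, theoretical
   framework, or common practice).
2. Think step by step: is the claim true, what evidence supports it,
   and what might contradict it?
3. Flag any claim you cannot ground in a source as UNVERIFIED.

Question: {question}
"""

def build_verified_prompt(question: str) -> str:
    """Wrap a user question in the verification checklist."""
    return VERIFY_TEMPLATE.format(question=question)

print(build_verified_prompt("What reduces LLM hallucinations?"))
```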
Further reliability can be gained through temporal constraints and scope limitation. Stating a specific knowledge cutoff date, such as January 2025, can eliminate up to 89 per cent of fake recent developments invented by the model. Similarly, instructing the model to explain only well-established aspects and skip controversial or uncertain areas reduces hallucinations by another 31 per cent. For data-heavy tasks, avoiding specific numbers is a useful safeguard: models often fabricate statistics because they sound authoritative, so requesting ranges instead of exact figures reduces false statistics by 67 per cent.
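These constraints compose naturally, so one possible pattern is to keep them as reusable clauses and prepend whichever ones a task needs. The clause wording and the cutoff date below are illustrative assumptions.

```python
# A sketch of composable constraint clauses; wording and the cutoff
# date are illustrative assumptions, not tested phrasing.
CONSTRAINTS = {
    "temporal": "Only discuss developments up to January 2025; "
                "do not speculate about anything more recent.",
    "scope": "Explain only well-established aspects and skip "
             "controversial or uncertain areas.",
    "numbers": "Give ranges rather than exact figures for any "
               "statistic you are not certain of.",
}

def constrain(question: str, *kinds: str) -> str:
    """Prefix a question with the selected constraint clauses."""
    clauses = "\n".join(CONSTRAINTS[k] for k in kinds)
    return f"{clauses}\n\n{question}"

print(constrain("How has battery energy density improved?",
                "temporal", "numbers"))
```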
The implementation of these safeguards requires a shift in professional workflow: users must move from blindly trusting outputs to a generate, verify, then use process. This includes asking the model to flag which parts of its response might be uncertain and how those claims should be checked. Human oversight remains the final and most critical step, especially for high-stakes work in legal, medical, or financial sectors. By combining these engineering techniques, organisations can transform AI from a temperamental tool into a far more reliable research partner.
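A minimal sketch of that generate, verify, then use loop is shown below: a second call asks the model to audit its own draft before anything reaches a reader. The helper function and prompt wording are hypothetical, and a human review step still follows for high-stakes work.

```python
# A sketch of the generate-verify-use loop: a second pass asks the
# model to flag uncertain parts of its own answer. Model name and
# prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Single-turn call to the model."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = ask("Explain the health effects of intermittent fasting.")
audit = ask(
    "Review the answer below. List the claims that might be uncertain "
    "and suggest how each one should be verified.\n\n" + draft
)
# Both 'draft' and 'audit' go to a human reviewer before use.
```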
The team at Academii are always happy to discuss all your training and education needs, help your organisation attract and train new talent, and build a resilient workforce. Please drop us a line here to find out more.