What happens when an AI learns to question itself?

Our latest research from the Machine Intelligence Lab explores exactly that. Virtual Socrates¹ introduces SPRI™—Socratic Prompt Response Instruction—a new method for improving reasoning transparency, self-critique, and trustworthiness in large language models.

In Phase 1 of SPRI™ testing, we eliminated 80% of AI-generated hallucinations across two authority-laden prompt types. In Phase 2, an enhanced SPRI™ configuration stopped 100% of induced hallucinations across six categories of complex, high-risk prompts.

SPRI™ is a lightweight, platform-neutral method for dramatically increasing stability and reducing error rates in generative AI. It requires no mega-prompts, SDKs, or code-level intervention, and when combined with AI-native tooling such as retrieval-augmented generation (RAG), it further improves trust, reliability, and reasoning transparency.
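
The full SPRI™ specification is not reproduced here, but the underlying idea of prompting a model to interrogate its own output can be sketched at the prompt level. The Python snippet below is a minimal, illustrative self-critique loop under our own assumptions; it is not the SPRI™ method itself, and the llm() helper is a hypothetical stand-in for whatever chat-completion client you use.

```python
# Illustrative sketch only: a generic Socratic self-critique loop,
# not the proprietary SPRI(TM) method described in the withdrawn paper.

def llm(prompt: str) -> str:
    """Hypothetical placeholder for a model call; replace with a real client."""
    return "stubbed model output"

def socratic_answer(question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly ask the model to question and revise it."""
    answer = llm(f"Answer the question as accurately as you can:\n{question}")
    for _ in range(rounds):
        # Ask the model to play sceptic against its own draft.
        critique = llm(
            "Act as a sceptical examiner. List any unsupported claims, "
            f"possible hallucinations, or gaps in this answer:\n{answer}"
        )
        # Revise the draft in light of the critique, flagging anything unsupported.
        answer = llm(
            "Revise the answer to address every point in the critique. "
            "If a claim cannot be supported, say so explicitly.\n"
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer

if __name__ == "__main__":
    print(socratic_answer("Who first proved the four-colour theorem, and how?"))
```

Keeping the critique and revision as separate prompt-level calls reflects the platform-neutral spirit of the approach: the pattern needs no SDK hooks or code-level intervention in the model itself.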

If you're a researcher, AI practitioner, or enterprise decision-maker exploring the future of cognitive transparency, we’d be happy to share the findings.

📩 For a copy of the paper or to discuss how SPRI™ can be applied in your enterprise context, e-mail Brook Walker, Executive Director, Machine Intelligence Lab, Third Way Consulting, at research@twc-global.com.

Because building trust in AI begins with teaching it to doubt itself.

¹ Due to unauthorised use and unattributed replication of SPRI™, this paper has been withdrawn from public distribution. Access is now available through a licensed and verified model.
