Building on the company’s efforts to combat inaccuracies in AI, Microsoft has announced a new feature called “Correction.” Customers who use Microsoft Azure to power their AI systems can now have it automatically detect and rewrite erroneous content in AI output.
The Correction feature is available in preview as part of Azure AI Studio, a suite of safety tools designed to detect vulnerabilities, spot “hallucinations,” and block malicious prompts. Once enabled, the system checks AI output against a customer’s source material to identify inaccuracies.
From there, it highlights the mistake, provides information about why it’s wrong, and rewrites the content in question — all before the user even notices the inaccuracy. While this seems like a convenient way to deal with the nonsense that AI models often assert, it may not be a completely reliable solution.
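To make the idea concrete, here is a toy sketch of what a groundedness check does at the simplest level: flag output sentences that aren't supported by the source material. This is purely illustrative and is not Microsoft's implementation, which relies on language models rather than the naive word-overlap heuristic used below; all function names here are hypothetical.

```python
# Illustrative only: a crude lexical-overlap groundedness check.
# Real systems (like Azure's) use language models, not word overlap.

def grounding_score(sentence: str, source: str) -> float:
    """Fraction of the sentence's words that also appear in the source."""
    words = {w.lower().strip(".,") for w in sentence.split()}
    source_words = {w.lower().strip(".,") for w in source.split()}
    if not words:
        return 0.0
    return len(words & source_words) / len(words)

def flag_ungrounded(sentences: list[str], source: str, threshold: float = 0.5) -> list[str]:
    """Return sentences whose grounding score falls below the threshold."""
    return [s for s in sentences if grounding_score(s, source) < threshold]

source = "The report was published in March and covers Q1 revenue."
output = [
    "The report covers Q1 revenue.",       # supported by the source
    "The CEO resigned after the report.",  # claim absent from the source
]
print(flag_ungrounded(output, source))  # → ['The CEO resigned after the report.']
```

A production system would then go a step further, as the article describes: rewrite the flagged sentence so it aligns with the source, rather than merely detecting it.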
Microsoft isn’t alone here: Vertex AI, Google’s cloud platform for companies developing AI systems, can “ground” AI models by checking output against Google Search, a company’s own data, and (soon) third-party datasets.
In a statement to TechCrunch, a Microsoft spokesperson said the Correction system “uses small language models and large language models to align outputs with grounding documents,” meaning it’s not error-free. “It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” Microsoft told TechCrunch.