Microsoft Launches New Safety Tools for Azure AI
The tech giant aims to mitigate AI risks with features like Prompt Shields and Groundedness Detection.
- Microsoft introduces new Azure AI tools to enhance safety and reliability, addressing concerns over AI's potential risks.
- The tools aim to prevent prompt injection attacks and hallucinations in AI models, ensuring outputs are safe and grounded in data.
- Prompt Shields and Groundedness Detection are among the features designed to protect against malicious inputs and false claims.
- The initiative reflects Microsoft's commitment to responsible AI development, balancing innovation with risk management.
- Industry and government demands for AI safety measures are growing, with the White House issuing a policy for federal agencies.