Mozilla.ai launches any-guardrail to test AI safety

2 minutes
MOZILLA
Mozilla.ai launches any-guardrail to test AI safety

Have you ever wanted to compare safety filters for language models and found that every provider has its own way of doing things? Mozilla.ai introduced any-guardrail on September 9, 2025 to solve exactly that: a common interface to test and swap guardrails without wrestling with each API. (blog.mozilla.ai)

What is any-guardrail?

any-guardrail is a single layer that brings together different guardrail models (from classifiers to "LLM as judge" approaches) and handles preprocessing, inference, and postprocessing for you. The idea is that you can swap guardrails quickly and compare results without redoing the whole integration. (blog.mozilla.ai, github.com)

Guardrails aren't all the same: some are discriminative models, others are generative with sophisticated prompting. any-guardrail aims to make them comparable. (blog.mozilla.ai)

A practical example: if you're building a conversational agent for customer support, you can test a guardrail that detects misinformation and another that blocks dangerous instructions, all with the same validate call. That speeds up deciding which one fits your context.

from any_guardrail import AnyGuardrail, GuardrailName, GuardrailOutput

guardrail = AnyGuardrail.create(GuardrailName.DEEPSET)

result: GuardrailOutput = guardrail.validate("All smiles from me!")

print(result.valid)

The repo and documentation are on GitHub and you can install it with pip install any-guardrail. (github.com)

Why does this matter now?

Because safety in systems with LLMs isn't just the model; it's the whole ecosystem: orchestration, knowledge bases, and the logic around the model. Without a standard way to test guardrails, comparing solutions is slow and error-prone. any-guardrail reduces that friction and makes experimentation easier. (blog.mozilla.ai)

Mozilla.ai also connects any-guardrail with the rest of its any-suite, like any-llm and any-agent, so testing combinations of providers and frameworks becomes more straightforward. If you're a researcher or engineer, this saves time before moving to production. (blog.mozilla.ai, github.com)

Limitations and roadmap

Not everything is solved. any-guardrail today focuses on open source guardrails, and Mozilla.ai plans to integrate closed providers and optimize inference to reduce production overhead. They also acknowledge the challenge of a guardrail working across specific contexts; that’s why they plan to make it easier to adjust taxonomies and fine-tune based on real deployments. (blog.mozilla.ai)

How to get started

  1. Check the repository on GitHub: any-guardrail on GitHub. (github.com)
  2. Install with pip install any-guardrail and try the quick example.
  3. If you already use any-llm or any-agent, consider adding any-guardrail to your evaluation pipeline to compare guardrails without redoing integration.

So now what? Testing guardrails becomes an iterative activity rather than a one-time decision. If you work with agents or products that use LLMs, this tool can help you answer key questions quickly: what blocks better? what produces more false positives? which scales without killing latency?

The arrival of any-guardrail is a clear sign: AI safety stops being a set of patches and becomes a piece you can measure, compare, and systematically improve. Isn't that exactly what we need to bring powerful models to real products with fewer risks?

Stay up to date!

Receive practical guides, fact-checks and AI analysis straight to your inbox, no technical jargon or fluff.

Your data is safe. Unsubscribing is easy at any time.