Rasa

Enterprise AI agents with full control. We build Rasa solutions using CALM for organisations needing reliable, process-aware conversational AI.

Rasa is the leading platform for building trustworthy AI agents that combine LLM flexibility with business logic reliability. Unlike pure prompt-based approaches, Rasa's CALM engine (Conversational AI with Language Models) ensures agents follow defined processes while still understanding natural language. For organisations needing AI that works reliably at enterprise scale, Rasa provides the control that matters.

Keep control over processes

Deploy with data sovereignty in mind

Combine LLM understanding with deterministic execution

The CALM approach

Rasa has moved beyond traditional intent-based chatbots. CALM represents a fundamental shift:

Process calling, not just tool calling: The LLM invokes and collaborates with stateful processes. Your business logic drives the conversation while the LLM handles natural language understanding.

LLMs for understanding, code for control: AI handles dialogue understanding; your defined flows enforce the rules. This separation means reliable behaviour without sacrificing conversational fluency.

No NLU training required: Modern Rasa uses LLMs for understanding, eliminating the need to build intent classifiers and entity extractors. You define flows, not training data.

Built-in reliability: Agents stay on track, follow business processes, and escalate appropriately because behaviour is defined in code, not hoped for through prompts.
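To make this concrete, the sketch below shows roughly what a CALM flow can look like. The flow name, slot names, and actions are illustrative, and the exact YAML keys and condition syntax vary between Rasa versions, so treat it as a sketch rather than a drop-in definition.

```yaml
# Illustrative CALM flow sketch -- names and exact keys are assumptions.
flows:
  transfer_money:
    description: Help the user send money to a saved recipient.
    steps:
      - collect: recipient              # the LLM fills this slot from natural language
      - collect: amount
      - collect: transfer_confirmed     # explicit confirmation before acting
        next:
          - if: slots.transfer_confirmed
            then:
              - action: action_execute_transfer   # custom action calling your backend
                next: END
          - else:
              - action: utter_transfer_cancelled
                next: END
```

The LLM interprets whatever the user says; the flow decides which slot to collect next and which action is allowed to run. That separation is where the reliability comes from.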

Why Rasa

Rasa addresses specific enterprise needs:

Data sovereignty: Deploy on your infrastructure. Conversations never leave your environment. Essential for regulated industries and sensitive data.

Process reliability: AI agents that follow business rules consistently. No drift, no hallucinated processes, no unexpected behaviour.

Full control: See exactly what your agents do and why. Debug with visibility. Modify behaviour precisely.

Operational predictability: You can plan capacity and operations because you control the serving environment.

LLM flexibility: Use any LLM—commercial or open-weight. Change models without changing your application.

Getting started with Hello Rasa

Rasa's new Hello Rasa playground makes prototyping accessible:

No setup required: Build CALM agents directly in your browser.

Template-based: Start from Banking, Telecom, or Support templates and customise.

Built-in copilot: An AI assistant helps generate code, debug flows, and expand your agent.

Production path: Export directly to the Rasa Platform when ready to scale.

This dramatically reduces time to first working prototype.

What we build with Rasa

Our Rasa work includes:

Customer service agents: Handling enquiries, resolving issues, routing to humans when needed—with process reliability that pure LLM approaches cannot match.

Transactional assistants: Money transfers, account changes, bookings—processes that must be followed correctly every time.

Internal agents: Helping employees with HR, IT, and operational queries while enforcing proper procedures.

Voice-enabled agents: Low-latency voice experiences with built-in turn-taking and timeout handling.

Enterprise deployments: Production installations with high availability, analytics, and observability through Rasa Pro.

Rasa Pro for production

Rasa offers a free Developer Edition for prototyping and learning. For production at scale, Rasa Pro adds:

Security and access controls: Practical enterprise features for regulated environments.

Analytics and observability: Visibility into conversations, outcomes, and quality drift.

Team collaboration features: Workflows for building and maintaining larger assistants.

Production deployment support: Guidance and tooling for high-availability operation.

Higher throughput capability: Better handling of larger volumes with the right deployment model.

We help organisations design and deploy production Rasa solutions.

Our Rasa expertise

We have deep experience with Rasa, including the new CALM paradigm:

Flow design: Structuring processes that combine LLM understanding with reliable execution.

Custom actions: Building integrations with business systems and external services (a brief sketch follows this list).

Voice integration: Deploying low-latency voice experiences with proper orchestration.

Enterprise deployment: Setting up production infrastructure with appropriate operations.

Migration: Moving from legacy chatbot platforms or upgrading from older Rasa versions.
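For the custom action work mentioned above, the sketch below shows the general shape of a Rasa SDK action. The action name, slot names, and the fetch_balance helper are hypothetical placeholders for your own backend integration.

```python
# Minimal sketch of a Rasa SDK custom action. Names and slots are assumptions;
# substitute your own backend integration.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


def fetch_balance(account_id: Text) -> float:
    """Placeholder for a call to your core banking or backend system."""
    return 1250.00


class ActionCheckBalance(Action):
    """Looks up an account balance and reports it back to the user."""

    def name(self) -> Text:
        return "action_check_balance"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        account_id = tracker.get_slot("account_id")  # slot name is an assumption
        balance = fetch_balance(account_id)
        dispatcher.utter_message(text=f"Your current balance is {balance:.2f}.")
        # Returning a SlotSet event lets flows branch on the result.
        return [SlotSet("balance", balance)]
```

Custom actions run on a separate action server, which keeps credentials and business logic outside the dialogue engine.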

Considerations

Rasa requires investment in:

Infrastructure and operations: Ownership of hosting, monitoring, and runtime stability (or managed deployment through partners).

The CALM development model: Teams need to learn the right patterns for flows, state, and process execution.

Continuous improvement: As with any conversational system, quality improves through iteration and monitoring.

We can provide these capabilities or help build them within your organisation.

Ask the LLMs

Use these prompts to define scope and ensure reliability stays front and centre.

“Which user journeys must follow strict process rules, and which can be handled with more flexible language understanding?”

“What are the non-negotiable guardrails: permissions, audit trails, and escalation points?”

“What test scenarios and success metrics will prove reliability before we roll out widely?”

Frequently Asked Questions

Does using LLMs mean giving up control over agent behaviour?

No. The CALM approach uses LLMs for understanding while keeping control in defined flows and processes.

When is Rasa the right choice over a pure LLM agent?

When you need predictable behaviour, process adherence, and deployment control—especially in enterprise or regulated environments.

Can we use our preferred LLM provider?

Often yes. Rasa can be designed to work with different model providers depending on your constraints and governance.

How do you stop an agent going off-process or inventing steps?

We encode processes explicitly, constrain tool access, and add safe fallbacks and escalation paths.

How is quality maintained after launch?

Monitoring, regression tests on real scenarios, and controlled releases for changes to flows and model configuration.