001 · Use case · Agentic Platform

Modernize customer support with conversational AI.

Conversational AI on the channels your customers use. Automated triage from the moment a ticket is created. On your infrastructure, with your governance controls.

Telecommunications · Government · Customer operations · IT operations
002 · The problem

The familiar choice: keep the manual process or give up governance.

Enterprise customer support runs on rigid forms, manual triage, and inconsistent classification. Customers wait. Agents spend their day routing tickets. Ticket categories drift over time because no one has a feedback loop. And the obvious upgrade — putting an LLM on the channel — runs into the wall every regulated organization knows: customer data and ticket context cannot leave the perimeter, not even for a vendor's cloud.

The result is that organizations either keep the manual process and absorb the operational cost, or move forward with a SaaS chatbot that compromises governance. Both choices are known. Neither is good.

003 · How Alquimia approaches it

An agent layer that stays on your infrastructure.

Agentic Platform is designed for this shape of problem: an agent layer that runs on your infrastructure, integrates with your ticketing and channel stack, and stays under your governance controls.

A typical deployment uses three platform components together. Studio is where the agents are designed — one for conversational support, one for triage — configured by prompt and tool selection rather than by code. Runtime is where they execute in production, with agent-to-agent orchestration when one agent needs another. Observability is what makes every classification, routing decision, and customer interaction inspectable in real time. RBAC, secret management, and tenant isolation are enterprise primitives that ship with the platform.
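To make the orchestration concrete, here is a minimal sketch of how one agent might hand a request to another at runtime. The `Agent` and `Runtime` names are illustrative stand-ins, not the platform's actual API:

```python
# Illustrative sketch of agent-to-agent orchestration; not the platform's API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    prompt: str                          # behavior captured in plain language
    tools: list = field(default_factory=list)

class Runtime:
    """Minimal stand-in for the execution layer: registers agents and
    routes a request from one agent to another when needed."""
    def __init__(self):
        self.agents = {}

    def register(self, agent: Agent):
        self.agents[agent.name] = agent

    def handoff(self, from_agent: str, to_agent: str, payload: dict) -> dict:
        # In the real platform this delegation would be traced by Observability;
        # here we just simulate it.
        target = self.agents[to_agent]
        return {"handled_by": target.name, "payload": payload}

runtime = Runtime()
runtime.register(Agent("support", "Answer ticket-status questions over WhatsApp."))
runtime.register(Agent("triage", "Classify, prioritize and route new incidents."))
print(runtime.handoff("support", "triage", {"ticket": 482, "text": "line down"}))
```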

The whole stack runs on Kubernetes or Red Hat OpenShift AI in your datacenter — or any private cloud where your team operates. No customer data leaves the perimeter unless you decide.

Read the full architecture
004 · Implementation example
ArSAT

Two AI agents in production on sovereign infrastructure.

ArSAT, Argentina's state-owned telecommunications company, deployed two AI agents in production on Alquimia Agentic Platform running on Red Hat OpenShift AI in their own datacenter.

Agent 01

Conversational support over WhatsApp

Customers ask, in plain Spanish, "¿en qué estado está mi ticket 482?" ("what's the status of my ticket 482?") or "¿cuándo arreglan mi línea?" ("when are you fixing my line?") — without forms, without commands, without rigid menus. The agent retrieves real-time ticket status from ArSAT's internal systems and responds in natural language.
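A sketch of the kind of tool such an agent could call; the endpoint and response fields are assumptions, not ArSAT's actual ticketing API:

```python
# Hypothetical tool the conversational agent calls to fetch ticket status
# from an internal system that never leaves the perimeter.
import requests

TICKETING_API = "https://ticketing.internal.example/api"  # placeholder endpoint

def get_ticket_status(ticket_id: int) -> str:
    """Look up a ticket and return a short status line the agent can
    phrase back to the customer in natural language."""
    resp = requests.get(f"{TICKETING_API}/tickets/{ticket_id}", timeout=5)
    resp.raise_for_status()
    ticket = resp.json()
    return f"Ticket {ticket_id} is '{ticket['status']}', assigned to {ticket['team']}."
```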

Agent 02

Automated incident triage

Every new incident, regardless of channel, passes through the triage agent at creation. The agent classifies the ticket against ArSAT's category taxonomy, prioritizes it based on context, and routes it to the right technical team. Each decision is logged with the prompt, model, and timestamp.
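A sketch of that triage step under illustrative assumptions: the category list, routing table, and model call below are placeholders, not ArSAT's taxonomy or the platform's interface:

```python
# Illustrative shape of the triage step: classify, prioritize, route, log.
import json
from datetime import datetime, timezone

CATEGORIES = ["connectivity", "billing", "hardware", "other"]   # placeholder taxonomy
ROUTES = {"connectivity": "noc-team", "billing": "billing-team",
          "hardware": "field-ops", "other": "front-desk"}

def triage(ticket: dict, model: str, classify) -> dict:
    """`classify` is whatever LLM call the runtime wires in; it must
    return one of CATEGORIES."""
    prompt = f"Classify this incident into {CATEGORIES}: {ticket['text']}"
    category = classify(prompt, model=model)
    decision = {
        "ticket_id": ticket["id"],
        "category": category,
        "priority": "high" if "outage" in ticket["text"].lower() else "normal",
        "route_to": ROUTES[category],
        # every decision is logged with prompt, model and timestamp
        "prompt": prompt,
        "model": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(decision))          # stand-in for the audit log sink
    return decision
```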

Both agents were designed in Studio, not in code. Each agent is defined by three pieces: the prompt that captures its behavior in plain language, the tools it can use to interact with internal systems, and the guardrails that constrain what it can and cannot do. The configuration lives at the platform layer, not inside the agents — so a change in policy or scope is a configuration update, not a release.
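As a sketch, an agent definition of this kind might reduce to something like the following; the field names are assumptions, not the Studio schema:

```python
# Hypothetical shape of a Studio-style agent definition: prompt, tools,
# guardrails — all configuration, no agent code.
support_agent = {
    "name": "whatsapp-support",
    "prompt": (
        "You answer customer questions about ticket status in the "
        "customer's own language. Never speculate; use tools."
    ),
    "tools": ["get_ticket_status"],      # functions the agent may call
    "guardrails": {
        "allowed_topics": ["tickets", "service status"],
        "pii_redaction": True,           # constrain what it can and cannot do
        "max_tool_calls": 3,
    },
}
# Because this lives at the platform layer, tightening a guardrail is a
# config update pushed to Runtime, not a new release of the agent.
```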

Three architectural choices that made it possible
01

On-prem deployment

All sensitive information — tickets, operational data, customer context — stays inside ArSAT's perimeter. No exposure to external services.

02

Multi-model architecture

Models are managed and versioned through the platform. ArSAT can swap any model at any time without changing the agents.
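A minimal sketch of that indirection, assuming the platform resolves a logical model alias at call time; the aliases and model names are illustrative:

```python
# Agents reference a logical alias; the platform binds it to a model.
MODEL_REGISTRY = {
    "triage-classifier": "granite-3-8b-instruct",   # current binding (example)
    "support-chat": "llama-3-8b-instruct",
}

def resolve(alias: str) -> str:
    return MODEL_REGISTRY[alias]

# Re-pointing "triage-classifier" to a new model version changes this
# table, not the agent definitions.
```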

03

Auditable decisions

Every interaction is logged for governance, traceability, and compliance.

Why Red Hat OpenShift AI as the runtime layer

Running the deployment on Red Hat OpenShift AI gave ArSAT three operational advantages that the architectural choices above depend on.

i

Resource and cost optimization

OpenShift AI manages GPU-intensive workloads natively and supports running smaller, more efficient models alongside larger ones — including fine-grained control over the small guardrail models that protect the LLM perimeter. GPU time-slicing lets multiple inference workloads share the same physical GPUs concurrently.
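As a sketch of what enabling time-slicing involves, assuming the NVIDIA device plugin's time-slicing config format and the kubernetes Python client; the namespace and replica count are placeholders, and on OpenShift the GPU Operator's ClusterPolicy would need to reference this ConfigMap:

```python
# Sketch: expose each physical GPU as several schedulable slices so
# multiple inference workloads can share it.
from kubernetes import client, config

TIME_SLICING = """
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4        # each physical GPU appears as 4 slices (example)
"""

config.load_kube_config()
client.CoreV1Api().create_namespaced_config_map(
    namespace="nvidia-gpu-operator",                 # placeholder namespace
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="time-slicing-config"),
        data={"any": TIME_SLICING},
    ),
)
```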

ii

Observability deeper than the agent layer

Alquimia provides observability for agent decisions — prompts, tools, outputs, traces. OpenShift AI adds the layer below: hardware-level metrics, model-level performance, and visibility into how the models behave under load.
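A sketch of reading that lower layer, assuming the DCGM exporter is scraping the GPUs and Prometheus is reachable at an internal endpoint; the URL is a placeholder:

```python
# Pull GPU utilization from the DCGM exporter via Prometheus' HTTP API.
import requests

PROM = "https://prometheus.internal.example"   # placeholder monitoring endpoint

def gpu_utilization() -> list[tuple[str, float]]:
    resp = requests.get(f"{PROM}/api/v1/query",
                        params={"query": "DCGM_FI_DEV_GPU_UTIL"}, timeout=5)
    resp.raise_for_status()
    return [(r["metric"].get("gpu", "?"), float(r["value"][1]))
            for r in resp.json()["data"]["result"]]

# Correlating these readings with agent-level traces (prompt, tool calls,
# latency) is what gives the full picture under load.
```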

iii

Explainability via TrustyAI

Compliance in regulated environments is shifting from "did the agent give the right answer?" to "why did the model reach this conclusion?". TrustyAI integrates with the on-prem deployment and provides the tooling to satisfy that requirement.

005 · Expected outcome

Four indicators ArSAT measures.

ArSAT measures the deployment with four operational indicators that both monitor performance and feed back into the triage system, increasing precision over time (a computation sketch follows the list):

  • 01
    Misrouting rate

    The percentage of tickets initially assigned to the wrong team; driving it down is the triage agent's headline quality goal.

  • 02
    Channel mix

    The share of incoming tickets per channel (email, WhatsApp, web). Tracks how the conversational channel relieves load on traditional ones.

  • 03
    Complaints tied to delays or classification

    Direct customer-experience signal, used to refine prioritization criteria.

  • 04
    Top ticket categories

    Visibility into recurrent demand, used to drive structural improvements upstream.
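A sketch of how the first two indicators could be computed from a ticket log; the field names are illustrative, not ArSAT's schema:

```python
# Misrouting rate and channel mix from a list of ticket records.
from collections import Counter

def misrouting_rate(tickets: list[dict]) -> float:
    """Share of tickets whose first assignment was later corrected."""
    rerouted = sum(1 for t in tickets if t["first_team"] != t["final_team"])
    return rerouted / len(tickets)

def channel_mix(tickets: list[dict]) -> dict:
    counts = Counter(t["channel"] for t in tickets)
    return {ch: n / len(tickets) for ch, n in counts.items()}

tickets = [
    {"first_team": "noc", "final_team": "noc", "channel": "whatsapp"},
    {"first_team": "billing", "final_team": "field-ops", "channel": "email"},
]
print(misrouting_rate(tickets))   # 0.5
print(channel_mix(tickets))       # {'whatsapp': 0.5, 'email': 0.5}
```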

Specific figures will be reported as the deployment matures. The press release announcing the production rollout was published in April 2026.

006 · Get in touch

Bring your support pipeline. We'll walk you through it.

We work with enterprise teams modernizing customer operations on their own infrastructure. A short call is enough to see if Agentic Platform is the right fit for your case.

Get in touch