Inside one of the first production deployments of Lakebase: LangGuard’s agentic workflow governance engine


The invisible problem with agentic AI

Most enterprises are experimenting with autonomous AI agents. Very few are deploying them safely at scale. According to McKinsey’s “The State of AI in 2025” survey (November 2025), in no business function have more than ten percent of companies scaled AI agents into production. The failure is rarely a lack of ambition; it’s a lack of visibility.

Unlike traditional software, autonomous agents generate their own logic on the fly. They bypass conventional security controls, invoke tools and access data in ways that are difficult to audit after the fact, and operate across complex multi-agent workflows where a single misconfigured permission or policy gap can cascade into a major security incident. What enterprises need is a new class of control infrastructure: one that operates at the moment a decision is being made, not after the damage is done.

That’s the problem LangGuard was built to solve.

Runtime enforcement meets platform governance

LangGuard acts as a runtime enforcement layer for agentic workflows, monitoring and enforcing policy across the end-to-end chain of actions, decisions, tools, credentials, and intent that spans every system an agent touches. Databricks provides unified governance through Unity Catalog and AI Gateway, the system of record for data, models, and access policies. As enterprises deploy agents into production, the workflow itself also needs a runtime enforcement layer that extends these platform-level controls into every step of agent execution. That’s where LangGuard fits in. LangGuard’s governance engine, the GRAIL™ (Governance AI Run-time Links) data fabric, captures every agent action as multidimensional trace data and constructs a live knowledge graph of workflow behavior and context. When an agent attempts to invoke a tool, access a dataset, or call a model, LangGuard evaluates that action against policy before it executes, across every system the workflow touches, regardless of where it runs.
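To make that enforcement pattern concrete, here is a minimal sketch of what a pre-execution policy check can look like from the calling side. The function names, the `PolicyDecision` structure, and the example rules are illustrative assumptions for this post, not LangGuard’s actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative decision verdicts; the real governance engine's API may differ.
ALLOW, DENY, MODIFY = "allow", "deny", "modify"

@dataclass
class PolicyDecision:
    verdict: str                                   # "allow", "deny", or "modify"
    reason: str = ""
    modified_args: dict[str, Any] = field(default_factory=dict)

def evaluate_action(agent_id: str, action: str, args: dict[str, Any]) -> PolicyDecision:
    """Hypothetical policy check; in production this call would consult the
    governance engine and its live workflow context graph."""
    if action == "export_dataset" and args.get("destination", "").startswith("http://"):
        return PolicyDecision(DENY, reason="unencrypted external destination")
    if action == "query_crm" and "ssn" in args.get("fields", []):
        redacted = {**args, "fields": [f for f in args["fields"] if f != "ssn"]}
        return PolicyDecision(MODIFY, reason="PII field removed", modified_args=redacted)
    return PolicyDecision(ALLOW)

def governed_call(agent_id: str, action: str, args: dict[str, Any],
                  tool: Callable[..., Any]) -> Any:
    """Evaluate policy before the tool executes; block or rewrite the call."""
    decision = evaluate_action(agent_id, action, args)
    if decision.verdict == DENY:
        raise PermissionError(f"{action} denied: {decision.reason}")
    if decision.verdict == MODIFY:
        args = decision.modified_args
    return tool(**args)
```

The important point is the ordering: the decision is returned before the tool runs, so a deny or modify verdict changes what actually executes rather than merely being logged afterward.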

The scale of a production enterprise agentic deployment makes this genuinely hard. A single workflow may involve dozens of coordinated agents, hundreds of tool invocations, multiple foundation models, and policies managed across fifteen or more enterprise Systems of Record, including IT ticketing systems like ServiceNow, IAM and IDP platforms, CRM systems like Salesforce, HR platforms like Workday, cloud security platforms like Wiz and CrowdStrike, contact center platforms like TalkDesk, MCP Gateways, and API Gateways. Governing this in real time, without impacting agent performance, demands infrastructure purpose-built for the problem.

Why we chose Lakebase

The LangGuard team spent years building IBM QRadar, a multiple-time Gartner Magic Quadrant leader and one of the world’s most widely deployed enterprise SIEM platforms. QRadar ingests and correlates petabytes of security telemetry per day under strict latency and reliability requirements. That experience taught us a hard lesson: database architecture is destiny. When we designed LangGuard’s workflow governance engine, we faced the same challenge we had solved before: operational security data that arrives in unpredictable, high-intensity bursts, where every millisecond of decision latency matters and idle infrastructure spend is unacceptable. Traditional databases that couple compute and storage force you to provision for peak load and pay for that capacity around the clock. Lakebase’s serverless model, which fully decouples compute from storage and scales to zero between bursts, was the answer we had always needed but didn’t have access to when we were building QRadar. It matched the problem exactly.

What makes Lakebase the right fit

Lakebase is a new class of operational database architecture that disaggregates compute from storage, allowing compute to scale elastically with workload demand while durable state lives independently in a replicated storage layer. Built on the open foundation of PostgreSQL, the lakebase architecture preserves everything developers rely on in a proven relational database while eliminating the infrastructure constraints that make traditional, monolithic RDBMS the wrong choice for the speed and scale that modern apps, agents, and AI demand.

Serverless autoscaling and scale-to-zero

Agent behavior is notoriously bursty. An agent workflow might be completely dormant for hours and then suddenly generate hundreds of trace writes and enforcement reads in a matter of seconds. Lakebase dynamically provisions compute resources the exact moment those traces flood our system, and shuts down completely when activity stops. Because durable state lives in the storage layer, not in the compute node, spinning up a new compute instance requires no data movement. It simply attaches to the existing database history and begins serving queries immediately.
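From the application side, a scale-to-zero database is still just PostgreSQL. A minimal sketch of how a client can tolerate the brief resume window after an idle period is below; the `LAKEBASE_DSN` environment variable is our own naming convention, and the retry behavior shown is defensive coding on our side rather than anything Lakebase requires.

```python
import os
import time
import psycopg2

# Standard PostgreSQL connection string for the Lakebase endpoint;
# the environment variable name is our convention, not a platform-defined one.
DSN = os.environ["LAKEBASE_DSN"]

def connect_with_retry(retries: int = 5, backoff: float = 1.0):
    """Open a connection, tolerating a short cold start while a
    scaled-to-zero compute instance is being resumed."""
    for attempt in range(retries):
        try:
            return psycopg2.connect(DSN, connect_timeout=10)
        except psycopg2.OperationalError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))   # exponential backoff

conn = connect_with_retry()
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")   # a burst of enforcement reads would go here
    print(cur.fetchone())
conn.close()
```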

For a startup operating at enterprise scale, this is the difference between infrastructure that matches actual usage and infrastructure that penalizes you for having quiet periods. Our operational costs stay aligned with the workloads we are actually serving.

Millisecond read latency for hot operational data

The natural concern with any disaggregated database is read latency. Lakebase addresses this through a caching layer between compute and storage that keeps hot data close to compute.

LangGuard’s enforcement queries are tight indexed lookups against GRAIL™ context and policy tables, and we expect the active working set to fit comfortably in compute-local memory. This architecture gives us confidence that governance decisions can be enforced at workflow speed, without adding meaningful latency to agent execution.
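For illustration, here is the shape of that kind of lookup: a small policy table with a composite index, queried with a single-row point read. The table, column, and index names, and the placeholder connection string, are examples only, not our production schema.

```python
import psycopg2

# Illustrative schema: a policy table with a composite index so that the
# enforcement-path lookup resolves to a single index scan.
SCHEMA = """
CREATE TABLE IF NOT EXISTS grail_policy (
    policy_id   BIGINT PRIMARY KEY,
    agent_role  TEXT NOT NULL,
    action      TEXT NOT NULL,
    effect      TEXT NOT NULL          -- 'allow' | 'deny' | 'modify'
);
CREATE INDEX IF NOT EXISTS idx_policy_role_action
    ON grail_policy (agent_role, action);
"""

# The point lookup run on the enforcement path; the hot working set of
# policy and context rows stays resident in compute-local memory.
LOOKUP = """
SELECT effect
FROM grail_policy
WHERE agent_role = %s AND action = %s
LIMIT 1;
"""

with psycopg2.connect("postgresql://user:pass@lakebase-host:5432/langguard") as conn:
    with conn.cursor() as cur:
        cur.execute(SCHEMA)
        cur.execute(LOOKUP, ("finance_agent", "query_crm"))
        row = cur.fetchone()
        print(row[0] if row else "no matching policy")
```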

Instant database branching for governance policy testing

Lakebase’s instant database branching is one of its most operationally useful capabilities for a governance product. When we create a branch, no data is physically copied. The branch diverges from the current database state using copy-on-write semantics, consuming storage only for new or modified data. Our developers can create an isolated, exact replica of our production trace data in seconds, test new governance policies against real-world agent behavior, and validate enforcement logic without risking the stability of the live environment.
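In practice this looks like an ordinary regression run pointed at the branch’s connection string. The sketch below assumes the branch has already been created through the platform tooling (branch creation itself is not shown); the environment variable name, table, and regression cases are hypothetical.

```python
import os
import psycopg2

# Connection string for a copy-on-write branch created through the platform
# tooling; the environment variable name is our own convention.
BRANCH_DSN = os.environ["LAKEBASE_BRANCH_DSN"]

# Hypothetical regression cases: (agent_role, action, expected_effect).
REGRESSION_CASES = [
    ("finance_agent", "export_dataset", "deny"),
    ("support_agent", "query_crm", "modify"),
    ("support_agent", "create_ticket", "allow"),
]

def run_policy_regression(dsn: str) -> list[str]:
    """Replay expected decisions against the branch, an exact copy-on-write
    replica of production policy and trace data."""
    failures = []
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for role, action, expected in REGRESSION_CASES:
            cur.execute(
                "SELECT effect FROM grail_policy WHERE agent_role = %s AND action = %s",
                (role, action),
            )
            row = cur.fetchone()
            actual = row[0] if row else "allow"   # the default effect here is an assumption
            if actual != expected:
                failures.append(f"{role}/{action}: expected {expected}, got {actual}")
    return failures

print(run_policy_regression(BRANCH_DSN) or "all policy regressions passed")
```

When the run passes, the branch can simply be discarded; the production database was never touched.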

PostgreSQL: a proven foundation

Lakebase is built on PostgreSQL, the world’s most advanced open-source relational database, with decades of production hardening across every industry. For LangGuard, this means full compatibility with the tools, libraries, and extensions our team already knows, with no proprietary query language or migration risk.

How LangGuard and Databricks Work Together

The joint LangGuard and Databricks architecture is designed to govern enterprise agentic workflows end-to-end while keeping all operational data on a single, trusted data and AI platform. On the left of the architecture are the enterprise agentic workflows themselves: AI agents and their orchestrators interacting with dozens of systems of record such as IT service management, CRM, HR, identity, security, contact center, and API/MCP gateways. Every agent action, tool invocation, and data access request generates rich trace events that flow into LangGuard in real time.

At the center of the diagram is the LangGuard Governance Workflow Engine, powered by the patent-pending GRAIL™ data fabric. GRAIL captures every agent action as multidimensional trace data and constructs a live knowledge graph of workflow behavior and context. When an agent attempts to call a tool, access a dataset, or invoke a model, LangGuard performs a policy evaluation against this live context and the relevant governance rules, returning an allow/deny/modify decision before the action executes. This gives enterprises a single control point for enforcing policy across every system the workflow touches, regardless of where the underlying agents are running.

On the right, Databricks Lakebase serves as the operational system of record for LangGuard’s trace and policy data. Lakebase’s serverless PostgreSQL architecture disaggregates compute from storage, enabling elastic autoscaling and scale-to-zero between bursts of agent activity while keeping hot operational data in a low-latency cache near compute. LangGuard continuously writes trace events into Lakebase and performs low-latency reads for governance policy lookups and contextual queries, ensuring that enforcement decisions can be made at workflow speed without over-provisioning database capacity.
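The write side of that path is ordinary batched PostgreSQL inserts. The sketch below shows one way to land a burst of trace events efficiently; the `grail_trace` table, its columns, and the placeholder connection string are illustrative, not our production schema.

```python
import json
import psycopg2
from psycopg2.extras import execute_values

# Illustrative trace events: (workflow_id, agent_id, event_type, payload).
events = [
    ("wf-42", "agent-7", "tool_call",   json.dumps({"tool": "servicenow.create_ticket"})),
    ("wf-42", "agent-7", "data_access", json.dumps({"dataset": "crm.accounts"})),
]

with psycopg2.connect("postgresql://user:pass@lakebase-host:5432/langguard") as conn:
    with conn.cursor() as cur:
        # Batched insert keeps round trips low during bursts of agent activity;
        # assumes a grail_trace table with these columns already exists.
        execute_values(
            cur,
            """INSERT INTO grail_trace (workflow_id, agent_id, event_type, payload)
               VALUES %s""",
            events,
        )
```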

Because LangGuard’s operational data lives natively in Lakebase, it is immediately available to the broader Databricks Data Intelligence Platform for analytics and AI without additional ETL. Databricks AI, Model Serving, and MLflow can train and deploy anomaly detection models directly on GRAIL trace data to identify agents that deviate from their established behavioral baseline. These predictive signals feed back into the LangGuard Governance Engine, closing the loop between real-time enforcement and predictive monitoring and enabling enterprises to move from reactive controls to proactive, behavior-based AI governance on a single platform.

What comes next: predictive governance for agentic workflows

LangGuard’s engine today enforces established policies at runtime across the full workflow. The next evolution is predictive: training behavioral models on historical GRAIL trace data to detect anomalous agent behavior before it manifests as a policy violation.
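As a minimal sketch of what that could look like, the example below fits an unsupervised anomaly detector on per-agent behavioral features and logs it with MLflow. The feature names, the inline sample data, and the choice of an isolation forest are assumptions for illustration; they are not LangGuard’s actual pipeline, and in practice the features would be aggregated from the trace tables in Lakebase.

```python
import mlflow
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-agent behavioral features derived from GRAIL trace data.
features = pd.DataFrame({
    "tool_calls_per_min":  [3, 4, 2, 5, 3, 48],    # last row: an abnormal burst
    "distinct_datasets":   [1, 2, 1, 2, 1, 9],
    "denied_action_ratio": [0.0, 0.01, 0.0, 0.02, 0.0, 0.35],
})

with mlflow.start_run(run_name="agent-baseline-anomaly"):
    model = IsolationForest(contamination=0.15, random_state=0).fit(features)
    mlflow.log_param("contamination", 0.15)
    mlflow.sklearn.log_model(model, "agent_anomaly_model")

# IsolationForest returns -1 for points that deviate from the learned baseline.
print(model.predict(features))
```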

Because our operational trace data already lives within the Databricks ecosystem, as described above, we can move directly from enforcement to prediction without building separate ETL pipelines or standing up a second analytical platform.

If an agent starts behaving erratically or deviating from its established baseline, these models will flag it as an anomaly before any damage is done. This convergence of real-time enforcement and predictive machine learning is the future of enterprise AI governance, and it is the architecture we are building today.

KEY TAKEAWAY
LangGuard is among the first startups building production infrastructure on Databricks Lakebase. The choice was driven by a specific set of non-negotiable requirements: low-latency enforcement, elastic burst handling, and governance policy testing against real data. Only a serverless OLTP database could satisfy all three, and Lakebase is the first database to do so.
For enterprises that need to govern agentic workflows end-to-end, across every agent, tool, credential, and system of record in the chain, this architecture means enforcement that operates at workflow speed, scales with deployment complexity, and evolves toward predictive behavioral security without requiring a separate data platform.

Ready to govern your agentic workflows end-to-end? Visit langguard.ai to learn how LangGuard secures, controls, and operates enterprise agentic workflows with full policy compliance, or explore Databricks Lakebase to see how serverless OLTP infrastructure powers real-time AI governance at scale.
