Introducing the OPN Model Family
Computer-use models trained on proprietary enterprise IAM trajectory data, and the foundation of a training loop that compounds with every deployment we run.
We started Opnova to do something that's genuinely hard: automate enterprise IT security operations in environments that are regulated, legacy-heavy, and intolerant of error. Translation: environments where automation has historically failed.
Building a platform capable of that work was the first challenge. Building the AI backbone to run it entirely on our customers' own infrastructure, trained on real-world Identity and Access Management (IAM) agent traces, is what we're announcing today.
The Opnova OPN model family is a line of proprietary computer-use models purpose-built for enterprise back-office automation. The first, OPN-1, is an 8-billion-parameter vision-language model fine-tuned during our participation in the NVIDIA Innovation Lab as an NVIDIA Inception member. The program gives select Inception members hands-on access to NVIDIA resources to accelerate AI development; for us, that meant dedicated GPU infrastructure so we could do this work at the pace a startup needs.
OPN-1 is trained on data generated within Opnova Range, our controlled simulation environment for enterprise IT security operations. Range reproduces IAM workflows across the full breadth of the enterprise applications we support and is built on the operational knowledge we’ve accumulated in production.
“NVIDIA Inception gave us hands-on access to an 8x H100 node through the Innovation Lab. Fine-tuning a vision-language model on proprietary agent trajectories at startup speed isn't possible without that kind of infrastructure.”
Intelligence Trained on the Work Itself
Most of our 20,000+ production executions have been in banking — SOX-governed, OCC-regulated, and with access review cycles that run quarterly. The bar for "certified" in that environment is real.
Every deployment taught us what enterprise IAM workflows actually look like in practice: the screen states, the edge cases, and the failure modes that only show up under real pressure. That proprietary operational knowledge is what Opnova Range is built from, and it's how Range reproduces those workflows with production-grade fidelity. Range generates the trajectories we use to fine-tune and evaluate OPN models without any customer data entering the training pipeline.
“The OPN model family is the output of a training loop built on operational knowledge, not customer data. Every deployment deepens our understanding of how enterprise IAM workflows behave. That understanding flows into Opnova Range. Range generates the agent trajectories that train the next model.”
The Compounding Data Loop

Every workflow we deploy reaches a 100% completion rate before it touches a live environment. Prompt tuning by our Field AI Engineers optimizes the path, and our Reflexive Memory system converts validated action sequences into deterministic cached execution over time.
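The caching idea can be sketched in a few lines. This is a minimal illustration, not Opnova's implementation: the class name, the keying scheme, and the action format are all hypothetical. The core move is that a validated action sequence is keyed to an exact screen state, so it replays deterministically only when the state matches and otherwise falls back to the model.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class ReflexiveMemory:
    """Hypothetical sketch: cache validated action sequences and replay
    them deterministically instead of re-invoking the model."""
    cache: dict = field(default_factory=dict)

    @staticmethod
    def _key(workflow_id: str, screen_state: str) -> str:
        # Key on the workflow plus a hash of the observed screen state,
        # so a cached sequence replays only on an exact state match.
        digest = hashlib.sha256(screen_state.encode()).hexdigest()
        return f"{workflow_id}:{digest}"

    def record(self, workflow_id: str, screen_state: str, actions: list):
        self.cache[self._key(workflow_id, screen_state)] = list(actions)

    def replay(self, workflow_id: str, screen_state: str):
        # Returns the validated sequence, or None -> fall back to the model.
        return self.cache.get(self._key(workflow_id, screen_state))


memory = ReflexiveMemory()
memory.record("offboard-user", "<screen:user-profile>",
              ["click:revoke-access", "click:confirm"])
assert memory.replay("offboard-user", "<screen:user-profile>") == \
    ["click:revoke-access", "click:confirm"]
assert memory.replay("offboard-user", "<screen:unknown>") is None
```

The deterministic path is what lets certified behavior stay stable: once a sequence is validated, the model is no longer in the loop for that state.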
But what a model does before any of that optimization tells you something important about the quality of its training.
We measured OPN-1 on our internal Action Performance Benchmark under exactly that condition: zero-shot accuracy on enterprise IT security operations workflows it had never seen, with no prompt tuning and no cached optimization. OPN-1 scores 82.2%, up from 74.4% for the base model, a 10.4% relative gain from fine-tuning alone. That's the floor we're working from; in production, every workflow reaches a 100% completion rate.

A Closed Loop, From Screen to Action
Our agents work by observing application screens: taking a screenshot, reasoning about the current state, and predicting the next action. In enterprise IT security operations, those screenshots are sensitive by definition because they include employee records, access entitlements, role assignments, and identity attributes mid-change. The screen showing an access provisioning workflow in progress contains exactly what a CISO tracks on their watch list.
For enterprise security teams, the question that comes up in every vendor review is where the data goes for model inference. We've seen this most directly working with financial institutions that run every vendor dependency through risk committees with detailed questionnaires covering data flows, subprocessors, retention policies, and geographic boundaries. Vendors operating in IT security face an additional layer of scrutiny because what's being processed touches the access control fabric of the organization.
What Security Teams Are Asking
Major cloud providers have made meaningful progress on zero-data-retention policies for inference. The challenge is that "current policy" isn't the same as "auditable, versioned, and contractually locked." When a CISO or vendor risk committee asks whether operational screenshots are being retained by a third-party AI provider, the answer has rarely included a technical guarantee backed by infrastructure the customer controls. For organizations with access review obligations or SOX-scoped systems, that gap matters in an audit.
OPN models close this at the infrastructure level. In an OPN-powered BYOC deployment, the model runs on the customer's own GPU, inside their VPC, behind their firewall. Screen capture, state reasoning, action prediction, execution, and audit logging all happen within the customer's network boundary.
The model itself is a versioned software artifact — subject to the same change management, access controls, and audit trail as anything else in their security stack.
The compliance posture of the deployment changes in concrete ways. No AI subprocessor to disclose, no data processing agreement required for inference, no cross-border transfer exposure to evaluate. The vendor risk review for the model layer gets handled the same way as any internally deployed software. For our banking customers, that's a meaningful shift. The same logic holds across any regulated environment where third-party model inference is a procurement obstacle.
Stability as a Production Requirement
Automation workflows are certified against specific model behavior. When an upstream provider pushes an update, even an improvement, the behavior a workflow was certified against may have shifted. The recertification cycle runs again, against a model the customer doesn't control and can't hold at a fixed version.
With OPN models, that control belongs to the customer. Workflows run against the version they were certified on. Upgrades happen on the customer's own change management timeline. For security operations teams already juggling audit schedules, certification windows, and change advisory processes, that predictability is what makes automation something you can put in the critical path.
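Version pinning of the kind described here can be made mechanical. The sketch below is illustrative only (the version strings, record fields, and function names are hypothetical): a workflow carries the model version and weight checksum it was certified against, and execution refuses to proceed if the deployed artifact has drifted.

```python
import hashlib

# Hypothetical certification record: the workflow is bound to an exact
# model version and an exact checksum of its weights.
CERTIFIED = {
    "workflow": "quarterly-access-review",
    "model_version": "opn-1.0.3",
    "weights_sha256": hashlib.sha256(b"opn-1.0.3-weights").hexdigest(),
}


def artifact_matches_certification(deployed_version: str, weights: bytes) -> bool:
    """Allow execution only against the model the workflow was certified on."""
    return (deployed_version == CERTIFIED["model_version"]
            and hashlib.sha256(weights).hexdigest() == CERTIFIED["weights_sha256"])


# The certified version passes; an upgraded artifact does not until
# the workflow is recertified on the customer's own timeline.
assert artifact_matches_certification("opn-1.0.3", b"opn-1.0.3-weights")
assert not artifact_matches_certification("opn-1.1.0", b"opn-1.1.0-weights")
```

This is the property a hosted model can't give you: with a third-party endpoint there is no checksum to verify, so "same version" is a policy promise rather than an invariant you can enforce.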
A Model for Every Release, Not a One-Time Bet
We're building this into how we ship. Every major Opnova platform release will include a companion OPN model, trained and benchmarked against that release's specific capabilities, supported application types, and workflow patterns.
No single training run makes OPN models defensible; it's the knowledge loop behind them. Every production deployment expands the range of applications and workflow patterns we understand. That understanding flows into Opnova Range, which generates increasingly comprehensive training trajectories.
We’ve invested in the simulation environment, the validation methodology, and the fine-tuning pipeline. The open-source vision-language model ecosystem is advancing rapidly, with stronger base models emerging regularly. Each OPN generation starts from a higher baseline, and the enterprise specialization we layer on top compounds with every release. We're already working on OPN-2.
"Every deployment makes the dataset richer. Every richer dataset makes the next model more capable. The compounding is structural, not incidental."
What This Means for Enterprise IT Security Operations
The back-office workflows that security operations teams run — access provisioning, entitlement changes, offboarding — have long been obvious candidates for automation. They’re high volume, rule-governed, and consequential when wrong.
What's kept full automation out of reach in regulated environments is the combination of strict accuracy requirements and deployment constraints that leave little room for inference dependencies that can't be audited, versioned, and controlled.
OPN-1 is our answer to that. Trained on trajectories generated in Opnova Range from operational knowledge built in production banking environments, deployable entirely within a customer's own infrastructure, and version-stable across its lifecycle. It's an AI backbone built for the operational and compliance realities that enterprise security teams work within. The OPN family is our commitment to shipping that capability alongside every platform release going forward.