
Presented by Oracle NetSuite
When a company says it’s delivering its most significant product release in nearly 30 years, it’s worth paying attention. When that claim comes from the founder of the world’s first cloud computing company, it’s time to really listen.
At SuiteWorld 2025, Evan Goldberg, founder and EVP of Oracle NetSuite, made exactly that claim, calling NetSuite Next the company’s most substantial product evolution in almost three decades. Yet behind that bold statement is a more subtle transformation — one focused on how AI behaves, not just what it’s capable of.
“Every company is experimenting with AI,” says Brian Chess, SVP of Technology and AI at NetSuite. “Some ideas work, some don’t, but each attempt teaches us something. That’s the nature of innovation.”
For Chess and Gary Wiessinger, SVP of Application Development at NetSuite, the real challenge is governing AI responsibly. Instead of rebuilding its platform from scratch, NetSuite is extending the same principles that have guided it for 27 years — security, control, and auditability — into the AI era. The aim is to make AI actions traceable, permissions enforceable, and outcomes fully auditable.
This mindset supports what Chess describes as a “glass-box” model for enterprise AI, where decisions are visible and every agent operates within clearly defined, human-set guardrails.
Built on Oracle’s foundation
NetSuite Next is the culmination of five years of development. It runs on Oracle Cloud Infrastructure (OCI), the same platform trusted by many of the world’s leading AI model providers, and it weaves AI directly into the core of the system rather than bolting it on afterward.
“We are building a fantastic foundation on OCI,” Chess says. “That infrastructure gives us far more than raw compute power.”
Because it’s built on the same OCI base that underpins NetSuite today, NetSuite Next provides customers with Oracle’s latest AI advances, combined with the performance, scalability, and security of OCI’s enterprise-grade cloud.
Wiessinger sums up the team’s philosophy as “needs first, technology second.”
“We don’t start with the technology,” he says. “We start with what customers need and then figure out how to apply the latest technology to solve those problems better.”
That approach runs throughout Oracle’s ecosystem. NetSuite’s collaboration with Oracle’s AI Database, Fusion Applications, Analytics, and Cloud Infrastructure teams enables capabilities that standalone vendors can’t easily replicate, he says — an AI environment that is open to innovation yet anchored in Oracle’s security, scale, and governance.
The data structure advantage
At the core of the platform is a structured data model that provides a major edge.
“One of the great things about NetSuite is that, because the data comes in and gets structured, the relationships between the data are explicit,” Chess explains. “That means the AI can start exploring the knowledge graph the company has been building over time.”
While general-purpose LLMs typically wade through unstructured text, NetSuite’s AI starts from structured data, mapping exact connections between transactions, accounts, and workflows to deliver context-rich insights.
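To make that distinction concrete, here is a minimal TypeScript sketch using hypothetical record types (not NetSuite’s actual schema) that shows why explicit links between accounts and transactions matter: software, including an AI, can trace a figure back to its source records by following relationships rather than parsing free text.

```typescript
// Illustrative sketch only: invented record types showing how explicit,
// structured relationships let a program (or an AI) walk an ERP-style
// "knowledge graph" instead of mining unstructured text.
interface Account { id: string; name: string; type: "Revenue" | "Expense" | "Asset" }
interface Transaction { id: string; accountId: string; amount: number; postedOn: string }

const accounts: Account[] = [
  { id: "acct-4000", name: "Product Revenue", type: "Revenue" },
];

const transactions: Transaction[] = [
  { id: "txn-1", accountId: "acct-4000", amount: 12500, postedOn: "2025-09-30" },
  { id: "txn-2", accountId: "acct-4000", amount: 8400, postedOn: "2025-10-01" },
];

// Because the account-to-transaction relationship is explicit, answering
// "where did this number come from?" is a matter of following links.
function transactionsForAccount(accountId: string): Transaction[] {
  return transactions.filter((t) => t.accountId === accountId);
}

const revenueDetail = transactionsForAccount("acct-4000");
console.log(revenueDetail.map((t) => `${t.id}: ${t.amount}`).join("\n"));
```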
Wiessinger notes, “Our data spans financials, CRM, commerce, and HR. We can do more for customers because we see more of their business in a single system.”
Combined with embedded business logic and metadata, that breadth allows NetSuite to generate recommendations and insights that are both accurate and explainable.
Oracle’s Redwood design system supplies the visual experience for this intelligence, creating what Goldberg called a “modern, clean and intuitive” workspace where humans and AI can work together naturally.
Designing for accountability
A major drawback of many enterprise AI tools is that they still operate like a black box — they output answers without revealing how they got there. NetSuite is taking a different path, designing its systems around transparency and making visibility a core feature.
“When users can see how AI arrived at a decision — tracing the steps from A to B — they’re not just checking accuracy,” Chess says. “They’re learning how the AI knew to do that.”
That visibility turns AI into a mechanism for learning. As Chess describes it, transparency becomes a “fantastic teacher,” helping organizations understand, refine, and ultimately trust automation over time.
But he warns against uncritical reliance: “What’s unsettling is when someone shows me something and says, ‘Look what AI gave me,’ as if that alone makes it correct. People should be asking, ‘What is this grounded in? Why is it right?’”
NetSuite’s response is traceability. When someone asks, “Where did this number come from?” the system can reveal the full chain of reasoning behind it.
Governance by design
AI agents in NetSuite Next operate under the same governance framework as employees: roles, permissions, and escalation paths. Role-based security embedded in workflows helps ensure that agents only act within their authorized scope.
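As an illustration only, the sketch below (with hypothetical role and permission names, not NetSuite’s API) shows the general pattern: an agent’s action is gated by the same kind of permission check a human user would face, and anything outside its scope is escalated to a person.

```typescript
// Minimal sketch, not NetSuite's actual framework: an AI agent's action is
// checked against a role-based permission set before it runs, with
// escalation to human review when the agent lacks authorization.
type Permission = "read:report" | "draft:summary" | "post:general_ledger";

interface AgentRole { name: string; permissions: Permission[] }

const reportingAgent: AgentRole = {
  name: "reporting-agent",
  permissions: ["read:report", "draft:summary"], // deliberately no GL posting
};

function executeAgentAction(role: AgentRole, required: Permission, action: () => void): void {
  if (role.permissions.includes(required)) {
    action();
  } else {
    // Out of scope: escalate rather than act autonomously.
    console.log(`${role.name} lacks "${required}" — escalating to a human approver.`);
  }
}

executeAgentAction(reportingAgent, "draft:summary", () => console.log("Summary drafted."));
executeAgentAction(reportingAgent, "post:general_ledger", () => console.log("Posted to GL."));
```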
Wiessinger explains it simply: “If AI drafts a narrative summary of a report and it’s 80% of what the user would have written, that’s fine. We’ll learn from their feedback and improve it. But posting to the general ledger is different. That has to be 100% correct, and that’s where controls and human review are critical.”
Auditing the algorithm
Auditing has always been central to ERP, and NetSuite is now extending that rigor to AI. Every agent action, workflow change, and model-generated code snippet is captured within the system’s existing audit framework.
As Chess notes, “It’s the same audit trail you’d use to see what humans did. Code is auditable. When the LLM generates code and something happens in the system, we can trace it back.”
This level of traceability turns AI from a black box into a glass box. When an algorithm speeds up a payment or flags an anomaly, teams can inspect exactly which inputs and logic led to that decision — a crucial safeguard for regulated sectors and finance organizations.
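The following sketch illustrates the general idea with a generic append-only log and invented field names, rather than NetSuite’s actual audit framework: each agent action is recorded with its actor and inputs, so it can be queried later the same way human activity is.

```typescript
// Illustrative sketch, assuming a generic append-only audit log (not
// NetSuite's audit framework): every agent action is captured with the
// actor and inputs that produced it, so it can be traced back afterward.
interface AuditEntry {
  timestamp: string;
  actor: string;                    // human user or AI agent
  action: string;                   // e.g. "flag_anomaly", "generate_code"
  inputs: Record<string, unknown>;  // what the decision was grounded in
  result: string;
}

const auditLog: AuditEntry[] = [];

function recordAction(actor: string, action: string, inputs: Record<string, unknown>, result: string): void {
  auditLog.push({ timestamp: new Date().toISOString(), actor, action, inputs, result });
}

recordAction("anomaly-agent", "flag_anomaly", { transactionId: "txn-2", rule: "amount_spike" }, "flagged");

// "Which inputs and logic led to that decision?" — query the same trail used for humans.
console.log(auditLog.filter((entry) => entry.actor === "anomaly-agent"));
```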
Safe extensibility
The other side of trust is flexibility — the ability to extend AI while keeping data protected.
The NetSuite AI Connector Service and SuiteCloud Platform enable that. Using standards like the Model Context Protocol (MCP), customers can plug in external language models while ensuring that sensitive data remains secure within Oracle’s environment.
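The sketch below is purely illustrative; the record type and field names are invented, and it does not use the real MCP SDK or the AI Connector Service API. It shows the underlying idea: only whitelisted fields are handed to an external model, while sensitive values stay inside the system of record.

```typescript
// Hedged sketch of the data-protection idea behind an MCP-style connector:
// expose a narrow, permissioned view of a record to an external model and
// withhold sensitive fields. Types and names here are hypothetical.
interface CustomerRecord {
  id: string;
  name: string;
  creditLimit: number;
  taxId: string; // sensitive: never leaves the environment
}

const record: CustomerRecord = { id: "cust-9", name: "Acme Co", creditLimit: 50000, taxId: "12-3456789" };

// Only whitelisted fields are serialized into the context sent to an external LLM.
function toModelContext(r: CustomerRecord): { id: string; name: string; creditLimit: number } {
  const { id, name, creditLimit } = r; // taxId deliberately excluded
  return { id, name, creditLimit };
}

console.log(JSON.stringify(toModelContext(record))); // taxId is withheld
```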
“Businesses are eager for AI,” Chess says. “They want to put it to work. But they also want confidence that those experiments won’t spin out of control. The NetSuite AI Connector Service and our governance model let partners innovate while preserving the same audit and permission rules that apply to native features.”
Culture, experimentation, and guardrails
Governance frameworks only matter if people apply them thoughtfully. Both executives view AI adoption as a combination of top-down direction and bottom-up experimentation.
“The board is telling the CEO they need an AI strategy,” Chess says. “At the same time, employees are already using AI. If I were a CEO, I’d start by asking: what are you already doing, and what’s actually working?”
Wiessinger agrees that balance is essential: “Some companies centralize everything in one AI team, while others let everyone do whatever they want. Neither approach works alone. You need structure for big initiatives and room for grassroots experimentation.”
He offers a straightforward guideline: “Writing an email? Go wild. Touching financials or employee data? Don’t go wild with that.”
Both stress that experimentation is non-negotiable. “No one should sit back and wait for us or anyone else,” Wiessinger says. “Start experimenting, learn fast, and be deliberate about how you make AI work for your business.”
Why transparent AI wins
As AI becomes more deeply embedded in enterprise operations, governance will shape competitive advantage as much as innovation itself. NetSuite’s strategy — extending its long-standing ERP controls into the era of autonomous systems, on top of Oracle’s secure cloud and structured-data backbone — positions it to lead on both fronts.
In a landscape full of opaque models and risky claims, the organizations that succeed won’t just build more powerful AI. They’ll build AI that can be trusted.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.