
Augmented Intelligence Inc. (AUI), a New York City startup trying to go beyond the popular Transformer architecture used by most of today's LLMs such as ChatGPT and Gemini, has raised $20 million in a bridge SAFE round at a $750 million valuation cap, bringing its total funding to nearly $60 million, VentureBeat can exclusively reveal.
The round, completed in under a week, comes amid heightened enterprise interest in AI that communicates precisely, and ahead of a significantly larger raise already in advanced stages.
AUI relies on a fusion of Transformer technology and an approach called "neurosymbolic AI," described in more detail below.
"We realized that you can combine the brilliance of an LLM in linguistic skills with the guarantees of symbolic AI," said Ohad Alhello, co-founder and CEO of AUI, in a recent interview with VentureBeat. Alhello launched the company in 2017 alongside co-founder and Chief Product Officer Ori Cohen.
The new financing includes participation from Agateway Ventures, New Era Capital Partners, existing shareholders and other strategic investors. It follows a $10 million raise at a $350 million valuation cap in September 2024 and a go-to-market partnership with Google announced in October 2024. Early investors include Vertex Pharmaceuticals founder Joshua Boger, UKG chairman Aron Ain, and former IBM president Jim Whitehurst.
According to the company, the bridge round is a precursor to a significantly larger raise already in advanced stages.
AUI is the company behind Apollo-1, a new foundation model designed for task-based dialogue, which the company describes as the "economic half" of conversational AI, distinct from the open-ended dialogue handled by LLMs like GPT and Gemini.
The firm argues that current LLMs lack the decision-making, policy enforcement, and operational certainty that businesses require, particularly in regulated sectors.
Chris Varlas, co-founder of Redwood Capital and an advisor to AUI, said in a press release provided to VentureBeat: “I’ve seen some of today’s top AI leaders shake their heads after interacting with Apollo-1.”
A distinctive neurosymbolic architecture
Apollo-1’s primary innovation is its neurosymbolic architecture, which separates linguistic fluency from task reasoning. Rather than relying solely on the technology underpinning most LLM and generative AI systems today – the vaunted Transformer architecture described in the seminal 2017 Google paper "Attention Is All You Need" – the AUI system integrates two layers:
Neural modules, powered by LLMs, handle perception: encoding user inputs and generating natural-language responses.
A symbolic reasoning engine, developed over several years, interprets structured task elements such as intents, entities, and parameters. This symbolic state engine determines the appropriate next step using deterministic logic.
This hybrid architecture allows Apollo-1 to maintain state consistency, enforce organizational policies, and reliably trigger tool or API calls—capabilities that Transformer-only agents lack.
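The division of labor described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names — `neural_encode` and `SymbolicEngine` are invented here, not AUI's actual API: the neural layer maps free text to symbolic task elements, while the symbolic engine holds the task state and chooses the next step deterministically.

```python
def neural_encode(user_text: str) -> dict:
    # Stand-in for an LLM call that extracts intents, entities, and
    # parameters from free-form text. Hard-coded here for illustration.
    return {"intent": "book_flight", "entities": {"destination": "JFK"}}

class SymbolicEngine:
    """Hypothetical symbolic state engine: persistent state, deterministic steps."""

    def __init__(self):
        self.state = {"slots": {}}  # conversation state survives across turns

    def step(self, frame: dict) -> str:
        self.state["slots"].update(frame["entities"])
        # Deterministic transition: ask for missing slots, else call the tool.
        required = {"destination", "date"}
        missing = required - self.state["slots"].keys()
        if missing:
            return f"ask_for:{sorted(missing)[0]}"
        return "call_booking_api"

engine = SymbolicEngine()
action = engine.step(neural_encode("I want to fly to New York"))
```

The key property is that the symbolic layer, not the LLM, decides what happens next: given the same state and the same frame, the next action is always the same.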
The design emerged from a multi-year data-gathering effort. According to the company: “We built a user service and recorded millions of human-agent interactions across 60,000 live agents. From this we abstracted a symbolic language that describes the structure of task-based conversations, separate from their domain-specific content.”
However, enterprises that have already built systems around Transformer-based LLMs need not worry: AUI wants to make adopting its new technology as easy as possible.
"Apollo-1 deploys like any modern Foundation model," Alhello told VentureBeat in a text last night. "It does not require dedicated or proprietary clusters to run. It works in standard cloud and hybrid environments, leverages both GPUs and CPUs, and is significantly more cost-effective to deploy than frontier reasoning models. Apollo-1 can also be deployed across major clouds in isolated environments for increased security."
General and domain flexibility
Apollo-1 is described as a foundation model for task-based dialogue, meaning it is domain-agnostic and applicable across verticals such as healthcare, travel, insurance, and retail.
Unlike consulting-heavy AI platforms that require bespoke logic per client, Apollo-1 lets enterprises define behaviors and tools in a shared symbolic language. This approach supports rapid onboarding and reduces long-term maintenance. According to the team, an enterprise can launch a working agent in a day.
Importantly, procedural rules are encoded at the symbolic layer—not learned from examples. This enables deterministic behavior for sensitive or regulated tasks.
For example, a system can prevent the cancellation of a basic economy flight not by inferring intent but by applying hard-coded logic to a symbolic representation of the booking class.
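As a hedged illustration of that example (the rule table and field names here are invented for the sketch, not AUI's actual schema), the check is an ordinary deterministic function over the symbolic booking record rather than a model inference:

```python
# Hypothetical policy table and booking schema, for illustration only.
NON_CANCELABLE_CLASSES = {"basic_economy"}

def may_cancel(booking: dict) -> bool:
    # Hard-coded logic over the symbolic representation: no model is
    # consulted, so the same booking always yields the same answer.
    return booking["fare_class"] not in NON_CANCELABLE_CLASSES

decision = may_cancel({"fare_class": "basic_economy"})  # policy refuses
```

Because the rule lives in code rather than in learned weights, an auditor can read it, and it cannot drift between conversations.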
As Alhello explained to VentureBeat, LLMs are "not a good method when you’re looking for certainty. It’s better if you know what you’re sending (to the AI model) and always sending, and you know, always, what’s going to come back (to the user) and how to handle it."
Availability and developer access
Apollo-1 is already in active use within Fortune 500 enterprises in a closed beta, and a general availability release is expected before the end of 2025. The Information previously broke the initial news from the startup.
Enterprises can work with Apollo-1 through either:
A developer’s playground, where business users and technical teams jointly shape policies, rules, and procedures; or
A standard API, using OpenAI-compatible formats.
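Since the formats are OpenAI-compatible, a request body would presumably follow the familiar chat-completions shape. The endpoint URL and model identifier below are placeholders, not published values:

```python
import json

# Hypothetical endpoint; AUI has not published its API host or model name.
BASE_URL = "https://apollo.example.com/v1/chat/completions"

payload = {
    "model": "apollo-1",  # assumed identifier for illustration
    "messages": [
        {"role": "system", "content": "Enforce the airline cancellation policy."},
        {"role": "user", "content": "Cancel my basic economy ticket."},
    ],
}

body = json.dumps(payload)  # ready to POST to BASE_URL with any HTTP client
```

Using the OpenAI wire format means existing client libraries and tooling could, in principle, point at Apollo-1 by swapping the base URL.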
The model supports policy enforcement, rule-based customization, and steering through guardrails. Symbolic rules allow businesses to restrict certain behaviors, while LLM modules handle open text interpretation and user interaction.
Enterprise Fit: When Reliability Beats Fluency
While LLMs excel at general-purpose dialogue and creativity, they remain probabilistic.
Apollo-1 addresses this gap by offering a system where policy adherence and deterministic performance are first-class design goals.
Alhello puts it bluntly: “If your use case is task-based dialog, you have to use us, even if you’re ChatGPT.”