
Presented by Elastic
As organizations scramble to implement agentic AI solutions, access to proprietary data from all the nooks and crannies will be key.
By now, most organizations have heard of agentic AI: systems that autonomously “think,” using tools, data, and other sources of information to return answers. But here’s the rub: reliability and relevance depend on providing the right context. In most enterprises, that context is scattered across unstructured data sources such as documents, emails, business apps, and customer feedback.
As organizations look toward 2026, solving this problem will be key to accelerating agentic AI rollouts around the world, says Ken Exner, chief product officer at Elastic.
"People are starting to understand that for agentic AI to perform properly, you have to have relevant data," Exner says. "Relevance matters in the context of agentic AI, because the AI is taking action on your behalf. When people struggle to build AI applications, I can almost guarantee you that the problem is relevance."
Agents everywhere
The struggle could become make-or-break as organizations strive for competitive advantage or new efficiencies. A Deloitte study predicts that by 2026, more than 60 percent of large enterprises will have deployed agentic AI at scale, marking a major leap from experimental stages to mainstream implementation. Researcher Gartner, meanwhile, predicts that by the end of 2026, 40% of all enterprise applications will incorporate task-specific agents, up from less than 5% in 2025. Adding task-specialization capabilities evolves AI assistants into context-aware AI agents.
Enter Context Engineering
The process of getting relevant context into agents at the right time is known as context engineering. Not only does this ensure that an agent application has the data it needs to provide accurate, in-depth responses; it also helps the large language model (LLM) understand which tools are needed to find and use that data, and how to call those APIs.
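The pattern described above can be made concrete with a short sketch: retrieve the documents most relevant to a question, then place them in the context the model sees. This is a minimal, self-contained illustration — the toy keyword scorer stands in for a real search backend such as Elasticsearch, and the prompt layout is an assumption, not any product's actual format.

```python
# Minimal context-engineering sketch: rank documents against a question,
# then assemble the top hits into the prompt the LLM will receive.

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many question words they contain (toy scorer)."""
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Place the retrieved context into the model's context window."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Support tickets are answered within 24 hours.",
]
print(build_prompt("How long do refunds take to process?", docs))
```

In production, `retrieve` would be a call to a search engine with semantic or hybrid ranking; the shape of the pattern — retrieve, then inject into the context window — stays the same.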
While there are now open standards such as the Model Context Protocol (MCP) that let LLMs connect to and interact with external data, few platforms let organizations build precise AI agents over their own data, combining retrieval, governance, and orchestration in one place.
Elasticsearch has long been an important platform for context engineering. Elastic recently released a new feature within Elasticsearch called Agent Builder, which simplifies the entire operational lifecycle of agents: development, configuration, implementation, customization, and monitoring.
Agent Builder helps build MCP tools on private data using a variety of techniques, including a flexible search query language, filtering, transformation, a piped query language for analyzing data, and workflow modeling. Users can then combine those tools with prompts and LLMs to create agents.
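To illustrate what "a tool on private data" means in practice, here is a hedged sketch: a function that builds a filtered full-text query, paired with a description the LLM can use to decide when to call it. The tool name, index fields, and parameter schema are illustrative assumptions, not Agent Builder's actual API; the query body uses standard Elasticsearch Query DSL shapes.

```python
# Sketch of a search tool over private data. The query body follows the
# Elasticsearch Query DSL (bool query with match + term filter); the tool
# name and schema below are hypothetical examples, not a real product API.

def support_ticket_query(keywords: str, status: str = "open") -> dict:
    """Build a query body the tool would send to a search index."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"body": keywords}}],       # full-text relevance
                "filter": [{"term": {"status": status}}],       # exact-match filter
            }
        },
        "size": 5,
    }

# A description the LLM reads to decide when (and how) to invoke the tool.
search_tickets_tool = {
    "name": "search_support_tickets",
    "description": "Find support tickets matching keywords, filtered by status.",
    "parameters": {
        "type": "object",
        "properties": {
            "keywords": {"type": "string"},
            "status": {"type": "string", "enum": ["open", "closed"]},
        },
        "required": ["keywords"],
    },
}
```

The key design point is the split: the query encapsulates access to private data, while the schema tells the model what the tool does and which arguments it accepts.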
Agent Builder offers a configurable, out-of-the-box conversational agent that allows you to interact with the data in the index, and it also gives users the ability to build one from scratch using various tools and prompts on top of private data.
"Data is at the center of our world at Elastic. We’re trying to make sure you have the tools you need to work with that data," Exner explains. "The second you open Agent Builder, you point it to an index in Elasticsearch, and you can start chatting with any data you connect to it, any data that’s indexed in Elasticsearch – or from external sources through integrations."
Context engineering as a discipline
Prompt and context engineering are becoming a discipline. It’s not something you need a computer science degree for, but more classes and best practices will emerge, because there’s an art to it.
"We want to make it very easy to do," Exner says. "What people will have to figure out is, how do you drive automation with AI? This is what drives productivity. People who are focused will see more success."
Beyond this, other patterns of context engineering will emerge. The industry has quickly moved from prompt engineering to retrieval-augmented generation, where information is passed to the LLM in the context window, to MCP-based solutions that assist the LLM with tool selection. But it won’t stop there.
"Given how fast things are moving, I’ll guarantee that new patterns will emerge very quickly," Exner says. "There will still be context engineering, but there will be new paradigms for how to share data with an LLM, how to distill it into the right information. And I foresee more patterns that make it possible for an LLM to understand private data it has not been trained on."
Agent Builder is now available as a tech preview. Start with an Elastic Cloud trial and check out the documentation for Agent Builder here.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.