OpenAI is rolling out a set of major updates to its new Responses API, aimed at making it easier for developers and enterprises to build intelligent, action-oriented agentic applications.
The enhancements include support for remote Model Context Protocol (MCP) servers, integration of the image generation and Code Interpreter tools, and upgrades to file search capabilities. All are available today, May 21.
First launched in March 2025, the Responses API acts as OpenAI's toolbox for third-party developers to build agentic applications on top of some of the core functionality of its hit service ChatGPT and its first-party AI agents, such as Deep Research and Operator.
In the months since its launch, it has processed trillions of tokens and supported a broad range of use cases, from market research and education to software development and financial analysis.
Popular applications built with the API include Zencoder's coding agent, Revi's market intelligence assistant, and MagicSchool's education platform.
The Responses API's foundation and purpose
The Responses API debuted in March 2025 alongside OpenAI's open-source Agents SDK, as part of an effort to give third-party developers access to the same technologies that power OpenAI's own agents, such as Deep Research and Operator.
That way, startups and companies outside OpenAI can build the same technology it offers through ChatGPT into their own products and services, whether internal tools for employees or external offerings for customers and partners.
Initially, the API combined elements of the Chat Completions API and the Assistants API, offering built-in tools for web and file search as well as computer use, allowing developers to build autonomous workflows without complex orchestration logic. OpenAI said at the time that the Chat Completions API would be deprecated by mid-2026.
The Responses API exposes model decisions, provides access to real-time data, and offers integration capabilities that allow agents to retrieve information, reason over it, and act on it.
The launch marked a shift: by providing a unified toolkit, it let developers compose domain-specific AI agents with minimal friction.
Remote MCP server support expands integration potential
A significant addition in this update is support for remote MCP servers. Developers can now connect OpenAI's models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability enables the creation of agents that can take actions and interact with systems users already depend on. To support this evolving ecosystem, OpenAI has joined the MCP steering committee.
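To illustrate the "few lines of code" claim, here is a minimal sketch of what such a request payload could look like, following OpenAI's published tool schema for remote MCP servers. The model choice, server URL, and label are illustrative assumptions, not details from the article:

```python
# Sketch of a Responses API request that attaches a remote MCP server.
# The server URL, label, and model name below are hypothetical placeholders.
def build_mcp_request(server_label: str, server_url: str, prompt: str) -> dict:
    """Build a Responses API payload with a remote MCP tool attached."""
    return {
        "model": "gpt-4.1",
        "input": prompt,
        "tools": [
            {
                "type": "mcp",                 # remote MCP server tool type
                "server_label": server_label,  # short name the model uses for the server
                "server_url": server_url,      # endpoint of the remote MCP server
                "require_approval": "never",   # or "always" to gate each tool call
            }
        ],
    }

request = build_mcp_request(
    "shopify", "https://example.com/mcp", "List my store's best-selling products."
)
print(request["tools"][0]["type"])  # -> mcp
```

The payload would then be sent via the official SDK (e.g. `client.responses.create(**request)`); the dict-building step is shown separately here so the tool shape is easy to inspect.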
New built-in tools enhance agent functionality
The update also adds new built-in tools to the Responses API that expand what agents can do within a single API call.
A variant of OpenAI's hit GPT-4o native image generation model — the one that fueled a wave of "Studio Ghibli"-style anime memes around the web and buckled OpenAI's servers with its popularity, though it can of course produce many other image styles — is now available through the API as gpt-image-1. It includes potentially helpful and quite impressive new features such as real-time streaming previews and multi-turn refinement.
This enables developers to build applications that can generate and edit images in response to user input.
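A sketch of how the image generation tool could be enabled in a request, assuming OpenAI's documented tool schema (the `partial_images` field for streaming previews and the model name are assumptions, not confirmed by the article):

```python
# Sketch: enabling the image generation tool in a Responses API request.
# Field names follow OpenAI's published schema; treat them as assumptions.
def build_image_request(prompt: str, partial_images: int = 2) -> dict:
    """Payload that lets the model call the gpt-image-1 tool with streaming previews."""
    return {
        "model": "gpt-4.1",
        "input": prompt,
        "tools": [
            {
                "type": "image_generation",
                "partial_images": partial_images,  # streaming previews while rendering
            }
        ],
        "stream": True,  # receive partial image events as they arrive
    }
```

Multi-turn refinement would then work by sending a follow-up request in the same conversation ("make the sky darker"), letting the tool edit its previous output.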
In addition, the Code Interpreter tool is now available in the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning process.
The tool helps improve model performance across various technical benchmarks and enables more sophisticated agent behavior.
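Attaching Code Interpreter is again a matter of one tool entry; a minimal sketch under the assumption that the API manages the execution sandbox via an "auto" container, per OpenAI's documented schema:

```python
# Sketch: attaching the Code Interpreter tool so the model can run code
# mid-reasoning. The container setting asks the API to manage the sandbox;
# the model name is an illustrative assumption.
def build_code_interpreter_request(task: str) -> dict:
    """Payload letting the model execute code while reasoning about a task."""
    return {
        "model": "o4-mini",
        "input": task,
        "tools": [
            {
                "type": "code_interpreter",
                "container": {"type": "auto"},  # API-managed execution sandbox
            }
        ],
    }
```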
Better file search and context handling
File search functionality has also been upgraded. Developers can now search across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content.
This improves the precision of knowledge-grounded agents, increasing their ability to answer complex questions and operate over large knowledge domains.
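The two upgrades combine naturally: pass several vector store IDs and a filter on document attributes. A sketch assuming OpenAI's documented filter shape; the store IDs and the `region` attribute are hypothetical placeholders:

```python
# Sketch: file search across multiple vector stores with attribute filtering.
# Store IDs and the "region" attribute are hypothetical examples.
def build_file_search_request(question: str, store_ids: list) -> dict:
    """Payload searching several vector stores, filtered to one document attribute."""
    return {
        "model": "gpt-4.1",
        "input": question,
        "tools": [
            {
                "type": "file_search",
                "vector_store_ids": store_ids,  # search several stores at once
                "filters": {                    # attribute-based filtering
                    "type": "eq",
                    "key": "region",
                    "value": "emea",
                },
            }
        ],
    }
```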
New enterprise reliability and transparency features
Several features are designed specifically to meet enterprise needs. Background mode allows long-running asynchronous tasks to proceed without timeout or network-interruption issues during intensive reasoning.
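In practice, background mode means the API returns immediately and the caller polls for the result. A sketch of that flow, assuming a `background` request flag and the usual queued/in-progress/terminal status lifecycle (both assumptions based on OpenAI's documented pattern, not stated in the article):

```python
# Sketch: launching a long-running task in background mode.
# The background flag and status names are assumed from OpenAI's docs.
def build_background_request(task: str) -> dict:
    """Payload that returns immediately; the result is fetched later by ID."""
    return {
        "model": "o3",
        "input": task,
        "background": True,  # don't hold the connection open during reasoning
    }

def is_done(status: str) -> bool:
    """True once a background response reaches a terminal state."""
    # Background responses move through queued -> in_progress -> terminal.
    return status in ("completed", "failed", "cancelled")
```

The caller would then periodically retrieve the response by its ID (e.g. `client.responses.retrieve(resp.id)`) until `is_done` reports a terminal status.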
Reasoning summaries, another new addition, provide natural-language descriptions of the model's internal thought process, which helps with debugging and transparency.
Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers.
They allow models to reuse previous reasoning steps without storing any data on OpenAI's servers, improving both security and performance.
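A sketch of what a zero-data-retention request might look like, combining encrypted reasoning items with a reasoning summary. The field names follow OpenAI's documented options and should be read as assumptions rather than a verified integration:

```python
# Sketch: a request for a Zero Data Retention deployment. With store=False
# nothing persists server-side; the encrypted reasoning blob comes back to
# the caller and is passed in again on the next turn. Field names assumed.
def build_zdr_request(prompt: str) -> dict:
    """Payload requesting encrypted reasoning items and a reasoning summary."""
    return {
        "model": "o4-mini",
        "input": prompt,
        "store": False,                              # zero data retention
        "include": ["reasoning.encrypted_content"],  # reasoning returned encrypted
        "reasoning": {"summary": "auto"},            # readable summary for debugging
    }
```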
The new capabilities are supported across OpenAI's GPT-4o series, GPT-4.1 series, and o-series models, including o3 and o4-mini. These models now maintain reasoning state across multiple tool calls and requests, which leads to more accurate responses at lower cost and latency.
New capabilities, same pricing
Despite the expanded feature set, OpenAI has confirmed that pricing for the new tools and capabilities within the Responses API will remain consistent with existing rates.
For example, the Code Interpreter tool costs $0.03 per session, and file search usage is billed at $2.50 per 1,000 calls, with storage costing $0.10 per GB per day after the first free gigabyte.
Web search pricing depends on the model and search context size, ranging from $25 to $50 per 1,000 calls. Image generation via the gpt-image-1 tool is billed according to resolution and quality tier, starting at $0.011 per image.
All tool usage is billed at the chosen model's per-token rates, with no additional markup for the new capabilities.
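A back-of-the-envelope estimate using the per-tool rates quoted above (model token costs excluded, and the workload numbers are hypothetical):

```python
# Tool-cost estimate from the article's quoted per-tool rates.
# Model per-token charges are NOT included; workload figures are made up.
CODE_INTERPRETER_PER_SESSION = 0.03  # dollars per session
FILE_SEARCH_PER_1K_CALLS = 2.50      # dollars per 1,000 calls
FILE_STORAGE_PER_GB_DAY = 0.10       # dollars per GB per day after the first free GB

def tool_cost(sessions: int, search_calls: int, storage_gb: float, days: int) -> float:
    """Estimated tool spend in dollars for a given workload."""
    storage_billable = max(storage_gb - 1.0, 0.0)  # first gigabyte is free
    return (
        sessions * CODE_INTERPRETER_PER_SESSION
        + (search_calls / 1000) * FILE_SEARCH_PER_1K_CALLS
        + storage_billable * FILE_STORAGE_PER_GB_DAY * days
    )

# e.g. 100 Code Interpreter sessions, 2,000 file searches, 5 GB stored for 30 days:
print(round(tool_cost(100, 2000, 5.0, 30), 2))  # -> 20.0
```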
What's next for the Responses API?
With these updates, OpenAI continues to expand what is possible through the Responses API. Developers gain access to a richer set of tools, while enterprises can now build more integrated, capable, and secure AI-driven applications.
All features are live as of May 21, with pricing and implementation details available through OpenAI's documentation.