Where OpenAI’s technology can be seen in Iran.

by SkillAiNest

It’s not clear what OpenAI’s motivations are. It’s not the first tech giant to accept military contracts it once promised not to enter, but the speed of the pivot has been remarkable. Maybe it’s just about the money; OpenAI is spending heavily on AI training and is looking for more revenue (from sources including advertisements). Or perhaps Altman truly believes in the ideological framework he so often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.

The more consequential question is what happens next. OpenAI has decided it’s comfortable operating precisely in the dark heart of war, just as the US ramps up its attacks against Iran (in which AI is playing a bigger role than ever before). So where exactly might OpenAI’s tech figure in this fight? And what applications will its users (and employees) tolerate?

Targets and attacks

Although it has a Pentagon contract, it’s unclear when OpenAI’s technology will be ready for a classified environment, since it must integrate with other tools used by the military. (Elon Musk’s xAI, which recently signed its own contract with the Pentagon, is expected to go through the same process with its AI model Grok.) But OpenAI is under pressure to act quickly because of controversies over the technology used to date: President Trump ordered the military to stop using Anthropic’s AI after the company refused to allow its models to be used for “any lawful use,” and the Pentagon labeled Anthropic a supply chain threat. (Anthropic is fighting the designation in court.)

If the Iran conflict is still ongoing by the time OpenAI’s tech enters the system, what could it be used for? A recent conversation I had with a defense official suggested it might look something like this: A human analyst could feed an AI model a list of potential targets and ask it to analyze the information and prioritize which to strike first. The model could work through logistics, such as where specific aircraft or equipment are located, and analyze many different inputs in the form of text, images, and video.

A human would then be responsible for checking these outputs manually, the official said. But that raises an obvious question: If someone is really double-checking the AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, that can handle tasks like automatically analyzing drone footage to identify potential targets. It is likely that models such as OpenAI’s, or Anthropic’s Claude, will offer a conversational interface on top of this, allowing users to interrogate that intelligence and ask for interpretations and recommendations on which targets to strike first.

It’s hard to overstate how new this is: AI has long performed analytics for the military, drawing insights from oceans of data. But using generative AI to advise on what actions to take in the field is being seriously tested for the first time in Iran.

Drone Defense

In late 2024, OpenAI announced a partnership with Anduril, which makes both drone and anti-drone technologies for the military. The agreement states that OpenAI will work with Anduril to conduct time-sensitive analysis of drones attacking US forces and help shoot them down. An OpenAI spokesperson told me at the time that this didn’t violate company policies, which prohibit “systems designed to harm others,” because the technology was being used to target drones, not people.
