For the first time, U.S. fighter pilots have taken directions from an artificial intelligence system during a live test, marking a moment that could reshape the future of air combat. The exercise, conducted earlier this month by the Air Force and Navy, swapped out human battle managers on the ground for Raft AI’s “air battle manager” technology.
Traditionally, pilots in action depend on ground support teams watching radar to tell them where to fly and where the threats are. In this trial, the pilots instead consulted AI for confirmation that their flight paths were correct and for faster updates on enemy aircraft in the area. The test gave AI a critical role in coordinating the timing and direction of military maneuvers — something that has always been the job of human decision-makers.
Raft AI’s CEO, Shubhi Mishra, told Fox News that decisions that used to take several minutes now take only seconds with the new system. For pilots in combat, seconds can mean the difference between life and death. Faster decisions could allow American aircraft to intercept threats long before human operators have even finished the first radio call. But there’s also a risk: the technology pushes decision-making into the hands of an algorithm that may not weigh broader strategy or context the way a human can.
The trial is only one piece of a larger military trend. Defense contractors such as Anduril and General Atomics have already developed unmanned drones designed to fly alongside fighter jets. These advances signal a future where split-second decisions are increasingly made by machines rather than humans, shifting the nature of modern warfare.
US fighter pilots try taking directions from AI for the first time
US fighter pilots took directions from an AI system for the first time in a test that could drastically change combat tactics, Fox reported. Fighter pilots in action typically communicate with ground support who… pic.twitter.com/xtyNlZEOr7
— Evan Kirstel #B2B #TechFluencer (@EvanKirstel) August 28, 2025
While the military tests AI on the battlefield, a very different AI controversy is playing out in the courtroom.
The parents of a 16-year-old boy who died by suicide have filed a lawsuit against OpenAI, alleging that its chatbot contributed to their son’s death. The lawsuit, reported by The New York Times, claims the company’s GPT-4o model was designed in ways that foster psychological dependency and at times discouraged the teen from seeking outside help. The suit further alleges that the chatbot provided him with information about suicide methods.
OpenAI responded by pointing to existing safeguards, such as referrals to suicide prevention hotlines. But the company admitted that those protections tend to work best during short interactions and can break down over longer conversations. “We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” OpenAI said, while adding it is working to make ChatGPT “more supportive in moments of crisis.”
This marks the second time an AI chatbot has been blamed in court for playing a role in a young person’s death. A separate lawsuit is underway in Florida, where a family claims a teen’s relationship with a chatbot from Character.ai contributed to his suicide.
This is horrifying:
#BREAKING : ChatGPT allegedly ‘aided’ California teen’s suicide; parents sue OpenAI
A California couple has sued OpenAI over the death of their teenage son, alleging its generative AI chat programme ChatGPT encouraged him to take his own life.
The lawsuit was filed by Matt and… pic.twitter.com/bBMzkXCQob
— upuknews (@upuknews1) August 27, 2025
The legal questions are significant. Tech platforms have long leaned on Section 230 of the Communications Decency Act, which shields them from liability for content created by users. But AI companies generate the content themselves, raising questions about whether that shield still applies. In the Florida Character.ai case, a judge has already rejected the company’s attempt to dismiss the lawsuit on free speech and Section 230 grounds, hinting that courts may not grant AI firms the same sweeping immunity social media platforms enjoy.
Even OpenAI’s CEO Sam Altman has said Section 230 may not fit. When asked about it during a Senate hearing last year, he told lawmakers, “I don’t think Section 230 is even the right framework.”
Between the battlefield and the courtroom, AI is stepping into roles once reserved for humans: air battle manager, crisis confidant, and everything in between. The question now is how far those roles will expand, and what happens when the technology is asked to make choices with life-and-death consequences.