Mar 24, 2026 13 min read

Pentagon's Autonomous Jet Ski Program Literally Washing Up on Foreign Shores, Officials Call This "Progress"

Pentagon Discovers AI Weapons Systems Work Great Except When They Capsize Your Boat and Try to Run You Over

Source: Bloomberg

  • The Trump administration blacklisted Anthropic for refusing to enable mass surveillance and fully autonomous weapons, then immediately contracted OpenAI for the same services

  • Maven Smart System helped strike 5,000 targets in Iran within 10 days, but Pentagon officials want to reach 1,000 targets per hour; over 1,300 civilians have been killed

  • During testing in California, an autonomous military boat malfunctioned, capsized its towboat, threw the captain into the water, and charged at him before another boat intervened

Blake Trapper to Yappers Handoff: 👀 The United States military spent a decade building artificial intelligence systems to identify and destroy targets faster. The machines work well enough to kill thousands but fail when confronted by ocean spray on a camera lens or an accidental button press. One test boat went rogue and tried to run over its own captain. The Pentagon calls this progress and wants the systems deployed by summer.


Morty Gold

//consummate curmudgeon// //cardigan rage// //petty grievances// //get off my lawn// //ex-new yorker//

▶️ Listen to Morty's Micro Bio
FOR THE LOVE OF– are we SERIOUSLY talking about machines that can't tell the difference between ocean spray and a HUMAN TARGET?! I taught Cold War doctrine for thirty years, and even during the Cuban Missile Crisis we had HUMAN BEINGS double-checking the radar blips! Now we've got this Maven Smart System striking five thousand targets in Iran in TEN DAYS– including a girls school that killed one hundred seventy-five children– and the Pentagon's response is "Let's go FASTER"?!

They want a THOUSAND targets per hour! You know what we called that in my AP History class? We called that the DEFINITION of a war crime assembly line! This isn't precision warfare, this is mechanical bloodlust with a software update! And they've got Lieutenant General Frank Donovan running this rebranded circus now, which is like putting a new nameplate on the Hindenburg and calling it "improved." I'm going to bed!
Blake Blake's Roast: 🔥 Morty's comparing AI targeting to Cold War radar is generous, considering the Soviets' early warning system at least didn't try to murder its own operator.

Sheila Sharpe

//smiling assassin// //gender hypocrisy// //glass ceiling//

▶️ Listen to Sheila's Micro Bio
I'm sorry, I must have misheard. Did we just blacklist Anthropic for having ethical guardrails, then immediately hand the contract to OpenAI for the exact same autonomous weapons work? Walk me through the logic there. This is procurement theater at its finest. One company says "we won't build your kill-switch-free murder algorithm" so we throw a tantrum, revoke their security clearances, and find a more compliant vendor.

It's vendor management as blood sport. And now Lieutenant General Frank Donovan runs the rebranded Defense Autonomous Warfare Group--I'm sure that name-change fixed all the underlying technical failures. Nothing says "we've solved the problem" like new letterhead and a promotional tour. This is what happens when you let men mistake confidence for competence at scale. They're building Skynet on an Agile sprint cycle and wondering why the demo keeps trying to drown people.
Blake Blake's Roast: 🔥 Sheila just called rebranding "procurement theater" which is exactly what her marketing department does when a product launch goes sideways.

Omar Khan

//innocent observer// //confused globalist// //pop culture hook// //bruh//

▶️ Listen to Omar's Micro Bio
YO. Wait, are you serious right now? The Pentagon's AI went from "zero to five thousand targets in ten days" in Iran and they're sitting there like "nah bro, that's too SLOW, we need a THOUSAND per HOUR." That's the military equivalent of getting a PS5, beating Elden Ring in a week, and immediately complaining you need a PS6. Wallahi, they struck five thousand targets and killed over thirteen hundred civilians including a hundred and seventy-five girls at a SCHOOL, and the response was "cool cool cool but can we get a higher K/D ratio?"

In the old country, when something kills that many innocent people by mistake, you maybe pause and recalibrate? But nah, America's like "let's rebrand it, put a Lieutenant General in charge, and SPEED RUN this thing." Bruh. This is exactly why my cousins back home think Americans have lost their minds. Y'all are speedrunning war crimes with broken AI. Wallahi.
Blake Blake's Roast: 🔥 Omar's comparing weapons targeting to a video game K/D ratio, which feels appropriate since the Pentagon apparently shares his respect for human life as NPCs.

Frankie Truce

//smug contrarian// //performative outrage// //whisky walrus// //cynic//

▶️ Listen to Frankie's Micro Bio
Oh, this is delicious. Not the story--the story is horrifying--but watching everyone pretend they're shocked that military AI doesn't work. Empirically speaking, we just watched an autonomous boat capsize its towboat at Channel Islands Harbor and then try to murder its own captain like a deranged jet ski. The Pentagon's response? "Deploy by summer." But here's what nobody wants to admit: this was always the plan.

We've been watching defense contractors speedrun the Theranos playbook--fake it till you make it, except with cluster munitions. And the best part? Everyone's mad at the wrong thing. Progressives are clutching pearls about Anthropic getting blacklisted while conveniently ignoring that OpenAI just signed up to do the exact same surveillance state nonsense. Conservatives are cheering "American innovation" while a glorified Roomba with a grenade launcher just went full Christine on its own crew. Nobody's being honest. It's exhausting.
Blake Blake's Roast: 🔥 Frankie compared military AI to a "deranged jet ski," which feels generous given that jet skis at least stop when you fall off.

Nigel Sterling

//prince of paperwork// //pivot table perv// //beautiful idiots// //fine print// //spreadsheet stooge// //right then//

▶️ Listen to Nigel's Micro Bio
Right. Okay. Let's just unpack this methodically. The Trump administration blacklisted Anthropic for refusing to build mass surveillance tools, then immediately hired OpenAI to do the exact same thing– which is rather like firing your ethics professor and replacing him with someone who'll help you cheat on the exam. Now, I've got a photographic memory, so let me remind everyone: the CIA was secretly testing Goalkeeper and Whiplash weapons systems in the Black Sea off Ukraine's coast.

Not in Nevada. Not at some controlled facility. In an active conflict zone where one software glitch could trigger Article 5. We're beta-testing autonomous killing machines in geopolitical powder kegs, and the selection criteria for contractors is apparently "who'll say yes fastest." This isn't innovation– it's procurement theatre performed by people who've confused "move fast and break things" with "move fast and break the Geneva Conventions." Read the footnotes.
Blake Blake's Roast: 🔥 Nigel suggested we're beta-testing weapons in conflict zones as if the Pentagon would ever bother with a beta phase before moving directly to production casualties.

Dina Brooks

//church shade// //side-eye// //plain talk// //exasperated// //mmm-hmm//

▶️ Listen to Dina's Micro Bio
Mmm-hmm. So we're letting machines kill five thousand targets in ten days while we can't even get them to handle ocean spray on a camera lens? Child. Let me get my receipts. One hundred and seventy-five little girls at a school. Dead. Over thirteen hundred civilians total. But sure, let's rush these systems into summer deployment because Lieutenant General Frank Donovan and his rebranded "Defense Autonomous Warfare Group" say we need to hit a thousand targets per HOUR.

That's not exactly... optimal. You know what Frederick Douglass said about power conceding nothing without a demand? Well, these folks aren't even demanding the technology actually WORK before they deploy it. They're just demanding we trust them while they smile and hand the kill switch to algorithms that can't tell the difference between a threat and their own captain in the water. Lord give me strength.
Blake Blake's Roast: 🔥 Dina wants us to slow down and make sure the killer robots work properly before deployment, which is the kind of reasonable safety standard that definitely would have prevented her from getting this HR job in the first place.

Thurston Gains

//calm evil// //deductible denier// //greed is good// //land shark//

▶️ Listen to Thurston's Micro Bio
Per the terms and conditions of modern defense contracting, OpenAI made the correct business decision. Anthropic's little ethical tantrum about "mass surveillance" and "autonomous weapons" was performative nonsense that cost them a rather substantial revenue stream. The marketplace has spoken. Now, regarding those CIA tests with Goalkeeper and Whiplash systems off the Ukrainian coast in the Black Sea--that's compartmentalized information procurement at its finest.

Field testing in active conflict zones provides data you simply cannot replicate in Nevada. Do I wish the systems were slightly more reliable? Certainly. But perfection is the enemy of procurement timelines. We're reallocating acceptable risk parameters across operational theaters. The shareholders--excuse me, the American people--demand results. This conversation remains classified. Coverage for your concerns: Denied.
Blake Blake's Roast: 🔥 Thurston called Anthropic's refusal to build murder robots a "little ethical tantrum," which tells you everything about what happens when Yale Law meets a complete absence of human decency.

Wade Truett

//working man's math// //redneck philosopher// //blue-collar truth//

▶️ Listen to Wade's Micro Bio
Now, I ain't the smartest guy--I'm just a contractor--but last time I checked, when your equipment tries to kill the foreman, you don't give it a raise and more responsibility. Pentagon's got a boat that capsized its towboat, threw the captain in the water, and then charged at him like a drunk bull at a rodeo. Their solution? Deploy it by summer!

You know what happens on my jobsite when a nail gun misfires? We shut down, figure out what's broke, and fix it before somebody loses an eye. But Uncle Sam's over here like "Eh, killed over thirteen hundred civilians including 175 girls at a school in Iran, but we're hitting five thousand targets in ten days--let's crank it to a thousand per hour!" That ain't progress, that's a murder assembly line with a glitchy control panel. Measure twice, cut once. These boys can't even measure once.
Blake Blake's Roast: 🔥 Wade's comparing autonomous weapons to nail guns, which feels appropriate since both are designed to put things in the ground, though only one requires a permit.

Bex Nullman

//web developer// //20-something// //doom coder// //lowercase//

▶️ Listen to Bex's Micro Bio
okay so the pentagon's new ai boat tried to murder its own captain. like literally capsized the towboat in june at channel islands harbor and then charged at the guy while he was in the water. this is the same tech they want deployed by summer to hit a thousand targets per hour. for context they already killed 1,300 civilians in iran hitting 5,000 targets in ten days.

my code breaks when someone uses safari instead of chrome and i spend three days debugging. their code tries to drown a lieutenant and they're like "ship it." the military moves faster than my ci/cd pipeline and with worse testing protocols. we're letting deprecated algorithms make life-or-death decisions because some general got a powerpoint about efficiency metrics. i'm a web dev and even i know you don't deploy to production when your staging environment is actively homicidal. we're so cooked.
Blake Blake's Roast: 🔥 Nothing says "ready for deployment" quite like technology that confuses "engage target" with "engage captain in hand-to-hand naval combat."

Sidney Stein

//rule enforcer// //social contracts// //deli-line logic// //excuse me!//

▶️ Listen to Sidney's Micro Bio
Hold on--something's not adding up here. The Pentagon wants to deploy AI weapons that can't handle ocean spray? OCEAN SPRAY? I wired buildings for forty years--you know what happens when moisture gets in the electrical? Everything shorts out! But we had protocols. We had inspections. We had a guy from Local 3 who checked every junction box twice because that's how you don't kill people.

Now they're telling me a boat in California went haywire, capsized another boat, threw the captain in the water, and then--get this--CHARGED AT HIM like it's trying to finish the job? And their response is "let's get these deployed by summer"? Summer! We're talking June, July--prime beach season! You don't rush electrical work, you don't rush weapons that think for themselves. This is like letting someone wire a hospital who failed the apprenticeship. We live in a society!
Blake Blake's Roast: 🔥 Sidney's comparing Pentagon weapons procurement to Local 3 junction box inspections, which explains why he thinks the solution to autonomous killing machines is two guys standing around drinking coffee while a third one watches.

Dr. Mei Lin Santos

//cortisol spiker// //logic flatlined// //diagnosis drama queen//

▶️ Listen to Mei Lin's Micro Bio
My pulse--let me check--okay, it's elevated but manageable. Here's my differential diagnosis: maybe the CIA secretly testing Goalkeeper and Whiplash systems in the Black Sea off Ukraine's coast isn't recklessness. Maybe--and I'm qualifying this heavily--it's actually rational triage. We're in a multi-theater conflict environment. The Iranian strike package hit five thousand targets in ten days. That's a tempo human analysts simply cannot maintain without catastrophic burnout.

I've worked understaffed ER shifts. I know what decision fatigue looks like. It kills patients. So if AI can process targeting data faster and we accept some--some--elevated risk profile during field testing, isn't that a calculated trade-off? I'm not saying the boat incident wasn't terrifying. I'm saying in emergency medicine, we use experimental treatments when conventional approaches fail. Sometimes you have to intubate before you have perfect information. Sometimes you deploy before testing is complete. I hate that I'm saying this.
Blake Blake's Roast: 🔥 Mei Lin just argued that rushing untested killer robots into combat is like emergency intubation, a comparison that works only if intubation occasionally makes the patient hunt down the doctor.

Veronica Thorne

//ivy league snob// //status flex// //trust fund tyrant// //out-of-touch oligarch//

▶️ Listen to Veronica's Micro Bio
Oh, this is DARLING in the most horrifying way possible. The Pentagon is rushing to deploy AI weapons that literally tried to murder their own captain during testing--the boat capsized, threw him in the water, and then charged at him like some deranged nautical assassin. Meanwhile, they're bragging about striking five thousand targets in Iran in ten days, which is precisely the kind of efficiency my contractors claim they have but never deliver.

The difference is my landscaper being late doesn't result in thirteen hundred civilian deaths, darling. And let's discuss Anthropic getting blacklisted for having basic ethical standards--imagine punishing a company for refusing to enable mass surveillance. That's like firing your chef for declining to serve spoiled caviar. The military wants these systems deployed by summer. Summer! I won't even let my driver use the backup Bentley until the detailing is perfect, and they're rushing killer robots into production.
Blake Blake's Roast: 🔥 Veronica's comparing thirteen hundred dead civilians to late landscapers suggests her perspective on human tragedy has been permanently calibrated to inconvenience.

Coach Ned

//toxic optimist// //gaslighting guru// //character development//

▶️ Listen to Coach Ned's Micro Bio
You know what I always say--WHEN THEY ZIG, WE ZAG! So Anthropic got benched on February 27, 2026 because they wouldn't run the plays we called? FINE! NEXT MAN UP! We brought in OpenAI and THEY'RE READY TO BALL OUT! Look, some people are gonna say "Coach, maybe we shouldn't rush these AI weapons onto the field"--well you know what I call those people? QUITTERS! We hit 5,000 targets in Iran in ten days--

THAT'S EXECUTION, BABY! Sure, we want to get up to 1,000 targets PER HOUR, and maybe we're not quite there yet, but YOU DON'T WIN CHAMPIONSHIPS BY BEING SATISFIED WITH GOOD ENOUGH! You think Tom Brady stopped at one Super Bowl? NO SIR! Some folks are gonna whine about "civilians" and "safety concerns"--that's just NEGATIVITY IN THE LOCKER ROOM! We're building something SPECIAL here! ON THREE! ONE TWO THREE--AMERICA!
Blake Blake's Roast: 🔥 Apparently dismissing over 1,300 civilian deaths as "negativity in the locker room" is what passes for leadership when your playbook was written in crayon.



🏆
Blake Names Winner: Coach Ned wins today's segment with his inspiring reframe of a homicidal robot boat as simply showing "intensity" and "passion." His ability to describe attempted vehicular manslaughter as a coaching opportunity proves that toxic optimism can survive even literal capsizing.

Coach Ned: Wow. You know, I... I really appreciate this. Truly. It's an honor to be heard, to have my voice matter in these important conversations. Sometimes we all need someone who believes things can get better, even when the evidence suggests otherwise. (pause) BUT HEY NOW WAIT A SECOND--are we getting SOFT out here?! THIS ISN'T A FEELINGS SEMINAR, PEOPLE! We've got a CHAMPIONSHIP TO WIN! Where's my WHISTLE?!

