Wes Roth · Civilisational risk and strategy · Spotlight · Released: 26 Feb 2026

Anthropic might be DONE (48 hours left)

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Wes Roth. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).

Start → End

Across 22 full-transcript segments: median -3 · mean -7 · spread -280 (p10–p90 -210) · 14% risk-forward, 86% mixed, 0% opportunity-forward slices.
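For readers curious how the bar tinting above works, here is a minimal sketch of one way to map a slice score onto the amber → cyan → white strip. The score range, the hex colour stops, and the midpoint handling are all assumptions for illustration, not the site's actual implementation.

```python
# Hypothetical tinting rule: map a slice score to an RGB colour on the
# amber -> cyan -> white strip. Range and colour stops are assumed.
AMBER = (0xFF, 0xB0, 0x00)   # most risk-forward end (assumed stop)
CYAN  = (0x00, 0xB7, 0xC3)   # mixed midpoint (assumed stop)
WHITE = (0xFF, 0xFF, 0xFF)   # most opportunity-forward end (assumed stop)

def lerp(a, b, t):
    """Linearly interpolate between two RGB triples, t in [0, 1]."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

def tint(score, lo=-1000, hi=1000):
    """Return an RGB tint for a score on an assumed symmetric scale."""
    t = (max(lo, min(hi, score)) - lo) / (hi - lo)  # normalise to [0, 1]
    if t < 0.5:
        return lerp(AMBER, CYAN, t * 2)        # risk-forward half of strip
    return lerp(CYAN, WHITE, (t - 0.5) * 2)    # opportunity-forward half

print(tint(-3))  # a near-median slice lands close to the cyan midpoint
```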

Slice bands
22 slices · p10–p90 -210

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 22 sequential slices (median slice -3). A sketch of how these summary numbers can be derived is shown below.
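For concreteness, here is a sketch of how the headline figures (median, mean, p10–p90, and the band percentages) could be recomputed from the 22 slice scores. The `scores` argument is a placeholder for the site's data, and the band cut-offs of ±100 are assumptions; the actual thresholds are not stated on this page.

```python
# Sketch: recompute the headline summary stats from raw slice scores.
# Band cut-offs are assumed placeholders, not the site's real thresholds.
from statistics import mean, median, quantiles

def summarise(scores, risk_cutoff=-100, opp_cutoff=100):
    deciles = quantiles(scores, n=10)  # nine cut points; first = p10, last = p90
    n = len(scores)
    return {
        "median": median(scores),
        "mean": mean(scores),
        "p10_p90": (deciles[0], deciles[-1]),
        "pct_risk_forward": 100 * sum(s <= risk_cutoff for s in scores) / n,
        "pct_mixed": 100 * sum(risk_cutoff < s < opp_cutoff for s in scores) / n,
        "pct_opportunity_forward": 100 * sum(s >= opp_cutoff for s in scores) / n,
    }
```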

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safety · wes-roth


Episode transcript

YouTube captions (auto or uploaded) · video qMWeIXdyE2Y · stored Apr 2, 2026 · 571 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/anthropic-might-be-done-48-hours-left.json when you have a listen-based summary.
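If it helps, here is a minimal sketch of creating that assessment file. The schema (field names like "summary" and "listened") is entirely an assumption; match whatever structure the existing files in that directory use.

```python
# Hypothetical sketch: write the assessment file named above.
# Every field name in this payload is an assumption about the schema.
import json
from pathlib import Path

assessment = {
    "videoId": "qMWeIXdyE2Y",  # from the caption metadata above
    "title": "Anthropic might be DONE (48 hours left)",
    "listened": True,          # i.e. a listen-based summary, per the note
    "summary": "One-paragraph editorial summary goes here.",
    "confidence": "medium",
}

path = Path("content/resources/transcript-assessments/"
            "anthropic-might-be-done-48-hours-left.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(assessment, indent=2))
```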

>> That's one reason why I'm worried about the autonomous drone swarm, right? The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don't necessarily have those protections.

>> All right. So, Anthropic is in some hot water with a lot of different people for a lot of different reasons. But there's one entity you really don't want mad at you, and Anthropic managed to tick them off. That entity is the US war machine: the Department of War, the Pentagon. These are the people you really want on your side. So, here's exactly what happened. And make sure you watch this video all the way through, because whatever happens in the next 48 hours will be historic. As I'm recording this, it's literally just under 48 hours. The clock is ticking. It runs out on Friday. Let's dive in.

So, Anthropic was built on a single, pretty radical promise: they were going to build the most powerful AI in the world, and they were going to build the safest AI in the world. The Amodei siblings left OpenAI in 2021 specifically with this mission. They believed their former company was not taking AI safety seriously enough. And for five years, Anthropic's responsible scaling policy was a public commitment to halt AI development if they found any serious dangers, liabilities, risks, or flaws in it. In other words, if the safety didn't keep up with the capability, they were going to shut it all down. In the span of the last two weeks, that identity collapsed. And today, that AI safety policy has ended. It's gone.

Okay, so what happened? Let's rewind the clock just two weeks. So, Claude, everyone's favorite chatbot. Or maybe not everyone's favorite, but a lot of people have a soft spot for Claude. It's an excellent coder, and there's some sort of a personality to it. I don't know how to describe it, but I feel like most people who've experienced a lot of different chatbots kind of feel that Claude is special. Something about it just draws out feelings.

So, Claude is used in a covert military raid. The Joint Special Operations Command, JSOC, conducts a raid in Caracas, which is in Venezuela, and captures the former president Maduro. That operation was a highly guarded secret up to that point. But then the Wall Street Journal and Axios leak some details about how it was achieved, and that is that Claude, Anthropic's flagship AI, was used to carry out the assault. Anthropic's technology was used through its partnership with Palantir. Palantir works with the US government on a lot of spying and military stuff, very cloak and dagger, but Anthropic was employed as part of that mission through Palantir.

Palantir is a defense contractor. They have this Maven Smart System; they provide data analytics and various targeting and location support for the Department of War. That partnership was announced in 2024, and it made Anthropic the first AI company allowed to offer its services on these classified military networks and operations. So Claude was used for real-time operational planning and real-time data analysis. The exact role is classified; we don't know. But we know it was used in that mission. So what does that mean? That means that Claude was used for lethal military operations.
That was precisely the scenario that was not supposed to happen. That's what Anthropic's policies were designed to prevent. So, according to Semafor, there was a meeting between some of the people from Anthropic and some of the people at Palantir. An employee of Anthropic's raised the question: hey, was Claude used for this operation? A Palantir executive interpreted that as potential disapproval. Again, we're not 100% sure what happened here, but he kicks it up the chain to the US government, the Pentagon, and this creates, or at least kicks off, a rupture in the relationship between the Pentagon and Anthropic.

By the way, the Pentagon confirmed this tension, so this isn't just hearsay. Chief Pentagon spokesman Sean Parnell is saying that the Department of War's relationship with Anthropic is being reviewed: our nation requires that our partners be willing to help our war fighters win in any fight.

So, on February 19th, less than a week ago, Pentagon CTO Emil Michael publicly urged the department to cross the Rubicon on military AI use. If you're wondering what "cross the Rubicon" means, it's a Roman reference. The Rubicon was a river separating Rome from the province of Gaul, and Roman law prohibited any general from crossing that river with an army. Caesar crossed it anyway, famously saying "the die is cast." So it basically means: commit to the action, go forward, there's no going back. Basically, there's no going back to your safety-first identity. You've got to commit. So, the Pentagon is very clear. They're like, "There's no room for this safety identity. It's unacceptable for a defense partner."

Meanwhile, early in January, Defense Secretary Pete Hegseth and his team had already released a new AI document that bars all of the defense partners, these AI contractors, from having their own company-specific safety guardrails on the services they provide for the US government. So basically, every company has to be okay with the US government making any lawful use of their technology. Any lawfully permitted use of that technology should be open. A company can't have their own special little guardrails where they say, "Well, you can't do this specific thing with it." Companies had 180 days to comply. Keep in mind, that was early in January, and that timer runs out in less than 48 hours, as I've said. That puts Anthropic directly in the crosshairs, right? I mean, it applies to everybody, but really, right now, it's targeting Anthropic. That's the company that needs to be nudged in the right direction, from the Pentagon's point of view. That document is there to put Anthropic in line.

Then yesterday morning, Dario Amodei, the main founder of Anthropic, gets summoned to the Pentagon to speak with Defense Secretary Pete Hegseth. And based on the reporting and some of the quotes, this wasn't supposed to be a nice discussion. This was more of a "hey, we need you to play ball here, or things are not going to be good for you" type of discussion. Give them an offer they can't refuse, that type of thing. Now, what's important to understand here is that if Anthropic refuses, it's not like they just walk away. It's not like they just lose a contract. That isn't the situation.
So, the Pentagon has three weapons ready to use against Anthropic.

One is the Defense Production Act. This is a Cold War era law that gives the president, or his delegates, the legal authority to take private companies and force them to accept contracts that are deemed necessary for defense. This would effectively force Anthropic to provide their AI regardless of their own feelings about it. They would legally have to give that technology to the Pentagon, or at least provide whatever services with the technology the Pentagon demands.

Two is the supply chain risk designation. If any company, Anthropic for example, gets labeled a supply chain risk, this would effectively ban them, blackball them, from any federal network. They can't get any federal work, and any company doing work with the federal government can't use them in that work. So it really shuts off a lot of things for them. And I'm sure a lot of companies that want to win that federal work are also not going to be too nice to Anthropic, right? They almost become a pariah.

And of course, the Pentagon is willing to cancel the contract they originally had, which is worth up to $200 million. Anthropic signed a contract, got $200 million, and is supposed to provide the service. So the third weapon is just canceling that contract and clawing back the money. But again, that's probably the least scary outcome. The first two are possibly a lot scarier for Anthropic.

Now, Anthropic has maintained its stance. It doesn't want autonomous weapons that use AI to make final targeting decisions. There has to be a human making those decisions, a human in the loop. So fully autonomous killing machines are a no-go for Anthropic. And the second big thing is no surveillance, no domestic surveillance of American citizens. That's Anthropic's stance. And the Pentagon's stance is: we can use this for any lawful uses. That's our decision, not Anthropic's. If it's lawful, if it's under law, we can do it. It's not for Anthropic to decide.

So the deadline is Friday, less than 48 hours from now. Friday, 1 p.m. Eastern, I'd guess. Now, if you're wondering who's going to budge, again, we don't know. But here's the thing. On the same day that Dario Amodei sat down with Hegseth, the person who was probably tightening the screws on Dario, the person who put out that memo about rolling back company-specific safety policies, kind of the final boss in a video game, Anthropic updates their RSP. RSP is that responsible scaling policy, their mission, their goal to build AI safely. It's now RSP 3.0. They published the 3.0 version, and the flagship pledge they had is gone. It's not there anymore.

Since 2023, Anthropic's responsible scaling policy has contained this categorical commitment: the company would never train an AI model unless it could guarantee in advance that the safety measures they had in place were adequate. That was their big differentiator, right? And that's not there anymore. That's gone. The new RSP 3.0 replaces that hard limit with a softer dual condition. They will consider pausing development if, A, they believe they are the leader of the AI race, right?
So if they break through and they're well in advance and everybody else is behind, that's when they might consider slowing down or pausing, right? Instead of racing ahead, they'll wait for others to maybe catch up; they'll consider pausing at that point. Or B, and I apologize if I misspoke earlier: these are dual conditions, as in both have to be met, so A and B. B is that the risk of a catastrophe is so high it's deemed material, right? They believe the risk of catastrophe is real and present. Both conditions must be true simultaneously. So what does that mean? That means if there are five labs and four of them develop some new AI model that has potentially catastrophic abilities and deploy it, then Anthropic will also deploy that same capability model. It's not going to hold back if everyone else is publishing models that are, you know, dangerous.

Anthropic's chief science officer Jared Kaplan explained the decision to Time magazine: we felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments if the competitors are blazing ahead. And the new policy explicitly states that if one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe.

By the way, let me know if you agree with that down below. Comment down below: does that reasoning make sense to you? Do you agree that in this sort of race condition, everybody needs to keep developing and working on safety and publishing better models? Or, on the other side, does it feel like maybe the wheels are coming off the bus a little bit? And in exchange for dropping that limit, they're promising some things to offset it: frontier safety roadmaps, risk reports, external reviewers. So they're trying to counterbalance with other things, but as you can imagine, this is a compromise. It doesn't offset the risk.

So, Chris Painter, he's the director of policy at METR. We covered their research two videos ago, I believe, that agentic AI capability work. There's this huge curve, or if you're looking at it, a logarithmic curve, showing how fast the amount AI models are able to do is accelerating. If you give them agentic abilities and put them to certain tasks, their time horizons keep getting longer and longer. Anthropic's model is currently leading, and it's just absolutely off the charts, you could say, because it really points to this idea that AI progress is accelerating even faster than we previously thought. Anthropic's model Opus 4.6 is leading, and it's able to take care of a 15-hour task, a task that would take a human expert 15 hours to complete. Opus 4.6 is now one-shotting those, and that's at a 50% success rate. I'm not going to go into the details, because we covered it all in the previous video, but the point is they're getting better, and they're getting better faster than we anticipated. But this director of policy at METR, Chris Painter, reviewed this RSP 3.0 from Anthropic.
He's saying that the change shows that Anthropic believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI.

So, what does this mean, this publishing of the RSP 3.0? Either it's just a big old coincidence that it's published on the same day as Dario's meeting with the final boss of the Pentagon, or it's a signal that maybe Anthropic is preparing to bend. By the way, Anthropic is insistent that the policy was in the works a long time ago and has nothing to do with this current conflict with the military, with the Pentagon, whatever you want to call it. But the critics, of course, see this as a kind of preemptive surrender, right? That kind of makes sense. I mean, that's what it looks like, right? If you're on the sidelines observing this, you're like, "Oh, could that have been a coincidence?" Probably not.

So, the reason this is such a big point in time is because we're going to see whether this ethical, well-meaning AI company can maintain its ethical red lines when it's facing the government. You know the expression, when somebody wants you to do something and you're like, "Oh yeah? You and what army?" It used to be this little saying, as in: who's going to help you force me to do this? The answer you don't want to hear to that question is: well, the US Army. Or, more specifically, the US, the military-industrial complex, as it's sometimes referred to. Without going too deep into it, there's a sort of flywheel effect, a feedback loop, and there's just a lot of power concentrated there. It's literally one of those entities that you just don't want conflict with. Not even necessarily military conflict. You just don't play with it. You don't mess around with it, if you know what I mean.

And importantly, it doesn't even have to do anything outside the boundaries of the law. The law is there. If something is deemed important to national defense, they can seize it. They can compel people to do what they need them to do. They can cut Anthropic off from a lot of the supply chain stuff that Anthropic needs, right? I mean, if you're blackballed, there are probably a lot of other providers, whether it's Amazon or Google or whoever, and I'm sure they'll be hesitant to work too closely with people deemed, you know, an enemy of the nation, or whatever you call it. It's not going to be good.

And of course, this Defense Production Act was meant for steel mills, for ammunition factories during the Cold War, right? If you're at war, and some person who manufactures steel says, "I'm against the war, I don't want this used for those purposes," well, okay, whether you agree or disagree, that kind of makes sense: you need certain production capacity if it really is a national defense matter. Okay, you can explain that. This is different. We're in different territory. This is AI. We've never seen this applied to AI models.
And this would establish the legal precedent that no AI model company, no frontier lab, can let its preferences or policies override national security demands. So, Anthropic maintains two red lines, again: no autonomous weapons and no domestic surveillance. And the next two days decide not just whether Anthropic is willing to hold those red lines, but also whether they can. It's not just about the will. Do they have the ability to do so?

Meanwhile, Anthropic's valuation is growing rapidly. They're now at a $380 billion valuation. Wall Street investors are signaling that a lot of people are betting on their revenue continuing to grow. It's been 10x-ing every year. It's absolutely incredible. So, they could walk away from the defense contracts. This isn't about the money. That's an important thing to understand, right? They're valued at $380 billion, with revenue growing 10x per year. That's enterprise. That's coding. So, the smart business move, just in terms of on-paper money, cash flow, and so on, might be to walk away from the $200 million and just focus on enterprise applications.

But working with the federal government does come with a lot of advantages. There are a lot of data sets that could be used to train the models that maybe are not available to everybody, right? So, if every other AI lab is working with the federal government and you're the only one not doing so, well, that might be a problem, right? Because Elon Musk and xAI and Grok, they're doing a lot more with the federal government. So, the more that Anthropic is wishy-washy about what it's trying to do, the more the competitors are going to get in there and do what it takes, and that will give them a huge leg up. Not just in terms of money, though the federal government has a buck or two to spend on various defense contractors, but also there are almost certainly a lot of data sets that are just gold mines for AI labs. And obviously, being nice and close and cozy with the federal government, I mean, there are a million benefits to it. Being deemed a national asset, a national security asset, part of the crucial supply chain, a US interest, so to speak. It's powerful. For the oil companies over the last however many decades, their interests aligned with the interests of the US, and therefore they were very well protected. If you messed with some oil industry interest, you might find yourself being invaded by the US military. So being on their good side comes with a lot of advantages. Being on their bad side, as we've just talked about, is really, really bad and scary.

And yeah, two days. Friday, 1 p.m. Eastern time, I assume, but whatever the case, by Friday afternoon we're going to know how this story wraps up. Friday is two days away. I remember just a year or two ago, when I was starting this channel, a big portion of the AI discussion was like, "Oh, it can't think. It can't do anything. You're so silly. AI is just a mirage. It's fake. Don't fall for the AI tricks." People basically called you an idiot for thinking that AI could be useful in any shape or form. What an interesting difference just two years makes.
By the way, a lot of people are saying right now that the US government is going to start nationalizing AI labs, because they're going to realize what a big deal this is, and so on. I don't think so. The US government doesn't nationalize things. They do bring them into the fold. It's not official, but it's there. You know, if you look at how the big tech companies get treated, how Google gets treated, how the oil companies get treated, they are US national interests. They get special protections and special privileges, and they have various back channels. So, if the government wants them to do something, they're going to play ball. But nothing is nationalized on paper. It's all separate entities. So it's, you know, same same but different. If you made it this far, thank you so much for watching. My name is Wes Roth.
