Wes Roth · Civilisational risk and strategy · Spotlight · Released: 2 Mar 2026

Claude kill count going up

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Wes Roth. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
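The page does not publish the exact colours or score scale behind this tinting, but the rule it describes (amber at the risk-forward end, a cyan midpoint for mixed, white at the opportunity-forward end) can be sketched as a two-segment linear blend. A minimal sketch follows; the colour values, the 0-to-1 normalisation, and the function names are illustrative assumptions, not the site's actual implementation.

```ts
// Sketch of the bar-tinting rule described above.
// Assumptions (not confirmed by the page): slice positions are normalised to
// 0..1 across the strip, and the strip is a two-segment linear blend:
// amber -> cyan over the first half, cyan -> white over the second half.
type RGB = { r: number; g: number; b: number };

const AMBER: RGB = { r: 255, g: 191, b: 0 };   // assumed risk-forward end
const CYAN: RGB = { r: 0, g: 188, b: 212 };    // assumed mixed midpoint
const WHITE: RGB = { r: 255, g: 255, b: 255 }; // assumed opportunity-forward end

// Linear interpolation between two colours.
function lerp(a: RGB, b: RGB, t: number): RGB {
  return {
    r: Math.round(a.r + (b.r - a.r) * t),
    g: Math.round(a.g + (b.g - a.g) * t),
    b: Math.round(a.b + (b.b - a.b) * t),
  };
}

// Tint for a slice whose position on the strip is 0 (risk-forward) .. 1 (opportunity-forward).
function barTint(position: number): RGB {
  const t = Math.min(1, Math.max(0, position));
  return t <= 0.5 ? lerp(AMBER, CYAN, t / 0.5) : lerp(CYAN, WHITE, (t - 0.5) / 0.5);
}
```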


Across 26 full-transcript segments: median 0 · mean -4 · spread -33 to 0 (p10–p90 -8 to 0) · 8% risk-forward, 92% mixed, 0% opportunity-forward slices.
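As a rough illustration of how a summary line like the one above could be derived from per-slice scores: the individual slice values and the exact band cut-offs are not shown on this page, so the thresholds, the simple percentile helper, and the function name below are assumptions for the sake of the sketch.

```ts
// Illustrative sketch: aggregate statistics over per-slice scores.
// The actual slice values and band thresholds are not published here.
function summariseSlices(scores: number[]) {
  const sorted = [...scores].sort((a, b) => a - b);
  // Simple nearest-rank percentile (an approximation, not averaged medians).
  const q = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  const mean = scores.reduce((s, x) => s + x, 0) / scores.length;

  // Hypothetical banding thresholds for the three bands named in the legend.
  const RISK_CUTOFF = -50;
  const OPPORTUNITY_CUTOFF = 50;
  const share = (pred: (x: number) => boolean) =>
    Math.round((100 * scores.filter(pred).length) / scores.length);

  return {
    median: q(0.5),
    mean,
    p10: q(0.1),
    p90: q(0.9),
    riskForwardPct: share((x) => x <= RISK_CUTOFF),
    mixedPct: share((x) => x > RISK_CUTOFF && x < OPPORTUNITY_CUTOFF),
    opportunityForwardPct: share((x) => x >= OPPORTUNITY_CUTOFF),
  };
}
```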

Slice bands
26 slices · p10–p90 -8 to 0

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 26 sequential slices (median slice 0).

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safety · wes-roth


Episode transcript

YouTube captions (auto or uploaded) · video Hzm3D7i3NFk · stored Apr 2, 2026 · 684 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/claude-kill-count-going-up.json when you have a listen-based summary.

This Anthropic situation keeps getting crazier and crazier. So at this point, you've seen the joint US-Israel strike on Iran, codenamed Roaring Lion by Israel and Operation Epic Fury by the United States. There was a question that I think a lot of us were asking during that time. Is Claude being used in some way for that operation? And the answer is yes. This isn't just a rumor. It's been confirmed by numerous outlets, multiple sources. At this point, we're pretty sure this is happening. And according to the Wall Street Journal, CENTCOM was using Claude during this operation for, number one, intelligence assessments, number two, target identification, and three, simulating battlefield scenarios. And by the way, this was after Trump, you know, quote unquote, banned Anthropic's technology from use by the federal government. So, CENTCOM declined to comment on what systems they're using in these operations, but sources confirming that Claude was used in the Iran strikes are the Wall Street Journal, Axios, The Guardian, Financial Express, W, multiple others. There's no major outlet that's contradicting these core claims. So, at this point, we have multiple, pretty much, confirmations that Anthropic's technology is being used in lethal military operations. And it's also beginning to look like Claude is so deeply embedded in the military infrastructure, in that network, that it's going to be very, very difficult to just pull it out. It's too deeply embedded to just be able to be pulled out overnight. Since then, Dario Amodei came out with another interview, and a lot of the views that we've been hearing, they're getting a little bit more refined. So Anthropic's position on all this, as far as I can tell, never shifted, but we're sort of slowly refining what exactly it is that they're saying. They're saying this: that they support all lawful military/national-security uses except for two, and those are their sort of red lines. So they're saying the government can use these for any lawful purpose. We just have two specific reservations that we're not willing to cross. One is that the current AI models aren't reliable enough for autonomous weapons, right? There's a risk of friendly fire, civilian casualties, etc. This is where we see kind of this, like, refinement process, because in the beginning kind of like the quote was, we don't want them to be used for autonomous weapons. Then as people are asking more probing questions, the more kind of refined version is now, well, they're not reliable enough currently, which also seems to be different, from "we don't want it to be used for this purpose" to "it's not ready to be used for this purpose yet." So that's an interesting sort of pivot, you could say. So it's kind of an interesting distinction, I think. And the second red line is that they can't be used for mass domestic surveillance. That this violates fundamental rights. In the interview, Dario was saying that basically, due to this new level of technology, things that weren't a problem before could become a problem now. And we've discussed this idea on the show before and on the podcast. This idea that whenever you're going anywhere, you are sort of just like oozing data, right? There's some camera that picks you up. You walk past a neighbor's house, they might have the Ring doorbell. Your phone is pinging various devices, etc., etc., etc. A car drives by. You get picked up on the camera.
Basically, every minute you exist, there's these little shreds of information about you that are just kind of, like, wafting out there into the world and getting picked up and recorded. And maybe that wasn't such a big problem, because we didn't have the technology to take that jumbled mess of data and organize it in any coherent way. But with AI, we now do. Now, those little shreds of data can probably track your location 24/7, build up a pretty good profile about you, and even start making some pretty accurate guesses about maybe things you believe, political preferences, just tons and tons of stuff that I think we would all be surprised how accurately these models could predict from that data. And that was kind of Dario's point, that we kind of have to update our sort of understanding of the law when it comes to, you know, surveillance, because we have this new technology that allows for ways to surveil the population that simply weren't available before. Meanwhile, Sam Altman comes out and says that they've signed a contract with the Department of War for the use of OpenAI technology. And, uh, interestingly, it almost seems like OpenAI got the same exact safeguards that Anthropic was banned for requesting. At the same time, Sam Altman comes out and posts an AMA, an ask-me-anything, basically taking questions from people on Twitter/X about what has happened. A couple of very interesting points to understand. Number one, he says pretty straightforwardly kind of where he thinks the control should be, who should have the control over how these AI systems are deployed. Because if you missed kind of the original argument between Anthropic and the Pentagon, there isn't sort of an irrational side, or a side that's asking for something that's crazy. The government side is simply saying we can't have this third-party private company dictating terms to the US government, to elected officials, which, whatever side of the debate you're on, you kind of have to admit makes sense. In any other situation there would not be any debate about this. If Ford wanted to sell trucks to the military but wanted to have some governance in there that restricts when and how those, you know, trucks can be used, most people would agree that's crazy. But if you look at it from the opposite perspective, you put yourself in the shoes of Dario Amodei. He believes sincerely that he is building something that would be fundamentally world-changing, something that will become the most powerful technology humanity has ever built. And there's some, like, debate about that, but if you've seen this channel before, I'm a lot closer to where he's coming from. A lot of the stuff that's happening now, with the market SaaS apocalypse, with the $1 trillion meltdown, with this fight between the Pentagon and AI labs, if you've been watching this channel for the past few years, I hope none of this is a surprise. And by the way, the next wave that's coming, how we have to kind of maybe rethink how the economic system works a little bit because of AI and automation, when that thing hits, I hope that won't come as a surprise as well. But the point being is, if you're building something that you think can have just catastrophic consequences for the world, for humanity, that poses an existential risk potentially if you get it wrong. But not just that, because a lot of people think, like, if we build it, we're all going to die. That there's this Terminator scenario.
And one thing that I've always really respected about Dario is that he isn't just looking at the x-risk, the alignment issue. He's also thinking about the other really bad scenarios that we have to think through. I refer to it as, you know, there's the p(doom), like what's the probability of an apocalypse caused by AI. We also have to think about the p(1984), right? That kind of Orwellian novel where the government is dystopian, takes over, just monitors everybody. You can imagine a future with robots and AI where a tyrannical regime will be the last in history, as in no one will ever be able to overthrow it. Or even if it doesn't get there, you can still see how very powerful AI that's put in the hands of the wrong government leaders or the wrong government structures could really cause a lot of widespread suffering. So, you can definitely see Dario thinking about this here, I think. Right? So, he's like, what happens if we allow this technology to be used for surveillance of citizens that's completely unrestricted? This is kind of an obvious first step towards that p(1984) scenario. So he's going, okay, we're not willing to cross those red lines. And it makes sense if you understand his thinking, if you understand the mission that Anthropic is on. And by the way, Anthropic is an incredible company in the sense that they have less funding, fewer researchers, less capital than some of the other labs. And they've been able to create something that, it could be argued, is in many ways the best model currently available. It depends what the use cases are, but just look at how incredibly difficult it is for the military, for the United States government, to just pull the plug and replace it with something else. Like, again, they have Grok and OpenAI and Google DeepMind. I believe all those companies at some point have changed their terms of service to kind of reflect that yes, they will be using their AI models for some sort of military applications. We've covered it when Demis Hassabis posted a blog post explaining his reasoning for why he's doing that. We've covered when OpenAI did that. I don't remember at this point when it happened. And of course, we know Grok is all up in there as well and is very open and ready and willing to work with the government agencies, some of the military branches, etc. Each one of those companies has excellent, excellent models. They're brilliant in their own ways. But for myself, since maybe December of last year, sometime in the last 3-4 months, I've pretty much completely switched over to Claude. At first Opus 4.5, now Opus 4.6. And boy, has it been sticky. For a lot of the purposes that I use it for, it's very hard to try to replace it with something else. There's nothing quite like it. And so I'd be kind of curious to see how this plays out, because there's a non-zero chance that maybe there's some backpedaling that's going to happen. Again, I wouldn't bet on this, but I wouldn't be surprised if there's like a 10% chance that Anthropic somehow manages to continue working with the government. By the way, in part, I think, because of the efforts of Sam Altman, and that's probably not a very popular thing to say, but I do want to give Sam Altman some credit for what he's been doing. Number one, he was trying to work out some deal with the Department of War, the Pentagon. And when that deal was worked out, he's also asking for that deal to be available to all the other AI labs.
So he's saying, if OpenAI can agree to this, can we also extend this to other labs, and hopefully they can get the same treatment, the same deal. And number two, Sam Altman came out and said very straightforwardly, very directly, that he believes that the government should not invoke that supply chain risk designation for Anthropic. And there's a lot of debate right now about how bad that would be for Anthropic. I personally believe that would be extremely bad, potentially an existential [clears throat] threat for it. I'll explain why in just a minute. But the point being is, as far as I can tell, Sam is trying to help. And I know a lot of you won't believe his efforts and say, "Oh, it's just a show," and maybe I have no clue. But for whatever it's worth, I very much agree that Anthropic should not be designated as a supply chain risk. That seems just super aggressive and an overreach. You can say they've kind of, like, already neutralized Anthropic. There's no need to, like, terminate it just to show that you can do it. It almost kind of seems like they're maybe trying to make an example out of Anthropic. Like, I try to be pretty neutral on these topics and try to highlight both sides of the issue. Uh, but here, if the US government does go through with it, I personally believe, my opinion is, that would be incredibly stupid. There would be no benefit to it. It would be an abuse of power and it would be undermining, you know, the US's advantage by just taking out a company just because somebody got a little bit heated about an argument they were in. In game theory, stupidity is defined as somebody taking an action that hurts others while they don't receive any benefit. Like, it hurts them, it hurts others, and nobody receives any benefit. Crippling one of the top four AI labs in the US just cuz you're emotional about it would be stupid in the literal sense. Let me know if you agree. Sam was asked: if Anthropic had received the same terms that they received, would Anthropic have said yes? Sam said, "I don't know the details of what they received. If they received the same offer we did in the end, then yes, I think they should have done it. But that feels like an obvious thing to say. And of course, they can have a different stance." Another question that was given to Sam during this AMA is, you know, are you troubled by this idea of the Department of War effectively blacklisting Anthropic? Does this make you concerned about the future, where things are going? Sam saying: yes, I think it's an extremely scary precedent and I wish they handled it in a different way. I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I'm still hopeful for a much better resolution. And again, this is why I kind of want to make sure we highlight what to me seems like Sam's efforts to maybe make the situation better, because again, he's got the contract. He's working with them, right? He doesn't really have anything to gain from pushing back on the government, from protecting Anthropic. I'm sure some of you think it's a PR stunt. I feel like he's trying to help them out. As far as I can tell, the situation got heated. You had Dario on one side, you have Hegseth on the other side. And those two cultures, those two personalities, they do not work well. They're like oil and water. You got the alpha male military dude. You got Dario, who's more like the non-neurotypical Silicon Valley tech nerd/philosopher, whatever you want to call him.
Like, those people are not going to see eye to eye. They both strongly believe in their mission, right? So, yeah, things got heated. I'm sure Dario probably did some things and said some things that weren't rational, maybe a little bit emotional. Maybe he probably made some mistakes somewhere during that negotiation, because again, he was one of the first people working with the government, with the military. So, it's not like they dragged them in there, right? So, he signed the contract, he was in there. So, somewhere along the line, I'm sure there were some mistakes that were made. At the same time, he held his ground. He held his beliefs. He did not back down from what is probably the most powerful entity in the world, right? Like, the US military-industrial complex is no joke. You don't want to mess around with those people. You know, you don't want to be on their bad side. So, the two of them butted heads, fine. They don't see eye to eye, there's difficulties, there's miscommunications, there's emotions, fine. But I agree with Sam here. If the government just rolls over Anthropic, you know what? I will hold them more responsible. I don't see that as a responsible use of power. Another question Sam received is, why would OpenAI come out and say that they do not think Anthropic should be labeled as a supply chain risk? Sam responds: enforcing this SCR, right, the supply chain risk designation, on Anthropic would be very bad for our industry and our country and obviously their company, Anthropic. Which I agree with 100%. It would be kind of a boneheaded move. Again, I'm not taking sides in the Pentagon and Anthropic thing. I see it as, like, two people are having an argument. You might lean one way or another, but then one of those guys walks out, comes back with a gun, and shoots up the place. But he's wrong. Like, don't do that. That's what this SCR designation seems to me like it would be. Whatever you guys had a disagreement about, that's crossing a line, at which point, like, I don't care. That's just wrong. So, Sam Altman is saying, "We said this to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of a de-escalation." And he announced this on Twitter prior to this whole thing happening. And again, some people would say it's political theater, whatever, a PR scheme, right? He could have just made the contract and continued the contract. Maybe even said that, yeah, Anthropic deserves this, they're bad. No one would criticize them for that. This doesn't seem like he's trying to win goodwill points by just saying stuff. This seems like a genuine effort. So, he continues, I feel competitive with Anthropic for sure, right? They're their competitor. But successfully building safe superintelligence and widely sharing the benefits is way more important than any company competition. I believe they would do something to try to help us in the face of great injustice if they could. So I guess if the situation was reversed, they'd help us as well. We should all care very much about the precedent. I saw in some other tweet that I must not be willing to criticize the Department of War. It said something about sucking their wow, uh, too hard to be able to say anything critical, but I assume that was the intent. But his point is well taken. If you're trying to not rock the boat, you don't say these things. You don't go on Twitter, cuz this post at this point is, I mean, probably in the aggregate approaching 10 million views or thereabouts.
But for whatever it's worth, I'm with Sam Altman on this particular thing that he's talking about. I'm with him. He's saying, to say it very clearly, I think this is a very bad decision for the DoW and I hope they reverse it. If we take heat for strongly criticizing, then so be it. At a conference in India like a week or so ago, Dario Amodei, it seemed like he refused to shake Sam Altman's hand. The whole thing was kind of cringy, and I think it would be very natural at this point to be like, "Oh, your competitor just got, you know, completely destroyed. Just let that happen. Just ignore it, and you know, you got the contract that they lost. Just sit there, be quiet, and just enjoy the spoils of your efforts." That's the playbook. But he's not mincing words here. He's saying it would be horrible if the Department of War pushed through this thing. And I think he's doing it sincerely. I think he's trying to prevent this really bad thing from happening to Anthropic. If this designation goes through, there's a lot of sort of discussion about exactly what that means. A lot of people are saying, "Well, it's just limited to military contracts." I don't know if that's quite the case, because, number one, it seems like other contractors that are working with the federal government, if something is a supply chain risk, then they would have to sort of make sure that the portions of their business that are providing government contracts are not intersecting in any way, shape, or form with this supply chain risk company. So for example, AWS, right? Amazon. They would probably have to take money and effort and time and engineers to sort of quarantine any data of Anthropic's, any work that they do with them, from any portion of their business that can or will or is planning to provide any sort of services to the federal government. And by the way, Dario pretty much said that they will be fighting that on legal grounds. They're saying that the government doesn't have the legal authority to just blacklist them like that and kick them out, not just from the federal government, but also blacklist them from working for the companies that are federal contractors. So I think, on the whole, what Sam Altman is saying is, number one, that he's siding with the idea that the democratically elected government, they should be the ones that are making the final calls, not unelected private companies. And while it's true that there's some companies that seem more responsible and trustworthy, and their mission is a mission that we agree with and believe in, I think that in general, in these situations, you do want democratically elected leaders to have the final say. And if we could pick and choose those companies, maybe that would be a different matter. But in reality, we can't. So it's more about the precedent. Do you want any company to be able to dictate terms about the military to the government? I like Dario, but if tomorrow he gets replaced with some lunatic, I have no way of voting that person out. The second point that Sam made is this: that there's a question kind of behind the question that isn't really being articulated, and that is, what happens if the government tries to nationalize OpenAI or other AI efforts? Now, in one of my previous videos, I said that the US government doesn't really, like, nationalize things. It kind of, like, brings them into the fold. The stuff that it needs kind of becomes part of US interests, and there is some fusion, some connection, but it's not official.
They didn't take over the oil industry in the decades past, but protecting those oil interests was part of the agenda. Same thing with big tech, and now same thing with AI. So Sam is asking, what happens if the government tries to nationalize OpenAI or other AI efforts? And in fact, he admits that he's wondered sometimes if it might not be better if AGI was being built as a government project. There may be some advantages to that, potentially. And he's saying, that said, I do think a close partnership between governments and the companies building this technology is super important. This is something that we've discussed on this channel a year or two ago, saying there's no way that superintelligence or AGI is going to be built in some hippie-dippy Bay Area lab and none of the world's governments just takes notice. In what world would that have happened? They're going to be intertwined in some way. And we're already seeing kind of the carrot-and-stick approach that sort of makes sure that everybody understands where they're at, who's calling the shots, etc. And Sam's third point is that people take their safety, in the national security sense, more for granted than he realized, which I think is a good thing on balance, but I don't think shows enough respect for the tremendous work it takes for that to happen. So the best-case resolution here, I think, is this. First of all, this supply chain risk designation just goes away. I don't think it's reasonable. I don't think that's what the law is there for. Just the fact that Claude is being used throughout the classified systems of the federal government. It's used to carry out attacks. There's a lot of agencies that rely on it, right? And at the same time, calling it a supply chain risk is kind of weird. Like, is it a supply chain risk, or is it so good and reliable that all of you got addicted to it? Like, it's either one or the other. And number two, hopefully Sam Altman is able to maybe patch up the situation a little bit. Again, I know some people are not going to believe the efforts, but it does seem that he's taking a stand here. He could have just sat this one out, picked up the contract, and said nothing. He's actively saying to, you know, the hand that feeds, "Hey, this is not cool. Don't do this." Also, there are rumors that there's a lot of other agencies in kind of the federal system, military agencies, etc., that really rely on Claude for a [snorts] lot of the work that they're doing. A long, long time ago, before Trump was president, he was pretty open about the way that he negotiates his business deals. He comes out with a crazy over-the-top offer that is just insanely bad for the opposition. That offer is violently rejected. But then there comes a second offer, an offer that on its own would have seemed like it was unfair, it was over the top, but in contrast to the insanity of the previous offer, people are like, you know what, maybe this isn't so bad. I couldn't find the portion where Sam Altman was answering questions saying that basically he believed that Anthropic could have gotten the same deal had cooler heads prevailed. I couldn't find it. I forgot exactly how he phrased it, but it seems like, in theory, the deal that OpenAI got could be on the table for Anthropic. The language in that contract, it does say that it can be used for all lawful purposes.
And there's a certain language about how autonomous weapons are used, how surveillance is used, but a lot of it is still sort of pointing to the final decision maker as, you know, being the laws of the US, being the Pentagon, being the Department of War. Whereas, and I think this is according to Sam Altman, he was saying that with Anthropic, they did want to have a little bit more policy control, and that was a step too far for the Pentagon. That's where they're not willing to budge. So I think the best possible outcome is that cooler heads prevail, that a deal is worked out, Anthropic gets to continue doing its thing, because keep in mind, with Anthropic and Claude there were no issues with being involved in lethal military missions. We've seen that in the operation in Venezuela. We've seen that in Iran. And I'm sure, for those two that got leaked, somehow there's a million more that we don't know about. And as Elon Musk said a while ago about this whole situation, Anthropic winning was never part of the possible outcomes. Them having a say in how the models get used, or them just walking away and trying to build AGI independently, maybe that's just not in the cards. But maybe there's some compromise that can be reached that, you know, no one is happy with, which means it's probably a fair deal. Let me know what you think. I'm very curious. If you don't agree with some of my takes, that's totally cool. Just let me know why. But if any of this seems black and white to you, like it's crystal clear who the good guys are, who the bad guys are, you might not be seeing the picture in its entirety. Because remember, only the Sith deal in absolutes. With that said, my name's Wes Roth and I will see you in the next

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.
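A minimal sketch of how a "mirror rule" like the one described above could rank counterbalance picks. The full methodology is documented elsewhere and may differ; the sign-flipped mirror score, the lens bonus weight, and the item shape below are all assumptions made for illustration.

```ts
// Illustrative mirror-rule ranking under assumptions:
// each library item has a spectrum score and a lens label, and candidates
// are ranked by closeness to the mirror (sign-flipped) score of the current
// item, with lens-aligned candidates preferred via a simple bonus.
type LibraryItem = { id: string; score: number; lens: string };

function rankCounterbalance(current: LibraryItem, candidates: LibraryItem[]): LibraryItem[] {
  const mirror = -current.score; // assumed: "opposite side" means the sign-flipped score
  const LENS_BONUS = 10;         // hypothetical weight for same-lens picks
  const cost = (c: LibraryItem) =>
    Math.abs(c.score - mirror) - (c.lens === current.lens ? LENS_BONUS : 0);
  return candidates
    .filter((c) => c.id !== current.id)
    .sort((a, b) => cost(a) - cost(b));
}
```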
