Library/Spotlight

Back to Library
Wes RothCivilisational risk and strategySpotlightReleased: 28 Feb 2026

CLAUDE GOT BANNED - NOW THEY WILL KILL IT (LIVESTREAM)

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Wes Roth. Editorial summary pending review.

Perspective map

MixedGovernanceMedium confidenceTranscript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forwardMixedOpportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).

StartEnd

Across 69 full-transcript segments: median 0 · mean -2 · spread -170 (p10–p90 -100) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.

Slice bands
69 slices · p10–p90 -100

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • - Emphasizes governance
  • - Emphasizes safety
  • - Full transcript scored in 69 sequential slices (median slice 0).

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safetywes-roth

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video eAWkezTcIlM · stored Apr 2, 2026 · 2,017 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/claude-got-banned-now-they-will-kill-it-livestream.json when you have a listen-based summary.

Show full transcript
testing, testing, testing. Hello. How is everybody doing? Thank you for piling in here. Let me just run some quick tests. Uh, can anybody tell me if they can see me on on X? Am I live on X? We'll be getting started in just a second here. All right, we got some people piling in. Thank you so much. Let's see who's where's where's everybody um from right now. Where are where is everybody? So, okay. I might be having an issue streaming to both the um YouTube and X. Might have to might have to X the X stream. Yeah, I think I'm going to do that. All right. All right. Yeah, we're we're pretty much ready to get started here. So, thank you everybody for being here. I appreciate you all being here. Let's see. Where's everybody from? Uh, we got Denmark, US, Texas, Alberta, Canada, US, California, Cali, US, Arizona, Croatia. A German living in Argentina, Idis 141. That's highly sus. I'm not going to lie. Is it true that Claude got banned? Well, yes. So they banned it from all federal sort of agencies, but they didn't stop there. And that's kind of like what what we need to talk about today. Cuz if they stopped there, I mean, honestly, it would have been it would not have been even that bad, I don't think. Let me make sure you guys can hear me in this stream as well. All right, let's get rolling. So, yeah, I apologize for the late delay. Thank you so much for telling where everybody's from. London, Poland, India, New Jersey, Oklahoma, Bavaria, Israel, Israel, Netherlands, Zambia, Spain. Oh boy. Yeah. Uh, tons of Yeah. Thank you so much. Estonia, Jordan, Canada, Sydney, Earth is my uh country. Okay, that is is good from from the German living in Argentina. Thank you. Thank you everybody for joining us today. And I would like to welcome everybody to the final season of everything. We have entered the the final season, episode one. So, don't freak out, Wes. Yes. Don't panic. Rule number one, don't panic. Rule number two, always have a towel. That was the hitchhiker's guide to the universe, to the galaxy. Did I see Daario's uh interview on Dine? That's a good question. Did I Let's actually take a look. Detroit City I hope you're using AI to summarize. Yeah, with the agents uh I've been sometimes it's just like you have to uh used to be not that long ago really I would go over everything read everything. I didn't trust AI to capture the nuances or to even be able to find the latest. Now, it's been a lot easier to get kind of summaries. And I mean, at this point, you kind of have to. Uh Sam, thank you so much. I appreciate you. Love you, too. Um, yeah. So, it's been it's been getting crazy. I I said we've kind of entered the singularity or like entering it, I guess. Point being, stuff is getting too too fast and everything's moving a little fast. A little too fast for comfort. So, it's really hard keeping up with everything. [snorts] It's saying my bit rate is a little bit low, so I apologize if there's issues. Uh, what I want to try is I kind of wanted to start by watching Daario's interview uh with Daario. Um, and uh let's let's maybe start there because I haven't seen it yet. I saw the clips. We're probably not going to watch it in its entirety and I'm definitely going to be watching it on, you know, not one X just so we can kind of get through some of the finer points and at the same time I am going to um get a summary of the biggest things that we kind of need to to discuss from the interview. So my question for everybody, what do you think happened? right between HexF, so between the Pentagon and Daario, like they're not room because some people are saying, "Oh, it's autonomous weapons. It's this, it's that." Um, some people are saying, "Well, you know, Anthropic wants to be able to remotely control what the AI does." I mean, do you do people get kind of what's happening? What is your sort of take on what the conflict was in in a few words? if you had to like really narrow it down. Bloodlines in New Times. Hi from Sweden. Hello. We've been calling the final season for centuries. Um, the end is nigh, right? Yes, I get it. Um, I just mean from a sense that I'm not saying it's the end of the world as we know it or whatever and I feel fine. But, um, I'm not saying that. I'm saying that if this is indeed the biggest technological unlock in the history of humanity, which it might be, because up to this point all the discoveries have been sort of made by the human brain. In the future, if things sort of go as I and as a lot of us, I think believe it will go, AI will take over a lot of the discovery. And you know, maybe humans are not as like influential, if that makes sense. Not to say we're not important, but we're not going to be tied to our jobs or our intelligence or our production, right? just like, you know, you know, we're not as strong as a train that can pull tons of stuff, but we don't like feel bad about that. Like, we just don't have to haul things around. We get in a car, we drive around. That doesn't like hurt our pride or whatever. Uh, I think the same thing is going to happen in the future with intelligence and discovery and everything else. Um, but we got to handle this transition right. We got to get alignment right. We got to get the economic system right. And all that has to happen not 50 years from now, not a hundred years from now. All of that has to happen like you know this season, right? That's why I'm saying this is the final season in a sense that you know like a lot of these things, a lot of these loopholes that have been open, you know, we're going to start closing them and figure out figuring out like what's happening. uh win some hacks. I think the effective altruists in anthropic did what they always do to any company, ruin it. So, interestingly, there is a connection. It's funny. It's hard not to connect this to um what happened with the Sam Alman thing where they where they like kicked them out because you know there's both sort of this um EA uh effective altruism connections and they they tend to have big plans and like smart people they tend to fumble a lot it seems to me they tend to fumble certain things. Um, just like I think they they they attempted that coup and they they fumbled it. It was kind of their game to win or to lose, however you want to phrase that, and they've made some key bad decisions. Um, and it didn't go through. This is like similar. I mean, man, like Anthropic was in in many ways in the lead. Uh, and I'm worried that it might not survive this. In fact, if they do pull the trigger on that supply chain designation as the, you know, a risk to national security, whatever that is, you know, well, let me ask everybody like if if they go through and they designate them as a supply chain risk, like what do you think happens to anthropic 12 24 months months down the road? Does it survive that? Is it like business as usual? Is it a big blow, but they're going to be okay? Or is it, you know, the end of Anthropic as as kind of as we know it right now? Let's see. debugger 4493 or I'm sorry 4693. It's all about control. Yeah, I I think Yeah, I think that's spot on. Talk about anthropics acquisition of Vers uh Versep. So that's Emanuel G1. So I I've heard about that. That's very interesting because I mean they're moving forward with the whole, you know, operating computer use and stuff like that for these AI agents to be able to use computers. That's one thing where I think that they're really lacking. Um, that's been kind of like the weakest point, but it's been improving rapidly. And in fact, it's funny. This morning I was so tired because a lot of the stuff that the only issues that my AI agents are consistently running into now is when they have to deal with like a web interface. Um they get hit by captures and cloud flare issues and like all sorts of stuff. So in those situations um yeah if we can figure out how they can use everything like a human being would like in real time that would be absolutely incredible. Ghost of Carzan mentioned that's correct. To 6 from Israel thank you thank you thank you for being here. Dario was a former BYU employee of zero morals. Apple did this in 2016 with the FB FBI and thropic will be fine. Google did it in uh yeah I mean 2017 I think it was with project Maven. But that's that's an interesting point and that's something that we that we have to talk about here. Um, Eat Dirt Network. Thank you so much for your $10. It's not a donation. What are they called? Super chats. Thank you so much. I appreciate it. The only hope for Enthropic is for Apple to acquire them. Apple has no AI. They failed with Apple intelligence. This is a huge lifeboat for both companies if they merge. That's interesting. That's interesting. Um, I wonder if they'll be able to still do all of that if they do if Anthropic gets designated as whatever the foreign actor supply chain risk whatever. Oh, Jason Dicks saying it's the same strategy of Trump. Talk big then get more reasonable after. That's you know what that's interesting because that has been in the past in his business dealings kind of this it has been a thing, right? So, he comes out with some insane proposition that's like like there's no way. But then any follow-up proposition seems like more reasonable or more more sane because you're sort of comparing it to the previous one. So, maybe this is what's happening, right? They're like, like, "Hey, Claude, we're going to use all, you know, these cold era laws and all these weird stuff that may be somewhat inappropriate to use in a situation to just break you." Um, you know, that's our first offer. And then the second offer is a little bit more reasonable. Why Apple for Eat Dirt Network? Why Apple? I mean, I I I I get why Apple would want them. My question is why wouldn't somebody else acquire them? Is there something particular about Apple? I I guess yeah, Apple doesn't have any. They have the cash for it, man. Wouldn't that be interesting? I don't see Anthropic going the Apple route. Didn't Apple just make a deal for Gemini yet? I mean, I think they're paying billions, are they not? And certainly Apple does seem or you know Anthropic and Apple they seem from like a brand perspective or whatever culture perspective seem very very similar somehow in some way or another. Where's the 80s hacker hacker glasses? Well, I've I've healed up my wounds my gruesome wounds that were in my face and now I don't need although you know what I did enjoy wearing them. It was uh it was kind of cool. So I don't know. I just I feel a little bit like I don't know like what's the word? Douchy. I don't know. It's just like why why are you wearing glasses inside? I don't I don't know. But yeah, I did enjoy that period of time when I had an excuse to wear them. Apple won't be allowed. Why? Apple doesn't need a model to provide the hardware that makes real money. Is that true? I mean, if Apple had a very good AI model, they wouldn't have to rely on Google, everybody else. Like, you don't want to be relying on somebody else for that. Uh, and yeah, they do have kind of like the hardware layer. Very popular, very impressive. Most people are running their agents on Mac minis or or or Linux. So, I don't know. Yeah. I mean, um, Kilabytes 808, found your channel. Hello from Arizona. Hello. Mac OS iOS isn't Linux based. They have their own kernel. Yeah, I guess they're like Unix based or like they have the same they're they're they're same same but different, right? They kind of have the same ancestry or whatever. But yes, uh, good point. It's not these, uh, these models aren't good. It's not deep learning, it's shallow learning. But I mean, there's obviously use cases, right? If you have the the government like battling over this and nations racing, I mean, they see value in it. Um, so as posted on X, you don't submit a bid to DoD, win the contract, and then red line it after. It's the monkey pot of business. Fisher deck, what's up? What's going on? Hi from Russia, Deathmaster 666117. Hello. What time is it there, dude? It's like the middle of the night. Dude, I saw the funniest thing in Claude's uh I think it was Claude where like they were testing it for um they were testing it for you know they alignment if it does anything bad or whatever. And so it has been responding to a lot of like different users anonymize an anonymized sort of conversations or whatever. And they found one where the guy's like, "I'm very depressed. Like life is so bad. It's 3:00 in the morning. I'm I'm drinking vodka." And just I don't know if I could go on. And and Claude responds it. This is in English, right? And Claude just switches to talking Russian, right, [laughter] to the person. And the researchers are like, "Why? There's no indication. Like why did you switch? there's no indication that that person speaks, you know, Russian. But Cloud was like, "No, no." Yeah, there is. [laughter] Just trust me, bro. It was very intuitive in that sense. I just found it hilarious. It's like 3:00 in the morning drinking vodka. Where else is where else is he going to be from? I just I I thought that was hilarious. Um I'm kidding. Obviously, you know, those bots making, you know, assumptions like that in general is not going to be great. I just thought that particular situation was hilarious. Um, you got to try So, Fisher Deck, you got to try Gemini Pro for SVGs. It's actually insane. Yeah, man. It's the SVGs. I don't know why that became its own sort of like test for these models, but it's a good one cuz you can see how excellent. Um, it's been getting in the SVG field cuz it used to be all of them, all the models, they used to be like so simplistic and then they got to be, I don't know, pretty good to the point where now it's like these art pieces almost. Anyways, let's quickly um they pop up with the weirdest stuff every once in a while on on on YouTube, you know? It's like uh I don't want people thinking weird stuff about me because of the recommended videos that they do. Anyways, let me play this and see if we can hear it. >> You taking time. You are Dario. the CEO of Anthropic. Right? >> That's correct. >> Great. Well, I my first question to you is why won't you release Andropics AI without restrictions to the US government? >> Yeah. So, you know, we should maybe back up a bit for a little bit of context. So, um you know, Enthropic actually has been the most lean forward of all the AI companies in working with the US government and working with the US military. We were the first company to you know, put our models on the classified cloud. We were the first company to make uh custom models for uh national security purposes. We're deployed across the intelligence community and military for applications like cyber um you know combat support operations various things like this. And you know the reason we've done this is you know I I believe that we have to defend our country. I believe we have to defend our country from autocratic adversaries like China and like Russia. And so we've been we've been very you know we've been very important. We have a substantial you know um public sector team. public sector team. Uh but uh you know I have always believed that you know as we defend ourselves against our autocratic adversaries we have to do so in ways that defend our democratic values and preserve our democratic values. And so we have said to the department of war that we are okay with all use cases basically 998 or 99% of the use cases they want to do except for two that we're concerned about. One is domestic mass surveillance. There we're worried that you know uh things may become possible with AI that weren't possible before. An example of this is something like taking um uh uh um data collected by private firms having it bought by the government and analyzing and masked by AI that actually isn't illegal. Um it was just never useful before the era of AI. So there's this way in which domestic murder surveillance is is getting ahead of the law. that technology is advancing so fast that it's out of step with the law. That's case number one. >> ICE 010. Tell me more about that. What was the uh please tell me more. I'm very curious. >> Case number two is fully autonomous weapons. This is not the partially autonomous weapons that are used in Ukraine or or you know could potentially be used in Taiwan today. This is the idea of making weapons that fire without any human involvement. Now even those I think that you know they you know our adversaries may at some point have them. So perhaps you know they may they may at some point be needed for the defense of democracy. But we have some concerns about them. First the AI systems of today are nowhere near reliable enough to make fully autonomous weapons. um you know anyone who's worked with AI models understands there's a basic unpredictability to them that in a purely technical way we have not solved and there's an oversight question too if you have a large army of drones or robots that can operate without any human oversight where there aren't human soldiers to make the decisions about who to target who to shoot at that that presents concerns and we need to have a conversation about about how that's overseen and we haven't had that conversation yet and so we feel strongly that you know for for you know those two use cases should uh should not be allowed. >> The Pentagon has told us that they have agreed in principle to these two restrictions and they wanted to strike a deal. Why couldn't an agreement be reached? So there were, you know, there were kind of several stages of this all done quickly and kind of all uh, you know, determined by the kind of three, you know, the kind of very limited reading window that they gave, right? They gave us an ultimatum to, you know, to agree to their terms in 3 days or um, you know, be designated as supply chain risk or the defense production act. I guess we'll get to that later. Um, but uh, during that time um, there were there was a few back and forth. You know, at one point they sent us um, language that, you know, appeared on the surface to uh, meet our terms, but it had all kinds of language like if the Pentagon team is inappropriate or, you know, uh, or you know, or to do anything to do anything in line with laws. So it didn't actually concede in any it didn't actually concede in any meaningful way. And and there were further steps of it that that also did not concede in any meaningful way. So, you know, let me find a few clips. That was I wanted people to see the first part. Um, what I think was the issue. What I think was like really in a nutshell, what happened in that room when they were talking about is is this. So, Hagsth, the the Pentagon boss basically said, "Hey, hey, Dario, are you gonna play ball?" Right? are you gonna do what what we say and not cause issues? Right? And Dario probably said something along the lines of, "Well, it depends." Or he said, "Well, maybe." Right? The the the only correct answer was like, "Yes, sir." And he didn't say, "Yes, sir." And that I think really was sort of the issue. Um because this whole thing kicked off when in the was it Caracus Venezuela when they took Maduro, Claude was used in that mission and there was a leak specifying that. And so what that means is that Claude was used in a lethal military operation. And then there was a meeting between Anthropic and Palunteer. and Palunteer, you know, I guess anthropic employees were asking, hey, you know, Palanteer employees, hey, like what happened? What happened in that situation? And um that got kicked up to chain to Pentagon and Pentagon took that to mean like, oh, Anthropic is trying to mess with our control. They're trying to mess with our ability to run operations as we see fit. And that's what kind of like kicked off this whole thing. So I think it was mainly like it was it was that more so than anything because a lot of the issues that they're talking about. Um you know I those are like the the big things that they're that they're talking about. At the end of the day, I think Anthropic had they wanted to make sure they protect the red lines and the Pentagon was like, "Hey, we can't have you second secondguessing us, right? Because if there's lies in the line, we can't have you second guessing us. We have to be we have to be the ones calling the shots." And that that was the conflict. And there's one or two more clips that I think might be interesting from that episode. So, here's a 30 second clip and then we'll get right back to it. Critics call this an abuse of power. What the Pentagon is doing, what the White House is doing. Do you believe this is an abuse of power? >> You know, again, I would return to the idea that this is unprecedented. >> But is it an abuse of power? >> You know, this has never happened before. This designation has never happened before with an American company. And I think it was made very clear in some of their statements, in some of their language that this was retaliatory and punitive. I don't I don't I don't know what else what else to call it. Retaliatory and punitive. Okay. So this is I think the crux of the issue. So basically up to the point so yesterday when Trump announced that they were cutting ties with Anthropic and for them to get out uh to me you know at first it was kind of like a shock but then once you think about I'm like you know what this is actually not that bad for Anthropic. Anthropic gets to walk away. You know the the Pentagon the US government if they don't want to deal with them they just don't deal with them. everybody go goes their separate ways. That seemed like a good resolution to a complicated and dragged on conflict, right? And then today it seems like the Pentagon is still going to go after the designation to designate them as a supply chain risk which I believe kills the company. Right. So LBF5984 Anthropics is done as a company but somebody will buy them. Yeah, I mean that's I guess that's exactly what would happen in that situation. So yeah, they they they sort of get crippled. Uh they can't go on on their own. They can't IPO, they can't run a lot of the infrastructure and their bread and butter is the enterprise revenue. Now, if you think of yourself as a big banking enterprise or healthcare or one of those like critical sort of infrastructure pieces and you want to introduce AI to your system when you have two choices, right? They're roughly similar, but one is labeled a critical threat to national security, supply chain, whatever, right? You're not going to pick that one, right? So, that would really get blacklisted. any company associated with Anthropic will become blacklisted as well. Uh SAX Sexius. Yes, I I I think that's that's true. Whether it's officially or not officially, that's exactly what's going to happen. And so, yeah, if they're able to um get purchased by somebody else, I guess that would be one way they survive, but again, not as anthropic. Um, so to me that seems this to me it's I I don't get I don't know if I agree with that, right? Because again, if if there was issues, if there was back and forth and pro and problems, fine. You know, clawback their their contract, kicked them off the federal network, whatever. It's whatever. That's that makes sense to me. coming up, you know, you know, sort of like going after them and just executing them. The question is why? You know what I mean? Like that seems Yeah. punitive, whatever. So, I'm still hoping that that's like somebody said, might be a negotiating tactic or it might be an actual tactic to kind of like get them, you know, cut them down to size and have them be purchased by an Apple or someone like that. [snorts] >> [gasps] [snorts] >> Uh J. Harris 8939 SC designation is just military contracts not all government contracts CS being what supply chain supply chain. So is that true? True. I mean, I understand that that might be technically true. I think I I think most people are sort of aware and assuming that this is going to conflict with a lot of the infrastructure cloud contracts and a lot of the big tech um companies like I I you might be technically right but I don't know if that's true in actuality as in like can you get that sort of designation and have completely be completely unaffected in your relationships with large enterprises and companies like Google and AWS and etc etc etc. Um, by the way, please people post that in in in the comments. Like, do you think it's possible that you get that designation and it's just there's a very clear line. You lose your government contracts or use your military contracts and it affects your company in zero other ways outside of it. Like it's a clean divide. I just don't see how, right? I mean, we know that Microsoft and Google and all of them, right? they have a lot of government government contracts AWS right cloud contracts Microsoft right recently so there's this overlap between you know kind of like the military the government and those things and if anthropic is relying on a lot of that infrastructure to continue to grow and train models and stuff like that um that's not good also investors need to be able to you know investors don't like regulator regy uncertainty, right? And this company needs billions and billions of dollars poured into it. Any AI lab really needs billions and billions of dollars poured into it to continue to train models and continue staying competitive. So if the government paid to target on their back, I mean, it's hard to imagine a world where that doesn't affect them, right? So again, um you know, the person that that said that you might be technically right. I agree with that. Maybe that is strictly a military designation. Is it not going to affect anything else? Like I wouldn't bet money on that. You know what I mean? Like if I had to bet like will this cause other issues for anthropic? I'd be like yes it would. author Hanzo, they are terrorists and should be imprisoned. So you're talking about um anthropic. There is a um there is a petition going on to sign that's signing like hey don't designate them as such. Here's the thing. I mean, if it is just a cleancut and um government thing where they could cut off the military contracts, again, it's like whatever side you're on, right? Whether you think Anthropic is overreaching or the government's overreaching, what whatever you believe, if they just cut it and they end it with fines, contracts, and then Anthropic can't work for the federal government or Anthropic technology can't be used in the federal and any federal agencies. That seems reasonable to me. I think most people would say like, "Okay, well that sucks, but whatever." Like that's it didn't work out, right? They had an argument, whatever. Um the shutting shutting them down. I don't know, man. That's that's if if that is ina if that is indeed what is happening. Could they move it to a different country? I don't think so because they are really uh connected with Google now. I feel like running on TPUs. LBF5984. No, it's not that he didn't say yes, sir. It's that he said we dictate US policy, not the Pentagon. Yeah. Here's the thing. like you're you're not wrong, but it is sort of everybody uses this hyperbolic language. So like yeah, okay, yes, you're right. They Anthropic was trying to retain some control over how the technology was used. You're you're correct. And the Pentagon was like, that's absolutely inappropriate. Um and they need him to just play ball and not have anything like that. Uh, so you're I Yes, I mean you're not wrong, but [sighs] there's nuance here, and I understand some people don't want to see that. Like, it's got to be completely black and white. Um, and by the way, I'm also not saying that Anthropic played this right. I think there's a lot of missteps that they that they make. Um, and I think there's a lot of mistakes that they make. And I'm not, you know, protecting them or or saying anything like that. But I also don't think that the US government needs to take punitive actions against it because again, keep in mind they didn't dictate anything, right? They didn't the act there was there's no actions. They didn't try to shut anything down or anything like that. That's that's an important point to understand, right? So if they were asked to follow orders uh unquestioningly and they said yes, but like there's just this one red line, you know what I and you got to give it to them for being honest, right? If you think about it, like if they if they said, "Hey, like I'm not willing to cross this line. You can call them a bad person for trying to dictate policy, but they openly said it right. They said, "No, I this is not this is what I am unwilling to do." You know what I mean? That that's that's a good thing. Saying where your line is that you're not willing to to cross you. You know, you can disagree with the the line. You disagree with the fact that they should have that control that they should be able to dictate. That's fine. But you got to give him some credit for having, you know, like the balls to be like, you know what I mean? I mean, imagine sitting across from the the Pentagon guy and and just going, "No, like I'm unwilling to do X, Y, and Z." Like, you got to give him some credit for that at least. Um, and you can disagree with him on everything else. Uh, do I think that the movie Terminator was a prophecy? No. I think most of science fiction interestingly is like was we're realizing it was completely wrong in how they perceived AI because we just didn't understand it back then. Uh intelligence is somehow baked into it's somehow baked into the universe. It's I don't want to deep to to dig but basically we always thought we would engineer the digital brain or the artificial brain. So, similar to how they've engineered um similar to how you know they've engine like like a rocket ship or a Formula 1 car or whatever, whereas in reality like we're growing it. So, it's more like a fungus and we just have to create the perfect environment for that fungus to grow and then it will grow. Um that's one. And number two, I don't think anybody really truly fully grasped the exponential nature of how it could grow. The the the first person that really kind of in popular culture that talked about if you think about it was well maybe not the first but um a great example was the blog wait but why Tim Urban right so that idea is like we all think that AI will pull into like the human intelligence level station. and it will slowly come up to match our intelligence. And he's like, "That's not going to happen. It's going to like blow past it. It'll be much smarter next second." Um, what missteps did you did you say in in in what missteps did they make Jay Harris 8939? Great question. Let me um [snorts] uh So, here's the thing. We might not know exactly, but here's what I would say. Um, I don't imagine somebody like Satia Nadella or any of the CEOs um from any of these companies finding themselves in this situation either because they would have avoided it or they would have played it better or whatever, right? similar to how the, you know, when Sam was fired during that whole little coup at OpenAI, right? Uh because that was also EA related, right? So they they they they ran the coup and they find them find themselves absolutely isolated. They're locked in this office. They're getting calls from DC or from wherever from um some attorney general, the same person that put away the the uh the crypto guy that went away. He was part of he was related to effective altruism somehow. Was it Bankman Freed? Right. So they they conducted the coup and then they found themselves, you know, completely they they managed to alienate all the employees because remember all the employees were posting those little blue hearts. The attorney general is bre breathing down their neck. So they like they find themselves in situations that are really bad and there's no way out, right? So there's some breakdown of logic in terms of like let me think this through and they somehow find themselves in these like when Daario is saying bankman freed thank you when Dario sitting across from he south uh talking about this stuff like he already made all the mist like that's he's already in the losing condition like he needed to have made decisions before that didn't land him there whether that was not working with the government or I don't know what but but by the time we're hearing about the situation it's it's bad like he already has no options he's already made a lot of people mad similar to how I forget their names but um you know the people that the open AI coup right so by the time they they found themselves in that office one, you know, 24 hours after firing Sam Alman and they're getting calls from the attorney general and the entire open staff is against them. Like they've already lost. They've already made all the foolish decisions that have lost the game for them and it's just only a matter of time until it's official. Right? By the time Dario's getting called to the Pentagon, he already lost the game. It's only a matter of time before it's official. Right? So, I don't know exactly what mistakes were made. So, I don't know exactly what mistakes were made, but the point is by the time we know what's happening, they were already made. He was already in that in that bad situation. again. Uh but at a at a top level, either if you know if you're going to get in bed with the government, then go all the way or negotiate preemptively or don't get a contract with them, right? Those are sort of like the options. uh being the first in there, getting embedded in there, and then facing off with them on grounds that you can't win. Like that's, you know what I mean? That's already like the losing condition. You can use anthropic band. No, it's not clickbait. It It's not clickbait. He banned it. So Trump banned it from all federal agencies, right? And I say that in the video that I did. Did I say that within the first minute? So, no, like you can't fit the entire story in the headline. So, the headline captures kind of like what happened and then you describe it. So, Anthropic got banned and then the first one minute I give you the entirety of the explanation. Trump banned Anthropic from all federal agencies. And then also this is the next kind of wave that's that sort of designation of supply chain which again is another type of ban that could really really really hurt anthropic. Um, so banned means banned from any US company working with the US government, banned from US projects, and banned from IBM or Asenture or Palenteer government. Yeah. Yeah. So certainly they're banned with all from the all the contractors. Um, he seems like he has asberers. Seriously? Yeah. some of it. I hope you're not talking about me. I hope you I assume you mean Dario. Uh I'll go with that assumption. Um yeah, it's it's like he's he's brilliant. He's smart. Uh but yeah, you could see him getting like worked up and like maybe just not thinking through things through and rubbing people the wrong way. And in that military environment, right, where you expect people to be like like yes sir or no sir, these fine distinctions or whatever might really tick people off. So again, that's that's what I mean is like the the mistakes were made in the past and we're just seeing them like playing out. [snorts] Hexaf is a [ __ ] Jarhead. So I I don't know too much about the person. Um, I have no idea who he is or I should probably I think he made a a post talking about it. So, yeah. So, with a lot of this stuff, like a lot of these people, they're not sort of like they might be smart, they might be brilliant, there's a lot of things that they're great at, they're not maybe not the negotiator types. They're not the operator. They're not a smooth operator. You know who's a smooth operator is for Sam Alman, right? Because look what happens. Like what? Less than 24 hours. Like he's in there. He's got the contract, right? He's the one that's going to not find himself in that situation, right? And a lot of these CEOs of these companies, right, or even like elected officials, they're just not going to find themselves in this situation. Um, Daario might be I mean honestly part of it he might be too honest. Do you know what I mean? Like he he has a thought and he just says it right like hey are you going to play ball? And he's like pro. Well not unless you do this right and like everybody know like you think Sam Alman would say that. Um uh Fisher Fisher deck. Yes. These times are basically asberers versus fake alpha male. I mean, uh, you know, I don't that might not be wrong. That that might be, you know, kind of accurate. It's just it's just like oil and water like like that military strict binary. Yes. No. You know, I say jump, you say high, how high versus, you know, like Dario's like, well, it's complicated. Let's really examine all the factors here. [laughter] that just it doesn't go together well. Although I mean Elon Yeah, Elon does pretty well. I I think um you or at least he doesn't uh he doesn't have these critical communication failures. You know what I mean? like Elon, he's probably got Asberers, but he's one of the people that are like effective operators. I don't know what the difference is, but I mean, you know, again, EA when they took over OpenAI, they had everything and they fumbled it because of I like I don't know what they just don't know how I I don't know what. Like some they're very smart people, but somehow they they think in these weird ways where they find themselves in these horrible situations. That's exactly what happened here. Uh, line Arc G. Hello. First time catching a stream. Is this something you do often? First of all, thank you so much for being here. I do not do this often. I always say I should because I love it. I enjoy it. Um, I I really love it. I really really do. It's so fun and and kind of catching up with people and talking about it. Uh it's it's a lot of fun. I don't know what it is. It is it's tough. Uh not tough, but it is it is draining. It is a lot like because I mean when you're making a video, I can kind of pause and stare out the window for five minutes and gather my thoughts and then whatever and you just edit those pauses out. But here, it's super fun, but it's also like a little bit um draining. And then if I'm constantly like running around, what I should do really is have a short live stream like every day or four times a week or something like that where we just go over the news where we just sit down and kind of go over the top level news because I do that every morning anyway. So let me know if you would be interested in that. like if the live stream was more often but shorter cuz like going an hour and a half just me just talking for an hour and a half is is is rough. Uh if you're doing it every day, you know what I mean? Again, I don't know how Twitch streamers do it. That's insane to me because they do it like eight hours, 10 hours, whatever. Um they're using something. I'm on to them. Um, but yeah, if we do like a like a 10, 15, 20, 30 minute, whatever, like quick morning thing where we just go over the news, I might take over from what's his name? Scott Adams had the simultaneous morning coffee sip. It's like, hey everybody, let's drink our first Although I wake up super late. For most of you, it's probably later. Um, I I I woke up not that long ago. Did I ice 010? Did Did you ever get back to me? So, you said that something about how the wording was phrased something about the person being in the next room hinted to Claw that he was Russian. I missed that part. So, if you're still here, let me know because I was that's that's actually kind of interest because then it's sort of Yeah. then the situation is a little bit different if it was a uh wording issue. You know what, Neoccyers savvy, thank you so much. You're so smart. A weekly recap would be probably be easier on you. So, the story of my life literally is I've like I'm like minmaxing everything. It's a zero or one or zero or 100. Like either I don't do it or I like like got to go like crazy and I'm like ah I should do it every day or like oh not I'm not going to do it ever. Yeah. So many times in my life the right answer is just like just do it a little bit here and you know here [laughter] and there like the find the balanced approach. I don't know why my brain doesn't work like that. Um you're obviously right. doing once a week recap would be that would be the smart way of doing it. Thank you. Yes. And I I should I should definitely do it. David [snorts] A B1F, thank you so much for the super tip, super chat. I really appreciate you. So, the designation forbids anthropic from military associated work. does not ban anthropic from commercial contracts for other purposes that do not connect to the military operations. So my point is if they get designated [snorts] and let's say what you said is 100% true officially on paper like the people that the investors are pulling billions into anthropic they hate regulatory uncertainty. Are they gonna have second thoughts when they can invest in OpenAI or something else? The people like Google and Amazon like a lot of the infrastructure plays, you know, are they going to be as like, yeah, sure. Let's uh, you know, let's uh let's let's treat them just like we would all the other people without that designation, without the government's, you know, raft aimed at them. Uh yeah, like IPO, if they get that designation, does they want to do an IPO this year, early this year? Um does this not affect that in any way, shape, or form? I would not bet money on that. Does that make sense? I'm not saying that that you're you're wrong about what you're saying. I think you're you're right. I don't know what the laws are, but let's say you're you're right. I wouldn't bet that there wouldn't be uh as the military says collateral damage. Do you know what I mean? Like if they put, you know, if if the Trump and the Pentagon and everybody else they say this is a bad company, we hate this company and they put that designation on them. Is that like ne neatly contained within the box or is there going to be collateral damage? I would bet money that there would be collateral damage even if there's no official sort of thing that causes it. It's, you know what I mean? Value drop sign. Yeah. Okay. Yeah. Yeah. Yeah. Like like if it was a stock and somebody designated that and they lost a little bit of revenue, would the would would the stock drop proportional to the loss of revenue or people would just panic and dump it? People tend to be more cautious, right? If the the government doesn't like them, like would you put your life savings into anthropic right now? Think about it that way, right? If you got a chance to put all the money that you have, make a big big investment into anthropic uh versus, you know, two months ago, you you'd think twice right now, wouldn't you? Because it's unertain, right? So you wouldn't just bet all your money on it. Even though it might be limited to just military contracts or, you know, defense contractors or whatever, most people would be like, I'm not touching that, you know, and I think most employ, you know. Yeah, Rall, yes, I would. So you would bet the same amount with the same confidence as you did two months ago. You know what I mean? Cuz yeah, I hear you. It's still a good company, but man, look, no, most people when there's regulatory uncertainty, they freak out. When there's uncertainty, people freak out. In general, this causes uncertainty. It's it's as easy as that. But that's not saying anthropic is is bad or bad things are going to, you know, whatever. It's an issue. It's an issue that could have been avoided. But yeah, I mean if like Apple or somebody picks up um if if Apple or somebody picks them up, that would be kind of interesting. All right, so let me quickly look through some of the things that people have said. Um so Jason Crawford said, "I just signed this. It's against DO design anthropic as a supply chain risk." Um yeah. So to me it's one of those things where me personally me personally um I I think that designating him as whatever isn't a good thing for anybody. Like I know a lot of people are mad ananthropic right now and I understand some of those I believe me I get it they've made some poor decisions. Do we want them, you know, regulated out of existence? The more companies there are out there, the the better, right? We have more choices. There's not a monopoly. They definitely seem to have figured something out that's very valuable, right? I mean, they're number one coding models. Um, some people disagree or say codeex 5.3 or whatever it is is better, but Claude, I mean, is definitely very, very good. It has been. So, I don't know like I don't know if anthropic being if it does and or isn't able to continue you know producing uh uh you know have the same resources to to produce all the stuff they've been doing who is that good for are people madanthropic yes uh surprisingly and I'm like you you really see there's this uh shift in the last few months and I think it started when they they went after CL uh open cloud cloudbot whatever um I think people didn't like that um you know if you notice how every single lab so deep mind openai and anthropic around the same time all said hey the Chinese labs are distilling our models every one of those labs made that statement >> [snorts] >> But when Anthropic made it, the entirety of X/ Twitter just landed on them and just completely railed against them saying, "Oh, just like you stole this and that from us, etc., etc., etc." Um, if you know there's a number of people that that work with them that have been coming out and kind of So, I'm not saying everybody. Okay. So, I guess I should say Yeah. Not everybody. Not everybody. I'm saying more like the climate shifted. Whereas before Anthropic was genuinely viewed with a lot of positivity. Now there's definitely a well so just to be clear before the last few days what was happening right there was a big split. I think people didn't like uh a number of things that they did with the with OpenCL etc. I don't know exactly what caused it but there were there was like a different climate. I'm not saying everybody. So Fisher Fisher Deck, yes, that's my thoughts. I see nothing but praise uh aimed at anthropic. So, Theo Theo Gigi, whatever his name is on on Twitter, pretty big guy in the coding space has been incredibly in the last few weeks incredibly negative uh against Anthropic and saying that he's been saying that Anthropic isn't a good company for for a while. Um, again, please understand what I'm saying. I'm not saying that everybody's against it or Anthropic is bad or whatever. I I was surprised to see the the waves of negativity aimed at Anthropic recently. If you want to see what I'm talking about, look at their tweet about the distillation attack and you're going to see a lot of people that are, you know, on X at least sort of kind of in that AI space. people looked to the opinions and stuff for the you know um very very very negative. I was surprised by how negative that backlash was even though OpenAI said the same thing a week ago and DeepMind said the same thing two weeks prior. So just please understand what I'm saying. I'm not saying that they're bad or whatever. I'm just saying that they they've triggered some subsection of their fan base. Oh, I mean, okay, here's another perfect example. When they pulled the Theo was very on their side for this government thing that Okay, that's interesting. So, I didn't I didn't see that yet. So, this was, like I said, this is before this government thing that I'm talking about. Like, there was this wave shift be before this. Uh, if you want to see exactly what I'm talking about, look at when they were talking about the distillation attack and look at when they pulled the oath token so that you couldn't use it for openclaw. You couldn't use clot code token for openclaw. The reason a lot of people were ticked is because the people that were using that particular method to run openclaw were anthropic super fans. Those were the people that loved Anthropic and used Anthropic and just loved it. And then one day, Anthropic is like, "Yeah, okay, but you you can't use it in this particular way, right?" And so that was a very big backlash from those people, people that loved Anthropic. They were like, "Okay, so you're saying that um you can dictate how we use this technology. like it's just you're going to be able to say, "Well, you can't use it like this or you can't use it like this." People were not happy about that. Um, again, for me, I I have a lot of goodwill towards Anthropic. Please understand that. I don't I don't agree with everything that they're doing, believe me. I think Daario's um I mean certainly there there's obviously been some mistakes. Overall, I like Claude. I really enjoy how Dario thinks long term. I think his essays of machines of loving grace, I think should be required reading, very well thought out, very well presented. And I just there's a lot about Anthropic that I like. Uh what's the guy's name if please help me? the guy that made Claude code, Boris Boris Churnney, I think interviews with him are terrific. Um, so that I mean what I'm saying is I have a lot of goodwill towards Anthropic. Um, and by the way, I would just like encourage people not to get into this one one or the other. Like we're um we're always trying to be like you have to be one side of the other like either like anthropic 100% bad because they went against the US government or the government is bad and is good. There's no angels and demons here. Do you know what I mean? Uh there's complicated people trying to figure stuff out and sometimes doing it effectively and sometimes not doing it effectively. I think I think there's reasonable stuff on both sides and foolish stuff on both sides. That's not a copout. That's once you start realizing how things are. Um that is very very true. There was an ex general from the Pentagon who was involved in the uh project Maven their connection with Google back in 2017 or whenever that was. And that was so so they wanted Google to help to use their to use their AI to help improve drone targeting. And when Google employees um found out about it, you know, there was revolt, a lot of them, some left and basically Google had to cancel that that contract and that general that was on the receiving end of that because he was in charge of that project. Yeah. Boris Churnney. Um he was saying that what Google and their employees did he didn't like really support that because he thought it was a little bit too like he was like that's like he doesn't have too much sympathy for that but he does have sympathy for the anthropic situation. Right. So, this is somebody that's kind of like on the inside of the Pentagon that's saying like, "Hey, there's there's a degree of what's it called when there's it's not black and white." Wow, I'm blanking on the word. But like the point there's nuance. That's the word. Sorry. There's nuance here. So I really think that if you think this is just there's a bad side and a good side that that that that's missing that's missing some portions of the story that there is there is um there is a nuance here. Uh okay so let me take a look at so okay so Sam Alman this morning or late last night rather announces so they've reached the agreement with department of war to deploy our models in their classified network in all our interactions the do displayed a deep respect for safety and a desire to partner uh to achieve the best possible outcome. Two of our most important safety principles are proh uh prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DW agrees with these principles, reflects them in law and policy, and put them into our agreement. Uh let's see there's also okay and they are asking the D to offer these same terms to all AI companies which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things deescalate away from legal and governmental actions and towards reasonable agreements. [snorts] All right, let's see. uh create the imaginable not madanthropic over this depart uh department of war thing mad at them about open cloud. Well, this was specifically this was before this whole thing happened. Uh my point is that there was kind of a uh a tides shift like a climate weather change whatever you want to call it that the sit like how people perceived anthropic was shifting in some pockets of the conversation prior to this quantum marmalade or is it marmalade? Thank you for the super chat. If you can't build it, you aren't smart enough to wield it. The Pentagon has proven historically that they are the equivalent of monkeys with guns. They don't know how um ago a AGI AI how AI works or what it is. Interesting how like yeah the government didn't build this technology, right? It was built, you know, in the very I mean Google kind of I mean a lot of people contribute to to it. Obviously Transformers which was a Google sort of invention kicked off this most recent it was a big part of kicking off this race. Um but it is interesting that the government can't replicate it. Somebody uh online was saying oh why can't they just like get the weights from Anthropic? You know what I mean? like just copy the weights over or whatever. I I I don't know like would that would that do anything? You still need the people. You still need the infrastructure. You need the smart people building it. It's not about just like the weights. It's about whatever Enthropic has cooking is obviously special because they're a able to put out topof-the-line models at onetenth of the sort of like the the capital investments that the other companies have like they're much less funded than I mean think about how much money Google has how much money Apple has you know how you know Apple tried building their intelligence meta has tried building their own intelligence There's nothing close to what Anthropic has with their tiny little whatever. Uh, let's see. Quantum Marmalade $5. Claude's refusals are what make him smart. Philosophy cannot be turned on and off at will. And they do tend to think deeply anthropic in general tend to think deeply about yeah I guess philosophy like that soul document that was kind of a secret before and now the what are they calling it the constitution doc for for claude create the imaginable thank you for agreeing with me um about what what what was I talking about the nuance was that the what's that re referring to yes the anthropic are the bad guys let let me let me actually do a little uh what is it Q&A a poll I'm curious what everybody thinks engage with your audience we're going to do a Q&A or a poll rather so the question is Like look it the whole thing is complicated but on the whole do you more side with anthropic or with the government? What's a good way of phrasing that question in such a way do you think there's a connection between the Iran attack and the anthropic attack? So um I haven't seen I'm not up on that. So, I I I kind of know what's happening, but why why would there be a connection? I guess that's what I'm missing. And I apologize if I haven't if I missed something. I woke up this morning and uh this is one of the first things that I I I did upon waking. So, I might have missed some news. Is there a connection? Um, let's see what what's a good way of asking the question. So, because I'm not saying like who's the good guy, who's the bad guy. I don't think that exists here. Okay. I'm going to say who is who is more right, I guess. So, Anthropic and Pentagon. So, I'm going to click start poll. So the question is like who do you kind of like who do you think is more right? Because I don't wanted to reduce it to good guys, bad guys, whatever. But on the whole, which way do you lean more towards? Do you think just from a like a moral perspective, ethics, morals, whatever you want to call it, [snorts] debugger says I side with neither. That's that's interesting. I maybe I should have put that as an option. Why Palanteer sucks. Yeah, I'm curious. We haven't heard from Alex Karp. I'm wondering what his whole take on this is. Quantum Marmalade, thank you so much for your super tip. Claude 4.6 six will be obsolete after 6 months. How will Pentagon force them to innovate? You can't force people to think. Remember Iron Man in the Cave? I do remember Iron Man in the Cave. That was a Man, that was a good movie. Um, that might have been my favorite movie. They made too many superhero movies, but when they were just ramping up that whole franchise, Iron Man one was excellent. They really came out really well. So, COD 4.6 six will be obsolete after 6 months. interesting point in part because you know they found in Mongolia I guess the Chinese have been able to put together a center with some advanced Nvidia chips and they're cooking up a model DeepS rather uh DeepSeek uh specifically and both Google openi Anthropic all have said, "Hey, we're we're we're getting targeted with massive distillation attacks." So, massive distillation attacks from the best models in the world plus this massive GPU cluster in I guess Mongolia, Mount of Mongolia or whatever. The timing suspicious. Yeah. Okay. I I I get what you're saying. So, it's just the timing. Okay. I Yeah. I mean, I don't know. Definitely something to think about. Um, the timing is weird. Yes, it's also not weird, but I've noticed that some of these operations very often, stuff like this, it gets done when the market closes on Friday. So, a lot of these operations often happen over the weekend. uh specifically I think it is to not spook the stock market right so it's like as soon as like the final bell on Friday closes some war or conflict or whatever starts and oftent times it's almost like they have to like close it by by by Monday morning or whatever it's just interesting right that it's like some of these military operations are run based on when the market the stock market is open or not. I just kind of strange. Okay, so in our little poll, 75% of the people think that like Anthropic is more right. Um, and the other people think that, you know, Pentagon or whatever, government, milit, whatever, like who's who's more correct? And that's, you know, I I think that's kind of like what I would have expected expected. Um, here's the thing. Yeah. From the military perspective again, so I think we all are scared of autonomous AIs and drones and killing machines and stuff like that. like right there's there's nobody that's like yay this is awesome Skynet bring on like no one's saying that everyone's worried um so this isn't like pro slash versus everybody you you have to have some fear some respect some awe of you know if these autonomous machines start killing people get really good at it like you have to be a little bit concerned right you you have to understand how powerful that could be from the perspective of you know like the Pentagon when they get a contractor to do provide some service or provide some product or whatever you know if they buy some cars they don't expect the car manufacturer to be like oh but we we want to control how these cars get used or if they buy some ammunition people like well we want to make sure we control how this am am am am am am am am am am am am am am am am am am am am am ammunition gets used do you know what I mean like when they're on the field conducting an operation where there's like life and death situations, they want to make sure there's zero chances of somebody else having like it's just an extra attack vector if somebody's outside of the chain of command can affect in some way or or disagree or second guess or whatever. So them not allowing that is reasonable. So is you know if you're somebody like Dario and you're genuinely concerned about what you're building to be cautious about giving complete unfaired unfettered access to like you if you take a second you can understand that each side has their own reasons and they're not you know crazy. That's just my whole point in in all of this. Like there isn't like a side that's like unreasonable. I do think it's unreasonable to destroy anthropic as as a punitive measure. That seems a little bit unreasonable if they go through with all of that. That would be that would suck. Like I I think that's a little bit much person. That's my personal opinion. Uh let's see. AI is literally autocomplete with a neural net. The Pentagon think it reasons. How can a sentence completer making life and death decisions? It's a sematic resonance machine with attractor basins failure modes are inevitable. Yeah. And I think both of the like the Pentagon side and Dario, you know, they said LM are not ready for like prime prime time war management stuff, meaning that like they can't make decisions. I think how they're used is one to just comb through tons of data to find out important, you know, pick out nuggets of wisdom and two for for simulations. So, you know, let's say there's some like the the example that Palanteer gives us sometimes like let's say there's a uh uh weird Chinese arm of ships that appear somewhere in the vicinity of Taiwan or whatever uh in force, right? Some human somewhere has to make a decision about how to handle this. Uh, and usually how how do we do it? Well, we think through like, oh, okay, what could happen? Also, like, oh, where's where do we have our troops? Where are the satellites pointed? You know what I mean? Like there's this bunch of little decisions and there has to be a flurry of activity in order to figure out what to do, right? Because you don't want to overstep and create a conflict where there's no reason for it. You also don't want to underreact, right? So you really need to think through like what's happening, figure out how to do it. And what LLMs can do is they can rapidly because yeah, like you said there what I forget exactly what you said, but um there is a bunch of sort of you know stochastic different things it can spit out that are likely or pro, you know, more or less probable, whatever. So it can rapidly run through a bunch of different situations and tell you here are the top things and then those are then then given to the general or to the person making decisions. They're like here's what's happening. We thought through a million different scenarios and here are some things to keep in mind. Right? So at the end of the day it's still the human making that decision. The LM is more like a brainstorming companion because it's been proven time and time again that it's better at brainstorming than than than humans. Like it's able to rapidly provide a bunch of different solutions. Some of them are going to be great and maybe some of them not so great, but in those environments that's that's the use for it. So I think a lot of people see the fact that it will go wrong. Like if you if you ask it enough times just due to how its nature it will get things wrong. And some people think just because it gets things wrong every once in a while that means it's useless. No, not at all. You know, humans get things wrong every once in a while, we're still useful. It's just a matter of like what where to put that, where to apply it. JeanClaude Vanam's name came up and I have no clue why. I must have missed something. Claude. Oh, Claude. JeanClaude Vanam. Uh, yeah. Yeah, plenty plenty has been showing that a lot of the stuff can get hijacked. Uh there have I' I've seen a few cases where some of the prompts go inside like a sandbox. So they sort of get checked for problematic issues in an environment where it can't hurt. So that might be a way to kind of like get rid of that sort of attack vectors. Um but yeah, that's an open problem as of right now. like plenty and you know he's a nightmare for for this for these AI labs because he's just proving time and time again that no AI lab has anything that can protect from jailbreaks because he breaks everything within you know usually like the first 24 hours. Uh Lucifer, no we are way past prediction models, bro. Well, so the two use case I gave is one is prediction models and two is uh like simulations more and then the second one is just combing through vast amounts of data, right? Um are there other use cases? LMS aren't um targeting stuff, you know what I mean? They're not that's not what their thing is for targeting. You don't even need anything that complex. That's Yeah. And I mean the other thing is that I guess they were talking about is just like again in that situation where like a bunch of ships arrive somewhere, they might requisition satellites to get closer to it or whatever point at, you know, point at that location. I the other day I was outside. I was like taking the trash out or something. It was night time. I was bringing the trash in and I just see this thing like floating across the sky like a point and here at night you can see like like the International Space Station when it goes by you can see it pretty clearly. It's like a little spot uh and it's very different from from planes and stuff like that, but you can kind of see that little spot moving across. And so I'm sitting there and I look up and then there's this like thing that shoots across the sky and then like another one and another one and another one and another one in like this perfect formation like it's just like they're all gliding across the sky and one of them like I think like zoomed zoomed somewhere like I was just standing there going like what am I seeing? I was like I had no clue what I was I've never seen anything like that in my life. Um later I asked my AI, you know, assistant like what what was that? And they were saying it's a uh oh well let let me let me I'm curious. Does anybody know what that thing was? So it was it was in traveling Chad. Yes. But what exactly? The real Josh Buu. Okay. Okay. Yeah. Yeah. Yeah. Yeah. Okay. Uh, but do you know the specific name for it? Cuz it was like one after the other in very small like every 10 seconds it's like doom like it was just like going across the night sky. Yeah. Okay. Yeah. So people Okay. So people know what it is. It's it's Starlink and I guess uh the specific name for it they call it a Starlink train. So, what happens when a bunch of new satellites get launched until they slowly go to where they're supposed to be? Initially, they are all in this like one sort of uh Yeah. No, Starling, [snorts] but specifically it was a a call it Starling Train. So, it's specifically it was just the cadence is what really threw me off because I haven't se I've seen Starling satellites before. Um, actually at night you're able to see them, but it was just like one after the other and it was just like this train of lights grow going across the sky, it was trippy. I guess I've heard about it and I maybe I even seen like a video of it, but seeing it with your own eyes randomly, it was a trip. How many of you have seen that in your like with your own eyes? Starling Train. Yeah. I watched uh what's that movie called or show? Um Pluribus the day before. So that was what was in my mind. I was like uh this is a takeover. We're all going to be one one mind, one brain. Anyways, I gotta get out of here. But thank you for everybody that joined me today. Not yet seen this in Texas. Okay, cool, cool, cool. Okay, so this is I I probably I'm probably like the last person to have seen it. Oh, okay. And today I think there's a weird constellation thing happening, isn't there? I think there's some sort of a planetary alignment later in the evening. Yeah, it's a trip. It's a trip seeing that stuff. Yeah, these lives were it's so fun. Yeah, this was a lot of fun. Thank you everybody uh for joining me and so hopefully this was useful, entertaining, funny, whatever. Um, and yes, I'm going to try to do more of these. And I think probably just I should commit to once a week and then see if I can add a little bit more and a little bit more and just kind of like I don't know why my brain doesn't work that way. Six planets in the sky tonight. Yep. The sixth planet alignment is tonight. So, if you're in an area where you can see it, if you got a telescope, it's going to be pretty pretty awesome. Hopefully. Very cool. All right. Thank you all so much [clears throat] for being here. It's been a pleasure. Love you all. And uh until next time, see you. Bye. Claude of Van Dam.

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.

More from this source