LIVE CLIPS
Episode 3-2-2026
Thing that's always been the best: take the risk, but also do it in the right sequential way. I think one of the biggest cardinal sins in retail, and I didn't realize this coming from software, is that you actually don't want to move fast and break things. Let's say you get what seems like a dream partner in Walmart, and the product doesn't move. I didn't realize this until getting into retail, and my co-founder William's a genius at this stuff, at just avoiding things. We say this internally a lot: what makes you good is what you do. What makes you great is what you don't do. The trap that becomes a doom loop in retail is that you expand too fast. Let's say you get Walmart. You're like, hell yeah, we got Walmart, this is the 800-pound gorilla. And then it doesn't move, and then you have to start putting money in. It's not like Facebook, where you launch an ad and you can just wind it down. No, now Walmart's like, hey, by the way, if you want to stay in here, and you get one bite at the apple over the next 90 days, you've got to spend $400K in promotional spend. $400K we didn't expect, and we've got 270 grand in the bank, and you all aren't going to pay us for six months because of the payment terms you negotiated. And they're like, well, then we're going to have to take you off the shelves. So then you go and raise some last-ditch-effort round, and you don't get quite to the 400, you only get like 300. And then it starts this slow doom loop, and then you can't take it off the shelves, because the worst graph in the world isn't flat, it's one that goes up and then down. Revenue is flatlining or going down, then you can't raise any investment. And I would say, conservatively, three out of four CPG brands get into that trap because they expand too fast. Everyone starts DTC. The retail transition is the make-or-break moment for basically every new brand, in my opinion. And the hardest part is always, for me, rethinking.
Like six months ago, right? Yeah, that's six months. Okay, six months ago seems about right. AGI age. Yep. Daniel says what we've really learned from the last five years is that Jack Dorsey runs extremely bloated companies. There was some other news on this with Block. Someone posted the mix of... oh, it was deleted. Interesting. Well, we have it saved. We won't fully dox the account, but I don't know if this is real because it's been deleted, so it might be fake. I don't know. But: most of you have heard about Block's 40% layoffs by now, but the numbers are even worse. Engineering was hit harder. We've lost close to 70% of our engineers. The company you once knew as a prolific open-source software contributor no longer exists. And so I was wondering, they're laying off 40%, how will those cuts be distributed? Because the AI narrative, the job displacement narrative, that could be back-office people processing manual workflows, or it could be software engineers, where now there's a smaller team that's getting more leverage out of AI tools, and so you can write more of them off. There's also just the world where you're a mature software company and you have lock-in and you're like, yeah, we actually don't need to ship that many more features. We have sowed for so long, it is time to reap. But I am still bloat-pilled. I still believe that this is somewhat of a unique situation. But it didn't stop the market from absolutely puking on Friday. Amex at one point was down something like 7%. MWT says: I'm fully on board with spiraling into a depressive episode over the rapidly approaching neo-feudalist breakdown of society, but I worked at Square in 2017 and my job had no tasks. I sat on the roof eating free snacks all day with a MacBook. Ben Carlson also calling this out. He says maybe Block laying off a ton of employees is a sign that AI is going to destroy everything. Or maybe the stock is down 80% from the highs, they overhired, and AI is a convenient excuse. So yeah, we've called this out so many times over the last year as companies did rounds of layoffs and said it was because of AI-related efficiency. But again, it is oftentimes the best possible reason. Yeah, better than saying, you know, we don't know what we're doing and we've been running with 4,000 too many people for a while now. Yeah. At the same time, has the market continued to like it? Because the stock popped a bunch, and it felt like that might cause a continuation. Yeah, I mean, it's stabilized. It's up 28% over the past five days. And so it does feel like this could have some sort of contagion effect. A lot of other CEOs looking at this and saying, okay, well, I'm at least a theoretical victim of the SaaS apocalypse, I need to do something, I'll do it. So we could see more layoffs from tech firms. It doesn't seem unreasonable, but at the same time, the irony is that it's only Dorsey companies that have run these sorts of mass layoffs. Right? Yes, yes, yes. I ran the numbers and this was the largest RIF in S&P 500 history. Somebody was in my comments sharing Lehman Brothers, and Lehman Brothers is actually interesting because they went bankrupt. They were delisted the same day. And then the RIFs actually happened over time.
But a lot of people were shifted around and transferred over to different jobs, and ultimately the company just ceased to be in the S&P 500. Buco. Oh, so it wasn't in the S&P 500 when the layoffs happened? It was delisted the same day. That's a good point. Victory for Jordy. Never question him. Buco says: mostly he's talking about the cuts. Mostly about XYZ, which is of course the ticker, being poorly run, not really about AI. But most other small-to-medium-cap tech is also poorly run. Expect many more cuts below. I tweeted that they only needed 60% of their company. That wasn't a random number. Pull up any fintech SaaS chart and you can see that employee count exploded. Demand exploded in 2020, but now these companies are way too bloated. I did not expect them to cut 40% at once. I think it's basically impossible to identify the right 40% in one go. Yeah. So huge operational risk there, but maybe better for morale than multiple cuts. Who knows? Unprecedented. We now have two examples of this happening with Jack. So it's easy to say he runs a badly blended business, but I have been vocal about this. Toast and Clover should not be anywhere near the scale they're at. Tidal, Afterpay. Come on. Pretty sure he threw a $70 million party for the team last year. I think it was 68 million for some offsite that they did. I also think it's a mistake to define this purely as a Jack issue. As I said, pull up the employee charts and the revenue charts. I'd say to pull up the earnings charts, but for many they are negative, which we all know. These companies are way too bloated and they are having their clocks cleaned by smaller, more nimble startups. They have to get lean to survive. I think the realistic average number is 20 to 25% for many of these companies, but there are plenty that could cut 40% too. I think this basically has nothing to do with AI, but there are some roles they can eliminate and some where they can increase scope. Let's call it 5%. So again, if that now-deleted post is real and 70% of the engineering team, at least on that person's team, were cut, you don't know, that could have just been the open-source-focused team, right? And that's just a, hey, we don't have time to contribute to open source if your stock's down 80%. Yeah, yeah. I do think that most CEOs will maybe look at the Block news and say, okay, I need to right-size the organization, I need to do some layoffs. But not all of them will be convinced that a 40% cut is the correct move. They might say, actually, we think that 20% here and then 5% there and then 10% there is just better for morale, because it's more clear who's still on the team. Own says: I think using AI as cover for right-sizing your bloated org is pretty unhelpful, to be honest. This false data point will be cited by every anti-AI campaigner within the next 24 hours. This is something I've said. I've seen a number of viral Instagram Reels from people saying that the AI-induced job loss is already happening at massive scale, and they're pulling up quotes from CEOs that conducted layoffs in 2025 as evidence, simply because the CEO said they were getting efficiency out of AI. Well, let me tell you about ElevenLabs. Build intelligent, real-time conversational agents. Reimagine human-technology interaction with ElevenLabs. And let me also tell you about Console. Console builds AI agents that automate 70% of IT, HR, and finance.
Anyways, continue. Shopify is great. And as we get into the angel investing conversation, I bet a lot of ads will pop up, because I've been fortunate enough to invest in a handful of great companies. But I went wide, honestly. Gusto did so well, and then a few others did really well. Mercury did really well. And then I'll tell you my biggest miss. What's that? OpenAI. And I'll tell you the story. It is... can I curse? Can I curse on here? A total, total effing miss. And it was so brutal. Oh, it's all good. There'll be another OpenAI. Yeah, it's way different than missing just, like, a unicorn that comes up. You can find one at YC. You're good. I did the math the other day. It's basically missing 20 Googles. Google went public at a $20 billion market cap, so it's missing 20 Googles. Now, with their latest funding, it is straight up missing 30 Googles. Yeah, 35, 40 Googles. So. Which is cool. That's all right. It's fine. I never calculated the amount. That's why you need the Magic Mind, to clear your mind. I don't need to scroll. And there are so many things that I miss in life where I'm like... like the no-meetings thing, where I'm like, I know I'm leaving money on the table, but I couldn't imagine any other life. I feel very fortunate. But I'll tell you the story. I've only shared this once, but it definitely replays in my head pretty often. So Sam and I, Sam Altman and I, we were advising and building out YC Research for universal basic income. This was in 2017. So every month we would meet in a conference room in this dingy little office, and the conference room in the dingy little office was where they were doing OpenAI research. Yeah. So every month. And I was a full-time... we had sold my last company to Airbnb, and I was a full-time angel investor. Yeah, full-time angel investor. Waking up every day, working every day, thinking about, what's the next game-changing company? And I'm like, yeah, Sam and the team, they're trying to do this AI stuff, and it's research. And that can be a problem when you're too close to an operation. Whereas if you just get a pitch once, you're like, oh, this makes so much sense. But when you're getting all the information at all times, and they're like, yeah, we don't really know what this thing's going to be. Right. And we ran into this issue. Has that happened? No. It sounds terrible. I ended up creating a rule of, like, a friend starts a company, like a real friend starts a company, just invest. No matter if, you know, they've got this issue with how they operate, or they've got this blind spot, or all the problems that they're facing. If you're too close to it, you can just overthink it, dude. I told Immad at Mercury, because we had built financial technology and he had graduated from that. We're at the Airbnb cafeteria, and he told me the idea for Mercury, wanting to start a bank for startups. And I was like, don't do it, dude. And I spent 45 minutes trying to convince him not to do it. And then like two days later, because we kept texting about it, he was like, I appreciate the input, but I'm going to do it. And I was like, well, okay, I'll invest. There you go. This is not a good idea. And good Lord, was I a total idiot. But, yeah, the OpenAI one, I was sitting there every month saying no again and again and again.
And even with the whispers of, yeah, I think we're gonna have to spin it out, make it a for-profit, and would you want to invest? I was like, AI... And I'll be honest, smart investors got in my head. They were like, AI is so far off, that's like a 2035 thing, we're so far off. And I was a total idiot. Well, 40 Googles later, as we said. Googles later. But plenty of other opportunities, plenty of other investments, plenty of other products. Congrats on all the progress. It was good to be humbled a little bit. You're on too much of a hot streak, it's good to be like, okay, I gotta lock in. That is the benefit of building, I think, for anybody listening. I think it's what makes...
That's... go to work at the Department of Justice in the recent past, but that's always been true. Yeah, it's a good experience. Yeah. Yeah, that makes sense. How is, in your view, AI impacting the legal industry today, not in the future? You know, how will the impact kind of evolve over time? But what are you seeing and hearing from colleagues or friends at other firms? Look, we are wordsmiths, right? We work with words, we write things. And so obviously, large language models are something that is going to change how we work. They may change the whole structure of law firms. Big law firms historically are kind of described as pyramid in structure. You have senior people, and then you have a lot of junior people. A lot of what the junior people have done in the past can now be done by large language models. It's not like they're going to give you a work product that you can then file with the court and use. I mean, there are hundreds of cases where lawyers have gotten in trouble with courts by filing things that cite cases and laws that don't exist. So that is definitely a thing. Hallucinations, that's a huge risk factor. And that's really on the lawyer. There's no excuse for a lawyer to say, oh, I relied on AI. No, you signed that brief, you're responsible. It's your integrity on the line. So we're nowhere near that, I don't think, at least in our practice. We do complex litigation. There may be practices with a lot of repetitive litigation where you can get output that's ready for filing, but we're really not seeing that yet. But we've developed in-house at our firm a platform, a methodology for taking the large masses of data that we work with, all the documents that are produced in discovery, all the testimony, all the contracts, and we organize that. It's built on the Claude Enterprise platform, and we've done this as lawyers. It's a system that's proprietary, developed by lawyers for lawyers. I think if you just turn a young associate loose with Claude or ChatGPT, you're not optimizing the technology. But we take all the data and we structure it for the way that lawyers work, so it creates work streams. What do we need to do? We know what we do in every case. We prepare examination outlines, we prepare expert witness reports and the like, we prepare opening statements. So we structure the data in a way that creates these work streams. And I really think that gives us a big advantage. It's not something engineers have created. It's lawyers knowing what lawyers need, having designed a way to structure the information. And we're using that with great success. In trials now, in the middle of trial, imagine somebody's on the witness stand. You can ask the AI, what's the best evidence that so-and-so just lied about that? Wow. You press a button and you get a bunch. Most of those things you will have thought of, some of them make no sense, but there'll be a couple of gems in there that you might not have thought of, lines of attack that you haven't thought of. That's extraordinarily powerful. Yeah. So our goal is to get to a point where the AI yields a work product that's like 80% or 90% there, which is what an associate is typically doing today. Right. Even the best associate is not hitting it out of the park with every single output; they're getting a good, solid chunk of the way there. Right. And so lawyers can focus on what they do best.
Making sure that last mile, the last 20%, the last 10%, is as good as it can possibly be. How do you think this affects the job market for lawyers at the early stage of their career? Because in some ways, yeah, their work might be being replaced. But at the same time, given that AI is very good at generating words and will be able to generate entire lawsuits, you can kind of imagine a dystopian world where the number of cases that get brought is, you know, 100 times higher than it is today. I think that's true. And that's something that people don't talk about a lot. There are AI companies, AI-native companies, out there that essentially identify claims. So they'll have a database that has information on businesses of all kinds. What licenses do they have? What licenses don't they have that they should have under the law? I mean, you can just imagine, if you can boil the ocean, and these companies will, you can subscribe to them and they'll serve up: here's a class action for your consideration. We've identified the claim, and here it is. So I actually think there's a potential that we may see more litigation as a result of AI. On the other hand, I think resolution of cases may be faster. Sure. Because both sides can understand quicker, you know, the merits of the case on each side and reach a resolution sooner. Yeah, yeah, no, that makes a lot of sense.
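A minimal sketch, purely for illustration, of the kind of mid-trial query described in that clip: a structured case record (discovery documents, testimony, contracts, each tagged with an ID) handed to Claude along with a lawyer-style impeachment question. The file layout, prompt wording, and model string are assumptions, not the firm's proprietary system; the only real API used is the Anthropic Python SDK's messages.create call.

```python
# Illustrative sketch only: structured case materials plus a lawyer-style
# question, sent to Claude. The JSONL layout and prompt are invented; the
# Anthropic SDK call (anthropic.Anthropic().messages.create) is real.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def load_case_record(path: str) -> str:
    """Flatten tagged discovery docs, testimony, and contracts into one context block."""
    blocks = []
    with open(path) as f:
        for raw in f:
            doc = json.loads(raw)
            blocks.append(f"[{doc['doc_id']} | {doc['doc_type']}] {doc['text']}")
    return "\n".join(blocks)


def best_evidence_of_lie(case_record: str, witness: str, claim: str) -> str:
    """Ask for evidence in the record that contradicts a witness's testimony, with citations."""
    prompt = (
        f"Structured case record:\n\n{case_record}\n\n"
        f"{witness} just testified that {claim}. "
        "List the strongest evidence in the record that contradicts this testimony. "
        "Cite document IDs and quote the relevant passages."
    )
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


# Hypothetical usage:
# record = load_case_record("case_record.jsonl")
# print(best_evidence_of_lie(record, "the CFO", "he never saw the Q3 forecast"))
```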
Starpoint, what ways in which conflict is evolving are you guys willing to lean on and effectively bet the company around? It's obviously great to have the time actually enlisted and serving the country, but the battlefield's evolving. You know, I'm sure you and the team have been watching all the footage coming out of the last few days, but where is this all kind of going, in your view, as far as where conflict is moving? Well, an example is, I think we saw the US version of the Shahed, so low-cost autonomous systems seem to be coming online en masse, and all these things are going to impact how you build the product. Sure. So, I mean, I think at the end of the day, scaling and operationalizing autonomous systems is a large part of the future of warfare. Right. And so the concept that we talk about internally that we need to build and enable is what we would call intelligent autonomy. How do you orchestrate all of these autonomous systems, not just against tasks that they've been assigned, but with each other and with the more exquisite, expensive systems? And how do you do that in a way where you still have a human in the loop? Right. I think there's been a lot of discussion about fully automating the kill chain. No one wants that. That's not even really something that I've heard anyone talking about. What fundamentally people are trying to do is have the right amount of human in the loop: have humans for high-value human touch points. We can't have humans involved in every single thing they're involved in today, because a lot of those decisions are not high-value decisions that humans are uniquely positioned to make. So intelligent autonomy is about removing humans from low-value human touch points, and keeping them, and bringing them back into the system, for those touch points where they need to make the decision, whether for ethical reasons or for tactical reasons, and enabling them to make decisions that help move, you know, hundreds of thousands of autonomous systems and manned platforms and other types of unmanned platforms towards common goals across what could be a 100-million-square-mile theater. And I think that's really the spec that we have to build towards. Makes sense. What's the shape of the company today? Where are you guys based? What are your plans with this...
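As a rough illustration of the "intelligent autonomy" idea in that answer, here is a toy decision gate that auto-executes low-value tasks and escalates ethically or tactically significant ones back to a human operator. The task fields, the impact score, and the threshold are all invented for the sketch; this is not Starpoint's actual design.

```python
# Toy sketch of human-in-the-loop routing: automate low-value touch points,
# escalate high-value or ethically sensitive ones. All fields and thresholds
# are invented for illustration.
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    requires_ethical_review: bool  # e.g. anything touching the kill chain
    estimated_impact: float        # 0.0 (routine) to 1.0 (theater-level consequence)


HUMAN_REVIEW_THRESHOLD = 0.7  # assumed cutoff for pulling a human back into the loop


def route(task: Task) -> str:
    """Return 'escalate_to_human' for high-value touch points, else 'auto_execute'."""
    if task.requires_ethical_review or task.estimated_impact >= HUMAN_REVIEW_THRESHOLD:
        return "escalate_to_human"
    return "auto_execute"


if __name__ == "__main__":
    tasks = [
        Task("Reposition surveillance drone to waypoint 7", False, 0.1),
        Task("Re-task sensor coverage after losing an asset", False, 0.4),
        Task("Engage identified hostile vehicle", True, 0.9),
    ]
    for t in tasks:
        print(f"{t.description!r} -> {route(t)}")
```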
I still wasn't... I haven't waded into this for a while, and it's no fun, but it is what it is. Can you unpack a little bit more of that tweet you posted, where you did the find on the Dario article for Taiwan and saw that it wasn't mentioned? Oh, I mean, I've sort of griped about this in general. Do you just think he should be talking about the Taiwan issue more deliberately? He should be messaging that? Why is it significant that he doesn't mention Taiwan? Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument. I could make that argument if I needed to. I have advocated the opposite: that, number one, not only should we be selling chips to China, a generation or two behind, which has always been sort of our standard practice with chips, we should also be allowing Chinese companies to fab with TSMC. That is a restriction that has come down. Now, these Huawei chips are somehow manufactured by TSMC, let's not look too closely at it, but we should explicitly be allowing it. And the reason for that is I think it is a safer equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan. We are dependent on Taiwan. Taiwan is 70 miles off the coast of China. It's not an ideal position in the world for us to have a dependency on it and China to not have a dependency on it. So this is a problem. Everything going forward has massive trade-offs. Yeah. The implication of letting China fab with TSMC, or the implication of letting them buy Nvidia chips, is that they gain these incredibly powerful AI capabilities that are driving this entire debate. That, in a vacuum, is not a good thing. But nothing's in a vacuum. Everything is a trade-off. And in that specific area, I think that just repeatedly, again and again, being absolutist about the chip issue... I am frustrated to not see any public comment about the... well, that's not quite fair. He has made comments about, oh yeah, it would slow down the adoption of AI in the long run if Taiwan got bombed. In my mind, that's an insufficient consideration of the possibility of Taiwan getting bombed. Now again, I'm biased in that regard. I lived there for nearly two decades. But the reason I brought it up in this context is, if AI is what it is, the people with guns are going to want to have a say, whether that be domestically, whether that be internationally. That might be in the context of the US government just taking it, trying to kill your company because they feel you're not cooperating. Or it might be the context of China deciding it has to act because the US is becoming too powerful. And it's not a fun debate. I do think the nuclear angle is a good one.
Cook is up. Wings. Team deathmatch. We are experts. Tripping. Let's just roll, right? Market-clearing order inbound. Surrounded by journalists. Hold your... Strike 1, Strike 2. Activate. Go, go. The retriever mode. Trust. Market-clearing order inbound. Vibe. I see multiple journalists on the horizon. Founder. You're watching TVPN. Today is Monday, March 2, 2026. We are live from the TVPN Ultradome, the temple of technology, the fortress of finance, the capital of capital. Let me tell you about Ramp.com. Time is money, save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. It was a massive weekend. So much news. We are very fortunate to be joined by Ben Thompson at noon. Let's pull up the Linear lineup and show you the run of show today. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We've got Ben Thompson, James Beshara, and John Quinn's coming back in person again. We're very excited to be joined by him, talking about tariffs. A monster lightning round with five different guests joining. We've got some acquisition news, we've got some funding news, we've got some takes on tech and AI and media. We're going all over the place. It's going to be a fun, fun show. But we missed you. We missed you on Friday. We were traveling. We went to Montana. Terrible day to be out. Terrible day to be out, because every single time we've had an off day, it ended up being a massive news day. So lesson: never take a day off. Yes, never take a day off. Truly, what an absolutely crazy weekend. Of course there's the war with Iran. The big news in tech was the US halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. Quote: I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again, Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six-month phase-out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time, but I guess a lot of this has to do with FedRAMP and actually getting... But this is a lot more than, you know, switching to a new model to run deep research reports. You're involving classified systems. Sure. The context that people didn't have last week was that the United States was headed to war. Right. And so even having that context, I feel like, is pretty important. It sort of explains the 5 p.m. deadline. Anthropic had taken issue with how their products were used in the Maduro raid. There's a new conflict that's unfolding, and so that makes the aggressive timeline make a lot more sense. It also makes the six-month phase-out make more sense, because national security is on the line. This morning, Scott Bessent said: at the direction of the President, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. Yeah.
The American people deserve confidence that every tool in the government serves the public interest. And under President Trump, no private company will ever dictate the terms of our national security. Yeah. The U.S. federal housing entities, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products as of this morning. Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of Anthropic technology. So you would expect to see these statements come out from every different federal agency as they get their transition plans together and figure out, you know, what the requirements are for their particular agency. Because I imagine some agencies aren't operating in classified environments; it's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. For some of them, it's going to be a longer plan, but they're all getting on board. And there's been a big debate over how Dario has handled this. Where is he in the right? Where is he in the wrong? Where has the government potentially overstepped? Have they been too aggressive, or are they doing everything appropriately? Everyone is weighing in, and we're going to take you on a whirlwind tour of everyone's opinion and share some extra content to try and dig into what's actually at stake, what's actually going on. In many ways, Ben Thompson does a great job painting the broadest picture: what if this really is nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is, you know, you're talking about a $200 million contract for a company that does 10 billion in ARR. This is 2% of revenue. In many ways it's a bump in the road. And so I think a lot of people will be asking: how serious is this for Anthropic? What does this mean for the other foundation model companies? What does this mean for the future of the relationship between tech and Washington, D.C.? But there's a lot more context. So the way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just: how should a private company interface with the government? Like, I am an American, I've run businesses, I've never actually sold anything to the government, but hypothetically I could imagine the government coming and wanting to buy, I don't know, ads on TVPN or Lucy products or any other consumer packaged goods product that I've made. And my assumption is that the private company should have very little say in how the government uses those products. And I was trying to zoom out and think about it. AI is so complicated because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There are a lot of different things that AI means. And in some scenarios it's super critical, really complex. And in other ways it's just a product, it's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so, yeah, I was thinking, if I was the CEO of Ford and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer.
I probably shouldn't say, no, no, no, I don't approve of what this particular government is doing, so I'm just not going to sell you any Mustangs to drive around on the military bases because I don't like the military. But then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put on bulletproof glass and armor, that seems like a different discussion. That seems like I might need to set up a different manufacturing line, I might need a different assembly line. The car's going to be heavier, and if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want an armored car. It's going to hurt my business. Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government that's asking for that particular contract. And there's actually a history of this. Like the Humvee, of course; the Hummer is owned by General Motors, and that brand has separated. And now most military vehicles are made by defense contractors. But there is some bleed-over, and there are times when private companies do dual sourcing or dual-use technologies. But all of that is just a discussion, and that cost should be part of a new contract, effectively, in my case. And this was loosely what was happening. But yeah. And Dario in the CBS interview, quote: we are a private company. We can choose to sell or not sell whatever we want. There are other providers. Yes. Which feels like, yes, I'm dipping out of it. Now, it is weird because at the same time, and we'll get to the actual CBS interview, he said Anthropic has been one of the most proactive AI companies in working with the US government. We were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone on in the AI community broadly, like, what happens at the edge. And so it was sort of predictable that you would get to this question. Yeah, this was the moment he had been waiting for, in many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go into the lion's den, because I don't want to be in that scenario. Instead it was: we're leaning in with the government, we're deploying on classified clouds, training custom models, but we still want authority over the final sticking point, how these models are deployed, what they're used for. And that feels a little odd. In the Ford example, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that, it's not armored, you shouldn't do that. But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation. But it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty. Right. Based on what you know about the capabilities of the model.
And so I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship. Most salespeople would just be like, yeah, everything's great, you can use it for anything. They over-promise and then under-deliver; he's doing the opposite. But it's certainly responsible if that's his true belief. If he believes that these models are not good for a particular use case, telling your customer, hey, it's just not ready for that, you're just going to have a bad time, it's not going to work, that's a fine thing to communicate as the CEO of a company who's selling a product. But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly. So he's saying, right now it's not good for X, Y or Z. Well, what about in two months? It might be better. And then I think the government should be able to determine when and where they're effective. Now, they can't break the law. And Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. But it's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on: no mass domestic surveillance and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract and say, hey, we don't think our technology is ready for that either, let's do a deal that says that. And people are like, oh, what's different here? Why could OpenAI... Well, here's the thing, though. We know that Anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war. Right. You can imagine that Anthropic, a private company, does not know that. And so there's this deadline, this information asymmetry. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the president said this morning, that the war is going to stretch four to five weeks. Right. I think on Friday we all assumed that it was going to be, you know, in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Just a little bit ago, they took issue with it. Right. Can we count on them? They start this kind of renegotiation process, right, to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict. Right. And Anthropic is looking at this in a different way and clearly is leaning in, and in some ways it felt like they were kind of stirring, really not respecting the process, or even the deadline. Right. So Emil Michael came out Friday night and said it was 5:13, 13 minutes past the deadline: I'm trying to get in touch with Anthropic.
I try to get on the phone with Dario. Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate whether or not the war is justified, whether we should go, but the Department of War is sitting there being like, you won't even jump on the phone? You're telling me there's a meeting that you're in that's more important than this? And that just screams, hey, we can't count on this provider. We need to take drastic action now. This whole supply chain risk designation, we'll get into that later, that's a whole other thing. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider, we need an alternative solution. Yeah, yeah. If I'm shipping cars and I'm like, oh, actually, I disagree with the latest decision, I'm not going to put the cars on the transport. That's an odd scenario to be in. There's also this question of... a lot of people were really keen on boiling down the terms to these two buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into the idea of deals that stick, basically. You can have the same exact contract, line items, or terms of a signed agreement with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC, it was 20% and a board seat, and I had another deal with another VC, 20% and a board seat, and the one VC was suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows there's some trust, some reliability, that when the hard decisions come, they will be made in a legal, logical, consistent-with-American-values way, is, I think, what you need to put forward if you want to work with the government effectively. If you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions like, who is Nicolas Maduro? But I don't know how much of a joke that is. And I also don't know how bad of a thing that is. I actually think... Yeah, Tyler, what do you have to say on the context of Venezuela? Specifically, what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid, a Palantir senior executive notified the Pentagon. Yeah. So I think it is kind of blowing it out of proportion to say that Anthropic is against using Claude in Venezuela. Right. It's an employee, it's not an executive. There was an article about that too, though. Maybe it's like Dario telling an employee to go check on that, but we don't know. It could be a random employee. I think it's probably unfair to say that Anthropic as a whole is like, we are firmly against Claude being used.
What happened during the Maduro raid, we don't even know, and of course it's classified. So I don't know if we'll ever know, because... should we know? I don't know. If it's an important capability, you don't necessarily want that to be public knowledge that the adversary is instantly aware of. And so I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. What is reasonable is to expect civil servants, elected officials, military officials to be knowledgeable about the countries they are operating in. And LLMs can help with that. And so I feel like that's just a good thing. If you zoom out and ask, do we want a more knowledgeable and educated government workforce across everything that they do? It seems like absolutely yes. And so I just think that's something that is maybe lost as people go into more of the sci-fi, more of the frontier stuff, where there isn't a lot of evidence that it's happening yet. And on the supply chain risk, Ben Thompson, who's coming on at noon, makes a really strong argument for why government pressure like this is actually reasonable in this situation. He takes it a lot further, plays it out, and lays out a scenario that seems somewhat inevitable. But what I'm still wrestling with is just how real the supply chain risk designation is. Many reports are treating the supply chain risk label as an established fact. Yeah. Which, all it is right now is a tweet from Hegseth. Dario went on CBS and said that he has not received a letter, that there's no definitive ruling yet. Kalshi has the odds that this actually happens at 42%, and by April 1st, so a full month for the DoD to actually roll this out. And then there's other nuance. There was a perception that this was going to kill Anthropic, because if Nvidia has a government contract, then they couldn't do any deals with Anthropic whatsoever. And that's not true. Apparently the supply chain risk rule is specifically: if you are a company and you're working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract, but you could use that product in a different piece of your business. And so it's still dramatic. Still, I think Dario said it was unprecedented; it's only been used for foreign companies. Kaspersky Labs was a Russian cybersecurity company that was deemed to be a supply chain threat. Huawei is a supply chain risk because of the 5G towers that could potentially have backdoors. DJI still is not, which is crazy. And I think a lot of people would be very upset if Anthropic got a supply chain risk designation before DJI, based on just what we talked about last week, where DJI was found to have a whole bunch of back doors on robot vacuum cleaners and whatnot. So lots of nuance there, but we'll see where the supply chain risk discussion actually goes. It feels like the pressure's on, and there are probably more negotiations happening as we speak. And so we'll be following the story. Yeah, Emil Michael was going through the timeline.
He said: today at 9:04 p.m., no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of war. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today at 5:02, I called Dario's business partner, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me; that's a guess. At 5:01, I called Arya. No answer. I messaged, asking to talk as well. And anyways, he's just arguing that they're not negotiating in good faith. Yeah. Let me continue. But first, let me tell you about Figma. Ship the best version, not the first one, with Figma, introducing Claude Code to Figma. Explore more options, push ideas further. And let me also tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. So, speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts, there were a lot of anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. He was basically arguing that LLMs, as a class of technology, hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have been much stronger communication for him to say, hey, look, we're Anthropic, we've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that, but we don't make a product that we'd recommend using for autonomous weapons. And it's tricky to try and twist arms here and, because he's in a leadership position, act as the steward. He is an expert in LLM capabilities, but he's not necessarily an expert in DoD capabilities. And so it was odd to hear him painting with a broad brush. He clearly believes, which is fair, it's his belief, that the Department of War should not be using AI broadly, and then he was trying to use his contract as a way to enforce that, because he has that leadership position with the deepest integration into classified systems. So I thought that was just sort of a missed comms opportunity there. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches and seizures shall not be violated. I think people maybe forgot about that. But there is obviously a lot of nuance and a lot of different things. Does collecting public information count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There are a lot of things where surveillance is broadly popular, and other things where it's massively unpopular. And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in the courts.
There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable, but the court gave notice that going forward this should not be used and that the laws need to change. The judge was like, this is technically legal, but it's not in the spirit, and so we need to revisit this as a country. And that's a lot of what's coming out of this: there's a view of Dario as sort of making this last stand, which in the best case actually just kicks it back to the American people. Because the whole debate right now is, is Dario the god-king, corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, government. Right. And the good case is probably that he makes this stink and his deal sort of falls apart, but then America responds and the populace votes for what they think responsible use of artificial intelligence technology, broadly, is. And that would be something that I would certainly stand by as a fan of American democracy. Let me tell you about Okta. Okta helps you assign every AI agent a trusted identity, so you get the power of AI without the risk. Secure every agent with Okta. And let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Let's go back to the timeline. We have Ben Thompson joining us in about 30 minutes. There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece, because I think Dan (irldanb) summed it up pretty well. Do you want to? You can go for it. I'll take a crack at it. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again: what did you expect? Vibes? Essays? This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb, which tells the story of the scientists who built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as a roadmap for what might happen with AI. And I was struggling with it, because I was like, is it a cautionary tale? We haven't had nuclear war in 70 years. The outcome seemed pretty good. Maybe it's controversial to say, but I feel like we built the nuclear bomb, which is probably not the best technology, pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful, knock on wood. It's been successful for my entire life and my parents' lives. The bombs haven't fallen since the '40s. And so this idea of the government having authority over something that is as powerful as nukes... I feel like, why fix it if it ain't broke? I don't know. Do you have any...
I mean, imagine a different scenario, where you have a bunch of private companies that have nukes and there's this constant ongoing... It seems crazy to me, I don't know. Debate? Defend? McNukes? Well, no, no. I think it's kind of this weird contrast, because basically until last week, Dario has been the AI CEO that's been saying, we need government regulation. He's said this again and again. But then it's like, okay, how do you square that with him taking this stand against the DoD? It seems a little odd. It's a contrast somehow, right? Yeah, yeah. It's like, I don't know, there's just a much better way to handle it, which is, you know, put up billboards, fund a PAC, do more stuff to actually make the law happen. Yeah. And the way that I was personally processing it: I saw that the CBS interview had happened. This was Friday night. Right. I went to the Paramount app to try to find the interview, couldn't find it. I went to the RSS, couldn't find it either. It's on YouTube and it has 1.3 million views. Yeah. So it went out over the weekend, and then almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that of, like, we want this technology, it's critical, the government clearly needs it, and now we want the labs leaning into working with the Department of War at this critical moment in time. Continue on this post. Yeah. One last thing on the nuclear weapons thing: it is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say. But basically, the IP is owned by the Department of Energy. The warheads are manufactured at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and those contractors answer to the government directly. So these are companies like Bechtel, BWX Technologies, Honeywell, and Battelle. And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U.S. government. And so they wound up in the...
Was so sound. He said he could make $50 trillion and never get that decade back with the little ones. And so I was like, man, I want to hit 20,000 hours with my kids. And that was basically the moment where I was like, the weekends are no-phone, and I'm just going to be with them in those hours. I'm going to be completely with them and not sucked in, because I just know myself too well. How is no phone? Just Apple Vision Pro, mostly. And I vibe coded an auto-scroll, so I don't scroll, it just scrolls. Yeah. And it has all the TikTok and the Instagram Reels and the X. Moderating the situation virtually. They're with you. Exactly, exactly. I agree. How has your approach to angel investing evolved? Because I feel like angel investing can start out as a fun hobby, a way to blow cash, but then you get caught up in this FOMO, and you're on other people's timelines, kind of fighting for allocation at different points. What's your approach? It used to be, more is better. And it's true, so much of the game of investing is you go wide, because of the asymmetric returns. As one investor, you don't know who the next Gusto is. Exactly. And that was an angel check in Gusto. Yeah, that was my first angel check. Sick. They were in our batch at Y Combinator. I know. It was nuts. That's amazing. Let me tell you about Gusto, the unified platform for payroll, benefits and HR, built to evolve with modern small and medium-sized businesses. Go sign up. Wow. That has never happened before, where I've been able to mention an angel investment and then get a live ad read. Live ad read. What's your website? Mine? Yeah. James Beshara. Yeah. But the company, Magic Mind. Oh, magicmind.com. Is that on Shopify? It is. That's amazing. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. Continue. Shopify is great. And as we get into the angel investing conversation, I bet a lot of ads will pop up, because I've been fortunate enough to invest in a handful of great companies. But I went wide, honestly. Gusto did so well, and then a few others did really well. Mercury did really well. And then I'll tell you my biggest miss. What's that? OpenAI. And I'll tell you the story. It is... can I curse? Can I curse on here? A total, total effing miss. And it was so brutal. Oh, it's all good. There'll be another OpenAI. It's way different than missing just, like, a unicorn that comes up. You can find one at YC. You're good. I did the math the other day. It's basically missing 20 Googles. Google went public at a $20 billion market cap, so it's missing 20 Googles. Now, with their latest funding, it is straight up missing 30 Googles. Yeah, 35, 40 Googles. So. Which is cool. That's all right. It's fine. I never calculated the amount. That's why you need the Magic Mind, to clear your mind. I don't need to scroll. And there are so many things that I miss in life where I'm like... like the no-meetings thing, where I'm like, I know I'm leaving money on the table, but I couldn't imagine any other life. I feel very fortunate. But I'll tell you the story. I've only shared this once, but it definitely replays in my head pretty often. So Sam and I, Sam Altman and I, we were advising and building out YC Research for universal basic income. This was in 2017.
So every month we would meet in a conference room in this dingy little office. And in that dingy little office, the conference room was where they were doing OpenAI research. Yeah. So every month. And I was a full time. We had sold my last company to Airbnb, and I was a full time angel investor. Yeah, full time angel investor. Waking up every day, working, thinking about, yeah, what's the next game-changing company? And I'm like, yeah, Sam and the team, they're trying to do this AI stuff and it's research. And that can be a problem when you're too close to an operation. Whereas if you just get a pitch once, you're like, oh, this makes so much sense. But when you're getting all the information at all times, and they're like, yeah, we don't really know what this thing's going to be. Right. And we ran into this issue. Has that happened? No. Where, you know, it's often. It sounds terrible. I ended up creating a rule of, like, friend starts a company. Like a real friend starts a company. Just invest, no matter if, you know, like, oh, they've got this issue with how they operate or they've got this blind spot or, you know, all the problems that they're facing. If you're too close to it, you can just overthink it, dude. I told Immad at Mercury, because we had built financial technology and he'd graduated, we're at the Airbnb cafeteria, and he told me the idea for Mercury and wanting to start a bank for startups, and I was like, don't do it, dude. And I spent 45 minutes.
Employees. Instant resolution for access requests and password resets. TBPN Simulator is here. Let's do it. TBPN Simulator is here. You've been asking for it. There's a data center simulator. There's an insider trading simulator. There's a capybara simulator where you just do nothing and you just sit in the forest. But now you have TBPN Simulator, and we can play it here on the show. You start out outside of the TBPN Ultradome, and then you control a character who can walk inside of our studio. You see our bathrooms on the left, our couches on the right, our American flag up top. And once you get prepared to go into the actual studio, this is a real recreation of our. Here we go. TBPN is live now, and you can experience the joy of being an in-person guest on TBPN. If you're coming on the show in person, this is a good way to prep. This is a good way to prep. It's a great way to prep. You should put in 10, 20 hours in here for sure. Understand the layout? Yes. It also. I love how accurate it is. Accurate. And it's also accurate to the more recent. We recently changed the desk setup for where people sit, and this reflects the new setup. So thank you to Ben and our team who put this together. It is fantastic and remarkably accurate. Just a few hours, effectively one shot. Incredible. Incredible. I've never. Yeah. I love the details, the tracks on the ground, everything. Just fantastic work. TBPN Simulator will be available everywhere video games are made available, AKA the Internet. What happened at Little Caesars? Little Caesars Arena had a malfunction tonight where their air horn was blaring for over five minutes straight during the.
For years now. They're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb. And it tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as like a roadmap for what might happen with AI. And I was struggling with it because I was like, is it a cautionary tale? Like, we haven't had nuclear war in 70 years. Like, the outcome seemed pretty good. Maybe it's controversial to say, but I feel like we built the nuclear bomb, which, like, probably is not the best technology. Pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful. Knock on wood. But it's been successful in my entire life and my parents' life. Like, the bombs haven't fallen since the 40s. And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke? I don't know. Do you? I mean, a different, a different scenario where you have a bunch of private companies that have nukes and there's this constant ongoing. It seems crazy to me, I don't know. Debate, defend McNukes? Well, no, no. I think it's kind of this, like, weird contrast, because, like, basically until last week, Dario has been the AI CEO that's been like, we need government regulation. He said this again and again, on whatever. But then it's like, okay, how do you square that with him saying we're going to take this stand against the DoD? Like, it seems kind of like it is a little odd. Totally. It's in contrast somehow, right? Yeah, yeah. It's like, I don't know, there's just a much better way to handle it. Which is, which is, you know, put up billboards, I don't know, like, fund a PAC, like, do more stuff to actually make the law happen. Yeah. And the way that I was personally processing it, I was. I saw that the CBS interview had happened. Yeah. This was Friday night. Right. I went to the Paramount app to try to find the interview, couldn't find it. I went to the RSS, couldn't find it either. It's on YouTube and it has 1.3 million views. Yeah. So it went out over the weekend, and then almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that of, like, this technology is critical. The government clearly needs it. And now we want the labs leaning into working with the Department of War at this critical moment in time. Continue on this post. Yeah, one last thing on the nuclear weapons thing, it is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Like, yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say, but basically the IP is owned by the Department of Energy. The warheads are manufactured at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and then they answer to the government directly. So these are companies like Bechtel, BWX Technologies, Honeywell, and Battelle.
And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U.S. government. And so they wound up in this, like, you know, hybrid public-private partnership. And I don't know, it just feels like, maybe I'm, like, left-curving this, but it feels like it's good. It feels like it worked out. It feels like the nuclear weapons thing is the correct formulation. And I don't know that I would be like, yes, Boeing needs nukes. Like, let's give Boeing nukes. That's great. If I have a problem with how nukes are rolled out, I'll buy shares in Boeing and sue them and join the board and try and get the CEO fired if he fires off nukes. Like, that feels weird. Continue, continue with this.
I'm not gonna put the cars on the transport. Like, that's an odd scenario to be in. There's also this question of, like, these. A lot of people were, like, really, really keen on boiling down the terms to, like, these two, like, buzzwordy lines. And Palmer Luckey did a great job explaining, like, how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to. That's where you get into, like, the idea of deals that stick, basically. Like, you can have the same exact contract, line items or terms of a deal, signed agreement, with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC, it was 20% and a board seat. And I had another deal with another VC, 20% and a board seat, and the one VC was, like, suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows that there's some trust, some reliability, that when the hard decisions come, they will be made in a legal, logical, you know, consistent-with-American-values way, is, I think, what you need to put forward if you want to work with the government effectively. So if you've developed, if you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of the.
Different here? Why could OpenAI? Well, here's the thing, though. So we know that Anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war. Right. You can imagine that Anthropic, a private company, does not know that. And so they have this deadline. There's this information. Yeah, this information asymmetry. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the president said this morning, that the war is going to stretch four to five weeks. Right. I think on Friday, we all assumed that it was going to be in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Just a little bit ago, they took issue with it, right? Can we count on them? They start this kind of renegotiation process to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict. Right. And so Anthropic is looking at this in a different way, and clearly, in some ways, it felt like they were kind of, like, stalling, really, really not respecting the process, or even the deadline, right? So Emil Michael came out Friday night and said it was 5:13, 13 minutes past the deadline. I'm trying to get in touch with Anthropic. I try to get on the phone with Dario. Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate whether or not the war is justified, should we go. But the Department of War is sitting there being like, you won't even jump on the phone. You're telling me there's a meeting that you're in that's more important than this? And that just screams to me, like, hey, we can't count on this. We can't count on this provider. Like, we need to take drastic action now. This whole supply chain risk designation, we'll get into that later. That's a whole other thing. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider. We need alternative solutions. Yeah. Yeah. If I'm shipping cars and I'm like, oh, I actually, I disagree with the latest decision. I'm not.
Computer room, which is named the Leonardo DiCaprio Computer Center, features several signed posters of the actor from films he starred in, including Titanic and The Great Gatsby. The tribute-filled space has become a distinctive feature of the library, offering both technology access and a glimpse into the actor's career. And Brooks Otterlake says, I like that this is the entire article and it seems to fully negate the quietly part of the headline. Quietly funding the branch. This is awesome. And I feel like if I'm a kid and I go to the library and I see Leonardo DiCaprio, that's going to inspire me. So I'm a fan of that. But this story went out last week. Do you think the posters are still there? Probably. I'd say just double down. I'd be worried about those posters anyway. Let me tell you about Vanta. Automate compliance and security. Vanta is the leading AI trust management platform. And let me also tell you about Cisco, critical infrastructure for the AI era. Unlock seamless real-time experiences and new value with Cisco. And without further ado, we have Ben Thompson in the Restream waiting room, from Stratechery. Welcome to the show, Ben. How are you doing? I'm good. Hopefully I have the right microphone turned on this time. You do, and it sounds fantastic. Thank you so much for joining on short notice. Thank you for writing Anthropic and Alignment. It is a fantastic piece that I think covers all of my questions. But I want to start with just, how did you process the weekend? How did you get to this particular place? And then, what is your key thesis with Anthropic and Alignment? I mean, this is one of those ones, I don't know if it's good or bad that it came out sort of at the end of the week. So I had a lot of time to think about it. Ultimately I think it was good, because I'm not sure anyone made the point as explicitly as I did. And maybe it was bad because I feel there's a lot of, like, caveats that maybe in retrospect I should have put in the article that would have addressed a lot of the points that people are upset about. Yeah. Basically, zooming out, this was not a normative article where I'm saying what's happening is good or bad. And that's really the one caveat I really wish I would have put on there. I mean, I'm out there being accused by, like, a Nilay Patel of a full-throated endorsement of fascism or something like that. And it's like, relax, okay? Can I get some credit for the last X number of years? Basically, there is a deep-rooted concern that I've had for a long time about, and I'm now hesitant to even use sort of EA as a term because it's kind of now politicized thanks to the events of the last week, but a failure to grapple with a world of guns is basically the long and short of it. And I actually think Eliezer has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday. And that's actually a point worth bringing up, which is, all this stuff is right now in the digital realm. With robotics and potential other applications, and it's obviously being used for military operations, it's crossing over into the physical realm. But if AI is as powerful as people say it's going to be, then there are going to be real-world reactions to that. And if we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through
what would happen in a world where a private company developed nuclear weapons. What would the government's response be? And that's not to say that the government response in that case is good or bad, or that it follows sort of constitutional principles or whatever it might be. Obviously I want them to. On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years. Like, so many things in our society assumed a certain level of friction in doing things that computers already obviated. And AI is going to just do that on steroids. I do think we need new laws. I think all this stuff is correct. And I think the idea of AI being applied to these commercially purchased data sets, for example, is a huge problem that I don't want to happen. The concern I have is that if this technology is as powerful as it is on pace to be, unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue as far as who rules us, the democracy issue that sort of Palmer Luckey, I think, very eloquently raised. It's inviting very bad outcomes for those asserting that in general. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan China thing. This has been a frustration I've had with Anthropic generally. They talk about, you know, Amodei has been very outspoken in terms of opposing selling chips to China, for, in a narrow aspect, very, very good reasons. My pushback has always been, what happens if we get super powerful AI and China doesn't? What are they going to do? Sure, the optimal thing would be to just bomb TSMC out of existence, because suddenly that becomes optimal even with all the cost that that entails. And then what are we going to do? Like, we're entering this. Like, I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable, I can promise you this. And some people are like, well, you should have just made the post private. I'm like, no. I actually, I really want Anthropic and people associated with this to read this, because people have theorized for a while about what's going to happen as AI becomes more powerful, and now it's starting to happen for real. And I guess over the weekend, part of it was just I felt compelled to say this and girding myself to do so. And even then I still wasn't. I haven't waded into this for a while, and it's no fun, but it is what it is. Can you unpack a little bit more of that tweet that you posted, where you did a find on the Dario article for Taiwan and saw that it wasn't mentioned? Oh, I mean, I've just kind of. I've sort of griped about this in general. I think that. So do you just think he should be talking about the Taiwan issue more deliberately? He should be messaging that? Like, why is it significant that he doesn't mention Taiwan? Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument. I could make that argument if I needed to. I have advocated the opposite: that, number one, not only should we be selling chips to China a generation or two behind, which has always been sort of our standard practice with chips, we should also be allowing Chinese companies to fab with TSMC. That is a restriction that has come down now. These Huawei chips are somehow manufactured by TSMC, let's not look too closely at it, but we should explicitly be allowing it. Okay?
And the reason for that is, I think it is a safer equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan while we are dependent on Taiwan. Taiwan is 70 miles off the coast of China. It's not an ideal position in the world for us to have a dependency on it and China to not have a dependency on it. So this is the problem. Everything going forward has massive trade-offs. Yeah. The implication of letting China fab with TSMC, or the implication of letting them buy Nvidia chips, is that they gain these incredibly powerful AI capabilities that are driving this entire debate. That is, in a vacuum, not a good thing. But nothing's in a vacuum. Yeah. Everything is a trade-off. And in that specific area, I think that just repeatedly, again and again, being absolutist about the chip issue, when I am frustrated to not see any public comment about the. That's not quite fair. He has made comments about, oh yeah, that would slow down sort of the adoption of AI in the long run if Taiwan got bombed. And I'm like, in my mind, that's an insufficient consideration of the possibility of Taiwan getting bombed. Now again, I'm biased in that regard. I lived there for nearly two decades. But the reason I brought it up in this context is, if AI is what it is, the people with guns are going to want to have a say, whether that be domestically, whether that be internationally. That might be in the context of the US government just taking it, trying to kill your company because they feel you're not cooperating. Or it might be in the context of China deciding it has to act because the US is becoming too powerful. Because, you know, and it's not a fun debate. It does. I do think the nuclear angle is a good one. It has echoes of the proliferation question, of mutual assured destruction, all those sorts of things. And that's just going to be the reality of the debate going forward. And again, it's not very fun. But I think it's also irresponsible to sort of run away from it. How much attention, or what kind of factor, do you think the information asymmetry between the Department of War and Anthropic played last week? It felt like, in hindsight, the Department of War knows they're headed into a major, what is now looking like a drawn-out, conflict. Anthropic is sitting there thinking, hey, we have this, like, arbitrary deadline. Why do we need to renegotiate this now? And then, going off of Emil Michael's timeline, it sounds like they were still in the final hour trying to make a deal happen. And according to Emil, Dario was in a meeting and was busy and wasn't really respecting the deadline, which maybe he felt was kind of artificial, but in hindsight now looks like it was significant, because the Department of War was, you know, taking the country into a conflict and wanted to know, hey, can we lean on one of our partners? I don't know. I mean, I think that seems pretty arbitrary to have cut. I mean, I'm hesitant to speculate. I don't know what was going on. I don't know the angles, I think. And that's why I didn't sort of delve too deeply into it. And I also think some of the specifics, like this supply chain risk, are probably overbroad. Yeah. And almost certainly the way it was stated in the tweet is definitely overbroad, if you actually go and read the statute. I think the goal that I was. And again, this is where I wish I had sort of put more caveats to say, look, I'm not actually talking about all that stuff. I don't really care.
I do care, but that's not the point of this article. The point of this article is, there's all this talk about alignment. That's why I put that in the headline. And on one hand, alignment is aligning AI with humanity generally, but for the foreseeable future (and you could have a philosophical argument about the long-term viability of nation states in the age of the Internet, much less the age of AI, and that certainly is a more pressing conversation than probably ever before), Anthropic exists in the context of the United States. And that's why I put that quote: you may not be interested in politics, but politics has an interest in you. What is politics? War by other means. You might not be interested in that. It is going to have an interest in you. And there is, like I said, a certain long-standing frustration of not fully grappling with that fact, having dorm room theoretical arguments about AGI. You go back to that post over Christmas about, like, AGI in 100 years and no one having any jobs or being worthless or pointless or whatever, which included some implicit assumptions around property rights existing in 150 years as they exist today. News flash: if that happens, property rights as they exist today are going away. All these rights, this is a philosophical argument, that's why I start with the international law concept, all these rights, all these laws are subject to the agreement of those governed by them to follow them. And the final say is those who successfully inflict violence. And again, this isn't fun to think about. It's not pleasant. You would like to assume we operate in a world of laws, that everyone follows them and goes by them. But to the extent AI is as impactful and powerful as it is said to be, the more these questions, fundamental questions that we thought had been settled for hundreds of years, if not thousands of years, are going to be raised. And this is just the first of several episodes where I think that's going to happen. I grew up sort of post-Cold War, no duck-and-cover, didn't have a lot of fear of nuclear Armageddon. But Dario Amodei is a fan of this book, The Making of the Atomic Bomb. And it seemed like he sort of predicted that if AI becomes super powerful, the US might take a similar approach to the one they took with the regulation of nuclear weapons. And as I was thinking about that, I feel sort of good about the way nuclear weapons are regulated. I feel like we got the good ending, and we haven't had nuclear weapons drop in 70 years. And it seems like things are going as well as they can there, considering that there's this amazing, tremendous, dangerous technology that exists, but it hasn't been deployed, it hasn't actually bombed anyone. But how do you think he's processing that book? How do you think we should be processing that idea of the government running the same playbook that they did with nuclear weapons? It's pretty interesting. I mean, on one hand, just from sort of a physical perspective, dealing with weights and software is very different than dealing with fissionable material. Or I guess the super bombs are actually fusion devices, right? And that is trackable, it is interceptible. You know when Iran, to take a pertinent example, is trying to build enrichment facilities. All of which makes the problem easier to solve. Yeah. So that's difference number one. Difference number two, and I really wish I had included this, I cut it so the article would be tighter.
But there is a very interesting point in technological history, which was the early days of Intel. Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government. And the distinction there was, you had guaranteed orders, which was great, but the government would take your IP, and, in his mind the more important thing, there was limited volume. And he foresaw correctly that this was going to be a very upfront, capital-intensive process of designing chips. You have to design them, you have to have the equipment, all of which is in the billions of dollars today and back then was in the tens of millions and hundreds of millions. So you need to find the largest possible market, which was the consumer slash business market. You design for that. That will accelerate your improvement and your capabilities so much that you will end up having better devices than the government could have ever requested or made for itself. Yeah. That is at stake on steroids with AI. Like, I was talking to someone who was like, why doesn't the government just get someone to make their own model? It's like, because government contracts, we're talking, like, single-digit billions, versus the amount that's going into capex and the cost of these models. We're talking hundreds of millions of dollars for the models and hundreds of billions of dollars, approaching a trillion dollars a year, in capex. That is only sustainable and viable if you're selling to everyone. But that introduces entirely new dynamics. The government built nuclear. It started there, and it started with a lot of assumptions, because it was a government program. We are necessarily, for economic reasons, because of all the upfront costs entailed, starting with private companies, of which the government is one of many customers. And that introduces the assumption that, well, it's a private company with private property rights and all those sorts of things. All of which I want to be true. Again, I don't like how this is going down at all. The point here is to say there's a good reason why it's not going down that way, and there needs to be cognizance that even though this is a private company that is building the model, general purpose, and for very good reasons wants to put restrictions on it (again, I think the surveillance one is a very powerful argument that I agree with), the problem is that you just need to be aware that, yes, the government is a small customer, but the government is also the entity, again, with guns. Like, why do I pay taxes? Because the law says to pay taxes? No, at the end of the day, I pay taxes because, if you really want to distill it down, if I don't, someone with guns will come to my house and throw me in jail. Right. Like, we don't think about that. But at the end of the day, where do these assumptions and laws and rights flow from? And as long as that is still the case, it needs to be a decision-making factor for these companies. How do you think this plays out for Anthropic? It's such a small contract, but it's so important in the zeitgeist. There's a lot of people that are rallying around Anthropic because of this. There's a lot of people that are pulling away from Anthropic because of this.
It feels like there is a business to be built that doesn't work with the government but delivers coding models and knowledge retrieval systems and a whole bunch of really valuable products and technology, and it winds up being fine. But at the same time, you don't want this, like, hairy, adversarial relationship with the government to go on for a long time. I would like them to sell to the government, and I would like Congress to pass a law addressing these digital surveillance issues. Yeah. And a lot of people are like, that's unrealistic, which I'm amenable to. But at the end of the day, if you don't have is-it-legal-or-not-legal as your guiding standard, the only alternative is someone has to decide. And the implication of that not being a sufficient justification is that a private executive is deciding. Yeah. And if AI is what it is, I think that's going to be, I use this word, intolerable. I didn't mean intolerable to me. I meant intolerable to those with power, to have a private executive making those decisions or not. And if we're going to have this very sort of brute analysis, that laws flow from power, AI is a source of power. Yeah. So it's not just that. And I think this is where the supply chain designation, again, which I'm not endorsing, but I think that's where the motivation is coming from. The goal isn't, fine, we just won't use Anthropic. I do think the goal is to hurt Anthropic. Yeah. If you're not going to be subservient to us, you're not going to be allowed to build a power base, period. And again, I'm not endorsing all this. It's just a matter of, it's not a surprise this is happening. And this needs to be a real risk factor that has to be considered in all these decisions. Putting on my Dario hat, I'm thinking about a different way to achieve the goals with maybe less acrimony. And I threw out this idea that maybe the better solution is, like, work with the government, but then lobby for a surveillance act and actually try. I wish the White House would come out and say, yeah, there's a digital surveillance problem, let's work on a bill. Probably another regret I have is sort of putting this all on Anthropic. That was sort of the angle I was concerned about. That left me, I think, fairly open to the critique that this is just, like, defending the White House's approach. And again, I was trying to be at a higher level, saying, look, this is what's going to happen. But yeah. I'm just thinking, to find a middle ground here, from the perspective of: if the White House is, like, this immutable thing, but you are, you know, involved in Anthropic, one piece of advice would be, hey, okay, instead of going and having this confrontation with the government directly, go and start a political action committee that lobbies for change in the way that you want through the democratic process. Yes, that is the ideal process. I understand why people are frustrated and skeptical about this. I used to have this debate a lot in the context of antitrust and aggregators. And one of my sort of theses about the aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators, because antitrust law has historically been about control of supply, and the power of aggregators flows from control of demand. And so you end up with all these solutions that I call pushing on a string.
You're just trying to get people to change how they behave. And that doesn't work very well. Like, Google has always been right: competition has always been just a click away. The problem is people aren't clicking. And so the solutions focused on the supply angle don't work in a world where the supply is there, just no one's choosing it. Yeah. And therefore my prescription is you actually need to pass new laws, not try to retrofit these old laws to this new use case where they don't work. And the reaction is always, that's impossible, we can't pass new laws. And okay, but realize the implications of what you're saying. I mean, I saw a tweet, and again, I didn't like it, so I lost it forever, one of the most infuriating things in the world. But someone was like, I would definitely rather have Dario Amodei make these decisions than. And to this tweeter's credit, he wasn't limiting it to Trump, because to me, this isn't a Trump issue. This is an any-politician issue. Yeah. He said, I would rather have Amodei making these decisions than whoever comes out of our screwed up democratic process. Yeah. And points for the honesty, because that's the actual choice that is being put forward. And you could say Congress isn't going to do anything, therefore Amodei should just do it. But appreciate that that is giving up on the democratic process and saying we should have unelected, unaccountable individuals making weighty decisions. And again, I understand the sentiment. It's hard to imagine Congress passing laws about anything. But just realize that implication is quite fraught. Yeah, it's a huge change from. I mean, I just spawn in and believe in democracy, and then understand it and study economics, and just reinforced my belief in the American project throughout my entire career. And now it really is people discussing an entirely different world of governance, which has not been something people have talked about publicly for a very long time, but it is here for sure. Right. And they always come in on these Trojan horses that are eminently defensible. Again, I agree with Anthropic on the digital surveillance point. I've been concerned about it for years, been writing about it for ages. And it's similar, there's an analogy to the monopoly point. Like, you have all these laws that assume someone has to actually physically go somewhere and tap into a phone line. But if you can do it with computers at scale, suddenly you had all these assumptions that limited what the government could do that magically disappear, not because the law changed, but because we got computers that can do the job of an individual at scale, infinitely. And AI, again, is going to do that on steroids. The NSA thing, by the way, this is something I had to admit in the article. Yeah. I was so confused why the Pentagon was so obsessed with domestic surveillance. Yeah. I didn't realize the NSA was part of the Pentagon. John and I had the same moment. Yeah, yeah, yeah. You just sort of thought about it as, like, an independent agency, like the CIA. But that made a lot of this story make more sense, right? Exactly. Yeah. I feel like a lot of tech people are, like, reading the Fourth Amendment today and understanding, like, some of these pretty basic processes. Well, yeah, but, like, the loopholes are massive. Like, I'm not denying it. And it's similar to the chip thing with China.
Like, my prescription, for Anthropic to give in, is to allow these massive loopholes to be exploited and for the NSA, allegedly in the service of investigating foreign adversaries, to in the process basically surveil the domestic population, which I think is bad. And the reality is, the nature of trade-offs is you're choosing between multiple bad options. And at some point it's like, which team are you signing up for? They both suck. What do you think of the messaging around, like, the models themselves not being capable enough to be used in the context that the Department of War asked for? Because I felt like Dario was sort of speaking for all frontier labs. He said that these technologies broadly are not suitable for these missions just yet. I'm not sure that he has all of the information on the other side to know about the efficacy. He certainly understands his models and what's capable at the frontier. I mean, I think that. Yeah, I would assume they're definitely not capable. I think that point is more of a precedent-setting one. I think Anthropic's position is significantly weaker on that point. Like, at the end of the day, we either trust the military or not to make these sorts of decisions. That's why we have a military. Yeah. And so I have a harder time. And I think the digital surveillance point is so compelling for them because, and it may be my personal biases. Totally. I think it's a huge problem. Yeah. There are these various anecdotes. Again, I hate the reporting on these, because you can tell which side the leaks are coming from for each of these. But, you know, this idea of putting forward these hypothetical examples of, like, oh, you could call us and we'll figure it out. Then it's like, no, come on. Be serious about this. So, yeah, I think that's a weak argument for them. So that's why I almost focus more on the digital surveillance one, just because I think it is a very compelling argument in favor of the Anthropic position. Jordi, anything else? Oh, there's a lot more. What are you going to be tracking going forward? Obviously the story. Yeah. Good luck. Stay strong. No, I mean, the OpenAI angle is obviously interesting. I didn't really get into OpenAI. It's hard to parse exactly what's going on. It seems to me they have agreed with the Pentagon that the Pentagon will be limited to lawful capabilities and will make its own judgments about weapon usage. And as I understand it, OpenAI is like, we will, on our side, be free to stop the model from doing digital surveillance. Which sounds like you're in sort of a jailbreak competition. It's like, we're going to agree to have a jailbreak competition with the U.S. government, which, again, is an example of how fraught this is, that that's probably the good place to come down on now. There's obviously these dynamics of competing for the same talent base, being in San Francisco. This is part of it: Anthropic has a local advantage in that most people in the industry, I think, are with them, and they have a national PR problem in that I think a lot of folks outside of tech don't understand why tech companies resist helping the US government. And so it's kind of an interesting dynamic, where I think OpenAI is in step with the broader public and very much out of step with sort of their talent base in San Francisco. And so that's gonna be very interesting to see how that plays out.
Yeah, it's remarkable that Google has stayed out of the fray given all the Project Maven background and stuff. Like, they must be so happy that they're just, like. Well, that's the other interesting thing, is this actually goes back to Google, I believe, where Google had Project Maven. I think this is right. But I think Google had Project Maven, which their employees objected to, and therefore that went to AWS, and then, some combination of, I think the Pentagon is using Anthropic because that's what AWS uses. With the higher FedRAMP designation? That's right. So that's why Anthropic was already allowed for classified content and OpenAI wasn't. Again, I don't know. I've studied Maven, but it's pretty. It's a wild story. I mean, it was similar, like AI for the military, the same, like, killer robot fears. The actual. I mean, Google was a subcontractor on that project. And what they were actually exposing to the government was TensorFlow APIs that would run on Google hardware. And so they weren't actually writing any AI software, but they wanted to, effectively, like, classify images from drones in the Middle East. See, that's a car, that's a house. And previously they had Air Force airmen just sitting there, like, clicking, and they were like, okay, we're going to automate that. Right. But it was still, like, scary. Don't be evil. Working with the government, the military. And then there was a backlash. They pulled out. Then eventually they went back in and had a new head of Google Cloud. Yeah, I mean, this is, you know, it's hard to. I speak for myself personally. I obviously have the biased angle because of Taiwan. I have a biased angle where I think, you know, just in general, there is this very naive view of the world that doesn't understand why militaries are important and necessary. And I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset that we have no duty to support the military. And this tension has been. It's a tension that's been brewing for years. Yeah. Which is, are you an American company, subject to American law and, even beyond law, just morally compelled to support the US military, or not? And there's an equally American sort of idea of moral conscience: I'm able to say no. That's why we have the First Amendment, right? This goes into the, can the government compel a company to do something? It goes back to some of the questions that happened, you know, with the first Trump administration. And, you know, I've been on both sides of this. And this is why I'm not going to sit here and say. In the CBS interview, he said, we are a private company. We can choose to sell or not sell whatever we want. There are other providers. He's already sort of, like, making this case. Yeah. Which, again, is a case that I support. Yeah. But the point here is, there's always the question with, like, a bubble or whatever: is it different this time? Sure. And I guess that's sort of the question I'm raising. Yep. Is AI actually analogous to every other technology that's come along? Or, if it has the potential to be a source of power going forward, it's going to be dealt with as such. Yeah, that would make sense. Last question, we'll let you go. How happy should Ted Sarandos be right now? I mean, I think he had the killer quote the last couple of days, where I think someone was asking, if this is such a jewel and it's so rare, like, isn't it a problem that you're missing out on it?
And he's like, well, have you seen the history of Time Warner? I think that sounds about right. I'm not sure about the entity, with all the debt that Paramount Warner Brothers is taking on. I think there's a bit where Netflix has always, in the very long run, been positioned, I think, to be the final buyer. Like, who else are content companies going to sell to? Yeah. I feel like they sort of. I feel like they've been spooked by YouTube a little bit and they felt a need to push forward on that, bring the future forward. That was not allowed to happen. But that means their original plan, I think, is still in place. So probably pretty happy, all things considered. I'm going to say it's great. Well, I'm excited to get back to Netflix coverage and more anodyne topics. Yeah. Remember, it was on Cheeky Pint, you were talking about getting sucked into the social, and here we are. So I put that quote at the beginning of my article: you may not be interested in politics, but politics has an interest in you. That was about Anthropic, and it was also about me. What did you do? Welcome. Welcome to 2026. Well, we thank you for taking the time to come chat with us. Great to see you, and fantastic article. We appreciate you, Ben. Talk to you soon. Thank you. Have a great day. Let me tell you about Phantom Cash. Fund your wallet without exchanges or middlemen and spend with the Phantom Card. And let me also tell you about CrowdStrike. Your business's AI. Their business is securing it. CrowdStrike secures AI and stops breaches. And our next guest is here live in the TBPN Ultradome. We have James Beshara from Magic Mind coming on down for a very refreshing, very different pace of interview, hopefully. Great to meet you, John. Great to meet you. We actually met, I believe we met briefly in 2013.
You're watching TBPN. Today is Monday, March 2, 2026. We are live from the TBPN Ultradome, the temple of technology, the fortress of finance, the capital of capital. Let me tell you about Ramp.com. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. It was a massive weekend. So much news. We are very fortunate to be joined by Ben Thompson at noon. Let's pull up the Linear lineup and show you the run of show today. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We've got Ben Thompson, James Beshara, John Quinn's coming back in person again. We're very excited to be joined by him. We'll be talking about tariffs, a monster lightning round with five different guests joining. We got some acquisition news, we got some funding news, we got some takes on tech and AI and media. We're going all over the place. It's gonna be a fun, fun show. But we missed you. We missed you on Friday. We were traveling. We went to Montana. Terrible day to be out. Terrible day to be out, because every single time we've had an off day, it ended up being a massive news day. So, lesson. Yeah, never take a day off. Yes, never take a day off. Truly, what an absolutely crazy weekend. Of course there's the war with Iran. The big news in tech was the US halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. Quote: I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again, Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six-month phase-out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time. But I guess a lot of this has to do with, like, FedRAMP and actually getting. But this is a lot more than, you know, switching to a new model to run deep research reports. You're involving classified systems. Sure. The context that people didn't have last week was that the United States was headed to war. Right. And so even having that context, I feel like, is pretty important. Right. It sort of explains the 5 p.m. deadline urgency. Anthropic had taken issue with how their products were used in the Maduro raid. There's a new conflict that's unfolding, and so that makes the aggressive timeline make a lot more sense. It also makes the six-month phase-out make more sense, because national security is on the line. This morning, Scott Bessent said, at the direction of the President, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. Yeah. The American people deserve confidence that every tool in the government serves the public interest. And under President Trump, no private company will ever dictate the terms of our national security. Yeah. The U.S.
federal housing entities, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products, which was announced this morning. Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of Anthropic technology. So you would expect to see these statements come out from sort of every different federal agency as they get their transition plans together and figure out, you know, what are the requirements for their particular agency. Because I imagine some agencies aren't operating in classified environments. It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. For some of them, it's going to be a longer plan, but they're all getting on board. And there's been a big debate over how Dario has handled this. Where is he in the right? Where is he in the wrong? Where has the government potentially overstepped? Have they been too aggressive, or are they doing everything appropriately? Everyone is weighing in, and we're going to take you on a whirlwind tour of everyone's opinion, share some extra context, try and dig into what's actually at stake, what's actually going on. In many ways, Ben Thompson does a great job sort of painting the broadest picture around, like, what if this is really nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is, you know, you're talking about a $200 million contract for a company that does 10 billion in ARR. This is 2% of revenue, and in many ways it's a bump in the road. And so I think a lot of people will be squaring, how serious is this for Anthropic? What does this mean for the other foundation model companies? What does this mean for the future of the relationship between tech and Washington, D.C.? But there's a lot more context. So the way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just, how should a private company interface with the government? Like, I am an American, I've run businesses, I've never actually sold anything to the government, but hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TBPN or Lucy products or any other consumer packaged goods product that I've made. And my assumption is that private companies should have very little say in how the government uses those products. And I was trying to zoom out and think about, like, AI is so complicated because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There's a lot of different things that AI means. And in some scenarios it's, like, super critical, really complex. And in other ways it's just a product, it's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so, yeah, I was thinking, like, if I was the CEO of Ford, and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer. I probably shouldn't say, no, no, no, I don't approve of this particular government, what the government's doing.
So I'm just not going to sell you any Mustangs to drive around on the military bases, because I don't like the military. But then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put bulletproof glass and armor on there, that seems like a different discussion. That seems like I might need to set up a different manufacturing line, a different assembly line. The car's going to be heavier, and if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want the armor. It's going to hurt my business. Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government who's asking for that particular contract. And there's actually a history of this. The Humvee, of course; the Hummer is owned by General Motors, and that brand has separated, and now most military vehicles are made by defense contractors. But there is some bleed-over, and there are times when private companies do dual sourcing or dual-use technologies. But all of that is just a discussion, and that cost should be part of a new contract, effectively, in that case. And this was loosely what was happening. But yeah, Dario, in the CBS interview, quote, we are a private company. We can choose to sell or not sell whatever we want. There are other providers. Yes. Which feels like, yes, I'm dipping out of it. Now, it is weird, because at the same time, and we'll get to the actual CBS interview, he said Anthropic has been one of the most proactive AI companies in working with the US government. We were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone into the AI community broadly, like what happens at the edge. And so this was sort of predictable, that you would get to this question. This was the moment he had been waiting for, in many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go into the lion's den, because I don't want to be in that scenario. Instead, it was like, we're leaning in with the government. We're deploying classified clouds, training custom models, but we still want authority over the final sticking point: how these models are deployed, what they're used for. And that feels a little odd. In the Ford example, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that, it's not armored, you shouldn't do that. But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation. But it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty. Right. Based on what you know about the capabilities of the model.
And so I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship. Most salespeople would just be like, yeah, everything's great, you can use it for anything. They over-promise and then under-deliver. He's doing the opposite. But it's certainly responsible if that's his true belief, if he believes that these models are not good for a particular use case. Telling your customer, hey, it's just not ready for that, you're just going to have a bad time, it's not going to work, that's a fine thing to communicate as the CEO of a company who's selling a product. But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly. So he's saying right now it's not good for X, Y, or Z. Well, what about in two months? It might be better. And then I think the government should be able to determine when and where they're effective. Now, they can't break the law. And Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. But it's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on: no mass domestic surveillance and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract and say, hey, we don't think our technology is ready for that either, let's do a deal that says that. And people are like, oh, what's different here? Why could OpenAI... Well, here's the thing, though. We know that Anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war. Right. You can imagine that Anthropic, a private company, does not know that. And so there's this information asymmetry. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the president said this morning, that the war is going to stretch four to five weeks. I think on Friday, we all assumed that it was going to be, you know, in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Just a little bit ago, they took issue with it. Right. Can we count on them? They start this kind of renegotiation process to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict. And so Anthropic is looking at this in a different way, and in some ways it felt like they were kind of stirring, really not respecting the process. Or even the deadline, right. So Emil Michael came out Friday night and said it was 5:13, 13 minutes past the deadline. I'm trying to get in touch with Anthropic.
I try to get on the phone with Dario. Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate whether or not the war is justified, should we go? But the Department of War is sitting there being like, you won't even jump on the phone. You're telling me there's a meeting that you're in that's more important than this. And that just screams to me, hey, we can't count on this provider. We need to take drastic action now. This whole supply chain risk designation, we'll get into that later, that's a whole other thing. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider, we need alternative solutions. Yeah, yeah. If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not going to put the cars on the transport. That's an odd scenario to be in. There's also this question of the terms. A lot of people were really keen on boiling down the terms to these two buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into the idea of deals that stick. Basically, you can have the same exact contract line item or terms of a signed agreement with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC, it was 20% and a board seat. And I had another deal with another VC, 20% and a board seat. And the one VC was suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows there's some trust, some reliability, that when the hard decisions come they will be made in a legal, logical, consistent-with-American-values way is, I think, what you need to put forward if you want to work with the government effectively. If you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions like, who is Nicolas Maduro? But I don't know how much of a joke that is. And I also don't know how bad of a thing that is. I actually think. Yeah. Tyler, context on that, on the context of Venezuela specifically: what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid. Yeah. A Palantir senior executive notified the Pentagon. Yeah. So I think it is kind of blowing it out of proportion to say that Anthropic is against using Claude in Venezuela. Right. It's an employee, it's not an executive. There was an article about that too. Maybe it's like Dario telling an employee to go check on that. But we don't know. It could be a random employee. Yep, yep. So I think it's probably unfair to say that Anthropic as a whole is like, we are firmly against Claude being used.
What happened during the Maduro raid, we don't even know. And of course it's classified. So I don't know if we will ever know, because, should we know? I don't know. If it's an important capability, you don't necessarily want that to be public knowledge that the adversary is then instantly aware of. And so I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. What is reasonable is to expect civil servants, elected officials, military officials to be knowledgeable about the countries that they are operating in. And LLMs can help with that. And so I feel like that's just a good thing. If you just zoom out and ask, do we want a more knowledgeable and educated government workforce across everything that they do? It seems like absolutely, yes. And so I just think that's something that is maybe lost as people go into more of the sci-fi, more of the frontier stuff, where there isn't a lot of evidence that's happening yet. And on the supply chain risk, Ben Thompson, who's coming on at noon, makes a really strong argument for why government pressure like this is actually reasonable in this situation. He takes it a lot further, plays it out, and lays out a scenario that seems somewhat inevitable. But what I'm still wrestling with is just how real the supply chain risk designation is. Many reports are treating the supply chain risk label as an established fact. Yeah. Which, all it is right now is a tweet from Hegseth. Dario went on CBS and said that he has not received a letter, that there's no definitive ruling yet. Kalshi has the odds that this actually happens at 42%, and by April 1st, so a full month for the DOD to actually roll this out. And then there's other nuance in what the law says. There was a perception that this was going to kill Anthropic, because if Nvidia has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true. Apparently the supply chain risk designation is specifically this: if you are a company and you're working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract. But you could use that product in a different piece of your business. And so it's still dramatic. Still, I think Dario said it was unprecedented. It's only been used for foreign companies. Kaspersky Labs was a Russian cybersecurity company that was deemed to be a supply chain threat. Huawei is a supply chain risk because of the 5G towers that could potentially have backdoors somehow. DJI still is not. It's crazy that DJI isn't. And I think a lot of people would be very upset if Anthropic got a supply chain risk designation before DJI, based on just what we talked about last week, where DJI was found to have a whole bunch of backdoors on robot vacuum cleaners and whatnot. So lots of nuance there. But we'll see where the supply chain risk discussion actually goes. It feels like the pressure's on and there's probably more negotiations happening as we speak. And so we'll be following the story.
Yeah, Emil Michael was going through the timeline. He said: today at 9:04pm, no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the Undersecretary of War. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today at 5:02, I called Dario's business partner, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me. That's a guess. I called Dario at 5:01. No answer. I messaged, asking to talk as well. And anyways, he's just arguing that they're not negotiating in good faith. Yeah. Let me continue. But first, let me tell you about Figma. Ship the best version, not the first one, with Figma, introducing Claude Code to Figma. Explore more options, push ideas further. And let me also tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. So, speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts, there were a lot of anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. He was basically arguing that LLMs, as a class of technology, hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have just been better, much stronger communication, for him to say, hey, look, we're Anthropic. We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that, but we don't make a product that we'd recommend using for autonomous weapons. And it's tricky to try and twist arms here and, because he's in a leadership position, act as the steward of it all. He is an expert in LLM capabilities, but he's not necessarily an expert in, you know, DoD capabilities. And so it was odd to hear him painting with a broad brush. He clearly believes, which is fair, it's his belief, but he clearly believes that the Department of War should not be using AI broadly. And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems. So I thought that was just sort of a missed comms opportunity there. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches and seizures shall not be violated. I think people maybe forgot about that. But there is obviously a lot of nuance and different things. Does public information count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There are a lot of places where surveillance is broadly popular, and others where it's massively unpopular. And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in court.
There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable, but the court gave notice that going forward this should not be used and that the laws need to change. The judge was basically saying, this is technically legal, but it's not in the spirit, and so we need to revisit this as a country. And that's a lot of what's coming out of this: there's a view of Dario as sort of making this last stand, which in the best case actually just kicks it back to the American people. Because the whole debate right now is, is Dario the god-king corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, government. Right. And the good case is probably that, you know, he makes this stink and his deal sort of falls apart, but then America responds and the populace votes for what they think responsible use of artificial intelligence technology broadly is. And that would be something that I would certainly stand by as a fan of American democracy. Let me tell you about Okta. Okta helps you assign every AI agent a trusted identity, so you get the power of AI without the risk. Secure any agent with Okta. And let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Let's go back to the timeline. We have Ben Thompson joining us in about 30 minutes. There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece, because I think danirldanb summed it up pretty well. Do you wanna... You can go for it. I'll take a crack at it. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again, what did you expect? Vibes, essays. This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb. It tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. He apparently gives this book out to Anthropic employees and has sort of seen it as a roadmap for what might happen with AI. And I was struggling with it, because I was like, is it a cautionary tale? We haven't had nuclear war in 70 years. The outcome seemed pretty good. Maybe it's controversial to say, but I feel like we built the nuclear bomb, which is probably not the best technology, pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful, knock on wood. It's been successful in my entire life and my parents' lives. The bombs haven't fallen since the 40s. And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke? I don't know. Do you have a point on this?
I mean, a different scenario where you have a bunch of private companies that have nukes and there's this constant ongoing debate... It seems crazy to me, I don't know. Defend McNukes? Well, no, no. I think it's kind of this weird contrast, because basically until last week, Dario has been the AI CEO that's been saying, we need government regulation. He said this again and again, wherever. But then it's like, okay, how do you square that with him saying, we're going to take this stand against the DoD? It seems kind of like it is a little odd. Totally. It's in contrast somehow, right? Yeah. Yeah. It's like, I don't know, there's just a much better way to handle it. Which is, you know, put up billboards, I don't know, fund a PAC, do more stuff to actually make the law happen. Yeah. And the way that I was personally processing it: I saw that the CBS interview had happened. Yeah. This was Friday night. Right. I went to the Paramount app to try to find the interview. Couldn't find it. I went to the RSS. Couldn't find it either. It's on YouTube and it has 1.3 million views. Yeah. So it went out over the weekend, and then almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that as: this technology is critical, the government clearly needs it, and now we want the labs leaning into working with the Department of War at this critical moment in time. Continue on this post. Yeah, one last thing on the nuclear weapons thing. It is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say. But basically the IP is owned by the Department of Energy. The warheads are manufactured at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and then they answer to the government directly. So these are companies like Bechtel, BWX Technologies, Honeywell and Battelle. And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U.S. government. And so they wound up in this, you know, hybrid public-private partnership. And I don't know, maybe it's left-curving this, but it feels like it's good. It feels like it worked out. It feels like the nuclear weapons thing is the correct formulation. And I don't know that I would be like, yes, Boeing needs nukes, let's give Boeing nukes, that's great, and if I have a problem with how nukes are rolled out, I'll buy shares in Boeing and sue them and join the board and try and get the CEO fired if he fires off nukes. That feels weird. Continue. Continue with this. Okay, yeah, we'll close. Even now, I hear many of you say something akin to, if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves. This is a declaration of war. Given this, of course Hegseth is taking the action he is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power by your own belief system.
While I may share your preference in the abstract, I disdain your faux surprise that this is the resulting trajectory. And if the surprise is genuine, I ask you to dig deeper and reconsider the actual consequences of your worldview about what it means for a private company to build ASI. Heading over to Palmer. He says this gets to the core of the issue more than any debate about specific terms. Emil is sharing: prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort. Palmer says this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer. Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified and otherwise, does a corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of these determinations vary if the current corporate executive happens to like the dictator or dislike the president? At what level of confidence does the cutoff trigger, both in writing and in reality? The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, but they will have cutouts to operate autonomous systems for defensive use. But you immediately get to the same issue and more: what is autonomous? What is defensive? What about defending an asset during an offensive action or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisors. I still believe, and that is why "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it is so simple, please bro" is an untenable position that the United States cannot possibly accept.
And Emil Michael had said that Anthropic wanted to block searching over public databases as well. Like, you might want to search over LinkedIn to look at recruiting. Right. So these sort of blanket bans are going to make the product, like, functionally... Yeah, it's not really a blanket ban. It's more that the discretion lives with the private company, and so you always have that ability to change the terms of use, which is. It's just tricky. It's just tricky. Well, at least some people are having fun with it. Roman helmet guy says: hi, I'm a private citizen who developed a super weapon potentially a thousand times more powerful than nukes, and now I'm selling it to the government, but I get to choose who they fire it at and how. Everyone, please respect my decision. People are all over the place with this. Well, there was also David Sacks, who had shared a clip alongside Beth. We can pull up Marc Andreessen talking about his experience with the Biden administration. People are going really, really hard. Can we pull this up? Iran, AWS data centers, lots of stuff going on. I just dropped you guys a link. Keith Rabois said, imagine Apple sold computers or iPads to the DoD and tried to tell the Pentagon what missions could be planned on their computers. A lot of people are upset about this. Meetings in D.C. in May where we talked to them about this, and the meetings were absolutely horrifying. And we came out basically deciding we had to endorse Trump. Add a little color to absolutely horrifying. What did you hear in those meetings? They said, look, AI is a technology basically that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us, don't do AI startups, don't fund AI startups. It's not something that we're going to allow to happen. They're not going to be allowed to exist. There's no point. They basically said, AI is going to be a game of two or three big companies working closely with the government, and, I'm paraphrasing, but we're going to basically wrap them in a government cocoon. We're going to protect them from competition, we're going to control them, and we're going to dictate what they do. And then I said, well, I don't understand how you're going to lock this down so much, because the math for AI is out there and it's being taught everywhere. And they literally said, well, during the Cold War, we classified entire areas of physics and took them out of the research community, and entire branches of physics basically went dark and didn't proceed. And if we decide we need to, we're going to do the same thing to the math underneath AI. Wow. And I said, I've just learned two very important things, because I wasn't aware of the former and I wasn't aware that you were even conceiving of doing it to the latter. And so they basically just said, yeah, look, we're going to take total control of the entire thing, and just don't. And what was their... And Marc, steel man it for the listener. Like, what was their argument? Why? Well, so this gets into all these debates around AI safety, AI policy. There are sort of several dimensions to it, and I'll do my best to steel man it.
So one is, to the extent that this stuff is relevant to the military, which it is: if you draw an analogy to AI and autonomous weapons being the new thing that's going to determine who wins and loses wars, then you draw an analogy to the Cold War, where that was nuclear energy, that was nuclear power, that was the atomic bomb. And you know, the steel man would be, the federal government didn't let startups go out and build atomic bombs. Right? You had, you know, the Manhattan Project and everything was classified. And at least according to them, they classified down to the level of actual mathematics, and they tightly controlled everything. And look, that determined a lot of the shape of the world, right? So there's that. That's part one. And then look, I think part two is the social control aspect to it, which is where the censorship stuff comes right back, which is the exact same dynamic we've had with social media censorship, and how it's basically been weaponized, and how the government became entwined with social media censorship, which is one of the real scandals of the last decade. And a real problem, like a real constitutional problem, and that is happening at hyperspeed in AI. And you know, these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And I think they want to use AI the same way. And then look, I think the third is, this generation of Democrats, the ones in the White House under Biden, they became very anti-capitalist and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But quite frankly, I think the idea that the private sector plays an important role is not high up on their priority list. They think generally companies are bad and capitalism is bad and entrepreneurs are bad. And they've said that a thousand different ways. And, you know, they demonize entrepreneurs as much as they can. It's interesting, Canadian publication the Globe and Mail came out yesterday and says Canada needs nationalized public AI. And Tobi, the greatest Canadian entrepreneur in history, says deranged drivel in response. But yeah, Elon also piled on to Sacks's take, which centered around a lot of those staffers allegedly going over to Anthropic. It's interesting, we were talking about these alliances that happen. There's the anti-Netflix alliance, the anti-YouTube alliance. There's a little bit of an odd alliance happening against Anthropic right now. Let's move on over to Netflix and Paramount, because there's news in the bidding war. First, I'll tell you about Graphite, code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And I will also tell you about Railway. Railway is the all-in-one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases and more. Railway automatically takes care of scaling, monitoring and security. We will come back to this story with none other than Ben Thompson.
This is a declaration of war. Given this, of course Hegseth is taking the action. He is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power by your own belief system. While I share your preference in the abstract, I disdain your foe surprise that this is the resulting trajectory. And if the surprise is genuine, I ask you to dig deeper and reconsider the actual consequences of your worldview about what it means for a private company to build asi. Heading over to Palmer, he says this gets to the core of the issue more than any debate about specific terms Emil is sharing. Prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non western cultural tradition of any sort. Palmer says this gets to the core of the issue many more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer. Imagine if a missile company tried to enforce the above policy that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you can also account for questions like what level of information, classified and otherwise, does a corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of these determinations vary if the current corporate executive happens to like the dictator or dislike the president? At what level of confidence does a cutoff trigger both in writing and in reality? The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, but they will have cutouts to operate with autonomous systems for defensive use. But you immediately get to the same issue and more what is autonomous? What is defensive? What about defending an asset during an attack, offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions that are imperfect. Constitutional Republic is still good enough to run a country without outsourcing the real levels of power to billionaires and corporates and their shadow advisors. 
I still believe and that is why bro just agree the AI won't be evolved into autonomous weapons or mass surveillance. Why can't you agree? Is so simple. Please bro is an untenable position that the United States cannot possibly accept. And Emil Michael had said that anthropic wanted to block Searching over public databases as well. Like you might want to search over LinkedIn to look at recruiting. Right. So it's like these sort of like blanket bands are going to make the product like functionally. Yeah, it's not really like a blanket ban. It's more just like the discretion lives with the private company. And so you always have that ability to change the terms of the use, which is. It's just tricky. It's just tricky. Well, people are, at least some people are having fun with it. Roman Helmlitguy says, hi, I'm a private citizen who developed a superweapon potentially a thousand times more powerful than nukes. And now I'm selling it to the government. I get to choose who they fire it at and how everyone. And how everyone. Please respect my decision. People are all over the place with this. Well, there was also. David Sachs had shared a clip alongside Beth. We can pull up Marc Andreessen talking about his experience with the Biden administration. People are going really, really hard. Let me pull this up. Iran is bomber. AWS data centers. Lots of, lots of stuff going on. I just dropped you guys a link. Keith Boy said, imagine Apple sold computers or iPads to the DoD and tried to tell the Pentagon what missions could be planned on their computers. A lot of people are upset about meetings in D.C. in May where we, we talked to them about this and the meetings were absolutely horrifying. And we came out basically deciding we had to endorse Trump and then add so little color to absolutely horrifying. What, what did you hear in those meetings? They said, look, AI. AI is one of these. AI is a technology basically that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us, don't start. Don't do AI startups. Like, don't. Don't fund AI startups. Let's not something that we're going to allow to happen. They're not going to be allowed to exist. There's no point. They basically said, AI is going to be a game of two or three big companies working closely with the government and we're going to basically wrap them in a. You know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon. We're going to protect them from competition, we're going to control them and we're going to dictate what they do. And then I said, well, I said, I don't understand how you're going to lock this down so much because like, the math for, you know, AI is like out there and it's being taught everywhere. And, you know, they literally said, well, you know, during the, the Cold War, we classified entire areas of physics and took them out of the research community and like entire branches of physics basically went dark and didn't proceed. And that if we decide we need to, we're going to do the same thing to the math underneath AI. Wow. And I said, I've just learned two very important things because I wasn't aware of the former and I wasn't aware that you were even conceiving of doing it to the latter. And so they basically just said, yeah, look, we're going to take total control of the entire thing and just don't. And what was there? And Mark, what was steel man? It for the listener, like, what was their argument? Why? 
Well, so this gets into this whole, like all these debates around, like, AI safety, AI policy. So there's sort of several dimensions on it and I'll do my best to steal, man. It's. So one is just like, to the extent that this stuff is relevant to the military, which it is. Like, if you draw an analogy between AI and autonomous weapons being like the new thing that's going to determine who wins and loses wars, then you draw an analogy to the. In the Cold War, that was nuclear energy, that was nuclear power, and that was the atomic bomb. And you know, the federal government, the steel man, would be. The federal government didn't let startups go out and build atomic bombs. Right? You had, you know, the Manhattan Project and everything was classified. And you know, at least according to them, they classified down to the level of actual mathematics. And, you know, they tightly controlled everything and that. And look, you know, that, that determined a lot of the, you know, the shape of the world, right? And so there's that, and then look, there's the other. That's. That's part one. And then look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back, which is the exact same dynamic we've had with social media censorship and how it's basically been weaponized and how the government became entwined with social media censorship, which is one of the real scandals of the last decade and a real problem, like a real constitutional problem that is happening at hyperspeed in AI. And these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies, and they basically I think they want to use AI the same way. And then look, I think the third is, I think this generation of Democrats, the ones in the White House under Biden, they became very anti capitalist and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But I think, quite frankly, they think that the idea that the private sector plays an important role is not high up on their priority list. And, and they think generally companies are bad and capitalism is bad and entrepreneurs are bad. And they've said that a thousand different ways. And they demonize entrepreneurs as much as they can. It's interesting. Canadian publication the Globe and Mail came out yesterday and says Canada needs nationalized public AI. And Toby, the greatest Canadian entrepreneur in history, says deranged drivel in response. But yeah, Elon also piled on to Sachs's take, which centered around a lot of those staffers allegedly going over to Anthropic. It's interesting, we were talking about these alliances that happen. There's the anti Netflix alliance, the anti YouTube alliance. There's a little bit of an odd alliance happening against Anthropic right now. Let's move on over to Netflix and Paramount because there's news in the bidding war. First, I'll tell you about graphite code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And I will also tell you about Railway. Railway is the all in one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases and more. While Railway Automates automatically takes care of scaling, monitoring and security. 
We will come back to this story with none other than Ben Thompson in 20 minutes in the Wall Street Journal. In the Exchange section this weekend, they have a full bleed article how David Ellison finally got what he wanted. And I love the subhead. No, no, no, no, no, no, no, no. Okay, yes. He got 10 nos and then finally got it done. Never give up. Never, never give up. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word. No. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Brothers Discovery that would give the Ellison family full control of a.
Strike 1, Strike 2, Activate. Go to retriever mode. Trust. Market clearing order inbound. Five clear. I see multiple journalists on the horizon. Emma. Founder. You're watching TVPN. Today is Monday, March 2, 2026. We are live from the TVPN Ultra Dome, the temple of technology, the fortress of finance, the capital of capital. Let me tell you about Ramp.com. Time is money, save both. Easy-to-use corporate cards, bill payments, accounting and a whole lot more, all in one place. It was a massive weekend. So much news. We are very fortunate to be joined by Ben Thompson at noon. Let's pull up the Linear lineup and show you the run of show today. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We have Ben Thompson, James Bashar, John Quinn's coming back in person again. We're very excited to be joined by him, talking about tariffs. A monster lightning round with five different guests joining. We got some acquisition news, we got some funding news, we got some takes on tech and AI and media. We're going all over the place. It's going to be a fun, fun show. But we missed you. We missed you on Friday. We were traveling. We went to Montana. Terrible day to be out. Terrible day to be out, because every single time we've had an off day, it ended up being a massive news day. So lesson, yeah, never take a day off. Yes, never take a day off. Truly, what an absolutely crazy weekend. Of course there's the war with Iran. The big news in tech was the US halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. Quote, I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again, Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six month phase out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time, but I guess a lot of this has to do with, like, FedRAMP. And actually, this is a lot more than, you know, switching to a new model to run deep research reports. You're involving classified systems. Sure. The context that people didn't have last week was that the United States was headed to war. Right. And so even having that context I feel like is pretty important. Right. It sort of explains the 5pm deadline. Anthropic had taken issue with how their products were used in the Maduro raid. There's a new conflict that's unfolding, and so that makes the aggressive timeline make a lot more sense. It also makes the six month phase out make more sense, because national security is on the line. This morning, Scott Bessent said, at the direction of the President, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. Yeah. The American people deserve confidence that every tool in the government serves the public interest. 
And under President Trump, no private company will ever dictate the terms of our national security. Yeah. The U.S. federal housing agencies, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products, which was announced this morning. Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of Anthropic technology. So you would expect these, you know, statements to come out from sort of every different federal agency as they get their transition plan together and figure out, you know, what are the requirements for their particular agency. Because I imagine some agencies aren't operating in classified environments. It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. For some of them, it's going to be a longer plan, but they're all getting on board. And there's been a big debate about how Dario has handled this. Where is he in the right? Where is he in the wrong? Where has the government potentially overstepped? Have they been too aggressive, or are they doing everything appropriately? Everyone is weighing in, and we're going to take you on a whirlwind tour of everyone's opinion, share some extra context, try and dig into what's actually at stake, what's actually going on. In many ways, Ben Thompson does a great job sort of painting the broadest picture around, like, what if this is really nuclear level technology? What should we expect in that scenario? And then there's the more minor aside, which is, you know, you're talking about a $200 million contract for a company that does 10 billion in ARR. This is 2% of revenue. In many ways it's, you know, a bump in the road. And so I think a lot of people will be squaring: how serious is this for Anthropic? What does this mean for the other foundation model companies? What does this mean for the future of the relationship between tech and Washington, D.C.? But there's a lot more context. So the way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just, how should a private company interface with the government? Like, I am an American, I've run businesses, I've never actually sold anything to the government, but hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TVPN or Lucy products or any other consumer packaged goods product that I've made. And my assumption is that the private company should have very little say in how the government uses those products. And I was trying to zoom out and think about it. AI is so complicated because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There's a lot of different things that AI means. And in some scenarios it's, like, super critical, really complex. And in other ways it's just a product, it's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so, yeah, I was thinking, if I was the CEO of Ford, and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer. 
I probably shouldn't say, no, no, no, I don't approve of this particular government, of what the government's doing, so I'm just not going to sell you any Mustangs to drive around on the military bases because I don't like the military. But then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put bulletproof glass and armor on them, that seems like a different discussion. That seems like I might need to, you know, set up a different manufacturing line. I might need a different assembly line. The car's going to be heavier. And if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want an armored car. It's going to hurt my business. Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government who's asking for that particular contract. And there's actually a history of this. The Humvee, of course, the Hummer, is owned by General Motors, and that brand has separated. And now most military vehicles are made by defense contractors. But there is some bleed over, and there's sometimes when private companies do dual sourcing or dual use technologies. But all of that is just a discussion, and that cost should be part of a new contract, effectively. And this was loosely what was happening here. But yeah, and Dario in the CBS interview, quote, we are a private company. We can choose to sell or not sell whatever we want. There are other providers. Yes. Which feels like, yes, I'm dipping out of it. Now, it is weird, because at the same time, and we'll get to the actual CBS interview, he said Anthropic has been one of the most proactive AI companies in working with the US government. We were the first to deploy models on classified clouds and the first to build custom models for security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone into the AI community broadly, like what happens at the edge. And so this was sort of predictable, that you would get to this point. Yeah, this was the moment he had been waiting for, in many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go in the lion's den, because I don't want to be in that scenario. Instead it was, we're leaning in with the government, we're deploying classified clouds, training custom models, but we still want authority over the final sticking point: how these models are deployed, what they're used for. And that feels a little odd. In the Ford example, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that. It's not armored. You shouldn't do that. But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation. But it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty. Right. 
Based on what you know about the capabilities of the model. And so I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship. Most salespeople would just be like, yeah, everything's great, you can use it for anything. They over-promise and then under-deliver. He's doing the opposite. But it's certainly responsible, if that's his true belief. If he believes that these models are not good for a particular use case, telling your customer, hey, it's just not ready for that, you're going to have a bad time, it's not going to work, that's a fine thing to communicate as the CEO of a company who's selling a product. But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly. So he's saying, right now it's not good for X, Y or Z. Well, what about in two months? It might be better. And then I think the government should be able to determine when and where they're effective. Now, they can't break the law. And Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. But it's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on: no mass domestic surveillance and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract and say, hey, we don't think our technology is ready for that either, let's do a deal that says that. Yeah. And people are like, oh, what's different here? Why could OpenAI? Well, here's the thing. We know that Anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war. Right. You can imagine that Anthropic, a private company, does not know that. And so they have this deadline. There's an information asymmetry. Yeah. This information asymmetry. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the president said this morning, that the war is going to stretch four to five weeks. Right. I think on Friday we all assumed that it was going to be, you know, in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Yeah. Just a little bit ago, they took issue with it. Right. Can we count on them? So they start this kind of renegotiation process, right, to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict. Right. And so Anthropic is looking at this in a different way, and clearly is, like, leaning in, and in some ways it felt like they were kind of, like, stirring, really not respecting the process, or even the deadline, right. 
So Emil Michael came out Friday night and said, it was 5:13, 13 minutes past the deadline. I'm trying to get in touch with Anthropic. I tried to get on the phone with Dario. Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate whether or not the war is justified, should we go, but the Department of War is sitting there being like, you won't even jump on the phone. You're telling me there's a meeting that you're in that's more important than this. And that just screams to me, hey, we can't count on this. We can't count on this provider. We need to take drastic action now. This whole supply chain risk designation, we'll get into that later. That's a whole other thing. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider, we need alternative solutions. Yeah, yeah. If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not going to put the cars on the transport. That's an odd scenario to be in. There's also this question of, like, a lot of people were really keen on boiling down the terms to these two, like, buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into, like, the ideas of deals that stick, basically. Like, you can have the same exact contract line item or terms of a signed agreement with two different people and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC, it was 20% and a board seat, and I had another deal with another VC, 20% and a board seat. And the one VC was, like, suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows that there's some trust, some reliability, that when the hard decisions come, they will be made in a legal, logical way, consistent with American values, is, I think, what you need to put forward if you want to work with the government effectively. So if you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions like, who is Nicolas Maduro? But I don't know how much of a joke that is. And I also don't know how bad of a thing that is. I actually think, yeah. Tyler, do you have context on that? On the context of Venezuela? Like, specifically, what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid, a Palantir senior executive notified the Pentagon. Yeah. So I think it is kind of blowing it out of proportion to say that Anthropic is against using Claude in Venezuela. Right? Yeah. It's an employee, not an executive. There was an article about that too. Maybe it's, like, Dario telling an employee to go check on that. But, like, we don't know. 
Like a random employee. Yep. Yep. I think it's probably unfair to say that Anthropic as a whole is, like, we are firmly against Claude being used. What happened during the Maduro raid, we don't even know, and of course it's classified. So, like, I don't know if we will ever know, because, like, should we know? I don't know. If it's an important capability, you don't necessarily want that to be public knowledge that the adversary is then instantly aware of. And so I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, like, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, like, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's, like, somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. Like, what is reasonable is to, you know, expect civil servants, elected officials, military officials to be knowledgeable about the countries that they are operating in. And LLMs can help with that. And so I feel like that's just a good thing. Like, if you just zoom out and ask, do we want a more knowledgeable and educated government workforce across everything that they do? It seems like, absolutely, yes. And so I just think that that's something that is maybe lost as people go into more of the sci-fi, more of the frontier stuff that there isn't a lot of evidence is happening yet. And on the supply chain risk, Ben Thompson, who's coming on at noon, makes a really strong argument for why government pressure like this is actually reasonable in this situation. He takes it a lot further, plays it out, and lays out a scenario that seems somewhat inevitable. But what I'm still wrestling with is just how real the supply chain risk designation is. Many reports are treating the supply chain risk label as, like, an established fact. Yeah. Which, all it is right now is a tweet from Hegseth. Dario went on CBS and said that he has not received a letter, that there's no definitive ruling yet. Kalshi has the odds that this actually happens by April 1st at 42%. So a full month for the DoD to actually roll this out. And then there's other nuance in what the law says. There was a perception that this was going to kill Anthropic, because if Nvidia has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true. Apparently the supply chain risk is specifically, if you are a company and you're working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract. But you could use that product in a different piece of your business. And so it's still dramatic. Still, I think Dario said it was unprecedented. It's only been used for foreign companies. Kaspersky Labs was a Russian cybersecurity company that was deemed to be a supply chain threat. Huawei is a supply chain risk because of the 5G towers that could potentially have backdoors somehow. DJI still is not. It's crazy that DJI isn't. And I think that a lot of people would be very upset if Anthropic got a supply chain risk designation before DJI, based on just what we talked about last week, where DJI was found to have a whole bunch of backdoors on robot vacuum cleaners and whatnot. So lots of nuance there. But we'll see where the supply chain risk discussion actually goes. 
It feels like the pressure's on and there's probably more negotiations happening as we speak. And so we'll be following the story. Yeah. Emil Michael was going through the timeline. He said, today at 9:04pm, no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of war. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today at 5:02, I called Dario's business partner, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me. That's a guess. I called Dario at 5:01. No answer. I messaged, asking to talk as well. And anyways, he's just arguing that they're not negotiating in good faith. Yeah. Let me continue. But first, let me tell you about Figma. Ship the best version, not the first one, with Figma. Introducing Claude Code to Figma. Explore more options, push ideas further. And let me also tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. So, speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts. There were a lot of, you know, anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. So he was basically arguing that LLMs, as a class of technology, hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have been better, much stronger communication, for him to say, hey, look, we're Anthropic. We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that. But we don't make a product that we'd recommend using for autonomous weapons. And it's tricky to try and twist arms here and, because he's in a leadership position, sort of act as, like, the steward of how it all gets used. He is an expert in LLM capabilities, but he's not necessarily an expert in, you know, DoD capabilities. And so it was odd to hear him sort of painting with a broad brush. He clearly believes, which is fair, it's his belief, that the Department of War should not be using AI broadly. And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems. So I thought that was just sort of a missed comms opportunity there. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally, the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures shall not be violated. I think people maybe forgot about that. But there's obviously a lot of nuance and different things. Like, does public information count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There's a lot of things where surveillance is broadly popular, and other things where it's massively unpopular. 
And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in the courts. There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable. But the court gave notice that going forward, this should not be used and that the laws need to change. And the judge was like, this is technically legal, but it's not in the spirit, and so we need to revisit this as a country. And that's a lot of what I'm coming away from this with. There's a view of Dario as sort of making this, like, last stand, which in the best case actually just kicks it back to the American people. Because the whole debate right now is: is Dario, like, the god king, corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, the government. Right. And the good case is probably that, you know, he makes this stink and his deal sort of falls apart, but then America responds and the populace votes for what they think responsible use of artificial intelligence technology broadly is. And that would be something that I would certainly stand by as a fan of American democracy. Let me tell you about Okta. Okta helps you assign every AI agent a trusted identity, so you get the power of AI without the risk. Secure any agent with Okta. And let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Let's go back to the timeline. We have Ben Thompson joining us in about 30 minutes. There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece, because I think danirldanb summed it up pretty well. Do you want to. You can go for it. I'll take a crack at it. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again, what did you expect? Vibes essays. This is the reality that all too many of my EA followers have been proclaiming for years now, and they're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb, which tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as, like, a roadmap for what might happen with AI. And I was struggling with it, because I was like, is it a cautionary tale? Like, we haven't had nuclear war in 70 years. The outcome seemed pretty good. Maybe it's controversial to say, but I feel like we built the nuclear bomb, which is, like, probably not the best technology. Pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful, knock on wood. It's been successful in my entire life and my parents' lives. The bombs haven't fallen since the 40s. 
And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke? I don't know. Do you have a take on this? I mean, the different scenario, where you have a bunch of private companies that have nukes and there's this constant ongoing debate, that seems crazy to me. I don't know. Defend McNukes? Well, no, no. I think it's kind of this weird contrast, because basically until, like, last week, Dario has been the AI CEO that's been like, we need government regulation. He said this again and again. But then it's like, okay, how do you square that with him saying we're going to take this stand against the DoD? It seems a little odd. Totally. It's a contrast somehow, right? Yeah, yeah. It's like, I don't know, there's just a much better way to handle it, which is, you know, put up billboards, fund a PAC, do more stuff to actually make the law happen. Yeah. And the way that I was personally processing it: I saw that the CBS interview had happened. Yeah. This was Friday night. Right. I went to the Paramount app to try to find the interview. Couldn't find it. I went to the RSS feed. Couldn't find it either. It's on YouTube and it has 1.3 million views. Yeah. So it went out over the weekend, and then almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that as: this technology is critical, the government clearly needs it, and now we want the labs leaning into working with the Department of War at this critical moment in time. Continue on this post. Yeah, one last thing. On the nuclear weapons thing, it is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say. But basically the IP is owned by the Department of Energy. The warheads are manufactured at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and then they answer to the government directly. So these are companies like Bechtel, BWX Technologies, Honeywell, and Battelle. And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U.S. government. And so they wound up in this, you know, hybrid public private partnership. And I don't know, maybe I'm left-curving this, but it feels like it's good. It feels like it worked out. It feels like the nuclear weapons thing is the correct formulation. And I don't know that I would be like, yes, Boeing needs nukes. Let's give Boeing nukes. That's great. If I have a problem with how nukes are rolled out, I'll buy shares in Boeing and sue them and join the board and try and get the CEO fired if he fires off nukes. Like, that feels weird. Continue with this. Continue with this. Okay, yeah, we'll close it out. Even now, I hear many of you say something akin to, if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves. This is a declaration of war. 
Given this, of course Hegseth is taking the action he is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power by your own belief system. While I share your preference in the abstract, I disdain your faux surprise that this is the resulting trajectory. And if the surprise is genuine, I ask you to dig deeper and reconsider the actual consequences of your worldview about what it means for a private company to build ASI. Heading over to Palmer, he says this gets to the core of the issue more than any debate about specific terms. Emil is sharing: prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort. Palmer says this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer. Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of these determinations vary if the current corporate executive happens to like the dictator or dislike the president? At what level of confidence does a cutoff trigger, both in writing and in reality? The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, but they will have cutouts to operate autonomous systems for defensive use. But you immediately get to the same issue and more: what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisors. 
I still believe, and that is why "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it is so simple, please bro" is an untenable position that the United States cannot possibly accept. And Emil Michael had said that Anthropic wanted to block searching over public databases as well. Like, you might want to search over LinkedIn to look at recruiting. Right. So it's like these sort of blanket bans are going to make the product, like, functionally. Yeah, it's not really a blanket ban. It's more just that the discretion lives with the private company, and so you always have that ability to change the terms of use, which is. It's just tricky. It's just tricky. Well, people are, at least some people are, having fun with it. Roman helmet guy says, hi, I'm a private citizen who developed a superweapon potentially a thousand times more powerful than nukes. And now I'm selling it to the government. I get to choose who they fire it at and how. Everyone, please respect my decision. People are all over the place with this. Well, there was also, David Sacks had shared a clip alongside Beth. We can pull up Marc Andreessen talking about his experience with the Biden administration. People are going really, really hard. Let me pull this up. Iran is bomber. AWS data centers. Lots of stuff going on. I just dropped you guys a link. Keith said, imagine Apple sold computers or iPads to the DoD and tried to tell the Pentagon what missions could be planned on their computers. A lot of people are upset about the meetings in D.C. in May where we talked to them about this, and the meetings were absolutely horrifying. And we came out basically deciding we had to endorse Trump. Add a little color to absolutely horrifying. What did you hear in those meetings? They said, look, AI is one of these technologies, basically, that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us, don't do AI startups. Don't fund AI startups. That's not something that we're going to allow to happen. They're not going to be allowed to exist. There's no point. They basically said, AI is going to be a game of two or three big companies working closely with the government, and, you know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon. We're going to protect them from competition, we're going to control them, and we're going to dictate what they do. And then I said, well, I don't understand how you're going to lock this down so much, because the math for, you know, AI is out there and it's being taught everywhere. And, you know, they literally said, well, during the Cold War, we classified entire areas of physics and took them out of the research community, and entire branches of physics basically went dark and didn't proceed. And if we decide we need to, we're going to do the same thing to the math underneath AI. Wow. And I said, I've just learned two very important things, because I wasn't aware of the former and I wasn't aware that you were even conceiving of doing it to the latter. And so they basically just said, yeah, look, we're going to take total control of the entire thing, and just don't. And, Marc, steelman it for the listener. Like, what was their argument? Why? 
Well, so this gets into this whole, like, all these debates around AI safety, AI policy. So there's sort of several dimensions on it, and I'll do my best to steelman it. So one is just, like, to the extent that this stuff is relevant to the military, which it is: if you draw an analogy between AI and autonomous weapons being like the new thing that's going to determine who wins and loses wars, then you draw an analogy to the Cold War, where that was nuclear energy, that was nuclear power, and that was the atomic bomb. And the steelman would be: the federal government didn't let startups go out and build atomic bombs. Right? You had the Manhattan Project and everything was classified. And, at least according to them, they classified down to the level of actual mathematics, and they tightly controlled everything. And look, that determined a lot of the shape of the world, right? And so there's that. That's part one. And then look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back, which is the exact same dynamic we've had with social media censorship and how it's basically been weaponized and how the government became entwined with social media censorship, which is one of the real scandals of the last decade and a real problem, like a real constitutional problem, that is happening at hyperspeed in AI. And these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And they basically, I think they want to use AI the same way. And then look, I think the third is, I think this generation of Democrats, the ones in the White House under Biden, they became very anti-capitalist and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But I think quite frankly, the idea that the private sector plays an important role is not high up on their priority list. And they think generally companies are bad and capitalism is bad and entrepreneurs are bad. And they've said that a thousand different ways, and they demonize entrepreneurs as much as they can. It's interesting, Canadian publication the Globe and Mail came out yesterday and says Canada needs nationalized public AI. And Tobi, the greatest Canadian entrepreneur in history, says deranged drivel in response. But yeah, Elon also piled on to Sacks's take, which centered around a lot of those staffers allegedly going over to Anthropic. It's interesting, we were talking about these alliances that happen. There's the anti-Netflix alliance, the anti-YouTube alliance. There's a little bit of an odd alliance happening against Anthropic right now. Let's move on over to Netflix and Paramount because there's news in the bidding war. First, I'll tell you about Graphite, code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And I will also tell you about Railway. Railway is the all-in-one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases and more, while Railway automatically takes care of scaling, monitoring and security.
We'll come back to this story with none other than Ben Thompson in 20 minutes. In the Wall Street Journal, in the Exchange section this weekend, they have a full-bleed article: How David Ellison Finally Got What He Wanted. And I love the subhead. No, no, no, no, no, no, no, no. Okay, yes. He got 10 no's and then finally got it done. Never give up. Never, never give up. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word: no. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Brothers Discovery that would give the Ellison family full control of a sprawling media empire. So he came in with an offer at $19 per share and finally got it done at $31 a share. The final Paramount winning offer: $81 billion. And again, as we are covering this live, every time that Paramount made an offer, they were very clear that it was.