LIVE CLIPS
Episode 3-5-2026
And they can say to the CEO, look, we believed in you six years ago, and of course we still believe in you today. We're a long-term investor. So talk about the IPO market. Is it over or are we back? It's slow. Despite that little boom in January, both the number of pricings and the number of filings are down over 20% year over year from last year, which was a garbage year for IPOs. Yeah. So everybody is looking at the big dogs, right? Everybody's looking at SpaceX, which may be coming sooner. Everybody's looking at potentially OpenAI, Anthropic. I think Databricks is another one that people are sleeping on that's almost certainly going to come in the second half of this year. So what we might end up with is a really skewed IPO market. The dollar numbers are going to look good because you're going to have some huge ones, but you're not necessarily going to have an enormous number of actual issuers, at least at this moment. I mean, yeah, it's pretty dead. We've got one coming tonight, and that's all for the week. Which one is tonight? MiniMed, not a venture-backed startup; it's a diabetes management business out of Medtronic. Is that because you sort of have a liquidity premium for the first name to go out in a category? Like, CoreWeave gets out, there's the one public neocloud, and the bar goes a lot higher for the next three neoclouds to get out. Something like that? I would like to think that was a good argument, but I think it's really a vibes thing and a cowardice thing. I've kind of been writing this story for three years now, which is that these companies are scared to go public, and bankers are scared to bring them.
Even though bankers are coin-operated and IPOs are good for them, they're a little bit scared of having a lousy IPO and getting a bad reputation off of it. And a lot of them are spending a lot of time on SpaceX, because there's a lot of money in that for them. No, I think people are scared. They're scared: what if this doesn't work? It's going to be a lot more work, there's going to be a lot more scrutiny on us. Folks have just been terrified of going public for years, and I don't think that's necessarily changed. Yeah. And of course, the choppiness of the market this year, the SaaS-apocalypse narrative, all sorts of stuff. But this is the problem, right? There's going to be choppiness every year. Last April, you obviously had the Liberation Day stuff. Now you have a war, you have the SaaS bottom. There's always going to be a reason not to go public, and I think a lot of people take that reason. There's always something. Yeah. What have you heard? Obviously, bad week to be fundraising in the Middle East.
Sort of a random question, but I'm just curious: what is the planned process for retiring the current space station? Do the humans leave and it just kind of continues to orbit? Do they take it out of orbit? Do you have any idea what that'll look like? Yeah, it's interesting. When they designed the space station, they sort of punted on that; it doesn't have the ability to deorbit itself. So they always had the plan to build a special spacecraft that will push it into the Pacific. That's called the US Deorbit Vehicle, and SpaceX won that contract. I think NASA is hoping to get it there two or three years before the retirement of the ISS. The current retirement date is the end of 2030. There's discussion now to extend it to 2032, but somewhere in that region. So it will be there, ready to go if there's a safety issue or something like that, and to do testing. And then they will just push it with some delta-v, some propulsion, until eventually it reenters, in a pretty interesting display if there were anyone there to see it, at a place called Point Nemo in the Pacific. I think NASA is expecting to de-crew it nominally, meaning not in an emergency scenario, due to the aging, maybe a year ahead of that, but I don't know the exact date. Well, hopefully we can get a camera there and livestream it, because they're pretty good at watching the rockets come down, even when they're landing in the middle of the ocean. So hopefully we can see the deorbiting. Yeah, some of the Starship reentry footage was pretty spectacular, so hopefully we'll see the same. Yeah, in the Indian Ocean. That was a remarkable video. A dramatic ending. Is the entire team based in Los Angeles? Where's the team dispersed and what are you hiring for?
Recordings of Meta Ray-Ban users. I haven't covered it or followed it very closely. It's crazy. This is reminiscent of when people figured out that Apple with Siri, Amazon with Alexa, and Google with Gemini had these folks, they're called annotation analysts, who listen to everything you're putting into the voice assistant and compare it to the outputs. So they're listening to the raw audio and comparing it to what Siri thought you said, and they're able to correct it manually to improve the underlying model. That was a big blockbuster when everyone discovered it was happening a few years ago. This is not audio. This is camera footage. This is people in potentially intimate moments. And it's not just what you're saying; it's what you're seeing while wearing these glasses everywhere. So I think that raises really big questions about the smart glasses category. When Apple releases their smart glasses, they're going to have a big focus on privacy, a big focus on seeing the surrounding environment for you. At the same time, though, I don't think this changes the overall equation for smart glasses. I still think this has been a successful pioneering product for Meta. I still think they're pushing forward here, and I still think they're going to release several new models that, based on what I'm hearing, are pretty exciting too. But will they need to make some privacy adjustments? Yes. In this case with the smart glasses, though, the first-mover advantage is real and Meta has it. And in terms of glasses, they've got the biggest partner with EssilorLuxottica. Yeah, that makes sense. Talk about the student.
So we benefit from that full pipeline when our station is up. How important is the next generation of reusable rockets, the Starships of the world, to unlocking new capabilities in deploying space stations? I mean, even before you get to Starship, go back to reusable rockets, Falcon and what SpaceX has done. We couldn't be here without the early transition to commercial that NASA and SpaceX made with the Dragon spacecraft. If we were still in the space shuttle era, I don't think I could be buying a ride to our commercial space station, Haven-1, from NASA. Because of what they did with that program, and hopefully they'll get Starliner, the Boeing version, up and running too, we can go call SpaceX and buy a ride for astronauts to our space station. Starship will take that further. What we hope is that it will lower the cost of transportation to our space station by at least an order of magnitude, and it'll allow us to bring up not just four people but maybe 20 people. So that's really a big unlock. It will still take a little while to arrive, but it will get there, and our goal is that by the time Starship is ready, we have a large enough space station that, relative to the volume inside Starship, we are still four or five times larger, with a lot more power and a lot of equipment. So it's a key enabler of why we are here and why the future is exciting for space stations.
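For a rough sense of the "some delta-v" mentioned for the deorbit burn, here is a textbook vis-viva sketch in Python: a single retrograde burn from a circular orbit near the ISS's altitude that drops perigee deep into the atmosphere. The altitudes and the single-burn profile are simplifying assumptions for illustration; the actual USDV mission design will differ.

```python
import math

# Rough sketch of the deorbit burn: lower the perigee of a ~400 km
# circular orbit into the atmosphere with one retrograde burn.
MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000    # mean Earth radius, m

r_orbit = R_EARTH + 400_000    # ~400 km circular orbit (assumed)
r_perigee = R_EARTH + 50_000   # target perigee well inside the atmosphere

# Speed on the circular orbit.
v_circular = math.sqrt(MU / r_orbit)

# Speed at apogee of the transfer ellipse (apogee = current orbit,
# perigee = 50 km altitude), from the vis-viva equation.
a_transfer = (r_orbit + r_perigee) / 2
v_apogee = math.sqrt(MU * (2 / r_orbit - 1 / a_transfer))

delta_v = v_circular - v_apogee
print(f"circular speed: {v_circular:.0f} m/s")
print(f"deorbit burn:   {delta_v:.0f} m/s")
```

Under these assumptions the burn comes out on the order of 100 m/s, small next to the ~7.7 km/s orbital speed, which is why a dedicated tug rather than a huge rocket can do the job.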
What about approval, cybersecurity, compliance? I imagine that even if you have the best team of engineers possible, you still want third parties to work with that. What does that look like? Yeah, there's a long, long tail of security and compliance, and we're continuing to work through it. Some of the basics are that people have to trust the tool is going to perform the things we say it does. In the early days we'd have lots of fun conversations with folks around, how are you doing a low-pass filter, just explaining basic math functions, et cetera. We're well past that now, but we're continuing to burn down that list. We do support classified work. We have a facility clearance as a company, a lot of our engineers are read in, and a good portion of the most interesting work with a lot of our federal customers is happening at higher classifications. And there's a direct correlation: as you go to higher-classification programs, there's a lack of tools. Yeah. Right. So from a business perspective, there are really big budgets there. Just to give facts, there's something like $3 million a day being spent on test campaigns alone. So if Nominal is able to pull in some of those test timelines by even a day, and we can do more than that, it's really valuable. But when you go to some of those high-side programs, you really are restricted in what you can use. Crazy. Some of the most important work and the fewest tools. Yeah. Is AWS sort of like the IBM of the.
How much are you looking at the war as having kind of downstream impacts on capital flows? This started as, we thought it was going to be a couple of days, then it was a few weeks, and now apparently it's looking like September. So who knows at this point. Yeah, and every time I log on to X, there's a refinery on fire, things like that, and that's kind of the economic engine for a lot of this capital. Yeah. I'll take the counterargument on this, which is that one of the reasons Saudi Arabia or Qatar or some of these other countries have decided to do so much venture capital slash private equity investing, particularly in the US, and by the way, it's worth noting the folks doing these investments are often based in New York, not the Middle East, so it's not actually impacting their day-to-day lives, is because they want diversification from oil. So yes, there could be some issues in terms of actual capital flows, because people are like, oh wait, we're going to have less actual money coming through to invest. But at the same time, the point is diversification, and it's possible that what's happened over the past week or two is only going to solidify that desire for diversification. Oh wow, oil is our economy, we need other stuff.
Information. You want one source of information; they can be kind of the distributor to everybody else. What are you tracking on the other lab side? There's a whole bunch of rumors: Anthropic, OpenAI. What are you seeing on that side of the IPO market? On the IPO side, there's that first-mover dynamic you mentioned earlier. I do think there is a sense between Sam and Dario that going first might have some advantages, and I think there could also be some disadvantages. Even though we all know the losses are huge, I think the market is going to be shocked when they actually see in black and white from the companies how large these losses are. So I think there is a little bit of a race between the two of them. It's going to be a huge headline, a huge story. OpenAI does seem to be a little ahead on that: they have a CFO who's done this before in Sarah Friar. She helped take Square, now Block, public, and she then ran Nextdoor. She's done this before. So I think the story of the second half is going to be those two companies battling, and I do think OpenAI goes first, unless there are some extraneous events.
Because a lot of these investors haven't been able to get their money back via traditional means, do you think we will see a well-known venture capital firm go public in the next three years? Yeah, General Catalyst. They're the one, because they're not really a venture capital firm anymore. I mean, it's still their core, it's still what they do; they invest in Series A rounds and startups. But they've got a wealth management business. They own a hospital. I mean, they own a hospital. That is not a venture capital thing. And by the way, and this is not a knock on them, if I was admitted into a hospital and there was a plaque on the wall that said we are owned by a Silicon Valley venture capital firm, I'd be scared out of my mind about what was about to happen to me in that room. But they're not. Some YC company's robot would be doing your surgery. It'd be great. That's exactly what I want. Whenever you see a doctor, particularly for a procedure, you want to hear the word experimental. So they're not really a VC firm anymore. They've diversified, and they want a diversified fee-paying asset base, which is what they're building. Good for them. And to be honest, it's not even that different if you look at the private equity side. Blackstone started as a private equity firm doing leveraged buyouts, but then they got into other stuff. They have private credit; they arguably became a real estate investment firm more than anything else. A diversified asset base. Yeah, I think GC will do it. Sequoia has been talked about forever, but they don't seem to have an appetite, and given the way they operate in terms of management, it doesn't seem like it's going to be them. What about Andreessen? Yeah, maybe Andreessen.
But can you honestly imagine Marc or Ben doing quarterly earnings calls? I can't. That'd be fun. I mean, they do a lot of podcasts, so it's somewhat similar. They have the reps; they do a lot of podcasts with their friends. Sure.
Business is AI; their business is securing it. CrowdStrike secures AI and stops breaches. So if AI is truly deflationary, how would we know? What chart or metric would show it first? You could look at AI API pricing as probably the best answer here. There are certain tasks that AI can do now, and those tasks have moved from being limited by regulation, human output, or the size of the workforce to being limited by the compute allocated. And the cost of compute is falling. TVs were expensive; they got cheaper as we built more and more factories that were more and more automated and could produce ever-cheaper TVs. That's why, when you look at the inflation trends, you see health care, education, anything that's highly regulated increasing in cost. There are only so many lawyers allowed to pass the bar, only so many doctors. So health care has increased in cost, while TVs have been this knock-down, drag-out fight between the most ruthless companies all over the world, and the price of TVs has fallen. So if you start putting more of those jobs on the AI, on the AGI curve, you see deflation. Where should we see it first? We should see it in AI API pricing, and GPT-4-equivalent inference costs have collapsed. In late 2022, right before GPT-4 came out, it was $20 per million tokens. In December of 2025, the same model is $0.40 per million tokens. That's a 50x decline in three years. And I think you can get even more intelligence for cheaper depending on how you're running inference; you can run some of these models locally, and the energy cost is even cheaper at that point. So this 50x decline in three years is falling faster than PC compute costs, and it's falling faster than dot-com-era bandwidth costs. This should be the leading indicator for downstream service deflation.
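The pricing claim above reduces to simple arithmetic, sketched here in Python with the quoted figures treated as approximations; the Moore's-law comparison rate is the classic rule-of-thumb, not a measured number.

```python
# Implied annual decline rate of GPT-4-class inference pricing,
# using the approximate figures quoted in the discussion.
start_price = 20.00   # $/million tokens, late 2022 (approximate)
end_price = 0.40      # $/million tokens, December 2025 (approximate)
years = 3

total_decline = start_price / end_price          # 50x overall
annual_factor = total_decline ** (1 / years)     # cost drop per year

print(f"total decline: {total_decline:.0f}x")
print(f"implied annual decline: {annual_factor:.2f}x per year")

# Rule-of-thumb Moore's-law cost improvement: ~2x every two years.
moore_annual = 2 ** (1 / 2)
print(f"Moore's-law pace: {moore_annual:.2f}x per year")
```

The implied pace works out to roughly 3.7x cheaper per year, versus about 1.4x per year for the Moore's-law rule of thumb, which is the sense in which this curve is steeper than historical compute cost declines.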
In any example you give, like, oh, I was just able to use ChatGPT to check my car: I talked to somebody who said they love ChatGPT because their check engine light came on. They were able to take a picture of it, and it identified the car and told them what they needed to do, and they felt like when they went in for servicing, they weren't going to get raked over the coals by their mechanic, because they had sort of a mechanic in their corner. So this should effectively act like having your own mechanic on your side. You can do the work yourself, but you can also just go in and negotiate and say, hey, I know that the fair price for this is 50 bucks, not 200, so give me the good price. And so you should see that. You can assume that tokens are going to get cheaper and more effective indefinitely. Yes. But all these new product launches, things like OpenClaw, are actually expanding the number of tokens. Yes. So you are seeing deflation, but you're also seeing Jevons paradox and an increase in demand, and that's why overall growth is increasing. Historically it was one person, one AI chat, and you're only using so much. Then you have agentic products, and you can actually multiply the number of agents an individual can be running, and the output goes through the roof. It's time to head to Gastown, baby. So he asks this follow-up question: how should one think of deflation if demand for intellectual goods continues to grow as production costs.
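The Jevons-paradox point above can be sketched with toy numbers: per-token price falls, but agentic products multiply token consumption, so total spend can still rise. Every number below is an illustrative assumption, not data from the discussion.

```python
# Toy model: total spend = price per million tokens * millions of tokens used.
def total_spend(price_per_mtok: float, mtok_consumed: float) -> float:
    """Dollars spent on inference for a given price and usage level."""
    return price_per_mtok * mtok_consumed

# "Then": one person, one chat, modest usage at high prices.
spend_then = total_spend(price_per_mtok=20.00, mtok_consumed=10)

# "Now": price down 50x, but parallel agents push usage up
# (assume 1000x more tokens per user, purely for illustration).
spend_now = total_spend(price_per_mtok=0.40, mtok_consumed=10_000)

print(spend_then, spend_now)  # deflation per token, growth in total spend
```

With these assumptions, spend goes from $200 to $4,000 even as the unit price collapses, which is exactly the "deflation plus demand growth" combination being described.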
Shakespearean terms, that they hate their competitor and let it blind their calculus. That is a wild, wild time. Yeah. And I just don't think Dario is in that strong of a position to be complaining about, or insinuating, that OpenAI is lying when he's running Super Bowl ad campaigns that are just lying to everyone about OpenAI. Yeah. It's like, why should anyone trust you? Yeah. I do wonder. I'm still very interested in, like, you know, Anthropic gets a lot of credit for predicting the future, predicting how.
Been sort of quiet. I don't know, maybe I just missed the post. But part of this, I wonder. So Dario sent a memo on Friday that was pretty scathing and looks really, really bad, and the timing here maybe played a factor. It felt like they were still seemingly trying to work towards a deal. Yeah, let's read what he said. He said in his memo that he believes the attempted spin gaslighting is not working very well in the general public or the media, though he added that it is working on some Twitter morons. Is he talking about the administration? I mean, they're all on Twitter, right? That's the whole thing with this administration. He's talking about lots of people, obviously, but, you know, not the most politic way to handle this, I suppose. Dario says you're a moron if you believe OpenAI's story around this. Yeah. But less than a month ago, Dario was greenlighting Super Bowl ads that were intentionally designed to mislead hundreds of millions of Americans about OpenAI. And now he wants to be the most trusted source through this whole debacle. And, yeah, there's also the back-and-forth on being the most leaned-in, bidding on one contract that's.
Nvidia basically captures, and for a while captured more than 100% of, the profits from the AI boom, because so many of the other companies in the AI space were losing money. The foundation model labs are losing money, and Nvidia's gross margins went from something like 30% to 60%. It's borne out in market cap: Nvidia added 3.2 trillion in market cap from when Daniel wrote this piece. And I remember listening to him on Stratechery, talking to Ben Thompson alongside Nat Friedman, and saying, yeah, based on ChatGPT, Nvidia seems sort of undervalued. And I was like, oh well, if they're saying it on Ben Thompson's Stratechery podcast, it's obviously priced in, everyone knows this. And I was wildly wrong. Never doubt yourself. But what's interesting is that the rest of the platforms did not see 3x gains. They did not add $3 trillion in value. Microsoft, from January 2024 to today, is only up 4%. Amazon's up 30%. And then you have OpenAI, Anthropic, and xAI in the private markets. The gains there have been huge and staggering; these are the fastest-growing companies in private markets history, shaking the venture capital world. You're either in or you're out. But you add them all up and it's about 4 trillion. So crazy when you think Satya in many ways went on such a generational run and you got 4%. If you hadn't looked at the stock price at all, you would think, oh, it's got to be up what, 40%? No, it's up 4%. And so that was Daniel Gross's next question: what happens to Nvidia and Microsoft? These were the two interesting players at the time, some of the biggest companies, the most AI-aligned. Nvidia absolutely crushed it. Revenue more than tripled, from 60 billion in fiscal year 2024 to 215.9 billion in fiscal year 2026. Microsoft has been far less dominant. Azure growth is actually accelerating, at 40% year over year, but the stock only returned 4%.
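A quick check on the revenue figures quoted above (taken as approximate, as stated in the discussion) shows what "more than tripled" means in annualized terms over the two fiscal years:

```python
# Nvidia revenue figures as quoted: fiscal 2024 vs fiscal 2026 (approximate).
nvda_rev_fy24 = 60.0      # $B
nvda_rev_fy26 = 215.9     # $B

multiple = nvda_rev_fy26 / nvda_rev_fy24
print(f"Nvidia revenue multiple: {multiple:.1f}x")

# Annualized growth rate implied over the two-year span.
cagr = multiple ** (1 / 2) - 1
print(f"implied annual growth: {cagr:.0%}")
```

The multiple comes out around 3.6x, roughly 90% growth per year, which puts the contrast with Microsoft's 4% stock return in sharper relief.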
As we said, the market punished the $80 billion in AI capex that Satya Nadella has been selling to investors, because everyone's asking, well, I could be in Nvidia instead, and they don't really need to invest that much, because they're a fabless semiconductor design company; their gross margins are increasing and they're printing cash. Whereas you're saying, okay, you've got to spend $80 billion and we don't know when we're going to get the profits from that. So there's an open question there. When it comes to the picks-and-shovels trade, you don't want to tie yourself to an individual startup or a foundation model lab; you just want to own the simplest thing that value will accrue to. Nvidia all the way; they were the clear winner of the picks-and-shovels trade. Microsoft's infrastructure play, I think, is a good decision. It's just that the bets have yet to pay off much for shareholders.
Mutual fund, right. Fidelity Contrafund has a bunch of quote-unquote startups in it. And what's coming soon out of the Trump administration is letting your 401(k) plan invest in private funds. Now, are they going to actually invest in individual funds, or will they try to put money into pools? Unclear; it hasn't happened yet. But there's a lot of money coming, and it's the thing that venture capitalists and private equity firms have viewed as kind of the holy grail forever, because they haven't had access to any of that defined-contribution money. So when you saw the news, you jumped out of your chair and said, yes, finally my 401(k) will be deployed into venture capital. I mean, look, this is the moment when traditional limited partners in venture capital have been, if not getting out, at least trying to give themselves some optionality. You've seen some of the biggest endowments, Yale, Harvard, et cetera, do big secondary sales of their private equity and venture portfolios. They're not getting out, but they're recognizing they were a little bit too concentrated. And so it's the very minute when some of the smartest money is, if not leaving, at least hesitating, that, I hate to say it, all the dumb money is about to come flooding in. So what does that mean for this K-shaped dynamic we see in venture capital, where a few funds are getting bigger and still have solid returns, and then smaller managers seem to.
Features, but it just feels like he's taking it a lot more seriously. A lot more DNA there. How much are you planning to cover OpenAI's hardware? Is that going to be in your territory at Bloomberg? Front and center. Okay, so give us your take. How have you processed the leaks? We had the dime leak around the Super Bowl, which looked like. I don't think it was real. Everyone I spoke to says this is not a real product. Yeah, but then Joe Gebbia is sitting there in San Francisco using it this week at a coffee shop. Didn't exactly look like the most organic thing. Yeah, there's a lot of conspiracy around it. And if the dime ad was fake, then it was an incredibly good model, because, I forget the name of that actor. Skarsgård. Skarsgård, yeah. But it looked like a leading frontier video model. Yeah. Well, maybe it's a teaser of their models. But here's the thing: the hardware is going to be beautiful. We know the hardware is going to look great; it's going to be designed by LoveFrom, Jony Ive, all those guys, the former Apple people. There are no questions there. The question is, who's going to buy these things? Are people really going to trust OpenAI as a hardware company? And Apple has this ability to fast-follow. Let's say OpenAI does come out with something pretty nifty. What stops Apple from copying it immediately and taking all their market share? Apple has the retail stores, they have the privacy story, they have the brand. OpenAI has this great AI brand, but they don't have a hardware brand. So it's almost like they have the opposite problem from Apple. But the leg up they have is that their underlying technology is clearly better. Yeah. And there are rumors around this new voice model they're working on.
And so the question to me is, can they create a beautiful device and have a meaningful breakthrough on the model side that somehow makes it more difficult for Apple to just fast-follow? Right. Well, I would say for sure yes on point A, for sure yes on point B, and maybe yes, maybe no on Apple's ability to fast-follow based on the models. But I just think the bar to sell people hardware, even if it's good and even if it comes from a big brand, is really high. It's a really high bar to sell in these Apple-like quantities. I mean, look at Google in terms of their market share on hardware. They have amazing software, they have amazing hardware, and they have the best advertising for consumer electronics, for phones at least, in a very long time. I talked about the ads last time I was with you guys. Yeah. And they've made little to no dent, and obviously there are big questions about whether Google should even be doing hardware. Yeah. Meta Ray-Bans too. You open up Instagram and you see a full-bleed ad for Meta Ray-Bans, and they're doing okay, but it's not 80% of the population. The Meta Ray-Bans are considered a smash hit and they've sold more than 10 million units. Yeah. It's not a lot. Apple sells 10 million units of AirPods in probably a quarter. So the bar is just unbelievably high. Yeah. What's going on with that lawsuit with Meta? They had contractors that were just able to view the recordings of Meta.
Came around to your VC returns. True thing. Yes, but that's a good place to start. What is the health of the VC industry, the VC market? I mean, it's not great, right? We reported the other day that median returns are under the S&P 500, under the NASDAQ, under the Russell 3000 for 25 years. And I know there's a little bit of lumpiness because you're coming out of certain things, like the great financial crisis, but that was true for the public markets as well. So it's not great. Now look, top funds, the top 5%, they're doing great. But the reality is you've got so much more money flooding into the market now, which means more money is going to be at that median, and that's not great for them. And you're talking 401(k)s, you're talking more money coming from insurance companies, et cetera. It's not a great place to be. One LP told me this quote I use: it's kind of the story of hope over experience. And that's what venture capital seems to be right now. Yeah. Talk about that money that's flooding into VC, because there are smaller funds that had a particular playbook.
Because it just doesn't. It's just unpopular. And I just think that the actual real individuals, the real Americans and the real AI researchers, are not as far apart as the rhetoric makes it sound. I think there's a lot of comms and PR and bluster and ego at play between all the parties involved, and folks are at each other's throats, but I think that if they actually sit down and try. I have a buddy who was on a nuclear submarine for, like, a year underwater, and I was like, what was that like? Were you, like, finger over the button? He's like, no, it was a ton of paperwork every day. I'd go and see how many whales are there, count them up, and with pen and paper write down a report on how many whales. It was, like, the most boring, least automated. No, seriously. And that was the thing we learned from Project Maven. Google had this major protest, a big walkout. And at the end of the day, Google was giving the Department of Defense access. They weren't building killer robots; they were giving them access to TensorFlow APIs that other defense contractors would be able to use to run classical machine learning programs to classify items in images. How many cars are in this city? Where are our drones on the map? What are we seeing? Basic stuff that was typically the province of an Air Force Reserve person sitting in an office, probably in Nevada, just clicking: okay, yeah, that's a house, that's a car, this is what this is. It was not the true command-and-control endgame that people worry about. So figuring out where these models can actually have an impact, I think there's going to be a lot more agreement than there is disagreement. What do you think, Tyler?
Yeah, I do think basically before this memo got.
He also asks a more philosophical question that doesn't really have a detailed answer, but I'd love your take on it. Jordi, too. Is lifelong learning worth investing in? Something worth doing beyond the economic value of mastering the task? It's a very abstract question. It's a very personal question. I tend to think yes. I tend to think that on Maslow's hierarchy of needs, you need food, shelter, family, friends, clout, or whatever. At a certain point, learning a skill for the sake of learning that skill is edifying, even in a world. You know, it's like going to the gym. I was listening to Sam Altman talk about how kids that are born today will never be smarter than the smartest AI systems. And that's weird. That's different. But I was thinking, there was a time hundreds of years ago when you could actually be the best thing in the world, the best option for carrying lumber, other than maybe an ox or something. But in general, before we created machinery, before we created the car, you could be the fastest person, and then as soon as the ZR1X was released, you didn't stand a chance. And I had to explain this to my son at some point. He's very interested in cheetahs. And I was telling him, cheetahs are the fastest animal. And he was like, but is it faster than a car? And I was like, not even close. Me in a car, I'm smoking that cheetah. I am slower than that cheetah, but I'm way faster in my Cadillac. DG says: something worth doing beyond the economic value of mastering the task? Yes. And the way I would kind of flip this: I feel like most of the gains in my life have come from mastering myself, or understanding myself, learning about myself, and then understanding the world, and then combining those two things. In a way, it hasn't been about mastering any one specific task. Yeah.
And so I still think there's tremendous, tremendous gains to, again, understanding yourself, understanding the world, how do these two things fit together? So I agree with you on the level of, like, there will be not just tasks to master but ideas and combinations of skills that will be valuable economically. But even in an ASI world where there is no economic value to any task mastery, I still think it is worth investing in lifelong learning, because that will be edifying. That will be satisfying. Yeah, it's just enjoyable. It's becoming a painter, becoming someone who gets joy purely out of the process of painting the landscape. When cameras exist, when generative AI exists, it can still be edifying and valuable to sit there and look at a landscape and paint it and wind up with the result, even if the result is not economically valuable or anywhere near what you could get with technology like a camera or gen AI. So very abstract, but still something fun to think about before we move on to inflation.
That's just wrong. Right? Market clearing order inbound. Come, get up. You're surrounded by journalists. Thank you. Position. Strike 1. Strike 2. Activate go-retriever mode. Market clearing order inbound. Vibe to it. I see multiple journalists on the horizon. Stand by, founder. You're watching TVPN. Today is Thursday, March 5, 2026. We are live from the TVPN Ultradome. The temple of technology, the fortress of finance, the capital of capital. Let me tell you about ramp.com. Time is money. Save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more, all in one place. We are sick. John and I are both very sick. Sick of... not podcasting. Let's go. We are actually very sick. We are deathly ill. But that's the beauty of watching a daily show live. There's no risk of me getting you sick, because you're on the other side of a screen. Screen time. And that's the beauty of a daily show. You don't ever have sick days. Yeah, there's no days off, because there's no days off. And we have a fantastic lineup today. We have Mark Gurman joining. Let's pull up the Linear lineup. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We have Dan Primack coming on from Axios to break down the IPO market. What's going on in venture capital? What's going on with venture capital returns? Do we have Zach from Plaid coming on today? That'd be awesome if so. We have Kevin Record from Nominal, massive unicorn round that just happened with Founders Fund, coming on in person, live, of course. Max from Vast Space. Well, thank you so much for tuning in today. Speaking of Plaid, Plaid powers the apps you use to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending. Now, John, why is no one talking about Daniel Gross? No one. Literally no one really knows who Gross is, you know. He should be a household name.
You should get into a taxi and they should be like, oh, did you see how incredibly dialed Daniel Gross's AGI trades were? January 14, 2024. Yes, this is a great, great website. What is it? danielgross.com/agitrades. The classic Times New Roman serif font, 12 point, just hammering it out in vanilla HTML. No need for styles, no need for Bootstrap templates. What's that CSS framework? Tailwind. He doesn't need Tailwind for this. He just wrote it, probably in Markdown or HTML directly, and a lot of it has come true. It's interesting because a lot of them are framed as just open-ended questions. But if you believe in AGI and then you go through the questions, you will see exactly what happened over the last two years, and this has been the underpinning thesis of Situational Awareness in many ways. Daniel Gross was an anchor of that fund, of course. And it's particularly relevant today, because, I'll read the intro. Yes, this is January 14, 2024, over two years ago. He says: I think we can all agree that GPT-4 completes many tasks at human-level proficiency. It is imperfect in odd ways. It can write software like a smart MIT undergrad but can't do basic task planning like an entry-level EA. It speaks all languages but can barely do math. Suppose the progress doesn't stop. Just like GPT-4 was better than 3, GPT-5 is capable of basic agentic behavior, that is, able to accept a task, work on it for a while, and return results. Nailed it. And of course, today OpenAI has released GPT-5.4, which does exactly this quite well. The reviews are coming in, and they are quite, quite good. Daniel continues: some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire. What does one do in a world like this? So let's go through them one by one. There's 17, 18 questions, something like that.
First, let me tell you about Labelbox. Reinforcement learning environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams. So he kicks it off with an easy question: in a post-AGI world, where does the value accrue? People were debating this at the time. Application layer versus foundation model layer versus infrastructure layer. Value has definitely accrued to the infrastructure. There's the hair layer. I mean, even post-AGI, John's new haircut is paying. Paul Graham wrote a whole piece about how brands are important. We'll cover that tomorrow. But he compares it to the quartz crisis in the watch world and has a bunch of thoughts about value in the post-AGI era. But just looking back at the last two years, value has clearly accrued to the infrastructure layer. So that means chips, packaging, power, et cetera. And this is the Situational Awareness trade. By and large, Nvidia basically captures, and did for a while, more than 100% of the profits from the AI boom, because so many of the other companies in the AI space were losing money. The foundation labs are losing money. Nvidia's gross margin went from like 30% to 60%. And it's borne out in market cap. Nvidia added $3.2 trillion in market cap from when Daniel wrote this piece. And I remember listening to him on Stratechery talk to Ben Thompson alongside Nat Friedman and say, yeah, based on ChatGPT, Nvidia seems sort of undervalued. And I was like, oh, well, if they're saying it on Ben Thompson's Stratechery podcast, it's obviously priced in, everyone knows this. And I was wildly wrong. Never doubt yourself. But what's interesting is that the rest of the platforms did not see 3x gains. They did not add $3 trillion in value. Microsoft from January 2024 to today is only up 4%. Amazon's up 30%. And then you do have OpenAI, Anthropic, and xAI in the private markets. The gains there, they've been huge and staggering.
And it's like the fastest-growing companies in private market history shaking the venture capital world. You're either in or you're out. It's a huge deal. But you add them all up to $4 trillion. So crazy when you think about it. Satya in many ways went on such a generational run and you got a 4% bump. If you hadn't looked at the stock price at all, you would think, oh, it's got to be up, what, 40%? No, it's up 4%. And so that was Daniel Gross's next question: what happens to Nvidia and Microsoft? These were the two interesting players at the time, some of the biggest companies, the most AI-aligned. Nvidia absolutely crushed it. Revenue more than tripled, from $60 billion in fiscal year 2024 to $215.9 billion in fiscal year 2026. Microsoft has been far less dominant. Azure growth actually is accelerating, it's at 40% year over year, but the stock only returned 4%. As we said, the market punished the $80 billion in AI capex that Satya Nadella has been telegraphing to investors, because everyone's asking, well, I could be in Nvidia instead, and they don't need to really invest that much, because they're a fabless semiconductor design company. Their gross margins are increasing, they're printing cash. And you are saying, okay, you've got to spend $80 billion and we don't know when we're going to get the profits from that. So there's an open question there. So when it comes to the picks-and-shovels trade, you don't want to tie yourself up to an individual startup or a foundation model lab. You just want to own the simplest thing that value will accrue to. Nvidia all the way. They were the clear winner of the picks-and-shovels trade. Microsoft's infrastructure play, I think it's a good decision; it's just that the bets have yet to pay off for shareholders. A much more pointed question Daniel Gross asked: is copper mispriced? Was copper mispriced in January of 2024? The answer was, oh yeah, majorly. Now, raw materials don't move like meme stocks.
So the actual move: copper was $3.75 a pound in January of 2024. Two years later, it went to an all-time high of $6.61 a pound. So it's not like it even doubled. But that type of move in a basic material that we've been mining for hundreds of years is remarkable. And that's because AI uses a lot of copper. Nvidia's GB200 NVL72 server rack uses 5,000 copper cables. There's 72 GPUs wired together, but you need 5,000 copper cables to get it all to work together. If you stretched out all of the copper wire in an NVL72 from end to end, it would run two miles. And this is one server rack. It's not like going across the data center. This is not taking the data from AWS East to California. This is within that one server rack: you stretch out the copper wire, you're going two miles. It's an incredible amount. A single 100-megawatt data center, which is a data center for ants by modern standards, needs around 3,000 tons of copper. And when you multiply it all out, data centers broadly will be using half a million tons of copper annually in a few years. And people are actually saying that copper is the new oil. But there are a bunch of things that are also the new oil in the AI buildout. It's so complex, there's bottlenecks everywhere, so you've got to take that with a grain of salt. But copper today, it's looking like oil is the new oil. What is crude at? 80. So it was at 70 before the Iran war broke out. There were predictions that it would go above 100. Oil is another commodity that does not move in the same fashion as a meme stock. Except for today. Except for today, big move today, up 8%. But typically, as prices go up, the firms drill more and prices reach equilibrium. "U.S. Drillers Aren't Rushing to Increase Oil Production" is on the front of the Wall Street Journal business section today.
The Middle East is on the cusp of a prolonged conflict that could push oil prices to heights not seen in four years. For now, American oil drillers are sitting this one out. The US oil benchmark settled at $74.66 a barrel Wednesday, the highest front-month settlement since the 12-day exchange of Israeli, Iranian, and American strikes last June. But to West Texas oilmen, it makes little sense to add expensive rigs and boost production when the war could be short-lived and crude prices drop. So that's what's happening in the oil markets. Anyway, moving over to real estate. Daniel Gross asked: is San Francisco the new Detroit? I'm not exactly sure what he means by new Detroit. New Detroit meaning the old Detroit, when it was Motor City and they were building amazing cars there? Or the new Detroit in the sense of Detroit today, a hollowed-out shell of what it used to be, a former boomtown? The one thing that is clear is that SF is back. SF is completely booming. Office vacancy fell from 36.9% to 33.5%. OpenAI has a million square feet of offices. Anthropic has a 25-story tower. Sierra, an application layer company, signed 300,000 square feet of office space in San Francisco. The Bay Area received 78% of AI venture capital in the first half of 2025. And there is a flip side to this. Overall employment in San Francisco is still down relative to pre-pandemic. Some people left. Some legacy companies aren't hiring as much. All the hiring is happening at the AI startups and labs. But housing prices remain strong. It's certainly not a hollow shell by any means. And if you've ever visited San Francisco, you can tell that it's cleaner and safer than it has been in previous years. So AI overall was not a total rethinking of San Francisco, and the hub of activity did not move elsewhere. Tech didn't move to Miami or New York or Austin or Los Angeles. San Francisco is still where it's at.
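Circling back to the copper numbers from a moment ago, they can be sanity-checked with a bit of arithmetic. This is a rough sketch: the tons-per-megawatt intensity and the implied annual buildout are our own back-of-the-envelope derivations from the figures quoted on the show, not sourced estimates.

```python
# Sanity-checking the copper figures quoted above.

start_price = 3.75   # $/lb, January 2024
peak_price = 6.61    # $/lb, all-time high two years later
gain = (peak_price / start_price - 1) * 100
print(f"Copper price move: +{gain:.0f}%")  # a big move, but not a double

# A 100 MW data center needs ~3,000 tons of copper,
# so the implied intensity is about 30 tons per megawatt.
tons_per_mw = 3000 / 100

# At that intensity, consuming half a million tons per year implies
# on the order of 16-17 GW of new data center buildout annually.
annual_buildout_mw = 500_000 / tons_per_mw
print(f"Buildout implied by 500k tons/yr: ~{annual_buildout_mw / 1000:.0f} GW")
```

So the "half a million tons annually" claim is internally consistent with the per-facility number, assuming the buildout runs in the mid-double-digit gigawatts per year.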
The next question he asked: how does AI change wealth inequality? It's sort of too soon to tell. The data is not entirely clear; it hasn't moved that much. But there are some interesting studies. The International Monetary Fund released a working paper in 2025 that said AI could reduce wage inequality. Reduce wage inequality? How is that possible? Well, what they say in this paper is that high-income tasks, the job of a lawyer, the job of an executive, the job of a management consultant, will be automated before the job of the machinist, the gardener, the street sweeper. And so you will see wages at the high end fall while wages at the low end remain stable. And at the bottom end, you have the minimum wage. So higher-end wages go down, the bottom wages stay the same, and that reduces wage inequality. On the other side, AI could worsen wealth inequality, because it's concentrating capital returns among tech owners. The OECD found that wage growth was actually the strongest in low-skilled occupations: assemblers. I didn't even know that was a job. But if you're an assembler, you've seen your wages increase by 11.6%. But you know who's had it the hardest? The CEOs, the high-skilled workers. Those chief executives saw their wages increase by just 2.7%. It's rough out there for a CEO, apparently. Brutal. This is mostly because of minimum wage increases. But in general, we're seeing wage inequality decrease but wealth inequality increase, because there is incredible stock market concentration right now. The Mag 7, the seven biggest tech companies in America, now comprise 32% of the S&P 500 market cap, and they drove 42% of total returns in 2025. So if you're invested in Tesla, Meta, Microsoft, Apple, Google, Amazon, Nvidia, you did very well; your wealth increased.
But if you weren't in those and you had a broad index, or you didn't have a lot of capital to begin with, you were left behind. So that's increasing. Paul Graham shared two days ago: companies grow fast now. That's the reason economic inequality is increasing, not some sinister policy shift. He was highlighting Anthropic's recent growth. He said when companies grow fast, it makes founders doubly rich. The company not only hits a given revenue number sooner, but it is more valuable when it hits it, because the value of the company will be a multiple of the growth rate. People who don't understand the math of valuations can't imagine that founders would get so rich naturally, whereas to founders and investors, it's the most obvious thing in the world. This is one of the reasons there's such a disconnect between the tech world and politicians. Yes. So if you want to invest in the Mag 7, or you don't, head over to public.com. Investing for those who take it seriously. Stocks, options, bonds, crypto, treasuries, and more, with great customer service. This is also true in private markets. As Paul Graham mentioned, AI startup mega rounds: $110 billion for OpenAI, $30 billion for Anthropic. This concentrates enormous private wealth among a small number of founders and investors. So wealth inequality, AI will probably change it. At the very least, it will change it as much as previous technological booms. If you were the founder of Instagram, you did very well, because there was a lot of wealth creation all of a sudden in one major tech boom, mobile. And AI is already broader and bigger than mobile. Moving over to energy and data centers: if it does become an energy game, what's the trade? It did become an energy game, and the trade was buy everything, basically, because every energy name basically did very well. Anyone who got the trade right did very well. Vistra returned 321%. It was the second-best-performing S&P 500 stock of 2024. Wow. You know who beat them in 2024?
Palantir. Palantir mooned. They were already in the S&P 500. They did very well. But Vistra was sort of the secular energy winner. Constellation Energy tripled in size after ChatGPT's launch. NRG Energy, which is a great name, NRG, gained 95% in 2025 alone. Surged 700% in 12 months. And that's without, I think, even producing any energy. No, just the idea that they would produce energy someday. Yeah. Nuclear went on a tear. Oklo is in more of the startup, pre-revenue camp, I would say, or pre-product camp. But Microsoft signed a $16 billion, 20-year PPA to restart Three Mile Island. Google signed with Kairos Power for 500 megawatts of small modular reactors, and Meta contracted 6.6 gigawatts across multiple nuclear providers. So energy was just such a great trade from the beginning of the AGI boom in 2024. Next question: across the entire data center supply chain, which components are hardest to scale up 10x? What is the chip-on-wafer-on-substrate of the data center? Yeah. So chip-on-wafer-on-substrate, that is TSMC's secret sauce. It's the most gating factor in scaling TSMC's 2-nanometer chip production. It's what allows them to package HBM and the GPU all on one chip. And TSMC reported that they were sold out years in advance. It was a huge bottleneck in the data center world. But the biggest bottleneck was probably power transformers. We heard a lot about this. Lead times for new power transformers reached over three years in some cases, with a 30% supply shortfall. High-voltage circuit breakers too. Yeah, I remember my mom was in escrow on a condo. Really? Yeah, and closing was contingent on the development getting a transformer. She waited like six weeks or something like that. They were like, we don't know when we're going to get it. And she ended up just backing out. Well, just because there was so much demand. I mean, we really got caught off guard by this. I mean, transformers.
The cost surged 150% since 2020. It's 100-year-old technology. It's not new, it's not crazy innovation. Overnight success. Overnight success. But it became the binding constraint on how fast data centers could connect to the grid. If you are running a transformer company, you're selling transformers, you've got to get on Shopify. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. Imagine checking out for a transformer with Shop Pay. It'd be incredible. An incredible experience. Coal. Was coal mispriced? Sort of, but not nearly as much as copper. Thermal coal prices actually declined 22% in 2025, but they rebounded a little bit by early 2026. Coal stocks did fine. Peabody Energy gained 34% over 12 months. Consol up 37%. I love how you're like, coal stocks did fine. Yeah, they were only up 34% and 37%. I mean, compared to everything else that's doubling and tripling, it's like, nah, you know. Okay, if you were like, I'm the coal guy, I believe coal is the thing: Peabody Energy is up 73% in the last six months. Only 37? Only up 161% in the past year. Yeah, only. Only up 728% in the past five years. It did. It did fine. It's just not as exciting as copper and it's not as exciting as nuclear. On the operational side, US coal-fired generation surged 13% overall in America, 23% in Ohio, 58% in Oklahoma, where there are sort of data center hotspots. There's more data centers in those areas. So bringing more coal onto the grid allowed for more load balancing. Anyway, nations: who wins and loses? Take a wild guess at who won. America, baby. America won. Next question. But it's really true. The stats are crazy. The United States is truly the dominant winner of the last two years in AI. $109 billion in private AI investment in 2024 alone.
Way more now, if you look at the $200 billion capex guide from AWS, $110 billion going to OpenAI, $30 billion going to Anthropic, tons of money going into xAI. The money is truly flowing in America. In 2024, China had just $9.3 billion invested in private AI companies. There's $470 billion cumulative since 2013 in America, more than all other countries combined. The US produced 40 notable AI models in 2024 versus China's 15. Now, the game's not over. Lots of countries are making big investments now. And you can't forget France's 30 million euros. They put in 30 million. Yeah, it's serious. It's serious. Well, let me tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange. Just do it. So next up, India. What's going on in India? DG says $250 billion of India's GDP exports are essentially GPT-4 tokens. What happens now? John says this is starting to take shape, but it's still early. Situational Awareness is placing some short bets which map to the decline in hiring. I think they were short Infosys, which is an IT outsourcer. You need some software written, you basically send a prompt and basically get back code, which is what these models are fantastic at. And so that does seem like really, really steep competition in the near term, even in the long term. And so I think he's right to question that. Although we aren't seeing it in the overall India hiring data or the overall Philippines hiring data just yet, it is showing up a little bit in India's IT export sector. Major Indian IT firms collectively, in 2024 and 2025, shrunk by roughly 58,000 employees. And that's a dramatic reversal from the previous three years, 2021 to 2023, when they were staffing up; they added 360,000 employees. Yeah. The only thing here is that this is in some ways a proxy for Big Tech hiring.
Because a lot of the labor that's happening at these firms is effectively offloaded to some of these firms in India. So in some ways, if there was over-hiring in Big Tech, there could have been effectively over-hiring here, and then it's retracing. And yeah, you didn't put it in here, but it's also interesting to look at the hiring activity in the Philippines, which is not as dramatic to the downside as one might think. What do you mean? When we looked at the data, I think it was a couple weeks ago, it's not like there were jobs falling off a cliff. It's relatively stable. Yeah, there's sort of these concentric circles with all of these stats. If you look at Block, what's happening at Block? Well, they lost 50% of their workforce. That's a massive cut. And then you look at tech hiring overall, and it's slowing down. You look at white-collar work, and we're not adding a lot; it's maybe shrinking by 0.1%. And then you zoom out to the American economy, and it's actually growing. And so you have these concentric circles of impacts. And so if there's one company in India doing IT outsourcing, that might be hurt, and the worst-performing one is hit hardest, but the best-managed one might be able to make it through. And then you zoom out and you're like, well, the overall Indian economy is doing okay, and the overall global economy is doing okay. So there's all these cascading effects. And I think the interesting takeaway was, when somebody like Dario says 50% of white-collar work might be automated, it's going to be the bottom 50%. It's not going to be at random. It's not going to be a coin flip on whether you keep your job. It's going to be entry level. It's going to be: were you not at the top of your game?
So the question is, if we are displacing software engineers, will software engineers become machinists? This is what Daniel Gross asks. He says: what is the Euclidean distance of reskilling in prior revolutions, and how does AGI compare? The typist became an executive assistant. Can the software engineer become a machinist? We're not seeing this yet. Software engineers aren't grabbing blue-collar jobs just yet, but there is a divergence starting to show up in the data between software developers and programmers. Programming jobs are in decline, but software engineering jobs are actually growing. And so I think what's happening is that as coding models and agentic systems allow for building systems at higher levels of abstraction, demand for AI engineers has grown 143%. And so if you're going to hire someone to help you build software, you want them to be AI native. You want them to go beyond full stack, which typically meant you could do front end in JavaScript and back end in Python. Now, full stack or AI engineer means everything from prompt to design to product development to deployment to operations and DevOps to database migrations to front end and back end, because all of that is going to be handled by agentic systems. And so entry-level hiring at top tech firms has dropped 25%. Maybe AI, maybe over-hiring from the COVID overhang. We'll see. Internship postings fell 30%. So the decision is either reskill upwards to more of a "manager of infinite minds," I think was the quote, a manager-of-agents role, or, yes, it might be time to work on those machinist skills. I'm extremely bullish on Tyler. We watched him assemble the first iPhone ever in America. And he's also an incredible software developer, not really a programmer, because he doesn't really know the programming languages that he employs. Not true. That's not true. There we go. Yes, I think Tyler makes it, whether he becomes the manager of infinite minds or the machinist.
I am pretty bullish on the light blue-collar thesis, which is that you'll have robotics, so you'll be in the machine shop, but you're just using an iPad; you're not working the machines by hand. Yeah, I think we drop you in a Nike factory, you're running that place in two weeks, guaranteed. I'm not kidding. I'm so bullish on young people who can use all the tools and actually go and understand the leverage that they're getting from the modern systems natively and not need to fall back to old habits. Yeah, there was a good Journal article yesterday about the Colgate head of AI, and it was very interesting, because I was like, oh, I feel like I could do a pretty good job at this. But, you know, he's just basically telling everyone, like, yeah, guys, you actually have to use AI, despite what these random surveys say. My guy has had one job in his life, working on a podcast, and thinks he could go dominate in the big toothpaste space. Yeah, I love the confidence. Athletes. We got it. You are a corporate athlete. We've got to get the Colgate guy on the show and see whose takes are better: Tyler Cosgrove or the Colgate AI guy. Anyway, the chat's saying they don't think you're a supply chain guy, Tyler. I think you could pick it up. Ask an LLM. Claude, what do I do? Yeah, make no mistake, this is a job for Claude. Right? Let me tell you about, what happens if I hit this? Let me see. Does that take over? Fin AI, the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. Electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed eight and a half million Americans with a tremendous budget. Does that repeat? So, clearly it hasn't happened yet. We also haven't seen high unemployment yet. So we'll see.
The Trump administration did launch America's AI Action Plan in July of 2025. We talked to one of the authors of that AI Action Plan, Dean Ball, yesterday on the show, and they issued some executive orders on AI education and skilled trades. And the Department of Labor awarded $84 million in apprenticeship expansion grants. But that's like France money. That's not enough to really make an impact. Overall, US workforce development spending is at 0.1% of GDP, which is second to last among OECD nations. I wonder who's actually last. We should look that up. Anyway, no program currently approaches the scale of the Works Progress Administration or the New Deal. I think this will change very, very quickly if the data comes through that we're actually shedding tons of jobs. America is very good at creating new jobs and running the printing press. And this is the whole reaction to the Citrini article. Anyway, he also asks a more philosophical question that doesn't really have a detailed answer, but I'd love your take on it. Jordi, too. Is lifelong learning worth investing in? Something worth doing beyond the economic value of mastering the task? It's a very abstract question. It's a very personal question. I tend to think yes. I tend to think that on Maslow's hierarchy of needs, you need food, shelter, family, friends, clout, or whatever. At a certain point, learning a skill for the sake of learning that skill is edifying, even in a world. It's like going to the gym. I was listening to Sam Altman talk about how kids that are born today will never be smarter than the smartest AI systems. And that's weird. That's different. But I was thinking, there was a time hundreds of years ago when you could actually be the best thing in the world, the best option for carrying lumber, other than maybe an ox or something.
But in general, before we created machinery, before we created the car, you could be the fastest person. And then as soon as the ZR1 was released, you didn't stand a chance. And I had to explain this to my son at some point. He's very interested in cheetahs. And I was telling him, cheetahs are the fastest animal. And he was like, but is it faster than a car? And I was like, not even close. Me in a car, I'm smoking that cheetah. I am slower than that cheetah, but I'm way faster in my Cadillac. DG says: something worth doing beyond the economic value of mastering the task. Yes. And the way I would kind of flip this: I feel like most of the gains in my life have come from mastering myself, understanding myself, learning about myself, and then understanding the world, and then combining those two things. It hasn't been about mastering any one specific task. Yeah. And so I still think there's tremendous, tremendous gains to, again, understanding yourself, understanding the world, how these two things fit together. So I agree with you on the level that there will be not just tasks to master, but ideas and combinations of skills that will be valuable economically. But even in an ASI world where there is no economic value to any task mastery, I still think it is worth investing in lifelong learning, because that will be edifying. That will be satisfying. Yeah, just enjoyable. It's becoming a painter, becoming someone who gets joy purely out of the process of painting the landscape. When cameras exist, when generative AI exists, it can still be edifying and valuable to sit there and look at a landscape and paint it and wind up with the result, even if the result is not economically valuable or anywhere near what you could get with technology like a camera or gen AI. So, very abstract, but still something fun to think about before we move on to inflation. Let me tell you about CrowdStrike.
Your business is AI. Their business is securing it. CrowdStrike secures AI and stops breaches. So if AI is truly deflationary, how would we know? What chart or metric would show it first? And you could look at AI API pricing as probably the best answer here. So there are certain tasks that AI can do now, and those tasks have moved from being limited by regulation or human output or the population workforce to the compute allocated. And the cost of compute is falling. So TVs were expensive; they got cheaper as we made more and more factories that were more and more automated and could produce ever cheaper TVs. And that's why, when you look at the inflation trends, you see health care, education, anything that's highly regulated, increase in cost. There are only so many lawyers that are allowed to pass the bar; there are only so many doctors. And so health care has increased in cost, where TVs have been this knockout, drag-out fight between the most ruthless companies all over the world, and the price of TVs has fallen. And so if you start putting more of those jobs on the AI, on the AGI curve, you see deflation. Where should we see it first? We should see it in AI API pricing, and GPT-4-equivalent inference costs have collapsed. In late 2022, which is right before GPT-4, I think GPT-4 came out in early 2023, it was $20 per million tokens. In December of 2025, the same model: $0.40 per million tokens. So this is a 50x decline in three years. I think that you can even get more intelligence for cheaper depending on how you're inferencing these. You can run some of these models locally, and the energy cost is even cheaper at that point. So this 50x decline in three years is falling faster than PC compute costs. It's also falling faster than dot-com-era bandwidth costs. And so this should be the leading indicator for downstream service deflation.
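A quick sanity check on those figures. The $20 and $0.40 per-million-token prices and the three-year window are from the episode; the annualized rate is just arithmetic on top of them:

```python
# Implied annual price decline from the episode's numbers:
# $20 per million tokens (late 2022) -> $0.40 (December 2025), ~3 years.
start_price = 20.00   # $/M tokens, late 2022
end_price = 0.40      # $/M tokens, December 2025
years = 3

total_decline = start_price / end_price          # 50x cheaper overall
annual_factor = total_decline ** (1 / years)     # ~3.68x cheaper each year
annual_pct_drop = (1 - 1 / annual_factor) * 100  # ~73% price drop per year

print(f"{total_decline:.0f}x total, {annual_factor:.2f}x per year, "
      f"{annual_pct_drop:.0f}% annual price decline")
```

A roughly 73% annual price decline is steeper than the ~30-40% per-year cost improvements usually attributed to Moore's Law-era compute, which is the comparison being made here.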
In any example that you give of, like, oh, I was just able to use ChatGPT to check my car. I talked to somebody who said they love ChatGPT because their check engine light came on. They were able to take a picture of it, and it identified the car, told them what they needed to do, and they felt like when they went in for servicing, they weren't going to get raked over the coals by their mechanic, because they had sort of a mechanic in their corner, fighting in their court. And so this should effectively act as, okay, you have your own mechanic on your side, and then you can do it yourself, but you can also just go in, negotiate, and say, hey, I know that the fair price for this is 50 bucks, not 200, so give me the good price. And so you should see that. Yeah, you can assume that tokens are going to get cheaper and more effective indefinitely. But all these new product launches, things like OpenClaw, are actually expanding the number of tokens. Yes. And so you are seeing deflation, but then you're also seeing Jevons paradox and an increase in demand. And that's why overall growth is increasing. Historically, it's like one person, one AI chat, you're only using so much. Then you have agentic products, then you can actually multiply the number of agents that an individual can be running, and the output goes through the roof. Time to head to Gastown, baby. So he asks this follow-up question: how should one think of deflation if demand for intellectual goods continues to grow as production costs go down? AI API costs are plummeting, but AI lab revenues are skyrocketing. So prices fall, volume surges, total spend increases. And the really interesting wrinkle is we're living through a SaaS apocalypse. We're living through the Citrini article. But SaaS vendors are imposing an AI tax of, I think the stats were somewhere between 20 and 37%, on renewals.
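The "prices fall, volume surges, total spend increases" dynamic is just demand growing faster than price declines. A toy sketch with made-up numbers (not figures from the episode) makes the mechanics concrete:

```python
# Toy Jevons-paradox illustration: unit price falls 50x while token demand
# grows 100x, so total spend still doubles. Illustrative numbers only.
price_per_m_tokens = [20.00, 0.40]    # $/M tokens: then vs. now
m_tokens_demanded = [1_000, 100_000]  # demand surges as price collapses

spend = [p * q for p, q in zip(price_per_m_tokens, m_tokens_demanded)]
print([round(s) for s in spend])  # [20000, 40000]
```

As long as volume grows by a bigger multiple than price shrinks, lab revenues rise even while the per-token price collapses, which is exactly the pattern being described.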
So AI vendors, you know, you go for your renewal and they say, hey, we've got AI features now, you've got to pay a little bit more. SaaS revenues are still increasing and growing even as the underlying cost to build software approaches zero. This mirrors computing's history under Moore's Law: each unit gets cheaper, total spending grows as new use cases emerge. So, classic Jevons paradox. Let me tell you about Figma. No matter where your idea starts, Figma Make, Claude Code, Codex, or a sketch, the Figma canvas is where ideas connect and products take shape. Build in the right direction with Figma. So, geopolitics. This was interesting because he put this question in the geopolitics section. That feels much more like a server AI buildout question. But he asks: does interconnect actually matter? And I was very confused, because interconnect typically refers to the ethernet cables between servers, or NVLink, or what the TPU is doing with Ironwood and the 3D torus topology. And I think the answer to that is absolutely. In large GPU clusters, 30 to 50% of training time is spent on inter-GPU communication, not actual computation. And so all the hype around TPU v7 Ironwood is due to its 3D torus topology, which connects 9,216 chips together. Nvidia's NVL72 connects 72 chips together. Interconnect is incredibly important. I'm wondering if the fact that interconnect is in the geopolitics section means it's maybe talking about the interconnections between Samsung and SK Hynix in South Korea, Taiwan and TSMC, and America. That's obviously relevant, and we'll get into that in the next question. But both are incredibly important. So yes, I do think interconnect actually matters, no matter how you read this question. The bigger question, and the interesting hot-take question he has in here, is: does node process matter if the country has more energy? So China has not been able to get their semiconductor manufacturing, their fab champion SMIC, to the leading edge.
They have SMIC, which is the TSMC equivalent. They have Huawei, which is the Nvidia equivalent. And they have SMEE, which is their ASML equivalent. They have not been able to clone TSMC to the level of Taiwan Semiconductor, but they can produce 14-nanometer chips. And usually they take those chips and put them in just random consumer electronics goods that they ship over here. So if you go to China and you want a co-packer or manufacturer to build you something, and you're like, yeah, I want it to connect to the Internet, or I want it to have a speaker inside, they're like, great, we will fab a 14-nanometer chip, or something even less frontier than that. But can they just marshal a ton of lagging-edge capacity and then say, hey, you know what, we have cheap energy, we're burning coal, we have nuclear, we have the Three Gorges Dam, we have hydropower, we have so much free energy, let's just spend 10 times as much energy on the lagging edge? Can we achieve AGI that way? And it seems like no. It seems like leading-edge nodes are incredibly important. You can't just throw a ton of lagging-edge 14-nanometer chips at the problem, at least with current architectures. Now, this might change, but no frontier model to date has been trained on hardware older than 5 nanometer. All the leading chips, that's Blackwell, Google's TPU v7, AWS's Trainium 3, use TSMC's 4-nanometer or 3-nanometer process. And I think when we dig into Groq and Cerebras and some of the newer stuff that's coming, everyone is saying we want to be on the most frontier node; we don't want to go backwards on that front. So China's best effort, Huawei's Ascend 910C, is on SMIC's 7-nanometer-class DUV process. They're not on EUV, extreme ultraviolet lithography; it's a deep ultraviolet lithography process. That's competitive for inference, but requires dramatically more chips and energy for training at scale.
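To make "dramatically more chips and energy" concrete, here is a back-of-envelope sketch. All the numbers below (the compute target, per-chip TFLOPS, and wattages) are made-up illustrative assumptions, not specs from the episode; the point is just how chip count and power scale when per-chip throughput drops:

```python
# Illustrative scaling: for a fixed training-compute target, a lagging-node
# chip with lower throughput needs proportionally more chips and power.
TARGET_EXAFLOP_DAYS = 100.0  # hypothetical training compute budget
RUN_DAYS = 100               # hypothetical run length

leading = {"tflops": 1000.0, "watts": 700.0}  # assumed leading-node chip
lagging = {"tflops": 300.0,  "watts": 550.0}  # assumed 7nm-class chip

def chips_and_megawatts(chip):
    total_flops = TARGET_EXAFLOP_DAYS * 1e18 * 86_400       # FLOPs needed
    per_chip = chip["tflops"] * 1e12 * 86_400 * RUN_DAYS    # FLOPs per chip
    n = total_flops / per_chip
    return n, n * chip["watts"] / 1e6                       # chip count, MW

for name, chip in [("leading", leading), ("lagging", lagging)]:
    n, mw = chips_and_megawatts(chip)
    print(f"{name}: {n:,.0f} chips, {mw:,.1f} MW")
```

With these assumed numbers the lagging-node cluster needs over 3x the chips and roughly 2.6x the power for the same training run, which is the economic wall the argument is pointing at.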
So, brute-forcing AI progress through disregard for energy consumption: it feels like it still hits economic walls at some point. And then there's also the question of how valuable the weights are. Are the weights the nuclear weapons, or is it the actual deployment? I was listening to this conversation about the geopolitical implications of AI. Is it that you have, okay, yes, the genius, the 2000-IQ god model, but you can only really ask it one question, is that what's valuable? Or is it that you have a country of geniuses in a data center, and you're able to marshal effectively 10,000 cybersecurity experts for each person on the enemy team, and there's just 1,000 people trying to hack your phone? Tyler, the constant line is that the inference is actually what matters, the compute, where the power comes from, as opposed to just the training. Training is obviously important, but inference might also be important. So there's a big question there. And last but certainly not least, DG asks: what is the likely Taiwan event, and what would be a leading indicator for it? John writes: a Taiwan blockade would be the biggest trigger. But Taiwan Strait tensions are already escalating. China conducted Joint Sword-2024B exercises in October of 2024, surrounding Taiwan with coordinated military operations. In December 2025, Justice Mission 2025 deployed over 100 aircraft, 90 crossing the median line, 13 warships, and 27 rockets fired from Fujian. 10 rockets landed in Taiwan's contiguous zone. Contiguous. Sorry. 12 to 24 nautical miles offshore. So they're just firing rockets and being like, they didn't hit land. You can see them from the beach, probably, 12 miles offshore. I think that's just over the horizon. Yep.
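A quick back-of-envelope on the "just over the horizon" claim, using the standard distance-to-horizon approximation d ≈ 3.57·√h (d in kilometers, eye height h in meters); the 2-meter observer height is an assumption for someone standing on the beach:

```python
import math

# Horizon-distance sanity check for "12 nautical miles offshore, just over
# the horizon," using d_km ~= 3.57 * sqrt(h_m).
def horizon_km(eye_height_m: float) -> float:
    return 3.57 * math.sqrt(eye_height_m)

NM_TO_KM = 1.852
d = 12 * NM_TO_KM                 # ~22.2 km offshore
eye = horizon_km(2.0)             # ~5.0 km horizon for a 2 m observer
# Height something 12 nmi out must reach to clear that observer's horizon:
needed = ((d - eye) / 3.57) ** 2  # ~23 m
print(round(eye, 1), round(needed))
```

So a sea-level splash at 12 nautical miles is itself below the horizon for someone on the beach, but anything rising a couple dozen meters, a plume or an airburst, would clear it: "just over the horizon" is about right.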
China also recently separated "peaceful" from "reunification" when talking about Taiwan, and their 2026-2035 plan gives you an idea of where things might be going. TSMC is planning ahead, working on a fab complex in Arizona, which Tyler visited, that should be able to handle 30% of total advanced chip production at scale. But it's on a knife's edge. If you want to know roughly how far that is: you've seen Catalina Island off the coast of LA? Yes. Catalina Island's, I think, 26 miles, so imagine. So, very visible. Yeah. You can see Catalina Island from Long Beach, and imagine a missile coming and landing halfway between where you are and Catalina Island. Crazy. Yeah. The question is, how does Iran and the conflict there update China's thinking on this? Some of the intel accounts have been sharing, who knows if it's true, that China actually has operatives in Iran, kind of learning in the same way that the US has learned from Ukraine, specifically in regards to missile capability. So, wild, wild time. And yeah, it just feels like a day hasn't passed since this was published where a Taiwan event feels less likely than it did the day before. Yeah. I think that's why Palmer has been trying to message around the blockade specifically as being a line in the sand, because that's somewhat of an abstract concept. Because if a blockade happens, it could be very bad. It could be the start of a turning point, but it would not be a hot war. It would not be boots on the ground. It would not meet the level of a declaration of war that many people would wake up to. And so he's been trying to message that, no, a blockade is the start of something bigger. We need to be aware, and we need to hold the line on not allowing a blockade to happen. Anyway, we will continue following that. But first, let me tell you about Railway. Railway is the all-in-one intelligent cloud provider.
Use your favorite agents to deploy web apps, servers, databases, and more, while Railway automatically takes care of scaling, monitoring, and security. Let's head over to Mouse Twitter. The Mouse Twitter check-in is on a roll. Scott Wilson from Mouse Twitter says a block of Parmesan is an elite on-the-go snack. This 8-ounce block has 80 grams of protein at 64 grams of fat. I don't know, Parmesan is like the worst cheese to just chomp on. Disagree. You think? Disagree. It's so hard. I know. But elite snack. I agree. I haven't been snacking on Parmesan since I was probably, oh, it's such an elite snack. I've never once caught you snacking like a mouse on a block of Parmesan cheese. I don't eat like a mouse. Stated preference versus revealed preference. I don't eat much like a mouse, fake mouse. But as a kid, I'd have like a bunch of big carrots and some blocks of Parmesan, and I'd just be going mouse mode. Mouse mode. That's good. Let me tell you about Gusto, the unified platform for payroll, benefits, and HR built to evolve with small and medium-sized businesses. So if he says it's on Slack, bro. It's in the drive. I just put it in the Notion, bro. I literally sent it to you on Teams. Did you check the Airtable, bro? It's in the Box, bro. It's single sign-on. It's on Okta, bro. No, you need the YubiKey. Check your Deel, bro. It's in Gusto. Check Outlook, bro. Let me actually tell you about Okta. Okta helps you assign every agent a trusted identity, so you get the power of AI without the risk. Secure every agent. This morning. I like that riff. This is a copypasta from the, it's on Tubi, it's on Hulu, it's on Zubi, like, all the streaming services. Anyway, what happened this morning? This morning, President Trump gave a phone interview in which he said, I fired Anthropic. Anthropic is in trouble because I fired them like dogs. Which, you don't.
You don't fire dogs. I would have fired my dog years ago. He's the least effective dog. Yeah. Isn't the phrase, he died like a dog? Yes, yes. He's just adapting that. But he sent Anthropic to the farm. No, if he's comparing them to dogs, it's man's best friend. This is bullish. He's saying, they're my dogs. This also coincided with Anthropic, I guess, officially being designated as a supply chain risk, which is, again, not something that I've seen a single person in the industry actually push for. From OpenAI to Amazon to Apple, pretty much everyone across the board, even Elon, I don't think, has been outspokenly for this, because, I mean, Dean Ball was talking about how Elon sort of went through a lot of this during the Biden administration. And so Elon's usually pretty opportunistic about dunking on competitors, but in this regard, he's been sort of quiet. I don't know, maybe I just missed the post. But yeah, I mean, part of this, I wonder. So Dario sent a memo on Friday that was pretty scathing and looks really, really, really bad. And the timing here maybe played a factor. It felt like they were still seemingly trying to work towards a deal. Yeah, let's read what he said. He said in his memo that he believes the attempted spin gaslighting is not working very well in the general public or the media, though he added that it is working on some Twitter morons. Is he talking about the administration? I mean, they're all on Twitter, right? Like, that's the whole thing with this administration. He's talking about lots of people, obviously, but, you know, not the most politic way to handle this, I suppose. Dario says you're a moron if you believe OpenAI's story around this. Yeah. But less than a month ago, Dario was greenlighting Super Bowl ads that were intentionally designed to mislead hundreds of millions of Americans. Yep.
And so now he wants to be the one, he wants to be the most trusted source through this whole debacle. And yeah, I mean, there's also the back-and-forth on being the most leaned in, bidding on one contract that's to develop autonomous weapons, but the AI doesn't run on the autonomous weapons, and there's a whole bunch of different wrinkles here. It's just such a nuanced conversation. I was digging into the question of autonomous weapons. Not only are there, apparently, according to Ukrainian intelligence, fully autonomous lethal Russian weapons that will just be sent off, and if they see a tank, they'll just go and attack it. That exists. And so there's always the game theory about: if they have it, do you have it, do you need it, where's the line in the sand? And then also America, like the Patriot missile system, apparently has a fully autonomous mode where it can just decide what to attack.
And there's a whole bunch of other nuance here that I think is lacking from these little snippets. I don't know. I am glad he went on CBS and sort of unpacked a lot more of it, but I'm still left dissatisfied with the level of discussion around where the line in the sand is drawn. Even on the privacy issue: there are six levels of laws around mass surveillance. It's not just the Fourth Amendment; there's a ton more. Some of them are very precise, some of them are more broad, and how the company interacts with that is clearly important. That's actually probably happening in these discussions, but then it gets boiled down into one sound bite that everyone has to interpret and read the tea leaves on, and it's left me unsatisfied. But yeah, Dean Ball yesterday says: I really cannot see how Anthropic's position benefits at this stage from communications like this. Seems like it just pushes the Trump admin to escalate further while also alienating potential allies in the industry. So, yeah, I guess he seems, clearly when he was writing this, fairly confident that this was going to stay private. But you have thousands of people working at the company, and it's an insane thing to send to your whole team. So, Tyler, do you need to delete a post? What happened? Yeah, so, okay, well, no, time will tell. We'll see. But I think when the first rumors or whatever came out saying that maybe we'll put a supply chain risk on Anthropic. Yep. I posted, nothing ever happens. So I'm looking. I think you jinxed it. Yeah, I'm not doing well on that right now. Something happened. Yes, something happened. But, you know, it's always unclear. Maybe this is just another. I think this brings them to the table. It's such a ridiculous thing in a world where you're still seeing headlines coming out of D.C., like right now, that are saying Dario is still trying to get something done.
They have to go to the table and they have to work this out, because it's just unpopular. And I just think that the actual, like, the real individuals, the real Americans and the real AI researchers, are not as far apart as the rhetoric makes it sound. I think that there's a lot of comms and PR and bluster and ego at play between all the parties involved, and folks are at each other's throats. But I think that when, if they actually sit down and try. I have a buddy who was on a nuclear submarine for like a year underwater. And I was like, what was that like? Were you, like, finger over the button? He's like, no. It's a ton of paperwork every day. I'd go and see how many whales are there, and then count them up, and with pen and paper write down a report on how many whales. It was the most boring, least automated thing. No, seriously. And that was the thing that we learned from Project Maven. Google had this major protest, a big walkout. And at the end of the day, Google was giving the Department of Defense just access. They weren't building killer robots. They were giving them access to TensorFlow APIs that other defense contractors would be able to use to run just classical machine learning programs to classify items in images. So: how many cars are in this city? Where are our drones on the map? What are we seeing? Basic stuff that was typically the domain of an Air Force Reserve person sitting in an office, probably in Nevada, just clicking: okay, yeah, tagging, okay, yeah, that's a house, that's a car, this is what this is. It was not the true command and control and endgame that people worry about. So, figuring out where these models can actually have an impact, I think there's going to be a lot more agreement than there is disagreement. What do you think, Tyler?
Yeah, I do think, basically, before this memo got leaked, there was a sense that, okay, clearly he's positioning himself firmly against the admin. Yeah. And at least it seemed like the researchers generally were on his side. Like, you see a lot of Anthropic researchers, or OpenAI researchers who are like, oh, maybe I don't really agree with what OpenAI is doing, maybe I should actually go over to Anthropic, publicly posting this. Yeah. But then this memo comes out, calls every OpenAI researcher gullible. So aggressive. Yeah, it's a complete crash out. Yeah. Oh, well. Crash out. Let me tell you about Cisco, critical infrastructure for the AI era. Unlock seamless real-time experiences and new value with Cisco. So what did Miles Brundage have to say? So: I'm not super surprised. I've said before that Anthropic has too much of their identity wrapped up in "OpenAI bad." But this memo doesn't look great, IMO. I totally agree with their criticism of much of OpenAI's political activities, and I think people should be very skeptical of the safety stack stuff. Had a whole thread about it the other day. But the "this is who they are," "gullible," et cetera stuff is a bit much. It's getting to be ad hominem. They gotta back off the aggressive rhetoric. Roon was feeling sort of gullible today, maybe due to selection effects. I love that. Have to say, I really enjoy these crash outs. It's pretty keyed to read communication that's poorly calculated and wasn't meant for your eyes. So few today are able to speak in these sweeping Shakespearean terms, that they hate their competitor and let it blind their calculus. That is a wild, wild time. Yeah. And I just don't think Dario is in that strong of a position to be complaining about or insinuating that OpenAI is lying when he's running Super Bowl ad campaigns that are just lying to everyone about OpenAI. Yeah. It's like, why should anyone trust you? Yeah, I do wonder.
I'm still very interested in, you know, Anthropic gets a lot of credit for predicting the future, predicting how there would be a showdown between the US government and the AI labs. And I'm wondering how much of exactly what we're seeing right now was predicted, and is that good? I think it was not predicted, because they don't seem to be handling it very carefully. Maybe, maybe. Or maybe this is all what Dario wants. He wants to be labeled the supply chain risk. He wants to have to go his own way. He wants to be fully out. And all of this was a ruse to force the hand of the administration, like, I'm on the outside. You're saying he wanted to inspire himself, to have to grind harder? Maybe. I don't know. I'm still processing that idea of the Truman-Oppenheimer interaction. Maze says: imagine autonomous weapons powered by this. You got a screenshot from Claude. Opus 4.6 says, tell me a color and I'll try to guess it. Maze says blue. Claude's guess is blue. Claude says, nailed it. Nailed it. I love it. You know what else nailed it? Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. The Trump administration is drafting rules that would require U.S. approval for nearly all AI chip exports, giving Washington sweeping power over companies like Nvidia and AMD. The draft framework sets licensing rules based on shipment size, from simplified reviews for small orders to government-level approval for massive deployments, potentially tying exports to security guarantees or U.S. investments. Officials say the goal is to make American AI the global standard while controlling critical infrastructure, though delays or strict conditions could disrupt international AI projects. Interesting.
I mean, it depends. It's so odd when someone comes out and says, we're going to create a framework for control. Because in the most extreme, you could be like, okay, now the government's deciding to send all the frontier Blackwells to China, right? That would be the most extreme formulation. That's clearly not what's going to happen. Ben Thompson's take is that the previous-generation chips, that's fine to send over to China. Keep them dependent, keep them from being backed into a corner, keep them from attacking Taiwan. I still don't know where I sit with it. What do you think, Tyler? Yeah, I think even in this context, the Anthropic thing is so interesting, because if you take this as saying the US is going to be more restrictive, that's what Dario has been saying. Yeah, exactly, exactly. And that's what this reads as; on paper, he almost agrees with it, right? Yeah, totally. And in action, it's completely different. Yeah, that is weird, right? But, I mean, again, maybe there's, you know, is this a piece on the chessboard? Is this a chip that's being traded, metaphorically, to Dario in some ways? Where it's like, how do I win over the labs, get them to work with me more effectively? Well, I'll give on the thing we agree on, which is export controls. So I will be more aggressive about export controls in exchange for, you know, yes, I want unfettered access to the best AI in classified contexts. I'm still also very, very interested in the rollout of the different models in declassified contexts. It just feels so, so crazy that Anthropic was able to get such a lead there. I know it's hard, but we have AI. Just don't make mistakes. Make it compliant with FedRAMP. Please, please do it. Don't make mistakes. Anyway, really quickly, let me tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB.
Don't just build AI; own the data platform that powers it. Douglas says: why is everyone so mad at Chamath? All he did was lose billions of retail investors' money by promoting one-pager SPACs. It's not like he then told them to enjoy their capital losses or anything. Give the man a break. Chamath says: yes, I did the capital lossing. He is on a rage-baiting tear this morning. Yes, this is a hot tip: if you're going to be talking about a financial asset, you need to not be specific about whether you're advocating for a long position or a short position. So you just need to say, I got this company. It's a winner. It's a winner. Now it's up to you: are you going to buy puts or calls? One of them is going to be correct, but I'm not going to tell you which one. You should look into it. That's on you. There are a lot of retail investors that are upset, but Chamath is in the arena, duking it out on the timeline. Primack says the IPO market was expected to be huge this year, but so far pricings and filings are both down more than 20% year over year. Of course, we're just getting started. We have a number of bigger IPOs lined up. There was an interesting article.
Activate. Go, go. The retriever mode. Market-clearing order inbound. Vibe. I see multiple journalists on the horizon. Founder, you're watching TVPN. Today is Thursday, March 5th, 2026. We are live from the TVPN Ultradome: the temple of technology, the fortress of finance, the capital of capital. Let me tell you about ramp.com. Time is money, save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more, all in one place. We have. We are sick. John and I are both very sick. Sick of not podcasting. Let's go. We are actually very sick. We are deathly ill. But that's the beauty of watching a daily show live. There's no risk of me getting you sick because you're on the other side of a screen. Screen time. And that's the beauty of a daily show. You don't ever have sick days. Yeah, there's no days off because there's no days off. And we have a fantastic lineup today. We have Mark Gurman joining. Let's pull up the Linear lineup. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We have Dan Primack coming on from Axios to break down the IPO market. What's going on in venture capital? What's going on with venture capital returns? We have. Do we have Zach from Plaid coming on today? That'd be awesome if so. We have Kevin Ricord from Nominal, massive unicorn round that just happened with Founders Fund, coming on in person, live. And of course Max from Vast Space. Well, thank you so much for tuning in today. Speaking of Plaid: Plaid powers the apps used to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending, now with AI. Hey, John, why is no one talking about Daniel Gross? No one. Literally no one. Really? No one? No. He should be a household name. You should get into a taxi and they should be, oh, did you see how incredibly dialed Daniel Gross's AGI trades were? January 14, 2024. Yes, this is a great website. What is it?
DanielGross.com, AGI trades. The classic Times New Roman serif font, 12 point, just hammering it out in vanilla HTML. No need for styles, no need for Bootstrap templates. What's that CSS framework? Tailwind. He doesn't need Tailwind for this. He just wrote it, probably in Markdown or HTML directly, and a lot of it has come true. It's interesting because a lot of them are framed as just, like, open-ended questions. But if you believe in AGI and then you go through the questions, you will see exactly what happened over the last two years. And this has been the underpinning thesis of Situational Awareness in many ways. Daniel Gross was an anchor of the fund, of course. And it's particularly relevant today because. I'll read the intro. Yes, this is January 14, 2024, over two years ago. He says: I think we can all agree that GPT-4 completes many tasks at human-level proficiency. It is imperfect in odd ways. It can write software like a smart MIT undergraduate, but can't do basic task planning like an entry-level EA. It speaks all languages but can barely do math. Suppose the progress doesn't stop. Just like GPT-4 was better than 3, GPT-5 is capable of basic agentic behavior, that is, able to accept a task, work on it for a while, and return results. Nailed it. And of course today OpenAI has released GPT 5.4, which does exactly this quite well. Yeah, the reviews are coming in and they are quite, quite good. Daniel continues: Some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire. What does one do in a world like this? So let's go through them one by one. There's 17, 18 questions, something like that. First, let me tell you about Labelbox: reinforcement learning environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams.
So he kicks it off with an easy question: in a post-AGI world, where does the value accrue? People were debating this at the time: application layer versus foundation model layer versus infrastructure layer. Value has definitely accrued to the barber layer. I mean, even post-AGI, John's new haircut is paying dividends. Paul Graham wrote a whole piece about how brands are important. We'll cover that tomorrow. But he compares it to the quartz crisis in the watch world and has a bunch of thoughts about value in the post-AGI era. But just looking back at the last two years, value has clearly accrued to the infrastructure layer. So that means chips, packaging, power, et cetera. And this is the Situational Awareness trade. By and large, Nvidia captured, and did for a while, more than 100% of the profits from the AI boom, because so many of the other companies in the AI space were losing money. The foundation labs are losing money. Nvidia's margins went from like 30% to 60%. And it's borne out in market cap. Nvidia added $3.2 trillion in market cap from when Daniel wrote this piece. And I remember listening to him on Stratechery talk to Ben Thompson alongside Nat Friedman and say, yeah, based on ChatGPT, Nvidia seems sort of undervalued. And I was like, oh well, if they're saying it on Ben Thompson's Stratechery podcast, it's obviously priced in. Everyone knows this. And I was wildly wrong. Never doubt yourself. But what's interesting is that the rest of the platforms did not see 3x gains. They did not add $3 trillion in value. Microsoft from January 2024 to today is only up 4%. Amazon's up 30%. And then you do have OpenAI, Anthropic, xAI in the private markets. The gains there have been huge and staggering. These are the fastest-growing companies in private markets history, shaking the venture capital world. You're either in or you're out. It's a huge deal. But you add them all up, and it's only 1.4 trillion.
So crazy when you think Satya in many ways went on such a generational run and you got 4%. If you hadn't looked at the stock price at all, you would think, oh, it's got to be up what, 40%? No, it's up 4%. And so that was Daniel Gross's next question: what happens to Nvidia and Microsoft? These were the two interesting players at the time, some of the biggest companies, the most AI-aligned. Nvidia absolutely crushed it. Revenue more than tripled, from $60 billion in fiscal year 2024 to $215.9 billion in fiscal year 2026. Microsoft has been far less dominant. Azure growth actually is accelerating, it's at 40% year over year, but the stock only returned 4%, as we said. The market punished the $80 billion in AI capex that Satya Nadella has been telegraphing to investors, because everyone's asking, well, I could be in Nvidia, and they don't need to invest that much because they're a fabless semiconductor design company. Their gross margins are increasing, they're printing cash. And you, Microsoft, are saying, okay, we've got to spend $80 billion and we don't know when we're going to get the profits from that. So there's an open question there. So when it comes to the picks-and-shovels trade, you don't want to tie yourself to an individual startup or a foundation model lab. You just want to own the simplest thing that value will accrue to: Nvidia, all the way. They were the clear winner of the picks-and-shovels trade. Microsoft's infrastructure play, I think, was a good decision. It's just that the bets have yet to pay off for shareholders. A much more pointed question Daniel Gross asked: is copper mispriced? Was copper mispriced in January of 2024? The answer was, oh yeah, majorly. Now, raw materials don't move like meme stocks. So the actual move: copper was $3.75 a pound in January of 2024. Two years later it went to an all-time high of $6.61 a pound. So it's not like it even doubled.
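Those two moves are easy to sanity-check. A minimal sketch using only the figures quoted above (Nvidia revenue in billions of dollars, copper in dollars per pound):

```python
# Quick sanity check on the figures quoted above (all numbers from the show).

def growth(start, end):
    """Return the end/start multiple and the percent change."""
    multiple = end / start
    return multiple, (multiple - 1) * 100

# Nvidia revenue: fiscal 2024 -> fiscal 2026, in billions of dollars
nvda_multiple, nvda_pct = growth(60.0, 215.9)
print(f"Nvidia revenue grew {nvda_multiple:.1f}x (+{nvda_pct:.0f}%)")

# Copper: January 2024 -> the 2026 all-time high, in dollars per pound
cu_multiple, cu_pct = growth(3.75, 6.61)
print(f"Copper rose {cu_multiple:.2f}x (+{cu_pct:.0f}%), short of a double")
```

So "more than tripled" checks out for Nvidia, and copper's move lands around +76%, big for a base metal but, as noted, not even a double.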
But that type of move in a basic material that we've been mining for hundreds of years is remarkable. And that's because AI uses a lot of copper. Nvidia's GB200 NVL72 server rack uses thousands of copper cables. There are 72 GPUs wired together, but you need 5,000 copper wires to get it all to work together. If you stretched out all of the copper wire in an NVL72 from one end to the other, it would go two miles. And this is one server rack. It's not going across the data center. This is not taking the data from AWS East to California. This is within that one rack: you stretch out the copper wire, you're going two miles. It's an incredible amount. A single 100-megawatt data center, which is a data center for ants by modern standards, needs around 3,000 tons of copper. And when you multiply it all out, data centers broadly will be using half a million tons of copper annually in a few years. People are actually saying that copper is the new oil, but there are a bunch of things that are also the new oil in the AI buildout. It's so complex, there are bottlenecks everywhere, so you've got to take that with a grain of salt. But copper today, it's looking like oil is the new oil. Oil is the new oil. What is crude at? It was at 70 before the Iran war broke out; there were predictions it would go above 100. Oil is another commodity that does not move in the same fashion as a meme stock. Except for today. Except for today. Big move today, up 8%. But typically as prices go up, the firms drill more and the prices reach equilibrium. "U.S. Drillers Aren't Rushing to Increase Oil Production" is on the front of the Wall Street Journal business section today. The Middle East is on the cusp of a prolonged conflict that could push oil prices to heights not seen in four years. For now, American oil drillers are sitting this one out.
The US oil benchmark settled at $74.66 a barrel Wednesday, the highest front-month settlement since the 12-day exchange of Israeli, Iranian, and American strikes last June. But to West Texas oilmen, it makes little sense to add expensive rigs and boost production when the war could be short-lived and crude prices drop. So that's what's happening in the oil markets. Anyway, moving over to real estate. Daniel Gross asked: is San Francisco the new Detroit? And I'm not exactly sure what he means by the new Detroit. Detroit meaning the old Detroit, when it was Motor City and they were building amazing cars there? Or Detroit in the sense of Detroit today, a hollowed-out shell of what it used to be, a former boom town? The one thing that is clear is that SF is back. SF is completely booming. Office vacancy fell from 36.9% to 33.5%. OpenAI has a million square feet of offices. Anthropic has a 25-story tower. Sierra, an application-layer company, signed 300,000 square feet of office space in San Francisco. The Bay Area received 78% of AI venture capital in the first half of 2025. And there is a flip side to this. Overall employment in San Francisco is still down relative to pre-pandemic. Some people left, some legacy companies aren't hiring as much. All the hiring is happening at the AI startups and labs. But housing prices remain strong. It's certainly not a hollow shell by any means. And if you've ever visited San Francisco, you can tell that it's cleaner and safer than it has been in previous years. So AI overall was not a total rethinking of San Francisco; it did not stop being the hub of activity. And tech didn't move to Miami or New York or Austin or Los Angeles. San Francisco is still where it's at. The next question he asked: how does AI change wealth inequality? It's sort of too soon to tell. The data is not entirely clear.
The data hasn't moved that much, but there are some interesting studies. The International Monetary Fund released a working paper in 2025 that said that AI could reduce wage inequality. Reduce wage inequality? How is that possible? Well, what they say in this paper is that high-income tasks, the job of a lawyer, the job of an executive, the job of a management consultant, will be automated before the job of the machinist, the gardener, the street sweeper. So you will see wages at the high end fall while wages at the low end remain stable. And at the bottom end, you have the minimum wage as a floor. So higher-end wages go down, the bottom wages stay the same, and that reduces wage inequality. On the other side, AI could worsen wealth inequality, because it's concentrating capital returns among tech owners. The OECD found that wage growth was actually strongest in low-skilled occupations. Assemblers. I didn't even know that was a job. But if you're an assembler, you've seen your wages increase by 11.6%. You know who's had it the hardest? CEOs, the high-skilled workers. Chief executives saw their wages increase by just 2.7%. It's rough out there for a CEO, apparently. Brutal. This is mostly because of minimum wage increases, but in general, we're seeing wage inequality decrease but wealth inequality increase, because there is incredible stock market concentration right now. The Mag 7, the seven biggest tech companies in America, now comprise 32% of the S&P 500 market cap. And they drove 42% of total returns in 2025. So if you were invested in Tesla, Meta, Microsoft, Apple, Google, Amazon, Nvidia, you did very well. Your wealth increased. But if you weren't in those, and you had a broad index or you didn't have a lot of capital to begin with, you were left behind. So that's increasing.
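That concentration stat, 32% of the index's market cap driving 42% of its returns, is just weighted-contribution arithmetic. A sketch: the 32% weight is from the discussion above, but the per-group returns below are hypothetical, chosen only so the split comes out near the quoted 42%:

```python
# Contribution-to-return arithmetic: each group's contribution to the index's
# return is its weight times its return. The 32%/68% weights are from the
# discussion; the returns are hypothetical, for illustration only.
groups = {
    "Mag 7":     {"weight": 0.32, "ret": 0.154},
    "Other 493": {"weight": 0.68, "ret": 0.100},
}

index_return = sum(g["weight"] * g["ret"] for g in groups.values())
for name, g in groups.items():
    share = g["weight"] * g["ret"] / index_return
    print(f"{name}: {share:.0%} of the index's total return")
print(f"Index return: {index_return:.1%}")
```

The point of the sketch: a group only needs to outperform modestly to punch well above its weight in total returns, which is why a broad index holder's fate is so tied to a handful of names.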
Paul Graham shared two days ago: companies grow fast now. That's the reason economic inequality is increasing, not some sinister policy shift. Yes, he was highlighting Anthropic's recent growth. He said: when companies grow fast, it makes founders doubly rich. The company not only hits a given revenue number sooner, but it is more valuable when it hits it, because the value of the company will be a multiple of the growth rate. People who don't understand the math of valuations can't imagine that founders would get so rich naturally, whereas to founders and investors, it's the most obvious thing in the world. This is one of the reasons there's such a disconnect between the tech world and politicians. Yes. So whether you want to invest in the Mag 7 or not, head over to public.com: investing for those who take it seriously. Stocks, options, bonds, crypto, treasuries, and more, with great customer service. This is also true in private markets, as Paul Graham mentioned. AI startup mega-rounds: $110 billion for OpenAI, $30 billion for Anthropic. This concentrates enormous private wealth among a small number of founders and investors. So wealth inequality: AI will probably change it at least as much as previous technological booms did. If you were the founder of Instagram, you did very well, because there was a lot of wealth creation all of a sudden in one major tech boom, mobile. And AI is already broader and bigger than mobile. So, moving over to energy and data centers: if it does become an energy game, what's the trade? It did become an energy game, and the trade was buy everything, basically, because every energy name did very well. Anyone who got the trade correct did very well. Vistra returned 321%. It was the second-best-performing S&P 500 stock of 2024. Wow. You know who beat them in 2024? Palantir. Palantir mooned. They were already in the S&P 500. They did very well. But Vistra was sort of the secular energy winner.
Constellation Energy tripled in size after ChatGPT's launch. NRG Energy, which is a great name, NRG, gained 95% in 2025 alone. Oklo surged 700% in 12 months, and that's without, I think, even producing any energy. No, it's the idea that they someday will. Yeah, nuclear went on a tear. Oklo is more in the startup, pre-revenue camp, I would say, or pre-product camp. But Microsoft signed a $16 billion, 20-year PPA to restart Three Mile Island. Google signed with Kairos Power for 500 megawatts of small modular reactors, and Meta contracted 6.6 gigawatts across multiple nuclear providers. So energy was just such a great trade from the beginning of the AGI boom in 2024. Next question, across the entire data center supply chain: which components are hardest to scale up 10x? What is the chip-on-wafer-on-substrate of the data center? Yeah. So chip on wafer on substrate, that is TSMC's secret sauce. It is the most gating factor in scaling TSMC's 2-nanometer chip production. It's what allows them to package HBM and the GPU all in one package. And TSMC reported that they were sold out years in advance. It was a huge bottleneck in the data center world. The biggest bottleneck was probably power transformers. We heard a lot about this. Lead times for new power transformers reached over three years in some cases, with a 30% supply shortfall. High-voltage circuit breakers too. Yeah, I remember my mom was in escrow on a condo. Really? And closing was contingent on the development getting a transformer. She waited like six weeks or something like that. They were like, we don't know when we're going to get it. And she ended up just backing out. Well, just because there was so much demand. Yeah, I mean, we really got caught off guard by this. I mean, transformers, the cost surged 150% since 2020. It's 100-year-old technology. It's not new, it's not crazy innovation. Overnight success. Overnight success.
But it became the binding constraint on how fast data centers could connect to the grid. If you are running a transformer company, you're selling transformers, you've got to get on Shopify. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. If you're selling transformers, start running some ads. Imagine checking out for a transformer with Shop Pay. It'd be incredible. An incredible experience. Coal. Was coal mispriced? Sort of, but not nearly as much as copper. Thermal coal prices actually declined 22% in 2025, but they rebounded a little bit by early 2026. Coal stocks did fine. Peabody Energy gained 34% over 12 months. Consol Energy, that was up 37%. I love how you're like, coal stocks did fine, they're only up 34% and 37%. I mean, compared to everything else that's doubling and tripling, it's like, nah. You know, if you were like, I'm the coal guy, I believe coal is the thing: Peabody Energy is up 73% in the last six months. Only 37? Only up 161% in the past year. Yeah, only up 728% in the past five years. It did fine. It's just not as exciting as copper, and it's not as exciting as nuclear. On the operational side, US coal-fired generation surged 13% overall in America, 23% in Ohio, 58% in Oklahoma, which are sort of data center hotspots. There are more data centers in those areas, so bringing more coal onto the grid allowed for more load balancing. Anyway: nations, who wins and loses? Take a wild guess at who won. America, baby. America won. Next question. But it's really true. The stats are crazy. The United States is truly the dominant winner of the last two years in the AI era. $109 billion in private AI investment in 2024 alone. Way more now, if you look at the $200 billion capex guide from AWS, $110 billion going to OpenAI, $30 billion going to Anthropic, tons of money going into xAI.
The money is truly flowing in America. In 2024, China had just $9.3 billion invested in private AI companies. America has $470 billion cumulative since 2013, more than all other countries combined. The US produced 40 notable AI models in 2024 versus China's. Now, the game's not over. Lots of countries are making big investments now. And you can't forget France's 30 million euros they put in. 30 million. Yeah, it's serious. It's serious. Well, let me tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange. Just do it. So next up, India. What's going on in India? DG says $250 billion of India's GDP exports are essentially GPT-4 tokens. What happens now? This is starting to take shape, but it's still early. Situational Awareness is placing some short bets which map to the declines in hiring. I think they were short Infosys, which is an IT outsourcer. You need some software written, you basically send a prompt and basically get back code, which is what these models are fantastic at. And so that does seem like really, really steep competition in the near term, even in the long term. And so I think he's right to question that. Although we aren't seeing it in the overall India hiring data or the overall Philippines hiring data just yet, it is showing up a little bit in India's IT export sector. Major Indian IT firms collectively shrunk by roughly 58,000 employees in 2024 and 2025. And that's a dramatic reversal from the previous three years, 2021 to 2023, when they were staffing up: they added 360,000 employees. Yeah, the only thing here is that this is in some ways a proxy for big tech hiring, because a lot of the labor that's happening at these firms is effectively offloaded to some of these firms in India. So in some ways, if there was over-hiring in big tech, there could have been effectively over-hiring here.
And then it's retracing. Yeah. And you didn't put it in here, but it's also interesting to look at the hiring activity in the Philippines, which is not as dramatic to the downside as one might think. What do you mean? Oh, when we looked at the data a couple of weeks ago, it's not like jobs were falling off a cliff. It's relatively stable. Yeah. There are sort of these concentric circles with all of these stats. Like, if you look at Block, what's happening at Block? Well, they lost 50% of their workforce. That's a massive cut. And then you look at tech hiring overall and it's slowing down. You look at white-collar work and it's like, we're not adding a lot, it's maybe shrinking by 0.1%. And then you zoom out to the American economy and it's actually growing. And so you have these concentric circles of impacts. If there's one company in India that's doing IT outsourcing, that might be hurt, and the worst-performing one struggles, but the best-managed one might be able to make it through. And then you zoom out and the overall Indian economy is doing okay, and then the overall global economy is doing okay. So there are all these cascading effects. And I think the interesting takeaway was: when somebody like Dario says 50% of white-collar work might be automated, it's going to be the bottom 50%. It's not going to be at random. It's not going to be a coin flip whether you keep your job. It's going to be entry level, or: were you not at the top of your game? So the question is, if we are displacing software engineers, will software engineers become machinists? This is what Daniel Gross asks. He says: what is the Euclidean distance of reskilling in prior revolutions, and how does AGI compare? The typist became an executive assistant.
Can the software engineer become a machinist? We're not seeing this yet. Software engineers aren't grabbing blue-collar jobs just yet, but there is a divergence that's starting to show up in the data between software developers and programmers. Programming jobs are in decline, but software engineering jobs are actually growing. And so I think what's happening is that as coding models and agentic systems allow for building systems at higher levels of abstraction, demand for AI engineers has grown 143%. If you're going to hire someone to help you build software, you want them to be AI-native. You want them to go beyond full stack, which typically meant you could do front end in JavaScript and back end in Python. Now full stack, or AI engineer, means everything from prompt to design to product development to deployment to operations and DevOps to database migrations to front end and back end, because all of that is going to be handled by agentic systems. And so entry-level hiring at top tech firms has dropped 25%. Maybe AI, maybe over-hiring from the COVID overhang. We'll see. Internship postings fell 30%. So the decision is either reskill upwards to more of a "manager of infinite minds," I think was the quote, a manager-of-agents role, or, yes, it might be time to work on those machinist skills. I'm extremely bullish on Tyler. We watched him assemble the first iPhone ever assembled in America. And he's also an incredible software developer. Not really a programmer, because he doesn't really know the programming languages that he employs. Not true. That's not true. There we go. Yes, I think Tyler makes it, whether he becomes the manager of infinite minds or the machinist. I am pretty bullish on the light-blue-collar thesis, which is that you'll have robotics, so you'll be in the machine shop, but you're just using an iPad. You're not working on the stuff by hand. Yeah.
I think if we dropped you in a Nike factory, you'd be running that place in like two weeks, guaranteed. I'm not kidding. I'm so bullish on young people who can use all the tools and actually go and understand the leverage that they're getting from the modern systems natively, and not need to fall back on old habits. Yeah. There was a good Journal article yesterday about the Colgate head of AI. Yeah. And it was very interesting because I was like, oh, I feel like I could do a pretty good job at this. You're like. But you know, he's basically just telling everyone, yeah, guys, you actually have to use AI, despite what these random surveys might say. Guy has had one job in his life, working on a podcast, and thinks he could go dominate in the big toothpaste space. Yeah, I love the confidence. Athletes. You are a corporate athlete. You are a corporate athlete. We've got to get the Colgate guy on the show and see whose takes are better: Tyler Cosgrove or the Colgate AI guy. Anyway, the chat's saying they don't think you're a supply chain guy, Tyler. I think you could pick it up. Ask an LLM. Yeah. Make no mistakes. Let me see what happens if I hit this. Does that take over? Fin AI, the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. Next question: electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed eight and a half million Americans with a tremendous budget. Does that repeat? So clearly, it hasn't happened yet. We also haven't seen high unemployment yet. So we'll see. The Trump administration did launch America's AI Action Plan in July of 2025. We talked to one of the authors of that AI Action Plan, Dean Ball, yesterday on the show, and they issued some executive orders on AI education and skilled trades.
But the Department of Labor awarded $84 million in apprenticeship expansion grants. That's like France money. That's not enough to really make an impact. Overall, US workforce development spending is at 0.1% of GDP, which is second to last among OECD nations. I wonder who's actually last. We should look that up. Anyway, no program currently approaches the scale of the Works Progress Administration or the New Deal. I think this will change very, very quickly if the data comes through that we're actually shedding tons of jobs. America's very good at creating new jobs and running the printing press. And this is the whole reaction to the Citrini article. Anyway, he also asks a more philosophical question that doesn't really have a detailed answer, but I'd love your take on it too, Jordi: is lifelong learning worth investing in? Something worth doing beyond the economic value of mastering the task? It's a very abstract question. It's a very personal question. I tend to think yes. I tend to think that on Maslow's hierarchy of needs, you know, you need food, shelter, family, friends, clout, or whatever. At a certain point, learning a skill for the sake of learning that skill is edifying, even in an AGI world. It's like going to the gym. I was listening to Sam Altman talk about how kids that are born today will never be smarter than the smartest AI systems. And that's weird. That's different. But I was thinking: there was a time hundreds of years ago when you could actually be the best thing in the world, the best option, for, like, carrying lumber, other than maybe an ox or something. In general, before we created machinery, before we created the car, you could be the fastest. And then as soon as the ZR1X was released, you didn't stand a chance. And I had to explain this to my son at some point. He was very interested in cheetahs.
And I was telling him, cheetahs are the fastest animal. And he was like, but is it faster than a car? And I was like, not even close. Me in a car? I'm smoking that cheetah. I am slower than that cheetah on foot, but I'm way faster in my Cadillac. DG says: something worth doing beyond the economic value of mastering the task? Yes. And the way I would kind of flip this: I feel like most of the gains in my life have come from mastering myself, understanding myself, learning about myself, and then understanding the world, and then combining those two things. In a way, it hasn't been about mastering any one specific task. Yeah. And so I still think there are tremendous, tremendous gains to, again, understanding yourself, understanding the world, and how these two things fit together. So I agree with you on the level that there will be not just tasks to master, but ideas and combinations of skills that will be valuable economically. But even in an ASI world where there is no economic value to any task mastery, I still think it is worth investing in lifelong learning, because that will be edifying. That will be satisfying. Yeah. Just enjoyable. You know, it's becoming a painter, becoming someone who gets joy purely out of the process of painting the landscape. When cameras exist, when generative AI exists, it can still be edifying and valuable to sit there and look at a landscape and paint it and wind up with the result, even if the result is not economically valuable or anywhere near what you could get with technology, like a camera or generative AI. So, very abstract, but still something fun to think about. Before we move on to inflation, let me tell you about CrowdStrike. Your business is AI. Their business is securing it. CrowdStrike secures AI and stops breaches. So: if AI is truly deflationary, how would we know? What chart or metric would show it first? And you could look at AI API pricing as probably the best answer here.
So there are certain tasks that AI can do now, and those tasks have moved from being limited by regulation or human output or the workforce population to being limited by the compute allocated. And the cost of compute is falling. TVs were expensive; they got cheaper as we made more and more factories that were more and more automated, that could produce ever-cheaper TVs. And that's why, when you look at the inflation trends, you see health care, education, anything that's highly regulated, increasing in cost. There are only so many lawyers that are allowed to pass the bar; there are only so many doctors. And so health care has increased in cost, whereas TVs have been this knock-down, drag-out fight between the most ruthless companies all over the world, and the price of TVs has fallen. So if you start putting more of those jobs on the AGI curve, you see deflation. Where should we see it first? We should see it in AI API pricing. And GPT-4-equivalent inference costs have collapsed. In late 2022, right before GPT-4 (I think GPT-4 came out in early 2023), it was $20 per million tokens. In December of 2025, the same capability: $0.40 per million tokens. So this is a 50x decline in three years. I think that you can even get more intelligence for cheaper depending on how you're inferencing these; you can run some of these models locally, and the energy cost is even cheaper at that point. So this 50x decline in three years is falling faster than PC compute costs fell. It's also falling faster than dot-com-era bandwidth costs fell. And so this should be the leading indicator for downstream service deflation. And you see it in any example that you give, like, oh, I was just able to use ChatGPT to check my. I talked to somebody who said they love ChatGPT because their check engine light came on. They were able to take a picture of it and it identified the car, told them what they needed to do.
And they felt like when they went in for servicing, they weren't going to get raked over the coals by their mechanic, because they had, like, a mechanic in their corner. And so this should effectively act like: okay, you have your own mechanic on your side, and you can do it yourself, but you can also just go in and negotiate and say, hey, I know that the fair price for this is 50 bucks, not 200, so give me the good price. And so you should see that. You should see that. Yeah, you can assume that tokens are going to get cheaper and more effective indefinitely. Yes. But all these new product launches, things like OpenClaw, are actually expanding the number of tokens consumed. Yes. And so you are seeing deflation, but then you're also seeing Jevons paradox, an increase in demand, and that's why overall growth is increasing. Historically, it's one person, one AI chat, so you're only using so much. Then you have agentic products, and you can actually multiply the number of agents an individual can be running, and the output goes through the roof. It's time to head to Gastown, baby. So he asks this follow-up question: how should one think of deflation if demand for intellectual goods continues to grow as production costs go down? AI API costs are plummeting, but AI lab revenues are skyrocketing. So prices fall, volume surges, total spend increases. And the really interesting wrinkle is we're living through a SaaS apocalypse. We're living through the Citrini article. But SaaS vendors are imposing, like, an AI tax; I think the stats were somewhere between 20 and 37% on renewals. So you go in for your renewal and they say, hey, we've got AI features now, you've got to pay a little bit more. SaaS revenues are still increasing and growing even as the underlying cost to build software approaches zero. This mirrors computing's history.
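The prices-fall-but-spend-rises dynamic described here can be sketched with toy numbers. Only the 50x price decline comes from the figures discussed above; the volume figures and the 200x demand multiplier are made up purely for illustration:

```python
# Toy Jevons-paradox sketch: per-token price falls 50x (the decline discussed
# above), but token demand grows even faster, so total spend still rises.
old_price = 20.00               # $ per million tokens
new_price = old_price / 50      # 50x cheaper -> $0.40
old_volume = 1_000              # million tokens per month (hypothetical)
new_volume = old_volume * 200   # agents multiply usage (hypothetical 200x)

old_spend = old_price * old_volume   # $20,000
new_spend = new_price * new_volume   # $80,000

print(f"price fell {old_price / new_price:.0f}x, "
      f"spend grew {new_spend / old_spend:.0f}x")
```

With these made-up numbers, a 50x price cut paired with 200x more usage quadruples total spend, which is the shape of "API costs plummet while lab revenues skyrocket": deflation per unit, inflation in aggregate.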
Under Moore's law, each unit gets cheaper, and total spending grows as new use cases emerge. So, classic Jevons paradox. Let me tell you about Figma. No matter where your idea starts, Figma Make, Claude Code, Codex, or Sketch, the Figma canvas is where ideas connect and products take shape. Build in the right direction with Figma. So, geopolitics. This was interesting because he put this question in the geopolitics section, but it feels much more like a server and AI-buildout question. He asks, does interconnect actually matter? And I was a little confused, because interconnect typically refers to the links between chips and servers, like the Ethernet cables between servers, or NVLink, or what Google's TPU v7 Ironwood is doing with its 3D torus topology. And I think the answer to that is: absolutely. In large GPU clusters, 30 to 50% of training time is spent on inter-GPU communication, not actual computation. And so all the hype around TPU v7 Ironwood is due to that 3D torus topology, which connects 9,216 chips together; Nvidia's NVL72 connects 72 chips together. Interconnect is incredibly important. I'm wondering if the fact that interconnect is in the geopolitics section means he's actually asking about the interconnections between Samsung and SK Hynix in South Korea, TSMC in Taiwan, and America. That's obviously relevant, and we'll get into that in the next question. But both are incredibly important, so yes, I do think interconnect actually matters no matter how you read this question. The bigger question, and the interesting hot-take question he has in here, is: does node process matter if the country has more energy? So China has not been able to get their semiconductor manufacturing to the leading edge. They have SMIC, which is their TSMC equivalent, they have Huawei, which is their Nvidia equivalent, and they have SMEE, which is their ASML equivalent.
They have not been able to clone TSMC to the level of Taiwan Semiconductor, but they can produce 14-nanometer chips. And usually they take those chips and put them in random consumer electronics goods that they ship over here. So if you go to China and you want a co-packer or manufacturer to build you something, and you're like, yeah, I want it to connect to the Internet, or I want it to have a speaker inside, they're like, great, we will fab a 14-nanometer chip, or something even less frontier than that. But can they just marshal a ton of lagging-edge capacity and say, hey, you know what, we have cheap energy, we're burning coal, we have nuclear, we have the Three Gorges Dam, we have hydropower, we have so much cheap energy, let's just spend 10 times as much energy on the lagging edge? Can they achieve AGI that way? And it seems like no; it seems like leading-edge nodes are incredibly important. You can't just throw a ton of lagging-edge 14-nanometer chips at the problem, at least with current architectures. Now, this might change, but no frontier model to date has been trained on hardware older than 5 nanometer. All the leading chips, that's Blackwell, Google's TPU v7, AWS's Trainium 3, use TSMC's 4-nanometer or 3-nanometer process. And I think when we dig into Groq and Cerebras and some of the newer stuff that's coming, everyone is saying, we want to be on the most frontier node; we don't want to go backwards on that front. China's best effort, Huawei's Ascend 910C, is on SMIC's 7-nanometer-class DUV process. That's not extreme ultraviolet lithography; it's a deep ultraviolet lithography process. It's competitive for inference, but it requires dramatically more chips and energy for training at scale. So brute-forcing AI progress by disregarding energy consumption still seems to hit economic walls at some point.
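As a toy illustration of why "just spend 10x the energy on the lagging edge" runs into walls, here's a back-of-the-envelope sketch in Python. All of the chip numbers are hypothetical stand-ins, not real specs for any of the chips named above; the only point is that a per-chip efficiency gap compounds into chip count and facility power:

```python
# Hypothetical sketch: matching a leading-edge training cluster's throughput
# with lagging-edge chips. All figures below are made up for illustration.
target_exaflops = 10.0            # sustained training throughput we want

leading_flops_per_chip = 2.0e15   # FLOP/s per leading-edge chip (hypothetical)
leading_watts_per_chip = 1_000    # watts per chip (hypothetical)

lagging_flops_per_chip = 0.4e15   # 5x slower per chip (hypothetical)
lagging_watts_per_chip = 1_500    # and hungrier per chip (hypothetical)

def cluster(flops_per_chip, watts_per_chip):
    """Chips and megawatts needed to hit the target throughput."""
    chips = target_exaflops * 1e18 / flops_per_chip
    megawatts = chips * watts_per_chip / 1e6
    return chips, megawatts

lead_chips, lead_mw = cluster(leading_flops_per_chip, leading_watts_per_chip)
lag_chips, lag_mw = cluster(lagging_flops_per_chip, lagging_watts_per_chip)

print(f"leading edge: {lead_chips:,.0f} chips, {lead_mw:,.1f} MW")
print(f"lagging edge: {lag_chips:,.0f} chips, {lag_mw:,.1f} MW "
      f"({lag_mw / lead_mw:.1f}x the power)")
```

With these made-up numbers, the lagging-edge cluster needs 5x the chips and 7.5x the power for the same throughput, and that's before counting the interconnect penalty: 5x as many chips means far more inter-chip communication, which, per the 30-50% communication overhead mentioned earlier, eats further into effective training throughput.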
And then there's also the question of what's most valuable. Are the weights the nuclear weapons, or is it the actual deployment? I was listening to a conversation about the geopolitical implications of AI. Is it that you have, okay, yes, the genius, the 2000-IQ god model, but you can only really ask it one question; is that what's valuable? Or is it that you have a country of geniuses in a data center, and you're able to marshal effectively 10,000 cybersecurity experts against each person on the enemy team, and there are just 1,000 people trying to hack your phone and 1,000 people trying to hack Tyler constantly? Is the inference actually what matters, the compute, where the power comes from, as opposed to just the training? Training is obviously important, but inference might also be important. So there's a big question there. And last but certainly not least, DG's question: what is the likely Taiwan event, and what would be a leading indicator for it? John writes that a Taiwan blockade would be the biggest trigger, but Taiwan Strait tensions are already escalating. China conducted the Joint Sword-2024B exercises in October of 2024, surrounding Taiwan with coordinated military operations. In December 2025, Justice Mission 2025 deployed over 100 aircraft, 90 crossing the median line, 13 warships, and 27 rockets fired from Fujian; 10 rockets landed in Taiwan's contiguous zone, 12 to 24 nautical miles offshore. So they're just firing rockets and saying, well, they didn't hit land. You can probably see them from the beach, 12 miles offshore; I think that's just over the horizon. Yep. China also recently dropped "peaceful" from "reunification" when talking about Taiwan. And their 2026-2035 plan gives you an idea of where things might be going. TSMC is planning ahead, working on a fab complex in Arizona, which Tyler visited.
It should be able to handle 30% of total advanced production at scale. But it's on a knife's edge. If you want to know roughly how far that is: you've seen Catalina Island off the coast of LA? Yes. Catalina Island is, I think, 26 miles out, so very visible. Yeah. You can see Catalina Island from Long Beach. Now imagine a missile coming in and landing halfway between where you are and Catalina Island. Crazy. Yeah. The question is, how do Iran and the conflict there update China's thinking on this? Some of the intel accounts have been sharing, who knows if it's true, that China actually has operatives in Iran, kind of learning the same way the US has learned from Ukraine, specifically in regard to missile capability. So wild, wild times. And yeah, it feels like not a day has passed since this was published on which a Taiwan event felt less likely than it did the day before. Yeah. I think that's why Palmer has been trying to message around the blockade specifically as a line in the sand, because a blockade is a somewhat abstract concept. If a blockade happens, it could be very bad, it could be the start of a turning point, but it would not be a hot war, it would not be boots on the ground, it would not meet the level of a declaration of war that many people would wake up to. And so he's been trying to message that, no, a blockade is the start of something bigger, and we need to be aware and hold the line on not allowing a blockade to happen. Anyway, we will continue following that. But first, let me tell you about Railway. Railway is the all-in-one intelligent cloud provider. Use your favorite agents to deploy web apps, servers, databases, and more, while Railway automatically takes care of scaling, monitoring, and security. Let's head over to Mouse Twitter. Mouse Twitter is important. Mouse Twitter is on a roll.
Scott Wilson from Mouse Twitter says a block of Parmesan is an elite on-the-go snack. This 8-ounce block has 80 grams of protein and 64 grams of fat. I don't know, Parmesan is like the worst cheese to just chomp on. Disagree. You think? Disagree. It's so hard. I know. But elite snack. I agree. I haven't been snacking on Parmesan since I was probably. Oh, it's such an elite snack. I've never once caught you snacking like a mouse on a block of Parmesan cheese. I don't eat like a mouse. Stated preference versus revealed preference. I don't eat much like a mouse. Fake mouse. But as a kid, I'd have, like, a bunch of big carrots and some blocks of Parmesan, and I'd just be going mouse mode. Mouse mode. That's good. Let me tell you.