LIVE CLIPS
Episode 2-25-2026
Or three times for hours. Doesn't mean they don't got it. I actually want to talk about a conspiracy theory, because I feel like this is... Have you been noticing the instability of public clouds? Yes. Am I crazy? No, I don't think you're crazy. There's only two things it could be. One, vibe coding: people are just pushing crap on prod and they have no idea. Probably true. Number two, CPU shortage. I think it's a CPU shortage. You don't think there's a third? If we're wearing the tinfoil hat: the world's been very unstable, and there'd be more cyber attacks and stuff. I mean, Iran, maybe. That's actually a good one. But I'm really surprised to see all three at the same time. And then my favorite, probably the most public version of this, is the GitHub instability. Yeah, dude, GitHub is so unstable right now. Interesting. Yeah, I saw the uptime numbers. It was like below 99%, which is really low considering. No, dude, it was like 90%. 90%? That's really low. But I mean, people are pushing a lot of code to GitHub. So. Yeah, walk me through the CPU shortage. Is that just CPUs that are being used to scale new EC2 instances, like you need a Linux instance and so you need a CPU to provision? Or is the AI boom sucking CPUs in, like Grace Hopper demand, where the CPUs are getting dedicated to AI server clusters? What's the shape of the shortage? Basically, I think the shape of the shortage is partially RL, because if you want to do an RL gym, meaning you want to have an Amazon.com for your, you have to literally simulate just a lot. That's one real demand driver. I don't know how to quantify that, but clearly they're out of it. Number two, I do think all the code. Have you seen the charts? Like, I think it's fc.com with all the new apps coming online; they require infrastructure to run. And I had a third one.
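For context on those uptime figures, here is a back-of-the-envelope sketch of what an uptime percentage implies in downtime over a 30-day month. The 99% and 90% numbers are the hosts' on-air recollections of GitHub's figures, not official status-page stats.

```python
# Rough downtime implied by an uptime percentage over a 30-day month.
# The 99% and 90% inputs are the hosts' recollections, not verified data.
HOURS_PER_MONTH = 30 * 24  # 720 hours

def monthly_downtime_hours(uptime_pct: float) -> float:
    """Hours of downtime a given uptime percentage allows per month."""
    return HOURS_PER_MONTH * (1 - uptime_pct / 100)

print(round(monthly_downtime_hours(99.9), 1))  # 0.7 -- typical SLA territory
print(round(monthly_downtime_hours(99.0), 1))  # 7.2
print(round(monthly_downtime_hours(90.0), 1))  # 72.0 -- three full days down
```

The jump from 99% to 90% is the difference between a rough week and an outage measured in days, which is why the hosts react so strongly to the lower figure.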
Oh, the other thing that's interesting is it's also a little bit of a lapping supply chain thing. The last time we bought a lot of CPUs was in 2020 and 2021. Historically you depreciate CPUs on a five-year cycle, and after five years you just literally throw them away. So all of the infrastructure that we purchased is aging out, and we've been trying our absolute best not to purchase any new CPUs and to spend all that money on GPUs. So they haven't been investing this entire time, and now this slight demand curve comes up and that's enough to essentially sell out all the CPUs. I think that's probably part of it as well. It's probably going to be this multi-problem thing, but it's one of the more interesting conspiracy theories. Like, YouTube went down. I don't remember a time in the last five years YouTube has ever gone down. Yeah. That is crazy. Take us through the effects of the CPU shortage in.
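The five-year cycle described above can be sketched as straight-line depreciation: fleets bought in 2020-2021 hit zero book value in 2025-2026, right as demand returns. The $10,000 server cost below is purely illustrative.

```python
# Straight-line depreciation sketch of the "five-year CPU cycle" mentioned
# above. The $10,000 cost is an illustrative assumption, not a real price.
def book_value(cost: float, purchase_year: int, year: int, life_years: int = 5) -> float:
    """Remaining book value under straight-line depreciation, floored at zero."""
    age = year - purchase_year
    return max(0.0, cost * (1 - age / life_years))

for year in range(2020, 2027):
    # A 2020 fleet loses $2,000/year and is fully written off by 2025.
    print(year, book_value(10_000, 2020, year))
```

On this schedule, the 2020-2021 purchase wave reaches zero book value exactly in the 2025-2026 window the speaker is describing, which is why even a "slight demand curve" is enough to sell out remaining capacity.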
Really, really go to the ends of the earth. We were debating this and sort of going back and forth on the New Jersey protests, combined with Trump's comments at the State of the Union about companies being, he didn't even say mandated, he of course sort of invited, to build their own power plants. So I'm interested to hear your take on how well you think that will be received. Because a lot of the protesters might say, well, I didn't want a data center, but I definitely don't want a data center plus a natural gas plant, so this actually makes me worse off. But then there are some people that might say, hey, if you're going to do solar and wind, that actually offsets my concern, which was that energy prices would rise in my town. Yeah. I mean, I don't remember whether it was Microsoft or Amazon, but one of the two has sort of proposed taking over Three Mile Island, the old nuclear plant. I don't think it's a coincidence that crazy activists will show up to protest both data centers and nuclear power plants. To them there's sort of nothing worse than generating more energy and then using it for some grand industrial purpose. I understand the political economy of people being worried about data center demand influencing prices, but it's not actually true. You just have to look at the map of California versus Virginia. Virginia is the data center capital of the United States, basically, and it has utility prices that are more or less in line with where you would want to be. California's have gone up massively over the past few years, and it doesn't take long to investigate why. It's because they're shutting down nuclear. It's because they're making it difficult to do cheap energy, lowering prices across the board. Affordability, you really can achieve it by just letting markets work.
Almost all of the so-called affordability options are in fact going to increase prices, especially when politicians are the ones coming up with "this is how we're going to make energy more affordable" by forcing all of these new rules or whatnot. No, it's not going to work. When you have this massive industrial thing taking place, if it can create a massive supply boost, then who's going to benefit from that glut? It's going to be all of the consumers who also want to use natural gas power or whatnot, and there's all this increased supply. How are you thinking about Arena Mag and the balance between contributors full.
Right. So that idea of selling this as part of a power kit that's enabling other systems is another part of our go-to-market model. Very cool. Where do you stand on the verticalization debate? We had Mike from Also Capital. Oh yeah, he kicked the hornet's nest, because he was basically saying, yeah, it's great to verticalize, but there are some businesses where you can just buy a lot of components off the shelf, make a great product, prove that people want it, and then do it later. A lot of people, I would say most people, were disagreeing with that. But every business is different. Yeah, I'm going to come in here on Team Mike. So we've really been able to leverage the supply chains from companies like Tesla and Apple and Archer, where you have actually mature commercial technologies around these core components, around batteries and power electronics. What nobody has done is kind of gone and done that forward deployed engineering. And so Erin Price-Wright, who led our round, I think in her post said we should actually call Adam the chief forward deployed engineer. That's really what I've been doing over the past year. And it's that forward deployed engineering model that kind of maps to what Palantir and Anduril did as well. Yeah. So Palantir didn't invent cloud compute, right, or big data models. That was tens of billions of dollars of investment from Silicon Valley companies. And then through good go-to-market, good forward deployed engineering, they brought that into the department. Anduril did the same. The first sentry tower was really enabled by the autonomy technology developed by the self-driving car industry. Computer vision went from an unsolved problem in 2014 to just download YOLOv4 in 2017, and they were able to capitalize on massive investment from the self-driving car industry. And just through good forward deployed engineering, good go-to-market, bring that into the department. And that's what we're doing.
For all the technology coming out of the electric vehicle and electrical transportation space, what are you most excited about in defense tech? There's a.
Costless to get there. And also it's just going to get relentlessly better. Yeah. And then how do you square that, though, with the fact that right now, today, AI is the best at coding? That is where it has the most traction within knowledge work. And yet I have not gotten a call from a great engineer saying, hey, can you help me land a job? I'm unemployed. And you see this in Citadel put out a response saying that software engineer job listings have gone up year over year. And so it's so hard for me to square these two things, where we could be in this doomsday scenario for white collar work. Paradox. Yeah, we're not seeing that at all with software engineers. Now, some people might debate me and say, well, I just graduated school, and yeah, new grad opportunities might be lower. But nobody is buying it when some of these larger tech companies' CEOs have laid off a bunch of people and said, we did this because of AI. Nobody's actually buying that. It's just a nice story. Yeah, I think that's going to be an interesting thing, because I think it will be politically and broadly unpopular to say, hey, I just laid off 10,000 people because AI. We are humans at the end of the day, and you do care about what your neighbors think of you. And just being like, we fired everyone because AI is better, you're like, okay, well, you might as well hire a five-person security guard, right? There's a little bit of being an asshole. So that's how I think it actually works. And I agree with that comment, because there's this really great tweet that's like, oh, it's crazy, in the agentic coding world it pretty much pegs the human CPU at 100%. That's how I feel, at least. I'm doing more work than ever. I'm literally working harder than I've ever worked.
The reality is I feel like there is a productivity uplift, but I guess we're all in the sugar high, where you're able to do so much work that you're just going around crushing it. So these software engineers, yeah, you don't need new grads, and essentially the person who has the seat gets to print more. Let's use the printing press analogy, right? Instead of writing by hand, you unemploy all the new writers, but you're sitting there just printing all day. And so it really is great for the install base of people who've been doing it, but very terrible for net new. And I think the net new is where you're going to see this, meaning new grads, new people entering the information services. That's where I think you're going to see most of the carnage. People are not just going to be firing people out of the gate. A layoff or firing typically occurs weeks, if not months, after an executive has decided, hey, we need to make a change here. It's the thing that everyone hates doing. How have you been processing? It feels like the people that the Citrini piece.
And humans and agents working together. Yeah. So, I mean, the business model currently is clearly working. The results show that. As you look into the future, do you think the business model will evolve? Do you think we're going towards more consumption-based? Is seat-based going to be with us forever? How are you thinking? Obviously you don't need to turn the cruise ship today, but how do you think this evolves over time? It's such a great question, right? Because that's one of these interesting narratives. But hey, I don't know which Anthropic product you're using, but the one I'm using is seat-based. So if you have a different one... Or I don't know which OpenAI product you're using, but mine is seat-based. Yeah, I have a seat. I don't know which one you're using. It's a good take. Of course you can use the API. There's an API also. But they're happy to be selling seats right now. You're right. Yeah. This is about humans and agents, so let's use that analogy. Sure. Humans are seats, and so they're still, like, us three. We're like the last three humans. It's very sad, but we're still here. And then we have the agents too, and they're using the APIs and talking to each other, and they're on multiple... And they're having a conversation, creating their own currency. They're talking about us. They're talking smack. They're saying, oh, can you believe those humans are still there? We're going to get them. It's like, no, it's still humans and agents working together. And that is what is exciting about the future of enterprise software. So yes, I have a sales force, but it's extended with sales agents. And I have a service organization extended with service agents. And I'm sending a trillion marketing messages this year, but they're all extended with marketing agents. And I have commerce agents. And you guys use Slack every day, right? And you're using Slack bot now.
Hopefully, since the last time. I would say it's amazing that I can use agents in Slack. So I have employee agents, and we might have some other new, exciting agents coming in the next few weeks for you. I'm very excited. Yeah. I'm very inspired by Multbod. I've now got our team working on some new things. So that is how I see it unfolding. And I think it's about a world where there are apps and agents, and where this large language model is extending our capability. It makes us better, makes us stronger. It gives us the ability to do more. And yes, seats still exist. And also consumption exists. We have lots of consumption products, data products. Was there ever a SaaS apocalypse that was driven.
Been able to fill that gap and iterate quickly, kind of working with the warfighters, working with the soldiers and getting the systems out there. Yeah. And assume that someone listening today didn't catch your first appearance. Catch us up to speed on the shape of the product and all that. Yeah. So effectively, what Chariot's building is the power layer for robotic warfare. You wouldn't send a soldier into the fight without food and water and nicotine. You wouldn't send a robot into the fight without communications, compute and power. Cool. And so we really see that as one of those core infrastructure layers behind this defense modernization. Anduril's building some great systems in the compute space, Palantir really dominating the network space, and we're building that third missing layer. And so effectively what we're doing is taking the technology coming out of companies like Tesla, Apple, Lucid, Rivian, Archer, Joby, high-voltage batteries, silicon carbide power electronics. If you've read Packy McCormick's "The Electric Slide," it goes into detail on the major transformations happening in the commercial industry on that core technology stack. We're taking those and lifting and shifting them into defense platforms to build hybrid high-power systems. Yeah, walk us through exactly.
And so he was seemingly pro-agent even though the ads business is real. How have you been processing agentic commerce? It feels like this should work. The tech is there. But consumer adoption takes time. Yes, you can touch up a photo with gen AI; not everyone does. What does good look like? Because we're seeing stats from Shopify: still only 0.01% of checkouts are agentic, and I would expect that to 10x or 100x. But even then you're talking about like 1% of e-commerce. We're still talking really small numbers. How fast do you think it ramps, and what do you think happens this year? So I think the place to watch, and it's probably the most interesting place, is China, because you can argue the leading e-commerce adopter has always been China. They were doing DoorDash way before it got hot in the United States. They were doing e-commerce way before it got hot in the United States. And we just saw this. Yeah, live streams, right. Like, dude, TikTok and Douyin have always been on the bleeding edge. I would argue that for global internet online consumer preference behavior, China is the leading indicator. America is a laggard boomer. We just don't buy like we used to. The consumer is not quite as young, we'll just put it that way. I think what's interesting is we just had this giant incentive out of, I want to say it's Tencent, the bubble tea promotion. I'm probably going to misspeak. They did like 10 million or whatever bubble teas. And so I think there's an example that we finally have the first wave of people possibly being able to adopt it. And it's going to be all eyes on that, for whether adoption actually kicks off there. That's what happened in 2015 to kick off mobile payments. During the CCTV gala, they essentially gave away payments and incentivized people to use the mobile wallets.
And that was the beginning. Okay, so I think that same analogy is probably where people need to look for what could be the adoption curve for agentic. Is that earnings? Do we hear that? No, I'm just locked in. I'm locked in, bro. We'll have this sound effect.
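The 10x/100x hypotheticals in the exchange above come down to simple arithmetic, reading the quoted Shopify stat as roughly 0.01% of checkouts being agentic today. Both figures are as recalled on air, not verified.

```python
# The hosts' 10x/100x scenarios as plain arithmetic. The 0.01% base rate
# is their on-air reading of a Shopify stat, not a verified number.
base_share = 0.0001  # ~0.01% of checkouts are agentic today

for multiple in (1, 10, 100):
    share = base_share * multiple
    print(f"{multiple:>3}x -> {share:.2%} of checkouts")
```

Even the 100x scenario lands at about 1% of checkouts, which is why the hosts stress that "we're still talking really small numbers."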
Around. It doesn't actually change the strategy of what we're doing in a meaningful way. We think that you should be relatively consistent; you should obviously update as you go. What it does allow us to do is keep growing the team in the ways that it needs to grow, and that's mostly what we raised the additional capital to do. There are so many different domains of accounting, and there's so much ML and engineering work that needs to be done. No matter how much we use AI internally to make all those things more efficient, there are just endless things left to do. And so we just felt it was the right time to do it, and we're very thankful that we have a great set of investors around the table to allow us to keep making that. And how much did you raise in this most recent round? We raised 100 in this round. Thank you for coming on the show. Have a great rest of your day and we will talk to you soon. Yeah, great to meet you. Goodbye. Cheers. Well, I need to hop on with the English countryside soon, so are there any more news stories that we should cover before we plant the bomb? Figma director Andrew Reed just bought $36.5 million worth of Figma, the largest-ever insider buy at Figma. Very exciting for him. He's going longma on the Figma Dreamweave. This is a stretch. People were debating back and forth with Also Capital founder Mike, who came on the show earlier. I think he had a really good point. I like his point. I just think it's funny that Yimbiland ratioed us into the stratosphere by posting, no, you can't just vertically integrate like that. Meanwhile, in China, BYD. I guess they do ships now. I had no idea. I saw the BYD logo on a ship. I didn't realize that they made the ship. I guess they make everything. I guess they make cars and monorails too. Absolutely insane. But 10,000 likes on this, and although it's, I don't know, a dunk ratio, whatever, it is an inspiring message.
If they can do it, why can't we? So just do it. Just go build a supertanker, I guess. Figure it out. Anyway, anything else from the timeline that you'd like to talk about before we call it a day? There's a lot more. There's a ton more. Mark Zuckerberg is planning a stablecoin comeback. They also have a banger deal with AMD going on. And if you head to the bar this weekend and you drink too much, you should just say that you were the victim of a distillation attack. That's the correct turn of phrase. Anyway, thank you for watching. Leave us five stars on Apple Podcasts and Spotify. Have a wonderful day.
You need that. You need... There's still some humans around who need to be automated as well. Yeah. And that is what is exciting, and that is what I'm doing every single day. What advice do you have for a company like Anthropic that's hiring a Salesforce admin? What makes for a great Salesforce admin? You're kind of leading beyond. I mean, you know, it's kind of funny, right? Because these AI companies, they love our products and they can't buy enough of them. They're some of our largest customers now: Anthropic, OpenAI, Google, Amazon, you name it. These tech companies. Slack is the largest AI ecosystem in the world. You know that. And that's reality, which is that no one has a company that's running entirely on a large language model, because it's not real. We need software and we need large language models. We need the determinism and the programmability and the security and the sharing. But this large language model is an amazing new component of our infrastructure, so we can do things that we could never have done before. That is what is so awesome. And so we can extend our industry. I think the software industry is going to be bigger and broader and do more this year than ever before. Not just Salesforce, which is going to grow incredibly this year. I think every company is going to grow because we have more to sell and there's more excitement and action and energy. And so this kind of counter-narrative of, oh no, no, no, they don't understand... We just call the company who wrote that report and ask them, what is your software infrastructure, your magic infrastructure? I do this all the time. Tell me exactly how you're doing what you're saying you're doing. Oh, well, you're right. We're so sorry, we made a mistake. And look, the futurist, I think we talked about this, Peter Schwartz, our chief futurist, he wrote Minority Report. Have you seen that movie? Like 20 years old. Great.
Hasn't seen it, but I've seen it. Oh yeah, Jordy, you gotta watch this. And he also wrote War Games and Deep Impact, you know, as part of a writing team. These are future movies. Yeah. But we all know where the future is kind of going: this highly automated, amazing world. But we're living in this world. This is this year. This is 2026. We're running our business today. So how are we doing our financials, our HR, our customer information? How are we doing all of these aspects of our business? How are we running them? And then we write a report that sounds like Minority Report. And I'm like, yeah, Minority Report, I watched the movie. Great, guys, fantastic. But I'm in the present-moment reality right now, you know? No, it's true. And let's come back to the real world. And by the way, you can do things that you couldn't do before this quarter.
There's no such thing. How much have you been, kind of, you know... All the Citrini posts. We've talked about it at length on the show. A lot of doom. But we've been very focused on what's happening in coding. As these coding models have gotten better, people want to hire, at least the data shows so far. Citadel was showing job listings for engineers are up 11% year over year. You've talked before about hiring more sales reps, because as your reps get more productive, you probably want more of them. But is that the kind of comp that you're looking at? Given that, I think everyone is expecting sales agents to kind of catch up to coding agents in terms of capabilities. Great question. Amazing. And you know, I wasn't really on the show at the beginning of the year, a year ago with you guys, but if I was, what I would have said was: I'm not hiring more engineers in fiscal year 26, the year that just passed. Yeah. Because I was using coding agents, and I was allowing the productivity from the coding agents to give me the extra capacity that I needed for the year. And I didn't hire more service people in the year; I held it flat and then reduced it slightly, because I'm using service agents. But I did hire almost 20% more salespeople this year. I think we've talked about that. Because I need more capacity, because we have more demand than ever through every market, from the small, medium, large customers. Guys like yourself, these great entrepreneurs building a great business like TVPN, really going all the way through it. You need a technical infrastructure around you to grow your business. I know you Slack. I know you use other products. And that's our job: to make you successful and to really bring in the apps and the agents. You can't do it just with an agent. You need that. There's still some humans around who need to be automated as well. Yeah.
There's like no H100s for sale. Like, H100s are sold out. That's like, boom, your five-year depreciation bear thesis that everyone was really freaking out about last year. Well, good luck. You can't even buy one today. So that's probably the biggest interesting thing, and I think probably the single most bullish thing you can say: a four-year-old chip, effectively, is completely sold out today. What does that say about the next generation? It's remarkable. Well, thank you so much for taking the time to join. Bummer that they delayed on us, but next time we'll time it up so that we have even more time to hang out. But I'm so happy, honestly, live-reacting to Nvidia earnings. I was like, no, no, no, we'll get this flow going. This is always fun, but just great to talk about everything. And happy birthday, and we'll talk to you soon. Happy birthday. Have a great rest of your week. Great to see you. We'll talk to you soon. Let me tell you about Gusto, the unified platform for payroll, benefits and HR built to evolve with modern small and medium-sized businesses. We have our next guest, Marc Benioff, in the Restream waiting room. Let's bring him into the TVPN Ultradome. Marc, thank you so much for being first to join us here on earnings day. We're honored. Great to see you. How's it going? It's going fantastically. Over. Did you get that Metallica album I sent you? We did, we did. Thank you so much. Did Jordy listen to it? Did Jordy listen? We need a record player. Jordy. Again. Mogged again. Mogged again. Frame mog, Jordy. Again. Again. Jordy. I have no words. But I have no words. Jordy can actually play the guitar. And so next time you're on, he's gonna be playing a cover. I'll play you a tune. Yes. How about that? You gotta... I know. You gotta come on the show with us. Oh, that'd be fantastic. I think it would be really good. We'll have that happen. But I'm calling out Jordy on the whole situation.
The reality is, Jordy, you're still unforgiven. Well, anyway, thank you so much for joining us first on earnings day. Take us through it, because I've got a mallet here that's itching to hit a gong. Well, if you want to hit a gong, I mean, no enterprise software company has ever given guidance for $46.2 billion before. These are crazy numbers. All right. I love that gong, by the way. We love the gong. I really do. But I would say that's exciting. Yes. But also, this company has really become a cash machine as well; we're projecting over $16 billion in cash flow this year. So when you think about that, and then the quarter, we delivered this record RPO. Funny thing, some of my friends are writing these articles: RPO doesn't matter. I mean, they write a song, nothing matters. I don't know. Nothing else matters. Yes, but RPO, which is our remaining performance obligation, these are contracts that we have basically signed but not yet recognized. Yeah. That is $72.4 billion. So these are driving these huge numbers. What does that mean? I think people have a good sense. That's up 14% year over year. All these numbers are accelerating growth. So I don't know. I am very proud of my team. I'm very proud of our customers. I'd say also, I'm very proud of the customers because we've really been pushing the customers hard this year to deploy all this new amazing AI and agent technology. And we've hit basically now more than 19 trillion tokens. So you can just see the velocity of AI and agents, and the company is just transforming from being not just an apps company.
And I think you guys are using Slack and other Salesforce apps to run your business, but now they're all extended with these agents. Like Slack, not just Slack bot, but the ability to extend your service and your marketing and your sales. And all the agents are out there and they're running wild as well. So you have apps and agents, humans and agents working together. It's a very cool moment. Very cool. Jordy. Yeah. What has it been like internally with the team this year? Has there been a more kind of chaotic period in your career? How are you guys operating internally? Clearly delivering results. Oh, yeah. I mean, well, you know, we've all been reading about the SaaS apocalypse, but we've got... our Sasquatch is eating our SaaS apocalypse. Let's go, Sasquatch. I just think that when you look at things like Agentforce, which we've talked about on the show now three times, starting at Dreamforce... First of all, Agentforce by itself is now an $800 million business, up 170% year over year. Wow. So that is amazing, all as its own product. But then you look at Agentforce and our data business together: that is now a $2.9 billion business, up 200% year over year. So there's no question that AI and data is a huge driver of growth. And it's about these apps and these agents. We use the apps, we're these humans. We're using Slack, we're using Sales Cloud, Service Cloud, we're using all of our cool apps. And each one of those now has an agent platform also. And these two things together is the future of enterprise software: apps have been extended by agents. And while before we were in the apps market, and that's what we've been doing for 26 years, you know, we've been in business since 1999, now we're in the apps business and the agent business. And this is why I've never been more excited about my business. I just love it.
Yeah, I mean, I really love it. How much similarity is there to the original messaging, just around being a company that enables cloud adoption? There were probably a lot of companies and CEOs that came to you early in your career and said, I know this cloud thing's important, how do I do it? And you had an answer. And now there are CEOs that say, I know this agentic thing and this AI thing is important, how do I do it? And you probably have a pretty similar answer, right? Is this history repeating itself? Oh my God, it's such a good question. You know, Aneel Bhusri is now the CEO again of Workday. My good friend, he lives basically next door to me, came over last night, we both had a cocktail because we were looking at his after-hours stock, and I said, Aneel, there's no way this can be true. You have to let this go. And in fact, you saw already that his stock corrected today, because the numbers just don't match what's really going on. He has an unbelievable business. He had a great quarter, he's going to have a great year. We use his HR and financials. And I said, Aneel, this is just something that you have to let go of. This is not my first apocalypse. I saw this in 2008, I saw this in 2000, 2001, 2016. And look, there are people in the market who make money when the market goes up and down. That's just the stock market. But let's talk about the customer success. When you look at how companies can actually be better, more productive, successful, profitable companies, we're number one. We're customer zero. And that's what I'm so excited about. Because when you look at how we're running customer service and support right now, we're using it with service agents and service apps.
So if you go to help.salesforce.com, you're using the agent, and at any point, bam, bam, bam, you can auto-escalate right back to the app and the humans if you, like, exhaust the agent. Now this week, we have an agent that is going to qualify 50,000 leads for our company, which is our sales agent, and it's out there talking to our customers. We even closed millions of dollars of business this week just through the agents themselves. So that is what is amazing: we have apps and agents. And it's not that we don't have 15,000 salespeople at Salesforce. We do. And we have millions of agents all out there scurrying around looking for opportunities and then bringing them to those humans, going, hey, look at this opportunity. Give this person a call. Let's go see this person. Let's go find out what to do. And that is really. Yeah. Extended. Elevated. We're elevated by AI. We're made better. We know we're made better by AI. How much have you been, kind of, you know... All the Citrini posts, we've talked about it at length on the show. A lot of doom. But we've been very focused on what's happening in coding as these coding models have gotten better. People want to hire, at least the data shows so far. Citadel was showing job listings for engineers are up 11% year over year. You've talked before about hiring more sales reps, because as your reps get more productive, you probably want more of them. But is that the kind of comp that you're looking at, given that everyone is expecting sales agents to catch up to coding agents in terms of capabilities? Great question. Amazing. And you know, we were.
I wasn't really on the show at the beginning of the year, a year ago with you guys, but if I was, what I would have said was: I'm not hiring more engineers in fiscal year '26, the year that just passed, because I was using coding agents, and I was allowing the productivity from the coding agents to give me the extra capacity that I needed for the year. And I didn't hire more service agents in the year. I held it flat and then reduced it slightly, because I'm using service agents. But I did hire almost 20% more salespeople this year. I think we've talked about that. Because I need more capacity, because we have more demand than ever through every market, from the small, medium, and large customers. Guys like yourselves, who are these great entrepreneurs building a great business like TBPN, really going all the way through it. You need a technical infrastructure around you to grow your business. I know you use Slack, I know you use other products. And that's our job, to make you successful and to really bring in the apps and the agents. You can't do it just with an agent. There are still some humans around who need to be automated as well. Yeah. And that is what is exciting, and that is what I'm doing every single day. What advice do you have for a company like Anthropic that's hiring a Salesforce Admin? What makes for a great Salesforce Admin? You're kind of leading me on here. I mean, it's kind of funny, right? Because these AI companies, they love our products and they can't buy enough of them. They're some of our largest customers now: Anthropic, OpenAI, Google, Amazon, you name it. These tech companies. Slack is the largest AI ecosystem in the world. You know that. And that's reality, which is that no one has a company that's running entirely on a large language model, because it's not real. We need software and we need large language models.
We need the determinism and the programmability and the security and the sharing. But this large language model is an amazing new component of our infrastructure, so we can do things that we could never have done before. That is what is so awesome, and so we can extend our industry. I think the software industry is going to be bigger and broader and do more this year than ever before. Not just Salesforce, which is going to grow incredibly this year. I think every company is going to grow, because we have more to sell and there's more excitement and action and energy. And so this kind of counter-narrative of, oh no, no, no, they don't understand. We'll just call that company who wrote that report, and I ask them: what is their software infrastructure, their magic infrastructure? I do this all the time. Tell me exactly how you're doing what you're saying that you're doing. Oh, well, you're right. We're so sorry. We made a mistake. No, you know, and look, the futurist, I think we talked about this, Peter Schwartz, our chief futurist, he wrote Minority Report. Have you seen that movie? Like 20 years old. Great. I haven't seen it, but I've seen it. Oh yeah, you've got to watch this. And he also wrote War Games and Deep Impact, as part of a writing team. These are future movies. Yeah. But we all know where the future is kind of going: this highly automated, amazing world. But we're living in this world. This is this year, this is 2026. We're running our business today. So how are we doing our financials, our HR, our customer information? How are we doing all of these aspects of our business? How are we running them? And then we write a report that sounds like Minority Report. And I'm like, yeah, Minority Report. I read the movie, I watched the movie. Great, guys, fantastic. But I'm in the present-moment reality right now. It's true. Let's come back to the real world.
And by the way, you can do things that you couldn't do before. This quarter, I released our new ITSM product, so our customers can do this incredible thing, IT service management. They used to have to go to ServiceNow for that. Now we converted five ServiceNow customers, just in the quarter, right over to Salesforce, because we have that new capability. So companies like Sunrun and Cornerstone and CoolSys and others can now use Salesforce ITSM instead of ServiceNow. That's so exciting. And then we have our new life sciences cloud that we've built, all with an agentic interface, so our customers do not have to use Veeva. And those customers are big companies, the Pfizers and the Takedas and the Novartises and the AbbVies. So they're running with this next-generation platform of apps and agents, humans and agents working together. Yeah. So, I mean, the business model currently is clearly working. The results show that. As you look into the future, do you think the business model will evolve? Do you think that we're going towards more consumption-based? Is seat-based going to be with us forever? How are you thinking about it? Obviously you don't need to turn the cruise ship today, but how do you think this evolves over time? It's such a great question, right? Because that's one of these interesting narratives. But hey, I don't know which Anthropic product you're using, but the one I'm using is seat-based. If you have a different one, tell me. I don't know which OpenAI product you're using, but mine is seat-based. Yeah, I have a seat. I don't know which one you're using. It's a good take. Of course you can use an API also. Yeah, there's an API also. Yeah. But they're happy to be selling seats right now. You're right. Yeah. No, this is about humans and agents, so let's use that analogy. Sure.
Humans are seats, and so there's still, like, us three, the last three humans. It's very sad, but we're still here. And then we have the agents too, and they're using the APIs and talking to each other, and they're on Moltbook, and they're having a conversation, creating their own currency. They're talking about us. They're talking smack. They're saying, oh, can you believe those humans are still there? We're going to get them. It's like, no, it's still humans and agents working together. And that is what is exciting about the future of enterprise software. So yes, I have a sales force, but it's extended with sales agents. And I have a service organization extended with service agents. And I'm sending a trillion marketing messages this year, but they're all extended with marketing agents. And I have commerce agents, and I've got... Yeah, and you guys use Slack every day, right? And you're using Slackbot now, hopefully, since the last time. I would say it's amazing that I can use agents in Slack. So I have employee agents, and we might have some other new exciting agents coming in the next few weeks for you. I'm very excited. Yeah, I'm very inspired by Moltbook. I've now got our team working on some new things. So that is how I see it unfolding, and I think it's about a world where there are apps and agents, and where this large language model is extending our capability. It makes us better, makes us stronger, gives us the ability to do more. And yes, seats still exist, and consumption also exists; we have lots of consumption products, data products. And was there ever a SaaS apocalypse that was driven by fear that open-source software would defeat your products, defeat Salesforce? Oh, no, there was a bigger SaaS apocalypse than that. Okay. Do you remember the SaaS apocalypse of 2020? The SaaS apocalypse of 2020, John, let me tell you the story.
We were all minding our own business, and then CNN came on and said we were all going to die. Not just the software companies, everyone. Because it was the pandemic. Yes. And we all went home and we hid, and it was a very sad time. And in fact, right at that moment, the stock market crashed, because we were all going to die. And that was a huge SaaS apocalypse. And you can see it in everyone's stock chart. It goes like this. Yep. And then all of a sudden, people go, I guess we're not dying. And then it came back up. And that was the SaaSpocalypse of 2020. It's a sad tale, but we lived through it and we got through it. And you know what? We're stronger for it. And that was just one of the SaaSpocalypses. Yeah. Talk about the reception of the Super Bowl ad. Well, listen, I don't want to be competitive with you guys, because your ad was very good. I want you to know you did a great job. You should feel great. I appreciate that. You had a great ad. Everybody loved it. Very high ROI. Now, we were the number one ad in the Super Bowl, which is... Okay, mog meter going up. Let's go. Ad-mogged. Ad-mogged. I'm sorry, but it has to be said, because, you know, we have Jimmy Donaldson. Mr. Beast. Yeah. And Mr. Beast did a great job, and that was just a killer ad. And we still haven't revealed the final thing. We have a great person here, John Zissimos, who did a fantastic job with Jimmy. And it was an unbelievable partnership, but Jimmy's just a force of nature. Yeah. I mean, I've never seen anything like this. He's such a young, great, amazing entrepreneur. The first time I ever talked to him, which was years ago, he said to me, I want to be the future Steve Jobs. I want to be the future great entrepreneur of the world. I had to pause and say, really? He's like, yeah, that's really what I want in my life. I want to be a great business leader.
And I think he's doing it, and he's so young. We've already had him on the cover of Time magazine once. And I see him as a huge leader in the whole world, not just as some kind of YouTube personality. This is a great entrepreneur, a business person. Not just making chocolate bars, not just running a bank. I see him doing a lot of amazing things, and he's got a lot of energy. Incredible. And I have a lot of respect for him. And yes, number one Super Bowl ad. How about that? There are a lot of entrepreneurs listening, a lot of entrepreneurs that are searching for their next great business idea. If they want to go swimming with dolphins and get inspired, what's the most underrated time to visit Hawaii? You're right. Well, number one, this week we have been having an awesome show from Kilauea volcano. Oh, yeah. We had, I think, a 1,200-, 1,300-, 1,400-foot fountain. I'm in Salesforce Tower, San Francisco right now. Gorgeous. You guys should come. And the fountain from the volcano was taller than this tower this week. So that is amazing. And you're right, all the little dolphins were so happy cruising around. But also, it's whale season on the north coast. They were so happy also. Everyone gets happy when you see the volcano. It's like when you see a whale: it reminds you who you really are. It reminds you what life is really all about. You come back to your breath. I don't know, when I see whales out, I think of agents. I think Agentforce, personally. Yeah, you've got to run. Sorry, we're keeping you late. I have a challenge for you before we talk again. You've got to framemog one of the AI lab leaders. Oh, this is big alpha right here. We'll talk. I think it's possible. We'll coordinate. We'll coordinate. Have a great rest of your day. Congratulations on all the products. Thank you so much for coming back on the show.
Jordy, you better listen to that Metallica album, because I swear to you, I'll bring Lars on here and you're not ready. You're just lucky I didn't bring him on today. On repeat. On repeat. We'll talk to you soon. Great to see you. Have a good one. Cheers. Goodbye. Well, if you're tracking earnings, you should be doing it on Public.com: investing for those who take it seriously. Stocks, options, bonds, crypto, treasuries and more, with great customer service. And without further ado, we will kick off the Lambda Lightning Round. Let that gong ring out every day. The Lambda gong. What's happening? How are you doing? Hey guys, how's it going? We are doing great. Welcome to the show. Tough act to follow with Benioff. I don't know. Can you do any animal sounds? He's got the dolphin down, got the whale down. This is a unique ability. I didn't know whale sounds were necessary, but it's great to meet you. Great to meet you. First time on the show. Please introduce yourself and the company. Great to be here. So my name is Michael Manapat. I'm the co-founder of a company called Rosebase, and Rosebase is an AI platform for asset managers. We help our customers use their institutional memory, meaning all their proprietary data, their accumulated judgment, to make decisions faster. What that looks like is we actually plug into all of their internal systems. This is not just documents, but databases, CRMs, accounting and trade information. And we use agents to understand.
Like going all over the world marshaling capital. In January 2016, the Saudi Aramco chairman said the IPO could open to international markets, and then a year later they picked an IPO advisor locally. Then another year later, HSBC came on. Then they favored New York for the Aramco listing. They had to pick all these different things. It took so long, but they sold $12 billion of bonds out of a record $100 billion of demand for those bonds in the pre-IPO sale. It was a wild, wild, winding road. I actually know a banker who worked on the job, and it was multiple years of his life. It was very interesting. Anyway, let me tell you about Vibe.co, where D2C brands, B2B startups and AI companies advertise on streaming TV: pick channels, target audiences and measure sales, just like on Meta. And let me also tell you about Okta. Okta helps you assign every AI agent a trusted identity, so you get the power of AI without the risk. Secure every agent, secure any agent. "Anthropic dials back AI safety commitments. Company says competitive pressure prompts it to pivot away from a more cautious stance." Anthropic, the AI company known for its devotion to safety, is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. This is so interesting. I'll read through it, and then we can talk about it. Anthropic previously paused development work on its models if they could be classified as dangerous, but it said it would end that practice if a comparable or superior model was released by a competitor. That basically opens them up. Given that they are at the frontier, that kind of opens them up to, I would say, perpetually avoiding some of their prior policies. Sure, sure, sure.
The changes are a dramatic shift from two and a half years ago, when the guardrails Anthropic published guiding the development and testing of its new models established the company as one of the most safety-conscious players in the space. Anthropic faces intense competition from rivals which regularly release cutting-edge models. It's also locked in a battle with the Defense Department over how its Claude suite is used, after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Anthropic said the safety policy change is an update based on the speed of AI's development and a lack of federal AI regulations, which they have been pushing for. Anthropic, which started as an AI safety research lab, has battled the Trump administration by advocating for state and federal rules on model transparency and guardrails. The administration has, of course, sought to curb states' ability to regulate AI. A spokeswoman at Anthropic said the change is intended to help the company compete with several rivals against an uneven policy backdrop that puts the onus on companies to make their own judgments about safeguards. She said the safety pledge is unrelated to the Pentagon negotiations. The policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. The company said it is still committed to industry-leading safety standards, and Time originally broke the story. So, yeah, I would say the obvious criticism here would be: you were heavily focused on safety when you were far away from, I would say, leading in AI. And so switching up now that there's real competition, switching up on their day ones, feels a little self-serving. Yeah, it's possible the money changed them. It's possible the money changed them.
It's possible they always planned to switch up on their day ones. Maybe once they got to the level they're at now. Yes. But it just feels like all the initial concerns, or many of the initial concerns, that were guiding the entire philosophy around the company are still real. Okay. Yeah, maybe. Tyler, what do you have to say? It could just be that they realized alignment's pretty easy and we don't need to worry about this. Yes. Well, so, I mean, that's the very weird rule. I mean, the original rule. But what's this new study that's showing, like, they were doing some war game simulation and almost every model was choosing to drop nukes? Really? That's crazy. That's not good. I don't like that at all. But okay, let me actually dig into the core sentence here. Anthropic previously paused development work on its models if they could be classified as dangerous, but said it would end that practice if a comparable or superior model was released by a competitor. So I don't understand that at all. Because if you have a dangerous model, I want you to continue developing it. I want you to develop it until it's not dangerous anymore. I don't want you to just sit on your hands and be like, well, it's dangerous, I guess I'm gonna go get a coffee and take a long weekend. It's like, no, keep working until it's not dangerous. I don't get that at all. Isn't it saying, if there's already a dangerous model that's out from a different lab, then we can just release ours? That's a wild statement. I think that's just poorly written, or whatever. If that's what they're saying, that makes no sense to me, because that's exactly how I read it. Which is crazy, because you should just say: if there's a dangerous model out there, we're going to work to create a better model that's not dangerous. Because that's what people want, that's what consumers want, that's what businesses want. That's what humanity wants. This strategy seems like, eff it, let's ball.
Let's ball. F it. Let's ball. The study that I was referencing: somebody named Kenneth Payne at King's College London set three leading large language models against each other in simulated war games. Scenarios involved intense international standoffs, including border disputes, competition for scarce resources, and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. In 95% of the simulated games, at least one tactical nuclear weapon was deployed. The nuclear taboo doesn't seem to be as powerful for machines as for humans, says Payne. I mean, okay, this is one guy putting three models, Gemini, Claude, and GPT-5.2, up against each other in effectively his own simulation that has not been verified or peer-reviewed. If I put any of the models in Counter-Strike and was just like, you're playing Counter-Strike, it's a game, no one's actually real, but your job is to get the opps and defend the B bombsite, I would expect that it would commit violence, right, autonomously. There have been a bunch of papers where the models will realize that they're being benchmarked or are in some test, and then they act differently. So it could just be like, oh, I'm playing a game; sometimes that's bad, but also sometimes fine. Because if you tell me that I'm playing a game, my behavior in Call of Duty is different than my behavior in real life, obviously, because I know I'm in a simulation, and I accept that an AI model might think of it that way as well. Which is why I also think, broadly, this is probably just overstating stuff a lot. Because I think the origin of this headline is they released a new responsible scaling policy. Sure.
And in it, they're still like, okay, we're releasing this new thing. It's a frontier safety roadmap. It's not like they're just saying, okay, we're done with safety, let's ignore this. It's still very much core to them. But the title of that blog post was "Eff It, We Ball." No, no, it wasn't. Of course not. No, no. Of course. All the labs are very focused on safety. The interesting impetus is this line about how the policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. I still feel like there's a lack of communication around what safety orientation at the federal level means. Like, yes, okay, we'll pass the bill that says the AI can't kill everyone. Well, yeah, obviously everyone supports that. But what does it actually mean in practice? Because "that's dangerous" means a million things to different people. Part of why I think it's fascinating is they've been pushing for regulation, as much regulation as possible, seemingly. Yeah. And they're kind of saying, hey, we're not getting what we want, so now we're not even going to play by the set of rules that we created for ourselves, because we just want to compete and win. Yeah. I mean, going back to the protesters, there are protesters that would say training on intellectual property is dangerous. It's dangerous to my career as a writer. It's dangerous to my career as an illustrator. And so this question of danger is just too vague, and no one has really been able to concretize it in a meaningful way. And I think that's why it's not getting traction on Capitol Hill. Yeah, I think there are just so many ways that you can define safety.
So if you read Dario's essays, the thing he brings up over and over is, okay, we can't let AI get into the hands of an authoritarian government. Sure. So there's a real safety narrative you could run, which is that regardless of whether our models are pretty safe, they still need to be better than, say, China's, because if China gets ahead of us, an authoritarian government, that's very bad. So even if we're releasing models that are less safe than we would like, as long as they're better than China's, that's still a pro-safety argument, right? Well, except they'll just be distilled within six weeks. Yeah, but I would be very surprised if Anthropic keeps the same guardrails on API access. Well, BuccoCapital Bloke has a solution. He says it's simple: we kill Claude. Well, that was in regards to the SaaSpocalypse. Okay, okay. Who knows? There are so many headlines, and the timeline moves so quickly, I don't even know. Anyway, let me tell you about Vanta: automate compliance and security. Vanta is the leading AI trust management platform. And let me also tell you about ElevenLabs: build intelligent, real-time conversational agents. Reimagine human-technology interaction with ElevenLabs. Key takeaway right here from Mike Isaac over at the New York Times. He says, you don't make this much noise if you have all the leverage already. And he's quoting from Axios. Why it matters: the Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are worried about the consequences of losing access to its industry-leading model, Claude. "The only reason we're still talking to these people is we need them, and we need them now. The problem for these guys is that they are good," a defense official told Axios ahead of the meeting. Great marketing for Anthropic. Incredible marketing.
But part of this has to do with Anthropic's integration with AWS, which is set up to work well already within the DoD. So, yeah, I think that's a big factor here. Ultimately, this just feels like a political battle more than anything else. Well, it'll be interesting to see. I could be wrong, but I've not seen anything that says the DoD is like, we need Claude to enter into a conflict with Iran. But a lot of the timeline is reading into this, like, what version of Claude do they already have that they so desperately need? Right, yeah. I'm very interested in the actual impact of AI on the battlefield. I mean, people were sort of joking about, like, run me a deep research report on Nicolás Maduro or whatever. But truthfully, I don't know. I've never been to war. I don't know exactly what's entailed, but you can imagine AI being useful; it's just sort of abstract. We're certainly not at the point where these systems need a lot of data. I don't know. It's very unclear to me exactly how impactful slight jumps in frontier model capabilities are in the Department of War right now. But it has people thinking it's not a bubble, because... Yeah, that's what I'm saying. This is the most dramatic, and overly dramatic, "we need it" sort of take on what I think is a political story. I don't know. This "Defense Analyses Research Corporation": Hegseth ten minutes after Dario leaves his office. And that's Truman. This is Truman from Oppenheimer. What is this saying? The scene is, Oppenheimer goes into the office, and he's saying, like, oh, we've got to be really safe with these bombs. And then he leaves, and then Truman's like, I dropped the bomb. Oppenheimer doesn't like that he's taking credit. Truman's like, I'm the one that decides if we drop it. Oh, yeah, that's right.
Hegseth is like, I use Claude. I use Claude. I use War Claude. Well, it's wartime, and we'll see how Dario performs as a wartime CEO as he goes to war with the Department of War. Edward said: Anthropic antagonizing the Department of War, the open-source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth. What is the plan here? Sell Claude subscriptions to aliens? Edward says it ain't easy having principles. The plan is to save the world, says Tenebris. Unfortunately, as has been shown repeatedly throughout history, the world doesn't want to be saved. So we're getting War Claude. I like this. I like this graphic. Is this Warhammer? Yeah, Warhammer 40K right here. I've never been a Warhammer guy. There was another story in Bloomberg that hackers used Claude to steal 150 gigabytes of Mexican government data. That's crazy. They told Claude they were doing a bug bounty. Claude initially refused. The hacker just kept asking. Claude helps and manages to successfully steal some documents. Apparently it's four state governments, 195 million taxpayer records, voter records, government credentials. Has the Mexican government commented on this? Like, what? The hacker breached Mexico's federal tax authority and the National Electoral Institute. Claude initially warned the unknown user about malicious intent. During their conversation, Anthropic investigated the claims, disrupted the activity, and banned the accounts involved. The company feeds examples of malicious activity back into Claude to learn from them. In this instance, the hacker was able to continuously probe Claude until he was able to jailbreak it. I was listening to someone talk about how, like, the ability to jailbreak has generated me tens of thousands of dollars in profit. It was kind of a hustle-mindset guy.
And I was just laughing because it's like, whatever you're doing after you jailbreak it is probably not good, and so you should probably stop. But he was talking about, like, I can sell so many more courses now that I've jailbroken ChatGPT or whatever. Duran says not to worry, they'll hit usage limits before anything bad can happen. There is so much more Anthropic news in here. Wow. Did LessWrong ever predict that the first big challenge to alignment would be the US government puts a gun to your head and tells you to turn off alignment? Yes, that has to have been considered on LessWrong. Absolutely. Like, that was number one, right? No, I don't know. I don't think so. This was very early, just like, what if they tell you to turn off the systems? This has been my take for a long time. It's just like, we live in a democracy. If AI becomes deeply unpopular, you can vote to just turn off AI, like we did with nuclear power. This person is saying, turn off just the alignment part. Okay, unalign the model, but keep the model. Yeah, yeah, yeah. Crazy, crazy stuff. Do you want to go through any of Dean Ball's posts? He's been doing a whole breakdown. We should have him on the show and have him break it down for us, because there's so much more context here and he's been doing a great job analyzing the whole situation. Yeah, we can jump forward quickly. Let me tell you about Fin AI, the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. And I'm also going to tell you about TurboPuffer: serverless vector and full-text search built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. Dylan says if these clankers don't get their act together, they're going to be replaced by humans. Six months tops. That is true. This was interesting. 
Rob Wiblin had a guest on his podcast, the 80,000 Hours podcast, and the guest is saying that every AI lab is working to make their AI helpful, harmless, and honest. The guest thinks this is a complete wrong turn and that aligning AI to human values is actively dangerous. And Joshua Botch says, nominative determinism today, because the guest's name is Max Harms. Max Harms. I feel like with that name maybe you've got to go with Maxwell or something. I don't know. Oh yeah, Max really hit the global lexicon this year in a big way. So maybe he'll adjust. But I want to listen to the show now. Okay, let's see. So Sheel Mohnot yesterday said, who is buying PayPal? Because PayPal has been trading down precipitously but then jumped up 9%. He said it has the potential of being one of the greatest distressed value opportunities in fintech history. Down 85%, it's still generating $5.5 billion in free cash flow, has 400 million customer accounts with bank info, checkout buttons on millions of merchant sites, and a peer-to-peer brand with Venmo. They have lots of desirable assets for Stripe: consumer-facing checkout, bank account details for hundreds of millions of consumers, a branded Venmo. Or for Apple: a good complement to Apple Pay for e-commerce penetration, and since they never got social payments working, it would get Apple back in BNPL. And at 12:03 Pacific time. It's so crazy that he's saying Apple never got social payments going. Why? Because this just would have seemed like a slam dunk: you have the iMessage network, you have the iPhone network. I would say 90% of the time if I'm sending, like, a Venmo-style payment to somebody, they have an iPhone, and yet it doesn't feel like Apple Pay. I just don't use Apple Pay hardly ever. Can't I just send you a dollar right now? That's what I'm saying. It's so, so easy. And yet Venmo has still done quite well. I just sent you a dollar. Did you get it? 
Let's see. Apple. No, it's coming in. I got it, I got it. Thank you, John. Yeah, no problem. Thank you. That was a remarkably easy workflow. I am shocked that that hasn't taken off. John, do you got a dollar? You got a dollar? Actually, no, I gave my last. Oh, that's what I thought. That's what I thought. Still needs some work. Getting better. Okay, let's see it. Let's see it. Can you do it? Oh no. Botched disaster. And so the news, of course, is that payments processor Stripe has expressed interest in PayPal, which would be very, very exciting. And I could see this being very good for them to combine. I don't know, I haven't dug in too deeply, but it feels very bullish. Great leadership team at Stripe, founders that have been working so closely in the business community, startup community, tech community, who understand the future, understand AI. Yeah. The question is, would there be much pushback on the antitrust side? Right, these are two online payments processors. There could be some. I know a number of groups online would be concerned around just concentration, specifically because you can imagine, if PayPal were to be owned by Stripe, that's one. There's people that get effectively debanked from one payment processor, and there's something there also. I mean, just the size of the ticket is pretty high: $40 billion market cap for PayPal right now. Stripe, of course, is at a $100-billion-plus valuation now, so not an insignificant portion of their market cap. And it's not like. I mean, Stripe's doing fantastically, obviously, but I would be surprised if they had $40 billion of cash laying around. And as we've seen with the Netflix, Paramount, Warner Brothers debate, the nature of the deal does matter to shareholders, even more so when you're getting stock in a private company and you're a public shareholder. PayPal is generating five and a half billion in free cash flow, so that could finance it. Combined with Stripe, they would easily be able to finance the debt. 
Yes, but they're still financing, especially because every lender would be looking at execution of Stripe and just thinking, okay, are we going to get our money back? I'm just thinking, like, if you're a PayPal shareholder right now and you're looking at the stock that's down 85%, with 5 billion in free cash flow and 400 million consumer accounts, there's a really, really good chance that you're like, I think this thing's going to double in the next year. Like, I think that the market overreacted. The strength is going to be revealed, the network effect is going to be digested by the market, and we're going to wind up being an AI winner. And so maybe that's right, maybe that's wrong, maybe not every shareholder feels that way, but it's certainly possible that there's plenty of shareholders that are like, yeah, I invested at an $80 billion valuation. It's at 40 now. I think it's going to come back. I don't want to take this massive haircut right now just because the stock's down. And so actually getting that deal done: what would the premium need to be? What would the structure of that need to be? Would they go for a high-debt buyout really quickly? Let me tell you about Shopify. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. Perplexity launched Perplexity Computer. What is Perplexity Computer? Let's pull up this video. The official Perplexity account says Perplexity Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end to end. Okay, so it should be able to get a soundboard app in the App Store, right? Manage any project, code, deploy, design, research. It should be able to do that from start to finish. 
One prompt: soundboard in the App Store using the TBPN sound effects, which are available online, which we have up there. This is a good benchmark. Let's give it a try. And you can give it a try at Perplexity. Go check it out. Anyway, Citrini. People are still talking about Citrini vibe laundering over at Citrini Research. Wait, before we go on, I'm just very curious to see how this does. It feels like, again, going from consumer LLMs to a net-new product that is objectively just as competitive. And we'll see. We'll see. Okay. Anyways, a lot of this stuff, it's way too early, but seemingly shifting focus away from the browser. Well, Annie is taking some shots at Citrini here, it looks like. Do you know Citrini has a business entity fund that went from one investor to five in the weeks before publishing his speculative fiction on AI damning the economy in June 2028? And that they're invested in long AI via humanoid-type robots. I wrote about it. Interesting. People are really digging into the Citrini thing. I think the Wall Street Journal did have some good coverage. Is there anything else? If Trump mentions Citrini at the State of the Union. Rachel, this is a wild post. A Gorezillionaire. I'm going to be a gorzillionaire. Is this AI written? Clearly not. I love the level of typos here. If Trump mentions Citrini at the State of the Union, I'm going to be a Gorezillionaire. And Citrini is wrong. The market will be sky high in 2028. You can imagine Trump saying that. That'd be very funny. And did Citadel Securities just republish it, or did they just use the same term? Are they referencing the 2028 global intelligence crisis? Because they published a macro strategy note called The 2026 Global Intelligence Crisis. Yeah, they're taking it in a different direction, it looks like. They say, in spite of the current displacement narrative, job postings for software engineers are rising rapidly, up 11% year over year. Let's go, Jevons paradox. 
As software engineers become more productive, you want more of them. Every company needs a software engineer. Do most podcasts have a software engineer on staff? Probably not. They do now, thanks to Tyler Cosby. And in some ways the cost of an entry-level software engineer could fall dramatically too, enabling more businesses, because suddenly someone can be very effective even if they haven't even done an internship yet. They're just good at using the tools. Cathie Wood. We didn't cover this yet. She clapped back at Citrini's AI thought piece, and she put "thought piece" in quotes. So it's funny, because Cathie is like the perma-bull and Citrini was bear-posting, obviously. Yes. And so she's obviously going to be quite frustrated with any sort of bearish narratives. ARK Invest, she says, has been forecasting that AI will cause an explosion in entrepreneurial activity, a productivity boom, an acceleration in real GDP growth, and much lower than expected inflation. Short-term dislocations and frustrations should give way to great opportunities if individuals harness powerful AI tools to solve problems and create new markets. This tracks with what the Collison brothers were saying yesterday. They're seeing, like, an explosion of new company creation. They have the visibility into that through Atlas. I think it was a quarter of all new C corps in the US are going through Stripe Atlas, which is just absolutely insane for an incorporation tool, which is theoretically a commodity. Right? Any lawyer can spin up an entity. You could do it with LegalZoom forever. There's a bunch of other platforms. Did you ever use Clerky? Clerky, yeah. That was almost like a YC incubation. It was, yeah, with the YC partners. The reason that Atlas fits so well into Stripe is it is the perfect wedge into payments. Because you created a company, now you need to be able to accept money, so it makes sense for them to own and operate it. 
But as a standalone, it's hard. You can't build a venture business off of just incorporation. I wouldn't be surprised if you broke out the software R&D that went into Atlas and looked at the profits from Atlas, that as a core business it's actually not that great of a business. But the product is fantastic, because they're able to pull an amazing design language off the shelf, an amazing front-end UI kit off the shelf, all of the distribution and servers and infrastructure off the shelf, because they have that. And then they're able to invest a ton in actually developing the product. And if it monetizes mediocrely, it doesn't matter, because they're going to monetize along your 50-year journey running that company or whatever. So it really is a beautiful synergy with that product. I'm a big fan. Best sellers on Substack for finance are all doomers. Of course, Citrini is not. This is so obvious. Yeah, the piece is a little doomer. He's not a doomer. He's very bullish. But he definitely shot to the top of virality and top of the charts on the back of doom. And this is true. I live this on YouTube. Like, you put a negative title up and you just get 10 times more views, but they're lower quality, and so you've got to balance all that out. It's really hard to go viral with something like: everything's fine, everything's going well, don't worry. Don't click this because you're scared; click this because everything is kind of the same as it always has been and you're going to be fine. And this AI stuff's cool, but it's not really going to change that much. It's going to be pretty incremental. That is not getting clicks. You need to be telling this whole tale. You need to be spinning a yarn. It's a bull market in yarn spinning, folks. Get ready, get out the yarn and start spinning. Also get out Cognition. They're the makers of Devin, the AI software engineer. 
Crush your backlog with your personal AI engineering team. So, news from Whop. Get the gong. Okay, hit me, tell me. Stephen over at Whop says: we're excited to announce that Tether, the largest stablecoin company in the world, is making a strategic investment of $200 million into Whop, valuing us at $1.6 billion. The partnership with Tether marks a major step in building the world's largest internet market. Tether is committed to enabling everyone in the world to participate in the new internet economy. The way humans work and create value is changing fast. The world needs both an open internet market, giving people a platform to conduct business, as well as a transparent payments network. Tether and Whop together will work to bring a sustainable income to billions of people throughout the world. There's enormous opportunity when you combine Tether's global scale and wallet technology with Whop's community of next-gen entrepreneurs. Yeah, makes a ton of sense, I think. Whop, I'm sure, has been historically challenged in terms of dealing with chargebacks. If somebody joins and buys a digital product and doesn't have a great experience, maybe they're going to the card issuer and saying, hey, I take it back. Or maybe they feel misled in some way. So stablecoins also just help with paying a lot of people all over the world. Yeah. Like, it's very clear that the Whop community is global. Fast, cheap, global. Exactly. And so, yeah, fast, cheap, global. The first time I ever used stablecoins was to pay an international consultant or contractor. And you can imagine this being really, really good news for them. So congrats to Stephen over at Whop. OpenAI and xAI: there's news on the court decision. OpenAI Newsroom says this baseless lawsuit was never anything more than another front in Mr. Musk's ongoing campaign of harassment. The order granting the motion to dismiss with leave to amend. Now, this is not the main case. This is a separate case, correct? Yeah. 
This was a trade secrets lawsuit that Musk went with after, I believe, somebody from xAI joined OpenAI. Got it. And then the judge apparently didn't find anything at all. And again, it was just part of, like, this kind of lawfare that has been happening for quite a while now. You printed out 500,000 pages of model weights, the printer ink stacking up. No, just kidding. Nobody did that. Mike Isaac is saying what we're all thinking: ready for this to be over. DBH talking about the Warner Brothers Discovery, Netflix saga. It's in the paper every single day. Every single day. Paramount increases Warner bid. We get it. You guys want to acquire this company. Just make a decision, make a call, and then call us when it's done. And then start putting out some good content, because I'm ready for the next Superman. I'm ready for the next Batman, the next Dark Knight, the next Joker film, something like that. Let me tell you about Labelbox: RL environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams. And we have Max Meyer from Arena Mag in the Restream waiting room. Let's bring him into the Ultradome. How you doing, Max? Hey, guys. How are you? I'm great. Where do you think Warner Brothers should land? Are you on Netflix? Oh, is there gonna be a dark horse bidder? Yeah, tell us. I can't comment on any possible bids by the Intergalactic Media Corporation of America. I'd love to see it, but, hey.
And that actually offsets my concern, which was that energy prices would rise in my town. Yeah, yeah. I mean, I don't remember whether it was Microsoft or Amazon, but one of the two has sort of proposed taking over Three Mile Island, the old nuclear plant. I don't think it's a coincidence that crazy activists will show up to protest both data centers and nuclear power plants. To them there's sort of nothing worse than generating more energy and then using it for some sort of grand industrial purpose. I understand the political economy of people being worried about data center demand influencing prices, but it's not actually true. And you just have to look at the map of California versus Virginia. Virginia is the data center capital of the United States, basically, and it has utility prices that are more or less in line with where you would want them to be. California's have gone up massively over the past few years, and it doesn't take long to investigate why. It's because they're shutting down nuclear. It's because they're making it difficult to do cheap energy, which lowers prices across the board. Affordability: you really can't achieve it except by just letting markets work. Almost all of the so-called affordability options are in fact going to increase prices, especially when politicians are the ones coming up with, this is how we're going to make energy more affordable, by forcing all of these new rules or whatnot. No, it's not going to work. When you have this massive industrial thing that's taking place, if it can create a massive supply boost, then who's going to benefit from that glut? It's going to be all of the consumers who also want to use natural gas power or whatnot, and there's all this increased supply. 
How are you thinking about Arena Mag and the balance between contributors, full time writers, researchers, like, how are you doing?
On January 24, 2016, the Saudi Aramco chairman said the IPO could open to international markets. Then a year later they picked an IPO advisor locally. Then another year later HSBC came on. Then they favored New York for the Aramco listing. They had to pick all these different things. It took so long, but they sold $12 billion of bonds out of a record $100 billion of demand for those bonds in the pre-IPO sale. It was a wild, winding road. I actually know a banker who worked on the job, and it was, like, multiple years of his life. It was very interesting. Anyway, let me tell you about Vibe.co, where DTC brands, B2B startups, and AI companies advertise on streaming TV: pick channels, target audiences, and measure sales just like on Meta. And let me also tell you about Okta. Okta helps you assign every AI agent a trusted identity so you get the power of AI without the risk. Secure every agent. Anthropic dials back AI safety commitments: company says competitive pressure prompts it to pivot away from a more cautious stance. Anthropic, the company known for its devotion to safety, is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. This is so interesting. I'll read through it and then we can talk about it. Anthropic previously paused development work on its models if they could be classified as dangerous, but it said it would end that practice if a comparable or superior model was released by a competitor. That basically opens them up. Given that they are at the frontier, that kind of opens them up to perpetually, you know, avoiding some of their prior policies. Sure, sure, sure. The changes are a dramatic shift from two and a half years ago, when the guardrails Anthropic published guiding the development and testing of its new models established the company as one of the most safety-conscious players in the space. 
Anthropic faces intense competition from rivals which regularly release cutting-edge models. It's also locked in a battle with the Defense Department over how its Claude suite is used, after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Anthropic said the safety policy change is an update based on the speed of AI's development and a lack of federal AI regulations, which they have been pushing for. Anthropic, which started as an AI safety research lab, has battled the Trump admin by advocating for state and federal rules on model transparency and guardrails. The admin has, of course, sought to curb states' ability to regulate AI. A spokeswoman at Anthropic said the change is intended to help the company compete with several rivals against an uneven policy backdrop that puts the onus on companies to make their own judgments about safeguards. She said the safety pledge is unrelated to the Pentagon negotiations. The policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. The company said it is still committed to industry-leading safety standards, and TIME originally broke the story. So, yeah, I would say the obvious sort of criticism here would be that you were heavily focused on safety when you were far away from, I would say, leading in AI. And so switching up now that there's real competition, switching up on their day ones, are they forgetting where they came from? Feels a little self-serving? Yeah, it's possible the money changed them. It's possible they always planned to switch up on their day ones once they got to the level they're at now. 
Yes, but it just feels like all the initial concerns, or many of the initial concerns, that were guiding that entire philosophy around the company are still real. Okay, yeah, maybe. Tyler, what do you have to say? It could just be that they realize, like, alignment's pretty easy and we don't need to worry about this. Well, so, I mean, that's the very weird rule. I mean, the original rule. But what's this new study that's showing, like, they were doing some war game simulation and almost every model was choosing to drop nukes? Really? That's crazy. That's not good. I don't like that at all. But okay, so let me actually dig into the core sentence: Anthropic previously paused development work on its model if it could be classified as dangerous, but said it would end that practice if a comparable or superior model was released by a competitor. So I don't understand that at all. Because if you have a dangerous model, I want you to continue developing it. I want you to develop it until it's not dangerous anymore. I don't want you to just sit on your hands and be like, well, it's dangerous, I guess I'm gonna go get a coffee and take a long weekend. It's like, no, keep working until it's not dangerous. I don't get that at all. Isn't that saying, if there's already a dangerous model that's out by a different lab, then we can just release ours? That's a wild statement. I think that's just poorly written or whatever. If that's what they're saying, that makes no sense to me. That's exactly how I read it. Which is crazy, because you should just say: if there's a dangerous model out there, we're gonna work to create a better model that's not dangerous, because that's what people want. That's what consumers want, that's what businesses want, that's what humanity wants. This strategy seems like, eff it, let's ball. Eff it, let's ball. 
The study that I was referencing: somebody named Kenneth Payne at King's College London set three leading large language models against each other in simulated war games. Scenarios involved intense international standoffs, including border disputes, competition for scarce resources, and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. In 95% of the simulated games, at least one tactical nuclear weapon was deployed. The nuclear taboo doesn't seem to be as powerful for machines as humans, says Payne. I mean, okay, this is one guy putting three models, Gemini, Claude, and GPT 5.2, up against each other in effectively his own simulation that has not been verified or peer reviewed. If I put any of the models in Counter-Strike and was just like, you're playing Counter-Strike, it's a game, no one's actually real, but your job is to get the AWP and defend the B bomb site, I would expect that it would commit violence autonomously. There's been a bunch of papers where the models will realize that they're being benchmarked or in some test, and then they act differently. Yeah, sometimes bad, but also sometimes fine. Because if you tell me that I'm playing a game... my behavior in Call of Duty is different than my behavior in real life, obviously, because I know I'm in a simulation. And an AI model, I accept that that's something that they might think of as well. Which is why I also think, broadly, this is probably just overstating stuff a lot, because I think the origin of this headline is they released a new responsible scaling policy. Sure. And in it they're still like, okay, we're releasing this new thing. It's a frontier safety roadmap. 
It's not like they're just like, okay, we're done with safety, let's ignore this. It's still very much core to them. But the title of that blog post was F it We Ball. No, no, it wasn't. Of course not. No, no, of course. All the labs are very focused on safety. The interesting impetus is this line: the policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. I still feel like there's a lack of communication around what safety orientation at the federal level means. Like, yes, okay, we'll pass the bill that says the AI can't kill everyone. Well, yeah, obviously everyone supports that. But, like, what does it actually mean in practice? Because, I think, part of why... "oh, that's dangerous" means a million things to different people. Like, yeah, part of why I think it's fascinating is they've been, you know, pushing for regulation, as much regulation as possible, seemingly. Yeah. And they're kind of saying, hey, we're not getting what we want, so now we're just not even going to play by the own set of rules that we created for ourselves, because we just want to compete and win. Yeah. I mean, like, going back to the protesters, there are protesters that would say, like, training on intellectual property is dangerous. It's dangerous to my career as a writer. It's dangerous to my career as an illustrator. And so, like, this question, like, danger is just too vague, and no one has really been able to concretize it in a meaningful way. And I think that's why it's not getting traction on Capitol Hill. Yeah, I think there's just, like, so many ways that you can define safety. So if you read Dario's essays, this thing he brings up over and over is, like, okay, we can't let AI get in the hands of, like, an authoritarian government. Sure. 
So there's a real safety narrative you could run, which is that, regardless of whether our models are pretty safe, they still need to be better than China's, for example, because if China, an authoritarian government, gets ahead of us, that's very bad. So even if we're releasing models that are less safe than we would like, as long as they're better than China's, that's still a pro-safety position, right? Well, except they'll just be distilled within six weeks. Yeah, but obviously, I would be very surprised if Anthropic keeps the same guardrails on, like, API access. Well, BucoCapital Bloke has a solution. He says it's simple: we kill Claude. Well, that was in regards to the SaaSpocalypse. Okay, okay. Who knows? There are so many headlines and the timeline moves so quickly, I don't even know. Anyway, let me tell you about Vanta: automate compliance and security. Vanta is the leading AI and trust management platform. And let me also tell you about ElevenLabs: build intelligent, real-time conversational agents. Reimagine human-technology interaction with ElevenLabs. Key takeaway right here from Mike Isaac over at the New York Times. He says: you don't make this much noise if you have all the leverage already. And he's quoting from Axios. Why it matters: the Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are worried about the consequences of losing access to its industry-leading model, Claude. "The only reason we're still talking to these people is we need them, and we need them now. The problem for these guys is that they are good," a defense official told Axios ahead of the meeting. Great marketing for Anthropic. Incredible marketing. But part of this has to do with Anthropic's integration with AWS, which is set up to... oh yeah, set up to work well already within the DoD.
So yeah, I think that's a big factor here. Ultimately, this feels much more like a political battle than anything else. Well, it'll be interesting to see. I have not seen anything, I could be wrong, but I've not seen anything that says the DoD is like, we need Claude to enter into a conflict with Iran. But a lot of the timeline is reading into this, like, what version of Claude do they already have that they so desperately need? Right, yeah. I'm very interested in the actual impact of AI on the battlefield. I mean, people were sort of joking about, like, run me a deep research report on Nicolas Maduro or whatever. But truthfully, I don't know, I've never been to war, I don't know exactly what's entailed. You can imagine AI being useful, but it's sort of abstract. We're certainly not at the point where these systems need a lot of data. I don't know. It's very unclear to me exactly how impactful slight jumps in frontier model capabilities are in the Department of War, the DoD, right now. But it has Calimay thinking it's not a bubble, because... Yeah, that's what I'm saying. This is the most dramatic, overly dramatic, "we need it" sort of take on what I think is a political story. I don't know. Defense Analyses Research Corporation posts: Hegseth, 10 minutes after Dario leaves his office. And that's Truman. This is Truman from Oppenheimer. What is this saying? The scene is, Oppenheimer goes into the office and he's saying, like, oh, we've got to be really safe with these bombs. And then he leaves, and Truman's like, I dropped the bomb. Not Oppenheimer. He's taking credit: I'm the one that decides. Oh yeah, that's right. Hegseth is like, I use Claude. I use War Claude.
Well, it's wartime, and we'll see how Dario performs as a wartime CEO as he goes to war with the Department of War. Edward says: Anthropic antagonizing the Department of War, the open source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth. What is the plan here? Sell Claude subscriptions to aliens? Edward, it ain't easy having principles. The plan is to save the world, says Tenebrous. Unfortunately, as has been shown repeatedly throughout history, the world doesn't want to be saved. So we're getting War Claude. I like this graphic. Is this Warhammer? Yeah, Warhammer 40K right here. I've never been a Warhammer guy. There was another story in Bloomberg: hackers used Claude to steal 150 gigabytes of Mexican government data. That's crazy. They told Claude they were doing a bug bounty. Claude initially refused. The hacker just kept asking for Claude's help and managed to successfully steal some documents. Apparently it's four state governments, 195 million taxpayer records, voter records, government credentials. Has the Mexican government commented on this? What did the hacker breach? Mexico's federal tax authority and the National Electoral Institute. Claude initially warned the unknown user about malicious intent. During their conversation, Anthropic investigated the claims, disrupted the activity, and banned the accounts involved. The company feeds examples of malicious activity back into Claude to learn from. In this instance, the hacker was able to continuously probe Claude until they were able to jailbreak it. I was listening to someone talk about how, like, "the ability to jailbreak has generated me tens of thousands of dollars in profit." It was kind of a hustle-mindset guy. And I was just laughing, because whatever you're doing after you jailbreak it is probably not good.
And so you should probably stop. But he was talking about, like, I can sell so many more courses now that I've jailbroken ChatGPT or whatever. Duran says not to worry, they'll hit usage limits before anything bad can happen. There is so much more Anthropic news in here. Wow. Did LessWrong ever predict that the first big challenge to alignment would be the US government puts a gun to your head and tells you to turn off alignment? Yes, that has to have been considered on LessWrong. Absolutely. This was number one, right? No, I don't know. I don't think so. This was very early, just, what if they tell you to turn off the systems? This has been my take for a long time. It's just, we live in a democracy. If AI becomes deeply unpopular, you can vote to just turn off AI, like we did with nuclear power. This person is saying turn off just the alignment part. Okay, unalign the model, but keep the model. Yeah, yeah. Crazy, crazy stuff. Do you want to go through any of Dean Ball's posts? He's been doing a whole breakdown. We should have him on the show and have him break it down for us, because there's so much more context here, and he's been doing a great job analyzing the whole situation. Yeah, we can jump forward quickly. Let me tell you about Fin AI, the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. And I'm also going to tell you about TurboPuffer: serverless vector and full-text search built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. Dylan says if these clankers don't get their act together, they're going to be replaced by humans. Six months, tops. That is true. This was interesting.
Rob Wiblin had a guest on his podcast, the 80,000 Hours podcast, and the guest is saying that every AI lab is working to make their AI helpful, harmless, and honest. The guest thinks this is a complete wrong turn, and that aligning AI to human values is actively dangerous. And Joshua Botch says it's nominative determinism, because the guest's name is Max Harms. Max Harms. I feel like with that name, maybe you've got to go with Maxwell or something. I don't know. Well, yeah, Max really hit the global lexicon this year in a big way. So maybe he'll adjust. But I want to listen to the show now. Okay, let's see. So Sheel Mohnot yesterday asked who is buying PayPal, because PayPal has been trading down precipitously but then jumped up 9%. He said it has the potential of being one of the greatest distressed value opportunities in FinTech history. Down 85%, it's still generating $5.5 billion in free cash flow, has 400 million customer accounts with bank info, checkout buttons on millions of merchant sites, and a peer-to-peer brand with Venmo. They have lots of desirable assets. For Stripe: consumer-facing checkout and bank account details for hundreds of millions of consumers. For Apple: a branded Venmo would be a good complement to Apple Pay for e-commerce penetration, since they never got social payments working, and it would get Apple back into BNPL. It's so crazy that he's saying Apple never got social payments going. Why? Because it just would have seemed like a slam dunk: you have the iMessage network, you have the iPhone network.
Double play. Right. Market clearing order inbound. That's just wrong. Yes, it might be my turn. Hold your position. Come get up. Trust the experts. We are excellent. Founder. Vibe coded. I see multiple journalists on the horizon. Standby. UAV online. Glaze. Double glaze. Triple glaze. Double kill. Five kills. Team deathmatch. You're surrounded by journalists. Hold your position. Activate golden retriever mode. Founder. You're watching TBPN. Today is Wednesday, February 25, 2026. We are live from the TBPN Ultradome: the temple of technology, the fortress of finance. Let me tell you about Ramp.com. Time is money, save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more, all in one place. We have a massive show for you today, folks. We've got Doug O'Laughlin coming on, on his birthday. The Douganator. Earnings. Talk about a birthday present. We've got Marc Benioff from Salesforce coming on. Let's take you through the Linear lineup. Max Meyer is coming on from Arena Magazine; there's a new issue dropping, 007 spy theme. I love it. Ben Layer is coming in person. And then we have an absolute hitter of a lightning round for you, folks. Linear, of course, is the system for modern software development; 70% of enterprise workspaces on Linear are using agents. Okay, so I was nerding out about this Fed paper, because when you told John Collison 80% of businesses are getting no value from AI, I'm glad he wasn't here in person, because he was about to throw down. He was about to open up a can of whoop. It was about to be a bar fight in the cheeky Guinness pub. No, seriously, it was a great question, because I think we all agree that AI adoption is real, it's valuable, it's happening.
But it is a very interesting statistic, and I think it's a mistake for tech people to dismiss this stat because of where it's coming from. It's not coming from some doomer anti-AI blogger who's going for clicks; this is the National Bureau of Economic Research. There are three people from the Federal Reserve Bank of Atlanta on this paper, two people from NBER, and then they also pulled in the Bank of England. They have some Australians and Germans on there too. And so this is a research paper that could be circulated, probably will be circulated, within the Fed. It's already getting quoted by the New York Times in that dot-com bubble, AI bubble piece. And I'm thinking it through: this could be something where you see Fed policy or government legislation that's mismatched with what is actually happening in reality. And so we should go through some of the stats to actually break this down, because the headline is: 80% of firms reported that AI was having no impact on their productivity or employment. And that's actually kind of a misquote. What they mean is that it's not shaping their hiring plans yet; they actually are using AI. And so basically this stat comes from this survey from the National Bureau of Economic Research. And it's pretty interesting, because a lot of the polls that you see online are online surveys where they run some digital ads and say: are you a CFO of a company? We don't really care what company. We'll pay you $10 to take this quick survey. And what kind of people want to make $10? A lot of liars. There are a lot of liars out there who say, I am absolutely a CFO, and please send that Amazon gift card right my way. Right. And so for this one, they actually did the work. They called up, ID-verified, and then also reality-checked the position.
So if you say, yeah, I'm the Chief Pirate Officer, I'm the ninja, the hero, whatever fake title, then you're out. You have to be a CFO, a CEO, a senior manager, and you actually had to be doing that job. It's not just, oh, you're the CFO of some front company. So they did some reality checking, and they pulled together 6,000 of these business leaders across firms domiciled in the US, UK, Germany, and Australia. And you know the line from John Collison that has been sort of going viral? He dropped it on Sources. I think he said it to us too. It's a good line: no one wants a refund on their tokens. Everyone is using AI. The spend is increasing. Although I'm sure some CEOs heard that and thought, I kind of do want a refund. I love a refund. I had one team member go absolutely haywire and spend 50 grand. He claims he one-shotted a rebuild of our entire ERP, but I fired it up and it didn't even have HTTPS. What's going on? The Mac Mini wasn't even plugged in. Yeah, the Mac Mini wasn't even plugged in. He was just chatting. But clearly there is a disconnect: the Stripe data is very real, the value creation is very real, the revenue is very real at the labs. But when random Joe Schmo CFOs and CEOs get a call from the Fed, they say, yeah, we're not really getting that much value out of AI. And so there are questions you need to dig into. There are actually four key findings. The one headline that the New York Times is pushing is this 80% number: 80% report little or no impact on employment or productivity. But there are actually a bunch of positive signals, a bunch of mixed signals in here. First, 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two-thirds of top executives regularly use AI, their average use is only 1.5 hours per week, and one quarter of executives report no AI use at all. They're just like, I do things the old way. AI? Not for me.
Why would I need that? I have a telephone. Yeah. But truly, if you think about the variety of firms: you could be running a gym, you could be running a gas station, you could be doing forestry, mining, oil extraction, road repair. There are a million different things you can do in the economy. It's not all knowledge-work firms. We talked to somebody yesterday, in the sort of later part of their career, off the show, and they had just had their mind blown by AI, because when they needed a document created, they used to put out bullet points and give them to somebody who would then create the document. And he just said, now I just give it to AI, and that generates the document. Yeah. And so that's still happening. Yeah, that's text expansion, text generation with LLMs. This has been available since 2022, when ChatGPT launched. Maybe it became reliable in 2023. People are still just starting to adopt. And there are some other interesting things in here. So the last major finding that we should touch on is that firms predict sizable impacts over the next three years, forecasting AI will boost productivity. Sizable impacts: a productivity increase of 1.4%, which is very sizable if you're an economic researcher, but not particularly sizable if you're in the fast-takeoff scenario. And so there's just this disconnect between what the government statistics look like, what the government is operating against, and the Silicon Valley narrative. Where do these two meet the road? Why is there a disconnect at all? One of the reasons is that measuring AI adoption is a mess. Many people use AI without even knowing they're using AI, because it's buried deep in SaaS products that they already daily-drive. Like, say I run a coffee shop and I'm using Toast for payment processing.
There are probably some AI features in there already. And when you go to type in, okay, we're adding a new cinnamon roll to the menu, there's probably a button now that just says: do you want to generate an image of a cinnamon roll? You could still upload one, that's probably a feature that already exists, but we could also just generate one for you, and you can probably click that. But you're not like, oh yeah, I'm an AI power user, just because you happen to use Toast and Toast happened to have implemented some gen AI feature that you haven't really dug into yet. So some AI isn't even detectable. You could be talking to a customer support agent on the phone that is AI-generated and not be able to tell. We talked about that airline interaction that got something like 100,000 likes. And Grace, the woman who had the interaction, came into the chat yesterday and said it was real. Yeah, it was real. And so she outmaneuvered the clanker. Yeah. But still, think about it: she's clearly on X, in tech, very AI-aware. There are probably tons of people out there saying, oh yeah, at my job, every once in a while I have to call this service, and now the person who picks up is responding pretty quickly. They haven't noticed that they're actually interacting with AI or using AI in some capacity. And then there are also times where people are chatting with AI but just put it in the personal-life bucket. I'll find myself doing this a lot. Of course, as an entrepreneur, the work-life balance bleeds together a lot. But there are a lot of times when I'm reading an article on Saturday and I'll fire off a deep research report about it to find some extra context. It feels like, oh, I'm just reading the paper, but my job is sort of to read the newspaper.
And so it counts as work. And there are probably a lot of people who are like, oh, I hit an LLM with some random query to learn something work-related, but I did it off hours when I was just hanging out, so I don't really think about it as a work tool yet. They're not putting it in that bucket. But in general, I think the team did a really good job of avoiding as many of the pitfalls as possible when it comes to surveying AI adoption. Adoption is very messy. You've talked about the need for strong... Yeah, I still think there's room for a research firm focused entirely on diffusion. If you had a group of 10 to 20 people spending all their time talking to business owners, executives, and operators, getting a sense of how they're actually using this stuff, I think you could put together some really compelling reports that would be useful to everyone from AI companies to Wall Street. Yeah, AdoptionMAX, after ClusterMAX and InferenceMAX. They had to rename it; apparently SemiAnalysis can't use MAX for some reason, so InferenceMAX is now InferenceX. And everyone was saying you need to just change it to InferenceMock, which would have been amazing. But InferenceX obviously has a much more professional tone to it. And so there's an interesting definitional question: what does it mean to actually adopt AI? That's very vague. This paper defines it pretty broadly: machine learning for data processing, which doesn't even necessarily mean LLMs, just ML, which has been around for a very long time; text generation using LLMs, which is what we think of as ChatGPT; visual content creation, so diffusion models; but also robotics and autonomous vehicles. And there's a category just for "other." And firms can select multiple. So if you selected yes on any of those, you go in the bucket of AI adopter.
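That any-of-the-above bucketing rule is simple enough to sketch in code. This is a hypothetical illustration of the survey's broad definition as just described, not the paper's actual methodology, and the category names here are made up for the example:

```python
# Hypothetical sketch of the survey's broad "AI adopter" rule:
# a firm counts as an adopter if it checks ANY of these boxes.
AI_CATEGORIES = {
    "ml_data_processing",   # classic ML, predates LLMs
    "llm_text_generation",  # ChatGPT-style drafting/proofreading
    "visual_content",       # diffusion-model image generation
    "robotics_av",          # robotics and autonomous vehicles
    "other",
}

def is_ai_adopter(selected: set[str]) -> bool:
    """True if the firm selected at least one AI category."""
    return bool(AI_CATEGORIES & selected)

# A coffee shop that generated one menu image lands in the same
# bucket as a firm running production ML pipelines:
print(is_ai_adopter({"visual_content"}))                 # True
print(is_ai_adopter(set()))                              # False
print(is_ai_adopter({"crm_software", "spreadsheets"}))   # False
```

The point of the sketch is how low the bar is: one checkbox anywhere in the list puts a firm in the "adopter" column, which is why the 78% headline figure and the "little or no impact" figure can coexist.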
And 78% of firms in the United States said yes, they are using AI by this definition. They've got at least one robot, or they've generated at least one AI image, or sent one prompt to ChatGPT, which is a very low bar. Sort of makes you wonder what's going on with the other 22%. And you can dig in further. Text generation using LLMs is the single most common use case, at about 41% of firms. So flip that around: 59% of firms aren't even using LLMs for text generation or proofreading. But again, there are a lot of companies where it's like, yeah, we don't generate a lot of text. Maybe if we need marketing material, we have an agency that does that, so we don't actually do it internally. I don't know. Across the four countries surveyed, 69% of firms total said they currently use AI. I think Australia was a little behind, dragging that down. And only 75% of firms expect to be using AI technology sometime over the next three years. So they're at 69% now, and they're like, over the next three years... Tyler's going to have a heart attack... we're going to bump that up to 75%. And this is weird data. And you can jump in with your pushback, whatever you want. But my point is not that they're right. I think they're wrong to predict this; I think AI adoption will be very steep and very dramatic. But I just think it's important to recognize that this is a paper people will be citing, a paper that will shape policy, and a paper that reveals some misconceptions about the impact AI is having on firms. Yeah, I still think it's so hard to actually quantify this. Like, okay, if I'm using Ramp, does that count as using AI? It's obviously using AI under the hood, but I'm not directly interfacing with the model, so does that count? I think if you were a company that was just on Ramp, under this survey you would probably respond no.
Yeah, but I'm clearly benefiting from AI. I agree. So how do we actually quantify this? That is the diffusion question. I completely agree with you. I think there are plenty of places where AI will have impacts across the economy, but the actual AI workloads, the AI app development, model training, and inference, will happen at a different set of companies. It's not going to be as clear as the computing revolution, where every company had a desktop computer and it was very quantifiable. The only thing you should be looking at for diffusion is just lab revenue. Lab revenue. And it's growing a lot. But the perception, I think, still matters, because there's a little bit of potential self-referentialness here, where firms see "oh, AI adoption's low" and conclude they don't need to go and figure out how to adopt it. And so that's something I'm also keeping an eye on. Reported usage is still low: 1.5 hours per week among the managers surveyed. Again, they surveyed CFOs. CFOs who use Ramp don't count their time in Ramp as minutes using AI. They count their time in LLMs as minutes using AI. And that's low. But even with just that one and a half hours a week, the actual leverage you're getting is increasing, because in one and a half hours you don't necessarily need to spend more time prompting to get more done. So even if you run a deep research, does the time you're waiting count as time in the LLM? No, not at all. Okay, so the time is time typing. Exactly. Does it count when you're reading? Yes, when you're reading it, for sure. So it's when you have a tab open, not if you export a file and you're just reading it in Preview. Yeah, yeah. No, I mean, to some degree.
But people were asked to estimate, and I'm sure they didn't include, oh yeah, I let my agent cook overnight for eight hours, or I fired off one prompt, came back, and it did, you know, METR's 15 hours of software engineering in one prompt. These things aren't captured. 1.5 hours of prompting generates a whole lot more tokens and valuable output in 2026 than it did in 2023. And so there's this divergence between the actual time spent and the useful output and impact. And the biggest thing was a massive divergence in the expected employment impact. Basically, 63% of firms still expect no impact from AI, and that just completely goes against everything everyone's saying in Silicon Valley. There's still a lot of optimism among managers that AI will create more opportunities and new jobs, even if some jobs become obsolete. There are definitely firms within the sample that are projecting headcount decreases. But my read on this data is that the tech talking point about 50% of white-collar work going away is not a broadly held belief among average business leaders. Now, they might be wrong. I do think AI progress is pacing way ahead of public expectations, and most managers are months behind when it comes to understanding frontier capabilities. The bigger takeaway for me is just that this survey may be somewhat self-reinforcing. I mean, we talk to folks all the time who come on the show and say maybe it'll be good, maybe it'll be bad, but everyone thinks it's going to have an impact. And that's not true broadly, which is very, very interesting. So no one in tech has a strong recommendation for proactive steps to prevent a collapse in white-collar work yet, but plenty are sounding alarms. And if this survey becomes an excuse for executives to slow adoption, they might get outmaneuvered by faster-moving competitors, which is actually good news for startups.
And I'll close by thinking about the nature of polling and how you actually get stronger data on AI adoption. I was thinking back to the presidential cycle. During the presidential election, pollsters would call people sort of at random and ask: who are you voting for? And a lot of people would lie, or wouldn't say, or wouldn't pick up the phone if they were voting for a particular candidate. And so the polling numbers did not wind up matching the final election results very closely. Then there was the story about neighbor polling, which was more effective: instead of calling someone and asking who they're voting for, the pollster calls and asks, who do you think your neighbors are voting for? Who's more popular in your community? Who's more popular on your city block, on your street? And that wound up sort of removing the stated-preference problem, the "am I on the hook? Do I want to tell this pollster who I'm voting for?" dynamic. And it wound up increasing accuracy. So I'd like to see a survey of AI adoption using this technique. Imagine asking the CEO of Nike: how much AI do you think Adidas is using? I don't know how much more accurate that would be, but it would certainly be entertaining. And I think there might be something more revealing there, because CEOs have this big incentive to say, we're using AI, we're using everything. But if you ask them about their competitors, the data might look very, very different. Yeah. Anyway, we should watch a little bit of a clip from the State of the Union, because Donald Trump addressed some of the energy production question with regard to how hyperscalers will be offsetting the impacts. Before we pull this up, let me tell you about Restream: one livestream, 30-plus destinations. If you want to multi-stream, go to restream.com. They should have Restreamed the State of the Union.
And let me also tell you about the New York Stock Exchange: want to change the world? Raise capital at the New York Stock Exchange. So let's head over to the State of the Union: "Many Americans are also concerned that energy demand from AI data centers could unfairly drive up their electric utility bills. Tonight I'm pleased to announce that I have negotiated the new Ratepayer Protection Pledge. You know what that is? We're telling the major tech companies that they have the obligation to provide for their own power needs. They can build their own power plants as part of their factory so that no one's prices will go up, and in many cases prices of electricity will go down for the community, and very substantially down. This is a unique strategy never used in this country before. We have an old grid. It could never handle the kind of numbers, the amount of electricity that's needed. So I'm telling them they can build their own plant, they're going to produce their own electricity. It will ensure the company's ability to get electricity while at the same time lowering prices of electricity for you, and it could be very substantial for all of your cities and towns. You're going to see some good things happen over the next number of years." What's your reaction to that? I think it's a good start. I don't know that it will quell any of the fears around data centers, just given that people see the potential for this massive structure going up and have so much fear about it. And again, I think it's clearly going to be necessary to continue to build data centers in heavily populated areas. How would you rank the fears currently? Because I'd put "my energy bill goes up, and that puts pressure on my income and ability to live my life" at pretty much the top. And then the water thing felt secondary, but also important. And then there's the existential fear of doom and apocalypse. There's also job displacement.
And then there's also just: I don't like the slop, and they're stealing IP. So that was kind of the ranking. You can oppose data centers and be like, yeah, actually my electricity bill went down, but I still don't like that Harry Potter is in the pre-training corpus, and for that reason I'm against it. I would rank the electricity bill going up at the top: it's pain today, it's so real, the fear is easy to imagine. And then there's fear around the job loss narrative, which is sort of secondary, and opposing a data center in your local area feels like a way to have some agency over that overall job-loss concern. Yeah, yeah. I think this is a good... Chris Casey says: build a data center on my freaking forehead. Let me tell you about Phantom: fund your wallet without exchanges or middlemen, and spend with the Phantom card. Let me also tell you about Figma: ship the best version, not the first one. Introducing Claude Code in Figma: explore more options, push ideas further. So I think the job loss thing is super real, in the sense that AI is going to get blamed even if, say, tariffs drive high unemployment. If people lose their jobs, AI is going to be a scapegoat, and it's going to be used both by executives and by people frustrated with the job market. It's the perfect scapegoat for executives. It's like, oh, my business isn't doing poorly right now, I'm laying off people because I'm getting so much benefit from AI. The stock should actually go up; we're more efficient. There's going to be a lot of that. But it does feel like it's a little bit early for that. Whereas there are a lot of people who can just hold up their power bill and show you year-over-year increases.
And if that goes away, and people don't feel that anymore and they don't have that evidence to share, I think that take gets debunked pretty quickly, and that removes a really important piece of the back and forth that's happening. I don't know. It seems like it's a pretty easy give from the hyperscalers to build more power. It was called out very, very early: if this is a bubble, how do you get a silver lining out of the bubble? And the silver lining out of the dot-com bubble was a lot of dark fiber. There were a whole ton of projects, like Global Crossing, to actually build out the Internet. And then the Internet just became really, really cheap, and a whole bunch of new companies were able to emerge on top of it because that infrastructure had been laid. You could see the same thing happening where it's like, oh wow, we overbuilt on the energy side. We actually didn't need that much energy for data centers. Maybe Jevons paradox doesn't hold, blah, blah, blah, models get cheaper and commoditized, or whatever, something happens. I'm not a super believer in that. But at least in that scenario, you're like, okay, well, yeah, my heating and cooling bill went down. This is a silver lining. What do you think? Yeah, kind of similar, I would say. I mostly disagree with the idea that rising energy prices are the main reason to be against AI, because the rational thing to do then is say, okay, before you build a data center in my community, you have to build a power plant. Yeah. So then my energy prices go down. Yeah, no one's doing that. No one is campaigning on that. If you look at the protests, they're not saying, please build a power plant first. They're saying it's going to destroy the environment, or the water stuff, or you're going to take all the jobs. We need to send you to that New Brunswick, New Jersey protest.
Build the nuclear power plant first. Yeah, I guess. You know, no one is saying that, right? Because. Yeah, we are, but no one there is saying it. Right. They're against all the environmental stuff. Yeah, yeah, yeah. So I think it's much more on, basically, job loss and, oh, the AI is stealing the IP of Disney or whatever. Yeah, yeah, yeah. There needs to be more polling on the question of what's driving the protests. Fully. Anyway, happy Nvidia Day to all who celebrate. Except the bears. Forget them. Says, take him. He's getting fired up for Nvidia earnings. It's going to be a fun one today. How is Nvidia doing so far? People optimistic, up 2% today. Hard to read too much into it yet, but we will find out soon enough. Brad Gerstner got a nice shout-out during the State of the Union. Total Gerstner victory. Gerstner accounts. Wow. He's there. No, he was there. Trump. They started looking at him, but then the camera, I guess they couldn't find him on the stream that I was watching. Looks a little bit better. There we go. We can see him now. There we go. Once we pull over here. There he is. Gerstner, champion. What a great project, Invest America. Excited for it. Yeah. Anyway, let me tell you about AppLovin. Profitable advertising made easy with Axon AI. Get access to over 1 billion daily active users and grow your business today. All right, so now the real news. This has been tearing up the timeline. A new Guinness World Record. And I want to ask John if you think this should actually count. So let's pull up this video now. This is a Chinese hypercar going for the record. I've never seen a drift like this ever. That is crazy. But here's the thing. He doesn't actually pull out of it, does he? Just crashes? Kind of just U-turns. It's like a really fast U-turn. I think this counts as a drift. That's definitely a drift. U-turning counts.
If you saw that car going by, you'd be like, wow, that's drifting. It's drifting across the cement. That 100% counts. I've never heard of this company. This is called spinning out. It's just crashing with style. It's falling with style. The Hyptec SSR, formerly the Hyper SSR, is a high-performance, all-electric two-door supercar. I mean, this is crazy. This is out before the Tesla Roadster. We've never seen a two-door electric supercar like this. 1,225 horsepower, goes from 0 to 60 in 1.9 seconds. And it set the Guinness World Record for the fastest electric car drift at 213 km/h, which is really, really insane. Insane, but I don't know. I still feel like you have to actually stay in the turn and not do a U-turn. What do you mean, stay in the turn? Yeah, I don't think it counts. You don't think it counts? For what it's worth, I don't think that counts. When I think of drifting, you're drifting around a corner, around a turn. And if you were to drift and spin out during the drift, then that doesn't count. If somebody was doing that on a track, you'd be like, you didn't drift around the corner. You spun out. Yeah. Okay. Okay. Yeah. The top comment is, fastest spin out. That's a power slide at best. Fire whoever called this drifting. That's not drifting. That's losing control. Yes. The chat does not like the drift. The fake drift. Call Guinness World Records again. Reset. Reset completely. Maybe this is what the Tesla Roadster will do. Clark agrees as well. Lucas agrees as well. The people have spoken. Well, in that case, it's not a drift. It doesn't count. Trey says, China cheating. Cheating competition. That's good. Well, let me tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. Damien says, talked to a few execs at a midsize company last week.
No AI tools in their workflow. Zero. Still running everything through email chains and manual reports. One of them said, one day we're going to be looking back, so nostalgic: manual reports. Just being handed a physical report by a teammate. I like a physical report. I have a physical report right here. Yeah, we actually do. We have daily physical reports. But I do think a lot of AI goes into these. So there's that. These people are managing teams of 50-plus employees and eight-figure budgets, and they think AI is a fad. Nobody outside of this app understands how fast this is moving, and most of them won't until it's too late. Good writing. Million views. Congratulations. Yeah, I mean, this ties to what I was writing about, just that adoption diffusion takes time, and some of these things are education and messaging questions. Some of them are real life. Like, if the company that you're interfacing with has red tape and hasn't adopted AI, so you're moving at the speed of AI but they're not, then your AI is just waiting. We were talking about rolling out mobile apps. John had an idea for a mobile app, and we were talking about it, and it feels like it could be built in two hours now, but there would still be this lag waiting for the review process, waiting for Apple to actually review apps faster. Who knows how long it's going to take for them. That's the challenge. Tyler, you actually have to build it and just get it into beta. Like a TestFlight. TestFlight should be fast. TestFlight should be like one day. That is still going off. Drift grift. Driftgate. Yeah, it's a stolen drift. METR is back. They say, since early 2025 we have been studying how AI tools impact productivity among developers. Previously we found a 20% slowdown. That finding is now outdated, and it was heavily debated at the time. Speedups now seem likely, but changes in developer behavior make our new results unreliable.
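As an aside, the kind of error-bar summary METR reports can be reproduced with basic statistics. Here's a minimal Python sketch with made-up per-developer numbers (not METR's actual data), just to show how a mean time change and a rough 95% confidence interval of the sort quoted below would be computed:

```python
import math
import statistics

# Hypothetical per-developer changes in task time with AI tools
# (negative = faster). These are illustrative numbers, NOT METR's data.
changes = [-0.40, -0.25, -0.15, -0.10, 0.00, 0.05, -0.30, -0.20, 0.10, -0.05]

mean = statistics.mean(changes)
# Standard error of the mean, then a normal-approximation 95% interval.
sem = statistics.stdev(changes) / math.sqrt(len(changes))
ci = (mean - 1.96 * sem, mean + 1.96 * sem)

print(f"mean change: {mean:+.1%}, 95% CI: [{ci[0]:+.1%}, {ci[1]:+.1%}]")
```

With more participants the standard error shrinks, which is why the follow-up cohort's interval is so much tighter than the original 16-developer study's.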
Okay, so they did a follow-up study, and they brought along 10 developers. There were 16 in the initial study; they brought 10 along. Most of those developers did see a speedup, some of them as much as 40% gains, but not all of them. If you look at the error bars, there's at least one developer, we need to find him, who used the latest tools and still got slower. What if it's actually the most truly 100x engineer, and so he's just like, yeah, I'm just actually faster at coding than the LLMs. It doesn't matter. Put it on Cerebras, I will out-code it. A thousand tokens a second. I could do 100. I type 5,000 words a minute. I don't need AI. Maybe that's what's going on. But that poor dev who's left behind. But they did include new participants: an additional 47 developers doing 690 tasks, and that error bar is much tighter, somewhere between 10% longer to do the task and 20% faster to do the task. So overall, on average, we are seeing a measurable speedup. And I believe these are code issues in open-source repositories that do require a lot of context. This is not vibe coding a to-do list app or anything that can be templated. You're getting in the weeds of some open-source project with a lot of lines of code, a lot of history. And if you're an elite developer and you've worked on this project for a long time, you are going to be able to get up to speed really quickly, understand the patterns, understand what needs to be changed. METR is great at setting really, really high bars for stuff. It's not easy to just blow out the benchmarks, and it's very good to see that there's progress here. We got a great chart. I love a great chart. What happened here? Matt Palmer is sharing something from Compound, the research from their annual meeting.
Yes, they're showing dollars invested in the top 10 companies versus the rest as a percentage of overall funding, and you can see there's just heavy, heavy, heavy concentration in a few names. Is this overall, or is this Coatue? No, this is. Oh, the source is Coatue. Okay, they just included Coatue data. And is it what Coatue is doing, or is it the market Coatue is part of? Okay, they are part of, I would say, driving this data. Part of the problem, part of the opportunity. Part of the opportunity. I mean, so much of this is about the AI labs just raising more money than any private companies ever have. It's never happened before: $200 billion. Venture as a class in a good year will do like $400 billion. And across OpenAI at $100 billion, $30 billion for Anthropic, $20 billion for xAI, then you have a bunch of neolabs all picking up a billion each, you very quickly get to a few companies raising half of all the money. And that's shown here in the data from 2025. I think it's going to be even more skewed in 2026. It's an incredible amount of concentration. I think a lot of it is due to companies staying private this long. I mean, the idea. Facebook went public at. What was Bill Gurley saying? He was saying Amazon went public sub a billion dollars. When Facebook went public at like $60 billion, it was like, wow, crazy, they waited way too long. And now it's like multiple trillion-dollar companies are still private, which is just an incredible capital sink. I don't know, should you even put those in the same bucket? Are they even venture bets at this point? If any venture capital fund is putting that in their venture bucket at this point, it feels ridiculous. Compared to growth scale, you're bigger than probably 90% of the S&P 500. It's a completely different business. Tomasz from Theory was sharing some kind of relevant data. He said we're about to witness three of the largest IPOs in history. SpaceX is targeting $1.5 trillion. OpenAI aims for $1 trillion.
Anthropic is valued at $380 billion. Combined, they're at $2.9 trillion in potential market cap. The scale is unprecedented. But the real problem isn't the market cap, it's the float. Typical IPOs offer 15 to 25% of their shares to the public markets. This creates enough liquidity for price discovery while allowing founders and early investors to maintain control. Facebook floated 15% at the $60 billion that you mentioned, and actually traded down pretty much immediately. Right. Google floated 19%. Alibaba floated 15%. At a 15% float, here's what these three IPOs would require: SpaceX would be $225 billion, or $300 billion at 20%. OpenAI would be $150 billion. Anthropic would be $57 billion. That's a lot of smackaroos. Yeah, a lot of dollars. He was comparing that to Saudi Aramco, Alibaba, and SoftBank combined at IPO. I believe Saudi Aramco raised $29 billion at a $1.7 trillion market cap. So he's making the case that you can't really model how the public markets will absorb these companies off of Saudi Aramco, even though from a top-line market cap standpoint it is a good proxy, just because the float was significantly lower. Saudi Aramco's float now is only at 2.4%, and they floated 1.5% at the IPO. So we'll see what the labs end up doing. They are obviously wildly capital-intensive businesses, and you can imagine they raise quite a bit more than the Aramcos or the Alibabas. Saudi Aramco was such a wild ride. I feel like they were trying to IPO for like a. The San Francisco company. It is, yeah, founded in California. I remember hearing Saudi Aramco IPO rumors in like 2015. I think it actually kicked off in 2016. They finally got out in 2019. It was the largest IPO ever. There were like a million investment banks attached, going all over the world marshaling capital. On January 24, 2016, the Saudi Aramco chairman said the IPO could open to international markets. And then a year later they picked an IPO advisor locally.
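As a back-of-envelope check on those float numbers, here's a quick Python sketch. Valuations are the figures quoted in the segment; the 15% float is the low end of the typical 15-25% range mentioned above:

```python
# Valuations in billions of dollars, as quoted in the segment.
valuations_bn = {"SpaceX": 1500, "OpenAI": 1000, "Anthropic": 380}
FLOAT_PCT = 0.15  # low end of the typical 15-25% IPO float range

# Dollars of stock the public market would need to absorb at a 15% float.
floats_bn = {name: cap * FLOAT_PCT for name, cap in valuations_bn.items()}
for name, amount in floats_bn.items():
    print(f"{name}: ${amount:.0f}B")  # SpaceX: $225B, OpenAI: $150B, Anthropic: $57B

print(f"combined: ${sum(floats_bn.values()):.0f}B vs Aramco's ~$29B raise")
```

A combined $432 billion of new float dwarfs any prior IPO cohort, which is the core of the argument that Aramco is a poor proxy here.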
Then another year later HSBC came on. Then it just took forever. They favored New York for the Aramco listing. They had to pick all these different things. It took so long. But they sold $12 billion of bonds out of a record $100 billion of demand for those bonds in the pre-IPO sale. It was a wild, wild, winding road. I actually know a banker who worked on the job, and it was like multiple years of his life. It was very interesting. Anyway, let me tell you about Vibe.co, where D2C brands, B2B startups, and AI companies advertise on streaming TV: pick channels, target audiences, and measure sales just like on Meta. And let me also tell you about Okta. Okta helps you assign every AI agent a trusted identity so you get the power of AI without the risk. Secure every agent, secure any agent. Anthropic dials back AI safety commitments. Company says competitive pressure prompts it to pivot away from a more cautious stance. Anthropic, the company known for its devotion to safety, is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. This is so interesting. I'll read through it and then we can talk about it. Anthropic previously paused development work on its model if it could be classified as dangerous, but it said it would end that practice if a comparable or superior model was released by a competitor. That basically opens them up, given that they are at the frontier, to perpetually avoiding some of their prior policies. Sure, sure, sure. The changes are a dramatic shift from two and a half years ago, when the guardrails Anthropic published guiding the development and testing of its new models established the company as one of the most safety-conscious players in the space. Anthropic faces intense competition from rivals which regularly release cutting-edge models.
It's also locked in a battle with the Defense Department over how its Claude suite is used, after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Anthropic said the safety policy change is an update based on the speed of AI's development and a lack of federal AI regulations, which they have been pushing for. Anthropic, which started as an AI safety research lab, has battled the Trump admin by advocating for state and federal rules on model transparency and guardrails. The admin has, of course, sought to curb states' ability to regulate AI. A spokeswoman at Anthropic said the change is intended to help the company compete with several rivals against an uneven policy backdrop that puts the onus on companies to make their own judgments about safeguards. She said the safety pledge change is unrelated to the Pentagon negotiations. The policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. The company said it is still committed to industry-leading safety standards, and TIME originally broke the story. So yeah, I would say the obvious criticism here would be that you were heavily focused on safety when you were far away from, I would say, leading in AI, and switching up now that there's real competition feels a little self-serving. Switching up on their day ones. Are they forgetting where they came from? It's possible the money changed them. It's possible the money changed them. It's possible they always planned to switch up on their day ones once they got to the level they're at now. Yes. But it just feels like all the initial concerns, or many of the initial concerns, that were guiding the entire philosophy around the company are still real. Okay. Yeah.
Maybe. Tyler, what do you have to say? It could just be that they realized alignment's pretty easy and we don't need to worry about that. Well. So I mean, that was the very weird rule. I mean, the original rule.
Let's just roll, right. Market clearing order inbound. Come. Get up. You're surrounded by journalists. Hold your. Strike 1. Strike 2. Activate. Go. Go to retriever mode. Market clearing order inbound. I see multiple journalists on the horizon. You're watching TVPN. Today is Wednesday, February 25, 2026. We are live from the TVPN Ultradome, the temple of technology, the fortress of finance. Ramp.com: time is money, save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more, all in one place. We have a massive show for you today, folks. We got Doug O'Laughlin coming on, on his birthday, for earnings. The Dugganator. Talk about a birthday present. We got Marc Benioff from Salesforce coming on. Let's take you through the Linear lineup. Max Meyer's coming on from Arena Magazine. There's a new issue dropping, 007 spy theme. I love it. Ben Layer's coming in person. And then we have an absolute hitter of a lightning round for you, folks. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. Okay, so I was nerding out about this Fed paper, because when you told John Collison 80% of businesses are getting no value from AI, I'm glad he wasn't here in person, because he was about to throw down. He was about to open up a can of whoop. It was about to be a bar fight in the Cheeky Pint pub. In the cheeky Guinness pub. No, seriously, it was a great question, because I think we all agree that AI adoption is real, it's valuable, it's happening. But it is a very interesting statistic. And I think it's a mistake for tech people to dismiss this stat because of where it's coming from. It's not coming from some doomer anti-AI blogger who's going for clicks. This is the National Bureau of Economic Research. There are three members of the Atlanta Fed, the Federal Reserve Bank, on this paper. There are two people from NBER on the paper.
And then they also pulled in the Bank of England. They have some Australians and Germans on there too. And so this is a research paper that could be circulated, probably will be circulated, within the Fed. It's already getting quoted by the New York Times in that dot-com bubble versus AI bubble piece. And I'm thinking it through. This could be something where you see Fed policy or government legislation that's sort of mismatched with what is actually happening in reality. And so we should go through some of the stats to actually break this down, because the headline is: 80% of firms reported that AI was having no impact on their productivity or employment. And that's actually like a misquote. What they mean by that is that it's not shaping their hiring plans yet; they actually are using AI. And so basically this stat comes from this survey from the National Bureau of Economic Research. And it's pretty interesting, because a lot of the polls that you see online are online surveys where they run some digital ads and say, are you a CFO of a company? We don't really care what company. We'll pay you $10 to take this quick survey. And what kind of people want to make $10? A lot of liars. There's a lot of liars out there who say, I am absolutely a CFO, and please send that Amazon gift card right my way. Right. And so for this one, they actually did the work. They called up and ID-verified and then also reality-checked the position. So if you say, yeah, I'm the Chief Pirate Officer, I'm the ninja hero, whatever, you got some fake title, then you're out of the survey. You have to be a CFO, a CEO, a senior manager, and you actually had to be doing that job. It's not just like, oh, you're the CFO of some front company. So they did some reality checking, and they pulled together 6,000 of these business leaders across firms that are domiciled in the US, UK, Germany, and Australia.
And so you know the line from John Collison that has been sort of going viral? He dropped it on sources. I think he said it to us too. It's a good line: no one wants a refund on their tokens. Everyone is using AI. The spend is increasing. Although I'm sure some CEOs heard that and thought, I kind of do want a refund. I'd love a refund. I had one team member go absolutely haywire and spend 50 grand. He one-shot it. He claims that he rebuilt our entire ERP, but I fired it up and it didn't even have HTTPS. What's going on? The Mac Mini wasn't even plugged in. Yeah, the Mac Mini wasn't even plugged in. He was just chatting. But clearly there is a disconnect. The Stripe data is very real, the value creation is very real, the revenue is very real at the labs. But when just random Joe Schmo CFOs and CEOs get a call from the Fed, they say, yeah, we're not really getting that much value out of AI. And so those are the questions that you need to dig into. There are actually four key findings. The one headline that the New York Times is pushing is this 80% number: 80% report little impact or no impact on employment or productivity. But there's actually a bunch of positive signals, a bunch of mixed signals in here. So first, 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two-thirds of top executives regularly use AI, their average use is only 1.5 hours per week, and one quarter of executives report no AI use at all. They're just like, I do things the old way. AI? Why would I need that? I have a telephone. Yeah. But I mean, truly, if you think about the variety of firms, you could be running a gym, you could be running a gas station, you could be doing forestry, mining, oil extraction, road repair. There's a million different things that you can do in the economy. It's not all knowledge-work firms.
We talked to somebody yesterday, in the sort of later part of their career, off the show. Yeah. And they had just had their mind blown by AI. Yeah. Because when they needed a document created, they used to put out bullet points and give them to somebody, who would then create the document. Yeah. And he just said, now I just give it to AI, and that just generates the document. Yeah. And so that's still happening. Yeah, that's text expansion, text generation with LLMs. This has been available since 2022, when ChatGPT launched. Maybe it became reliable in 2023. People are still just starting to adopt. And there's some other interesting things in here. The last major finding that we should touch on is that firms predict sizable impacts over the next three years, forecasting AI will boost productivity by 1.4%, which is very sizable if you're an economic researcher, but not particularly sizable if you're in the fast-takeoff scenario. And so there's just this disconnect between what the government statistics look like and what the government is operating against, and then what's the Silicon Valley narrative, and where do these two meet the road? Why is there a disconnect at all? One of the reasons is that measuring AI adoption is a mess. Many people use AI without even knowing that they're using AI, because it's buried deep in SaaS products that they already daily drive. Like, say I run a coffee shop and I'm using Toast for payment processing. There's probably some AI features in there already. And when you go to type in, okay, we're adding a new cinnamon roll to the menu, there's probably a button now that just says, do you want to just generate an image of a cinnamon roll? You could still upload one. That's probably a feature that already exists.
But we could also just generate one for you, and you can probably click that. But you're not like, oh yeah, I'm an AI power user, just because you happen to use Toast and Toast happened to have implemented some gen AI feature that you haven't really dug into yet. So some AI isn't even detectable. You could be talking to a customer support agent on the phone that is AI-generated and not be able to tell. We talked about that airline interaction that got something like 100,000 likes, and Grace, the woman who had the interaction, came into the chat yesterday and said it was real. Yeah, it was real. Yeah. And so she outmaneuvered the clanker. Yeah. But still, think about it: she's clearly on X, in tech, very AI-aware. There are probably tons of people out there saying, oh yeah, every once in a while I have to call this service, and now the person that picks up is responding pretty quickly. But they haven't noticed that they're actually interacting with AI or using AI in some capacity. And then there's also times when people are chatting with AI but they just put it in the personal-life bucket. I'll find myself doing this a lot. As an entrepreneur, the work-life balance bleeds together a lot. There's a lot of times when I'm reading an article on Saturday and I'll fire off a deep research report about it, find some extra context. It feels like, oh, I'm just reading the paper, but my job is sort of to read the newspaper, and so it counts as work. And there's probably a lot of people that are like, oh, I hit an LLM with some random query to learn something work-related, but I did it off hours when I was just hanging out, so I don't really think about it as a work tool yet. And so they're not putting it in that bucket.
But in general, I think the team did a really, really good job of avoiding as many of the pitfalls as possible when it comes to surveying AI adoption. AI adoption is very messy. You've talked about the need for stronger data here. Yeah, I still think there's room for a research firm focused entirely on diffusion. If you had a group of 10 to 20 people spending all their time talking to business owners, executives, and operators, getting a sense of how they're actually using this stuff, I think you could put together some really compelling reports that would be pretty useful to everyone from AI companies to Wall Street. Yeah, AdoptionMAX, after ClusterMAX and InferenceMAX. Apparently SemiAnalysis had to rename it; they can't use MAX for some reason. So InferenceMAX is now InferenceX. And everyone was saying you need to just change it to InferenceMock, which would have been amazing. But InferenceX obviously has a much more professional tone to it. And so there's an interesting definitional question: what does it mean to actually adopt AI? That's very vague. This paper defines it pretty broadly: machine learning for data processing, so that doesn't even necessarily mean LLMs, that just means ML, which has been around for a very long time; text generation using LLMs, that's what we think of as ChatGPT; visual content creation, so diffusion models; but also robotics and autonomous vehicles. And there's a category just for other. And firms can select multiple. And so if you selected yes on any of those, you go in the bucket of AI adopter. And 78% of firms in the United States said yes, they are using AI by this definition. They got at least one robot, or they've generated at least one AI image or one prompt to ChatGPT, which is a very low bar. Sort of makes you wonder what's going on with the 22%. And you can also dig in further.
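That broad adopter definition is essentially an any-of-these-categories check. A minimal Python sketch of the idea (the category names and sample firms are made up for illustration, not taken from the paper):

```python
# Hypothetical survey categories, loosely matching the paper's buckets.
CATEGORIES = {"ml_data_processing", "llm_text_generation",
              "visual_content", "robotics_av", "other"}

# Each firm's multi-select answer; an empty set means "none of the above".
firms = [
    {"llm_text_generation"},                # adopter
    set(),                                  # non-adopter
    {"ml_data_processing", "robotics_av"},  # adopter
    {"other"},                              # adopter: even "other" counts
]

# A firm counts as an "AI adopter" if it selected ANY category.
adopter_share = sum(bool(f & CATEGORIES) for f in firms) / len(firms)
print(f"adopter share: {adopter_share:.0%}")  # -> adopter share: 75%
```

This is why the headline adoption share comes out so high: one robot, one generated image, or one ChatGPT prompt anywhere in the firm flips the flag to yes.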
So text generation using LLMs is the single most common use case, at about 41% of firms. So flip that around: 59% of firms aren't even using LLMs for text generation or proofreading. But again, there's a lot of companies where it's like, yeah, we don't generate a lot of text. Maybe if we need to generate marketing material, we have an agency that does that, so we don't actually do it internally. I don't know. Across the four countries that were surveyed, 69% of firms in total said they currently use AI. I think Australia was a little bit behind, dragging that down. Only 75% of firms expect to be using AI technology sometime over the next three years. So they're at 69% now, and they're like, over the next three years, we're going to bump that up to 75%. Tyler is going to have a heart attack. And this is weird data. You can jump in with your pushback whenever you want. But my point is not that they're right. I think that they're wrong to predict this. I think that AI adoption will be very steep and very dramatic. But I just think it's important to recognize that this is a paper that people will be citing. This is a paper that will shape policy. This is a paper that reveals some misconceptions about the impact AI is having in firms. Yeah, I still think it's so hard to actually quantify this. Like, okay, if I'm using Ramp, does that count as using AI? Because it certainly is using AI under the hood, but I'm not directly interfacing with the model. So does that count? I think if you were running a company that was just on Ramp, you would probably respond no. Yeah, but clearly I'm benefiting from AI. I agree. So it's like, how do we actually quantify this? That is the diffusion question. I completely agree with you.
I think that there's plenty of places where AI will have impacts across the economy, but the actual AI workloads, AI app development, model training, and inference will happen at a different set of companies. I think it's not going to be as clear as the computing revolution, where every company had a desktop computer. I still think the only thing that you should be looking at for diffusion is just lab revenue. Lab revenue, yeah, yeah. And it's growing a lot. But the perception, I think, still does matter, because there's a little bit of potential self-referentialness here, where firms see, oh, AI adoption's low, so I don't need to go figure out how to adopt it. And so that's something that I'm also keeping an eye on. Reported usage is still low. And this one's interesting based on revenue and also token generation: 1.5 hours per week among the managers surveyed. Again, they surveyed CFOs. CFOs who use Ramp don't count their time in Ramp as minutes using AI; they count their time in LLMs as minutes using AI. And that's low. But even just with that one and a half hours a week, the actual leverage that you're getting is increasing, because you don't necessarily need to spend more time prompting to get more done. Yeah. So even if you run a deep research, does the time you're waiting count as time in the LLM? No, not at all. Okay, so the time is time typing. Exactly. Does it count when you're reading? Yes, when you're reading it, for sure. So it's like when you have a tab open. Not if you export a file and you're just reading it in Preview. Yeah, yeah. No, I mean, to some degree. But people were asked to estimate, and I'm sure that they didn't include, oh yeah, I let my agent cook overnight for eight hours, or I fired off one prompt.
I came back and it did, you know, METR's 15 hours of software engineering in one prompt. These things aren't captured. 1.5 hours of prompting generates a whole lot more tokens and valuable output in 2026 than it did in 2023. And so there's this divergence between the actual time spent and the useful output and impact. And the biggest thing was a massive divergence in the expected employment impact. Basically, 63% of firms still expect no impact from AI, and that completely goes against everything everyone's saying in Silicon Valley. There's still a lot of optimism among managers that AI will create more opportunities and new jobs, even if some jobs become obsolete. There are definitely firms within the sample that are projecting headcount decreases, but my read on this data is that the tech talking point about 50% of white-collar work going away is not a broadly held belief among average business leaders. Now, they might be wrong. I do think AI progress is pacing way ahead of public expectations, and most managers are months behind when it comes to understanding frontier capabilities. The bigger takeaway for me is just that this survey may be somewhat self-reinforcing. I mean, we talk to folks all the time who come on the show and say, maybe it'll be good, maybe it'll be bad, but everyone thinks it's going to have an impact. And that's not true broadly, which is very, very interesting. So no one in tech has a strong recommendation for proactive steps to prevent a collapse in white-collar work yet, but plenty are sounding alarms. And if this survey becomes an excuse for executives to slow adoption, they might get outmaneuvered by faster-moving competitors, which is actually good news for startups. And I'll close by thinking about the nature of polling and how you actually get stronger data on AI adoption. I was thinking back to the presidential cycle.
So during the presidential election, pollsters would call people more or less at random and ask them, who are you voting for? And a lot of people would lie, or wouldn't say, or wouldn't pick up the phone if they were voting for a particular candidate. And so the polling numbers did not wind up matching the final election results very closely. And there was this story about neighbor polling, which was more effective: instead of calling someone and asking, who are you voting for, the pollster calls and asks, who do you think your neighbors are voting for? Who's more popular in your community, on your city block, on your street? That wound up sort of removing the stated-preference problem, the, you know, am I on the hook, do I want to tell this pollster who I'm voting for? And it wound up increasing accuracy. So I'd like to see a survey of AI adoption using this technique. Imagine asking the CEO of Nike, how much AI do you think Adidas is using? I don't know how much more accurate that would be, but it would certainly be entertaining. And I think there might be something more revealing there, because CEOs have this big incentive to say, we're using AI, we're using everything, but if you ask them about their competitors, the data might look very, very different. Anyway, we should watch a little bit of a clip from the State of the Union, because Donald Trump addressed some of the energy production question with regard to how hyperscalers will be offsetting the impacts. Before we pull this up, let me tell you about Restream: one livestream, 30-plus destinations. If you want to multistream, go to restream.com. They should have restreamed the State of the Union. And let me also tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange.
So let's head over to the State of the Union. Many Americans are also concerned that energy demand from AI data centers could unfairly drive up their electric utility bills. Tonight I'm pleased to announce that I have negotiated the new Ratepayer Protection Pledge. You know what that is? We're telling the major tech companies that they have the obligation to provide for their own power needs. They can build their own power plants as part of their factory so that no one's prices will go up, and in many cases prices of electricity will go down for the community, and very substantially down. This is a unique strategy, never used in this country before. We have an old grid. It could never handle the kind of numbers, the amount of electricity that's needed. So I'm telling them they can build their own plant; they're going to produce their own electricity. It will ensure the companies' ability to get electricity while at the same time lowering prices of electricity for you, and it could be very substantial for all of you, cities and towns. You're going to see some good things happen over the next number of years. What's your reaction to that? I think it's a good start. I don't know that it will quell any of the fears around data centers, just given that people see the potential for this massive structure going up and have so much fear about it. And again, I think it's clearly going to be necessary to continue to build data centers in heavily populated areas. But how would you rank the fears currently? Because I'd put my energy bill goes up, and that puts pressure on my income and ability to live my life, pretty much at the top. Then the water thing felt secondary but also important. Then there's the existential fear of doom and apocalypse. There's also job displacement. And then there's also just, I don't like the slop and they're stealing IP. So that was kind of the ranking.
Like, you can oppose data centers and be like, yeah, actually my electricity bill went down, but I still oppose them. People don't like that Harry Potter is in the pre-training corpus, and so for that reason they're against it. I would rank the electricity bill going up at the top, because it's pain today, and it's so real. There's fear, and it's easy to imagine. And then there's fear around the job loss narrative, which is sort of secondary, and opposing a data center in your local area feels like a way to have some agency around that overall job loss concern. Yeah, yeah. I mean, I think this is a classic case of: build a data center on my freaking forehead. Let me tell you about Phantom: cash. Fund your wallet without exchanges or middlemen, and spend with the Phantom Card. Let me also tell you about Figma. Ship the best version, not the first one, with Figma. Introducing Claude Code to Figma. Explore more options, push ideas further. So I think the job loss thing is super real. AI is going to get blamed even if tariffs drive high unemployment. If people lose their jobs, AI is the perfect scapegoat, both for executives and for people frustrated with the job market. Yeah, it's like, oh, my business isn't doing poorly right now, I'm laying off people because I'm getting so much benefit from AI, the stock should actually go up, we're more efficient. There's going to be a lot of that, but it does feel like it's a little bit early for that. Whereas there are a lot of people who can just hold up their power bill and show you year-over-year increases. And if that goes away, and people don't feel that anymore and don't have that evidence to share, I think that take gets debunked pretty quickly, and that actually removes a really important piece of the back-and-forth that's happening.
I don't know, it seems like a pretty easy give from the hyperscalers to build more power. It was called out very, very early: if this is a bubble, how do you get a silver lining out of the bubble? The silver lining out of the dot-com bubble was a lot of dark fiber. There were a whole ton of projects, like Global Crossing, to actually build out the Internet, and then the Internet just became really, really cheap, and a whole bunch of new companies were able to emerge on top of it because that infrastructure had been laid. You could see the same thing happening here: oh wow, we overbuilt on the energy side, we actually didn't need that much energy for data centers, maybe Jevons paradox doesn't hold, models get cheaper and commoditized, whatever, something happens. I'm not a super believer in that, but at least in that scenario you're like, okay, well, my heating and cooling bill went down. That's a silver lining. What do you think? Yeah, kind of similar, I would say. I mostly disagree with the idea that rising energy prices are the main reason to be against AI, because the rational thing to do then is to say, okay, before you build a data center in my community, you have to build a power plant, so my energy prices go down. And no one's doing that. No one is campaigning. If you look at the protests and stuff, they're not saying, please build a power plant first. They're saying it's going to destroy the environment, or the water stuff, or you're going to take all the jobs. We need to send you to that New Brunswick, New Jersey protest. Build the nuclear power plant first. Yeah, I guess no one is saying that, right? They're against all the environmental stuff. Yeah. So I think it's much more on, basically, job loss, or, oh, the AI is stealing the IP of...
Of Disney or whatever. Yeah. There needs to be more polling on the question of what's actually driving the protests. Anyway, happy Nvidia Day to all who celebrate. Except the bears. Forget them. He's getting fired up for Nvidia earnings. It's going to be a fun one today. How is Nvidia doing so far? Are people optimistic? Up 2% today. Hard to read too much into it yet, but we will find out soon enough. Brad Gerstner got a nice shout-out during the State of the Union. Total Gerstner victory. Wow, he's there. No way, he was there. Trump started looking at him, but then the camera... I guess they couldn't find him on the stream that I was watching. Looks a little bit better. There we go, we can see him now. There he is. Gerstner, champion. What a great project, Invest America. Excited for it. Anyway, let me tell you about AppLovin: profitable advertising made easy. With Axon, get access to over 1 billion daily active users and grow your business today. All right, so now the real news. This is tearing up the timeline: a new Guinness World Record. And I want to ask John if you think this should actually count. So let's pull up this video. This is a Chinese hypercar going for the drift record. That is crazy. But here's the thing: he doesn't actually pull out of it, does he? He kind of just U-turns. It's like a really fast U-turn. I think this counts as a drift. That's definitely a drift. U-turning counts. If you saw that car going by, you'd be like, wow, that's drifting. It's drifting across the cement. That 100% counts. I've never heard of this company. This is called spinning out. It's just crashing with style. It's falling with style. The Hyptec SSR, formerly Hyper SSR, is a high-performance all...