LIVE CLIPS
Episode 3-19-2026
That's going to be application specific. So I'm not afraid of that at all. And I also think that, you know, we're talking about agentic applications, I think, particularly for small, medium sized businesses, and some large businesses, they're not going to have that skill set. It's not going to be natural for them to do that. And I think kids coming out of school today that have taken some Python, don't have to be comp sci majors, but have done agentic AI, like, when I go talk to schools, that's what I tell them. You know, get into cloud, you know, teach yourself all the agentic stuff, and then go to small businesses, because they're not going to understand how to do any of that shit at all. Yeah, that makes a ton of sense. What, what advice are you giving to friends, portfolio companies, etc.
Button. It's fine. You don't need to email me. In other news, okay, Rolls-Royce has scrapped plans to go all electric by 2030 as, quote, drivers prefer V12 engines. Would you look at that? I mean, and this is just total shock. Total shock. Yeah. Total shock. Yeah. Drivers totally had to experience, you know, having EVs forced upon them for the last few years to know that they preferred combustion engines after all. Of course. I'm kidding. I think a lot of people were just sort of saying this over and over. Manufacturers were not listening. Everyone said it. Elon has been saying the Roadster reveal will blow your mind. If it has a V12, people are going to be going crazy. If he drops a V12, that would completely break the Internet. It would be incredible. Anyway.
What else is Anthropic doing? They're hiring for a policy manager who will be in charge of chemical weapons and high yield explosives. This reads like you're going to be building high yield explosives, which sounds like an Andrew job posting, but it is in fact, for a policy manager who will be hopefully stopping people from. No, no, no. I think I read this as somebody whose job it is to decide how Claude is used to. To create chemical weapons and high yield explosives. I think it's. I don't know, I think it's probably like this person decides, like, where's the edge? So if you're asking like, okay, I have a firework and I want to make sure it doesn't go off, like, should I throw it in the trash or put it in the recycling or take it to a special place, like, Claude should answer that. But if you go to it and you ask it, like, how do I build the C4 or. Or something like that, there's all these policy edges where if you're talking about Counter Strike and you say, let's plant the bomb, it shouldn't flag that as, okay, you're actually trying to plant a bomb. It's like, no, you're asking about a video game. We know how to interpret that appropriately. But there needs to be a human in the loop to decide where that frontier is and where that particular.
I mean, speaking of Arnold, are you, it sounds like you're in a good mood, you're optimistic about things. Does the question of AI doom come into your mind? These runaway robotics, are you worried about that at all? No, not in the least, for the reason I just mentioned, right? In order, like right now, LLMs are basically bimodal with some video, right? Where it's almost all text and pictures. And with some video you can't, you can't model the world with that. You just can't. AI right now doesn't understand the consequences of its recommendations. It has no idea what happens next. A two year old kid with a high chair and a sippy cup knows if it pushes the sippy cup off the high chair, Mom's coming running, and the kid's going to start laughing at Mom, right? Large language models don't understand that. Every time, every time, every time, right? It's the funniest thing ever, ever, ever. And it's hysterical, you know, unless you're Mom. Right, but, but you get the point, right? The large language models we have today can't do that. And so we have to evolve to models that can capture the world and physics and deal with the latency of capturing, or not having access to, video that you can't see. And so you have to try to model that. And not only does that take up a lot of processing power, but it takes up a lot of bandwidth, as I mentioned before. And so the Terminator taking over, I just don't see how it's going to happen. Now, you can have, you know, localized brains for, for military applications, and power will get better and manual dexterity will get better, all that. But that, that's not going to allow you to take over the world, right? That's going to be application specific. So I'm not afraid of that at all. And I also think that, you know, we're talking about agentic applications. I think particularly for small, medium sized businesses and some large businesses, they're not going to have that skill set. It's not going to be natural for them to do that.
And I think kids coming out of school today that have taken some Python, don't have to be comp sci majors, but have done agentic AI, like when I go talk to schools, that's what I tell them, you know, get into Claude, you know, teach yourself all the agentic stuff and then go to small businesses, because they're not going to understand how to do any of that.
Interesting, interesting. You mentioned maybe a garage not existing in the future. Is that a way to say that you're excited about self driving cars? What are you thinking is going to happen there? I played around. I have a Tesla and I upgraded for a couple months and it terrified the shit out of me. Not that I didn't trust it. Oh my God. Because when you're going 25 miles an hour, it's no big deal. You go on a highway and you're going 70 miles an hour and there's a median right there in the middle of the highway. I was shaking. I was like, I don't, you know, Elon's cool for whatever, but you know, I ain't trusting him that much, right? You know. And I'm like, are you in Mad Max mode? You know about Mad Max? No, no, no, no, no, no, no. You know, but you can set a limiter where it's like, how much above the speed limit are you willing to go? And if the speed limit is 65 or 70, you know, you gotta go the speed limit. And it was scary as fuck trying to go 70 miles an hour. And I don't want to be there when somebody paints some adversarial. You know how there's graffiti in the weirdest places? Wait until there's adversarial graffiti. Oh yeah, yeah. Like the Road Runner cartoons, where Wile E. Coyote paints a brick wall to look like the road, or paints the tunnel. That's like ridiculous shit, right? And then you're slamming into it. There's somebody somewhere trying to figure out how to fuck up self driving mode, right? For sure. Because it's just taking video input. And you know what? It could be some pattern. And all of a sudden you're seeing this pattern on medians or overlaid on stop signs or whatever, because, you know, somebody's got to do that, because it's just too easy not to. Hyper realistic camo wrap gets it confused, you know, eventually, whatever, right? You wrap your car in camo and if the camouflage is effective, you're going to confuse the AIs. It's a risk. And there could be a predator, right?
Alien versus Predator. Predator could show up, right? Arnold isn't there to save us. It's possible.
And I'm going to tell you what, in terms of, I'm going to take it down a different path because I'm contrarian on this. And that's with robotics. I think everybody's making this push for humanoid robots. I think they might have a five year lifespan and then they'll fail miserably. Maybe ten. Yeah, you mean the companies or the device or the individual physical robots or both? Right. Because I think everybody defaults to, well, we live in a human world and humanoids will take the place of humans for various functions, particularly in the home. And I think there's just no chance. I think if you look at warehouses and what Amazon does, they're not humanoid robots carrying boxes. They're robots that are designed to fit the environment. And I think, you know, I've heard people say, well, a house is a house, you need a humanoid. I think houses are going to be redesigned completely so that whatever the optimal robot is that allows it to simplify the house, that's where houses will go. So I'll give you an example. If we had robots that looked more like spiders, that, you know, had the ability to carry and lift things, were more like ants, I guess, maybe. Right. But you could create a house where the pantry and the refrigerator and the washing machines were hidden behind the garage, if you even have a garage. And that way you could redesign it so all the living space was for people. Because you know that the robots aren't going to be full form humanoids. They're going to be whatever the optimal shape is, and it's kind of co-design. You design the house to fit the robot and you design the robot to fit the house. And I think you could go expensively on both. You know, the humanoid founders will tell you, you know, how do you solve stairs? Right. Like it doesn't work for wheels. But if the robots are really great, people could put in like a mini robot elevator, right? Like if it's on wheels, like you can just put it. Yeah, like you see in the old houses.
Dumb. Wait. Dumbwaiters, right? Where there's just the thing where you put it in there and you pull it up and it goes up to the next floor and somebody opens it up. You're going to see a mechanized equivalent to that, where it recognizes the little ant robot that's coming up and it opens up a little door sized to whatever it needs to carry or whatever, and then it goes up the dumbwaiter, does its thing on the next floor, the next floor, the next floor, and does what it needs to do. I don't think stairs are an issue at all.
Yeah. What about sports? I've seen some robots playing tennis. They're going to be playing basketball soon. I don't think I'll be watching robot basketball. But how do you think live events, sports, basketball will change over the next decade? I mean, maybe for the referees, like you've seen in tennis, but that's it. I think in reality more people will want to go to in real life events than before, because if you're just managing agents, looking at output, looking for exceptions, you're going to want some human touch, right? You're going to want to be able to engage. And I think that that's really, really important and I think that's where sports will grow. I mean, you're starting to see that now with, you know, what happened with the Olympics. The World Baseball Classic, you know, became much more popular because people want that disengagement from all the stress that's happening right now.
Yeah. What was the anatomy of? I think we talk about what makes for a great company all day long. We'll talk about that throughout the show today. But what makes for a company that will put on a particularly captivating Shark Tank appearance? You gotta remember, it's for tv. You have to be entertaining. And if it's not entertaining in some shape or form, it's just not gonna work. Regardless of the quality of the business, if you don't have charisma, you don't have a compelling pitch that's entertaining, it doesn't matter. You could be selling dollar bills for 50 cents and it would fail. Interesting. How important is, like, the visual component? There's a lot of physical products, but at a certain point it gets too big for the studio. How important is it, like, the physical product presentation? Well, the good news is the producers will work with you on that, and they make them practice over and over and over. And of the hundreds, if not thousands of pitches I saw in 15 years, we only had one. Really just. Just choke right where they couldn't spit it out, maybe two, which is amazing. It's a testament to Mindy and the producers there how hard they prepare them. Yeah. How do you think Shark Tank and shows like it will change in the era of AI video generation? Endless content. Is it stronger than ever because it's a known brand or is there some weakness there? How do you see that playing out? It all depends on platform. It's like, you guys, right? It really just depends on reach of the platform and quality of the product. I think for Shark Tank, it's not going to have to change because of AI or technology, simply because you're really communicating to a family audience. And the message you're communicating isn't, hey, here's a bunch of businesses that are great. The message you're communicating is that could be you on the carpet. The American dream is alive and well, and so that's really what makes Shark Tank successful, not the quality of the businesses.
From our merch. Like just T shirts and stuff? Yes. Oh, yeah. Merch is crazy. And then IP, too, right? All the DMCA takedown notices, because they're just scraping and, you know, reposting all that shit. Right? That's easy to fix if, you know, someone has the guts to do it. What is the anatomy of using your likeness? Once you've made an investment, what does the. The best relationship look like? I imagine it's very open and transparent, but. But I imagine that anyone who's been associated with you at all is trying to, like, slap your face next to their product and, like, flip it all over. And maybe you haven't invested yet and you just said, oh, it looks nice. And then they're like, clip it. He said it looks nice. You know? Yeah. I mean, it depends on the company, you know. Usually I don't even care. But two things. One, you know what Synthesia is? Synthesia.io? Yeah, yeah, I think so. Yeah, they've been on the show. Yeah, yeah, yeah. They have the avatars there. Victor and all those guys. Well, I was their first investor, so I send them there. Okay. Yeah, yeah. Dog, dog. That's a unicorn. Come on, dude. And I gave them, like, a lot of money. I gave them a lot of money. This was 10, 12 years ago. Way ahead of the curve. Yeah. There we go. Okay, so Synthesia. Yes. And so the. So I'll push into them or, like, I'll just fuck around. Like you saw with Sora. So I just. I put one picture of me out there, but I was playing with it because I want to learn all this stuff. And Sora had this thing where you can put conditions.
It, you know, sort of an, effectively a theological choice where it's really not, it's just a technical issue. Economic opportunities on the moon. Yeah. That you think are interesting. We've heard about regolith and maybe like a mass driver. But there's so many opportunities. Obviously, besides tourism, the most obvious one is helium-3. So this is pushed out by the sun. There's something like 18 kg of it in the United States, and it's a strategic resource. It's a byproduct of nuclear weapons manufacturing. Now, what are you going to use it for? Well, you can use it for a couple of things. Clean fusion energy. It's one of the few fuels that doesn't create radioactivity as a byproduct. So that's obviously desirable. Right. The second part is for quantum computing, where a lot of these have to be near zero. So this is one of the few substances that you can cool to near zero temperature, and the other is at absolute zero in temperature. So there's a huge demand on that. Now, once you start mining it, does that price collapse? Probably to some degree. The second one that I see is this rare earth mineral kind of deposits, and we honestly don't know enough about what's up there. We have a pretty good idea from the Apollo missions, but there's probably a lot of rare earth deposits. My guess, and I'm no geologist, but I would guess there's probably some rare earth minerals that, you know, mining from the moon would be more palatable than tearing up our beautiful earth, as long as we can solve the transportation problem. It all comes back to the rocket, by the way. Yeah, makes a lot of sense. Well, thank you so much for taking the time. Yeah, great to meet you. Congratulations and we'd love to talk to you soon. Yeah, come back, have a good rest of your day. Let me tell you about Cisco. Critical infrastructure for the AI era unlocks seamless real time experience.
All the different newsletters and emails that I get, just that try to keep me up. How are you processing the flood of cold emails that appear to be thoughtful but are AI generated? Actually AI generated. Because you are notorious for your response rates and getting back to so many people that have reached out, but it feels like maybe an impossible task now. No, I do what everybody else does. I bought a Mac Mini. You did? You know, yeah. Yeah, for sure. I'm still learning. So you're just like, you hit me with AI, I'll hit you with AI back right away. Right back. Right. It's not even like the cold emails, because that's pretty obvious. Yeah. You know, it's pretty easy to see. It's people subscribing me to shit. And, you know, the good news is Gmail has an unsubscribe button, so you just gotta train it to hit the unsubscribe button. Then I just review it and all that shit. So it's still a work in progress, but at least I have a path. Well, the issue with. With us is that historically, if you had a podcast and somebody wrote you an email and said, hey, I really appreciated this moment where you're talking about this one thing. Totally. You're like, oh, they actually listened. You could actually tell, like, hey, at least they. They pressed play. And at least they found a moment. But now AI just does it instantly. So there's no way to clock whether. Whether something's real or not. Yeah. And that's okay. Right? Because they're going to. The response rates most likely will be so low. We're in that trial and error phase where people are like, we're going to try it, see what happens. Maybe we'll get lucky. And then they'll get bored and then it'll drop off. Yeah. Is owning a Mac Mini a green flag for entrepreneurs these days? Talk to me about what.
I was gonna try and tease something up, but you really delivered. Talk to us about both of your agreements or disagreements, maybe. I'm sure you've been debating the future of enterprise software, the future of what's going on, the SaaS apocalypse, public companies, just the nature of business changing. What remains true now that hasn't changed and won't change for decades, versus what maybe has changed in the last few years and needs an update in terms of how people think about how businesses grow, how businesses flourish when they're working in the technology industry. Yeah. I'll let Pat start and then I'll give you my perspective, because I've answered this question about 17 million times in the last three and a half years at Workday, and I have a different perspective than probably most. Yes. So, you know, it's funny, the first thing that comes to mind is a line that I learned from a man named Carl Eschenbach, who was a partner at Sequoia from 2016 to 2022. And the line is: people do business with people. And I think there's a foundation model maximalist point of view that the labs themselves are going to do everything in every nook and cranny of the economy. Yeah. And I just have a hard time imagining that version of the future coming to fruition, because people do business with people. And I think that between a job to be done and the raw capabilities of a model, there's a lot that needs to happen to, like, shape it into the path of least resistance for you to travel down as a user to get to the right answer with the least amount of pain. And there's probably a person in between who's going to do that work. And as a customer, you want to do business with that person. And so I think people do business with people is going to remain true. The shape that that takes in terms of what the businesses are is probably going to change.
I think in the world of software, you know, the first wave of the on prem to cloud transition was this transitioning of systems of record. You know, the workdays and salesforces and servicenows of the world. The second layer on top of that was the systems of engagement. You know, those systems of record might own the core database, but then there are a bunch of different workflow applications that reside on top. I think what we're going to see with this wave of AI software is this third layer on top of those, which some people call the system of intelligence. I don't want to call it that. It's the layer that does the work. You know, it's the agents getting deployed that may or may not need those workflows beneath them, but certainly need access to everything that's sitting in that system of record. I think that's what we're going to see. And as a result, I think those system of record companies are relatively safe. They may not catch a lot of net new workloads, because a lot of the net new workloads might go to the AI native companies, but I think they're overall pretty safe. I think some of those workflow based companies in the middle are in trouble, because they're neither the system of record nor the agentic capability that's getting deployed. And so they'll have to figure out how to become that agent harness, so to speak, for whatever job to be done. And then I think those AI native companies on top, the basic thing they need to achieve is figure out the context of this organization, figure out the guardrails, come up with some sort of an eval framework, come up with some sort of value function, basically wrap all the context around the capabilities of the foundation model to achieve the outcome that the business person wants. And so I think there's a very important job to be done for that new layer of companies. And again, people want to do business with people.
Like there's a lot of value in having somebody you trust take your hand and lead you into the AI future.
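The three-layer picture sketched above (system of record at the bottom, workflow apps in the middle, an agent layer on top that may bypass the workflows but still needs the record) can be illustrated with a minimal Python sketch. All class, method, and task names here are hypothetical, invented for illustration; this is not any vendor's actual API.

```python
# Illustrative sketch of the three software layers discussed above.
# All names are hypothetical, not any vendor's actual product or API.

class SystemOfRecord:
    """Bottom layer: owns the core database (the Workday/Salesforce role)."""
    def __init__(self):
        self._records = {}

    def read(self, key):
        return self._records.get(key)

    def write(self, key, value):
        self._records[key] = value


class AgentLayer:
    """Top layer: does the work. It may skip workflow apps entirely,
    but it still needs read/write access to the system of record,
    and it wraps the model's capabilities in org-specific guardrails."""
    def __init__(self, record, guardrails):
        self.record = record
        self.guardrails = guardrails  # context/policy check around the model

    def run(self, task, proposed_value):
        # Check the proposed action against the guardrails before writing
        if not self.guardrails(task, proposed_value):
            return "blocked"
        self.record.write(task, proposed_value)
        return "done"


sor = SystemOfRecord()
# Toy guardrail: refuse any action with no concrete value attached
agent = AgentLayer(sor, guardrails=lambda task, value: value is not None)
print(agent.run("close_ticket", "resolved"))  # prints done
print(sor.read("close_ticket"))               # prints resolved
```

The design point is that the agent layer writes straight into the system of record, which is why the workflow companies in the middle are the ones squeezed in this picture.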
Unlimited number of Internet radio stations and started streaming until we sold it to Yahoo. That was the most obvious thing I'd ever seen in my career. What was the domain name negotiation like? Jordi's a big fan of great domain names. Great question. Great, great, great, great question. So when we started, it was AudioNet, and I just registered it. Nobody had it. Yeah. But then, audionet.com or audio.net? No. Audionet.com? Okay, I like it. Yeah. Because we were just doing audio in 1995. Yeah. And then by 97, we started to do video, and AudioNet wasn't going to cut it. And so I found broadcast.com, because we wanted to broadcast everything and anything, and found the guy and paid him $8,000. And he was thrilled to get the $8,000. Wow. Yeah. This is 1997. Did he ping you after that? Did he ping you after? Yes, he did. Yes, he did. But wait, it gets better. But wait, there's more. Right? And so I'm like, oh, shit, this is nothing. Right. It's an automatic traffic generator. And so I started going out there and just glomming up, just grabbing all kinds of URLs so that we could put content on them and then drive it back to broadcast.com. So, literally, I own finalfour.com. I own baseball.com, I own sandwich.com. You name it, I bought it. I would buy, like, just packages of URLs, right. And this is because people were just going to their browser and being like, sandwich.com, and they would type it in. Google didn't. Exactly. Exactly. Everything was a portal, right. Everything was a front door. And so I was like, anything that generated traffic. And I've done it since. Like, I own mrpresident.com. I own democracy.com. All kinds. He privatized democracy. He took democracy private. I was worried. That's the most American thing I've ever. I've ever heard. I love that. That's incredible. Okay, the last question for the chat, and we'll let you get back to your busy day. I want to flip it around. What's up? What's a business.
China that had like 20 different companies that were reselling it all and they were making a killing. Oh yeah, they were. Oh, yeah, yeah. Is it still possible to create a widget and make like $100 million from it, or do the clones come? Because I know the guy who made, like, the fidget spinner, like, his claim to. Right, right. That's cool. But. But he didn't. It got knocked off like that. Yeah, I mean, it was the kind of thing that was a hit product. All on Amazon, all on. Right. So I started talking to some Amazon resellers, like, mid-'24, because I was just curious about some things. I see some things on. On X. And as it turns out, if you're an American seller, it may have changed, so correct me if I'm wrong, if you're an American seller, you can have one company that sells on Amazon. Right. For. For your. But if you're Chinese, there's no limit. Yeah. And you don't even have to have a nexus. So if you're that American company and you're making sales and making money, then you have to pay taxes and define your nexus and, you know. Oh, so you're just screwed because you. Because you're screwed. Yeah, yeah. Because these Chinese companies, to this day, as far as I know, these Chinese companies don't have to have a nexus, don't pay the taxes even though they're supposed to. Right. You can literally have a Chinese bank account and Amazon will send the money right to that Chinese bank account. And I was proposing to these guys and talking to some legislators at the time that Chinese companies should have to post a bond before they can sell the product, and post it on a website, whatever.gov, so that, you know, the fidget spinner guy could come in and say, you know, we have an agent now that continuously checks to see if there's a knockoff of their product and then can challenge it. And then at least there's that $10,000 or $25,000 bond that offsets the risk for that fidget spinner.
Okay, I know, I know one. I know one widgets company that bought the next five most popular widget companies in the category that were knocking them off. Yeah. And they just continue to operate them. Yeah, but they have enough ranking on Amazon and they have scale control. But, but it's just wrong. It's just wrong. Yeah. Because any country, whether it's China or Vietnam, if you're outside the United States, you immediately have a cost advantage, not from the manufacturing, but just from an IP and an Amazon cost perspective. Why in the world is it cheaper for a Chinese or Vietnamese company to sell on Amazon and easily knock off a product than it is for an American company to sell the original product? That makes no sense. And legislatively, you could fix it in a heartbeat. You gotta post a bond, a $25,000 bond, depending on the size of the market, maybe more, and then give everybody 90 days to check it. And all of a sudden, the whole industry changes and American manufacturing skyrockets. Because that cost of knockoff isn't just about the cost of losing sales. It's the administrative, the legal cost. There's just so many nuanced things that you have to spend money on. We have, we have knockoff issues. And like, we.
Yeah. At the same time it feels like there's almost an opportunity for. Not to bring it back to targeted advertising. But AI can tell me, okay, my favorite team's in town, I should go to this particular game. It should remind me at the right time instead of just signing up. Yeah, but that's not AI. Yeah, that's not really AI. Yeah, that's just targeting. Right.
Rushing it. Yeah, fantastic. But give us a sense of scale. Give us a reminder of the strategy, reintroduce the company.
Market clearing order inbound. Vibe put it. I see multiple journalists on the horizon. And mine. You're watching TVPN. Today is Thursday, March 19, 2026. We are live from the TVPN Ultra Dome, the temple of technology, the fortress of finance, the capital of capital. Let me tell you about Ramp.com. Time is money, save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more. Don't test me with the soundboard. Don't go soundboard for soundboard with me. You know I got you. That's a narrative violation. We're having some fun. We're out of control. We got a great show for you today, folks. Carl Eschenbach is back at Sequoia. We love to see it. We had the pleasure of chatting with Carl a couple months ago and I've always been a big fan of his. But we'll let him introduce himself. Let's pull up the Linear lineup. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents, and you should be too. We also have Mark Cuban coming on the show. The Cuban. What a fantastic return to form for us, because the first time we had him on the show we discussed. And we can talk about Cuban in a second, but of course we have our Lambda Lightning round, and Alex Conrad from Upstarts Media is joining as well. Anyway, last time we had Mark Cuban on the show we were debating ads in LLMs. And since then we've gotten a bunch of data points about ads in LLMs. And I think that some of his takes have probably aged well. Some of our takes have probably aged well. And it'll be an interesting time to reevaluate what's actually happening. There are a lot more data points. I don't know, John. We said that ads would be fine. Well, and now the world is ending. Yes. Now we had. It's not because of the ads though. It's not because of the ads. It is. It is much more complicated than that. But here's a white pill. Samsung is investing $70 billion to advance their fab capacity. They're getting back in the AI chips game.
They've always been in the AI chips game. So, brief history of Samsung. You probably know them from the phones, from the TVs. They are, of course, a major player in HBM, high-bandwidth memory. They are a massive company: over a quarter million employees, close to touching $1 trillion in USD market cap. They pull in around 200 billion USD a year in revenue, maybe 250 billion this year. Really good. All that's USD. When you look up Samsung, you get South Korean won, but I like to think in USD because I'm an American, and it's actually kind of complicated thinking in foreign currency. They're the global leaders in memory and OLED displays as well. So a lot of the displays that you see in other electronics, even if it has a different brand name, it's still Samsung actually making that OLED display. But they're second in smartphones to the iPhone and Apple, and they're second in the semiconductor foundry business to TSMC. Semiconductors still make up 30 to 40% of their business, and they supply HBM to Nvidia for the H100 and Blackwell systems. So it's not like they're sitting out the bull market. They are doing great, they are definitely participating. They're incredibly important in the AI buildout. But if TSMC is bottlenecked, and TSMC is sort of risk-off and not going to be guiding to insane capex numbers while every American hyperscaler is, well, that creates an opportunity for Samsung. And so Samsung is stepping up and announcing that, hey, we're going to put another 70 billion to work on this particular business. So Tesla has been working with Samsung on the foundry side in AI for a while. Now, Samsung's never really been on the frontier with a direct competitor to the H100 or the Blackwell chip. That's been more of AMD's game, and AMD also fabs at TSMC. So there hasn't really been this neck-and-neck battle between TSMC and Samsung. But you can do AI inference on a Samsung chip.
And we know that because Tesla went to Samsung years ago and said, we need a chip that can take in pictures from the road and decide where the lines are. They want their chips with the dip. They want their chips with the dip. And that's what Samsung does too. That's all you know. And so the FSD system, if you have a Tesla, you might be familiar with HW3, hardware 3, that has been deployed into millions of cars. And it was fabbed on Samsung's 14-nanometer process, which is a lagging node. We're not in the 3-nanometer, the crazy frontier stuff, but it's working and it's on the road. And according to a US regulatory probe, there were 3.2 million Teslas on the road in America with FSD systems that were basically all running Samsung chips inside. And so, to be clear, Tesla, just like any foundation model lab, has training and then also inference. They're a little bit different than many of the labs that you know and love, because they do training in a data center using what's called the Dojo chip, and that is fabbed at TSMC. So they train the system: they take all the data in from every Tesla camera, every road, all the information that they have. Every time there's a disengagement, that's feedback to the reinforcement learning system. It says, hey, we were in FSD mode, but then someone grabbed the wheel or someone stepped on the brakes. You made a mistake. Understand what happened to get you to that point where you made that mistake. And so all that data gets collected, a Tesla data center runs it on these Dojo chips, they do the training, and then they deploy the model onto the Samsung chips in the actual cars. So if you're driving a Tesla, you have a Samsung chip in there, and the model it runs was trained on TSMC chips. And so the Dojo D1 is one example of their training chip that was fabbed at TSMC on 7 nanometer, and it's completely separate from the in-car FSD chips.
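The disengagement-to-feedback loop described here can be sketched roughly in code. This is purely an illustrative guess at the data-labeling step, not Tesla's actual pipeline; every name and structure is hypothetical.

```python
# Hypothetical sketch of a disengagement feedback loop: frames leading up
# to a human takeover get flagged as corrective examples for the next
# training run. All names and structures are illustrative, not Tesla's.

def label_disengagements(drive_log, window=3):
    """Mark the `window` frames before each disengagement for review."""
    labeled = [{**event, "label": "ok"} for event in drive_log]
    for i, event in enumerate(drive_log):
        if event["type"] == "disengagement":
            # the frames just before the takeover are where the mistake was
            for j in range(max(0, i - window), i):
                labeled[j]["label"] = "needs_review"
    return labeled

log = [
    {"type": "frame"}, {"type": "frame"}, {"type": "frame"},
    {"type": "disengagement"},  # driver grabbed the wheel
    {"type": "frame"},
]
labeled = label_disengagements(log)
print([e["label"] for e in labeled])
# → ['needs_review', 'needs_review', 'needs_review', 'ok', 'ok']
```

In a real system the flagged frames would feed a training job in the data center; the point is just that the takeover event itself is the label.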
So with the backdrop of Nvidia's massive GTC news cycle, they've done so much press around GTC and so many different launches, you know that Nvidia is just going to suck a lot of the air out of the semiconductor discussion this week. Out of the clean room. Out of the clean room, yes. Which is recycling all of the air every three seconds or something like that. So Samsung dropped this update. It was pretty quiet. We were actually struggling to find it. There was one Wall Street Journal article about it, but it has not SEO'd well. Maybe they need to do some more podcasts or something. But they did, in fact. I mean, Jensen's doing a whole fleet of shows and interviews. Podcaster says companies should podcast harder. Yes, yes, yes, yes. The solution to everything is more podcasting. Talking my book here. I think this is particularly important, especially this morning. The, I guess, the CCP put something out in the last 24 hours basically saying, hey, Taiwan is going to have an energy crisis due to the broader global energy crisis, so we need to reunify peacefully. There's an opportunity for peaceful reunification. But reunification, even if it's completely peaceful, and all the Taiwanese people just say, hey, we want to be part of China, they all vote for it democratically, that's going to be rough for the American chip-buying industry, if you're a buyer of chips that are fabbed there. And so having another chip on the board, metaphorically, to make physical chips is probably a good thing. You know, I was writing yesterday, I'm very excited about Samsung, very excited about Intel, very excited about all of the new fabs. The Gigafab. The Terafab is the one that Elon's talking about. Launches in five days. Oh, did he actually say that? He said seven days. Seven days ago. Wait, five days, like the plant launches? He just said Terafab launches in seven days. Okay, so I don't know what "launches" means. And Tyler, what's the lead time for an ASML lithography machine?
Yeah, I mean, it's at least, like, you know, three to five years. Yeah, it depends on which tools. So the Terafab will be ready in five days and the ASML machine takes years. Boost ASML. It's alien technology. It's possible. Elon's going up and back to space all the time. Maybe he got some of his own. An extreme ultraviolet lithography machine on the moon. On the backside. On the dark side of the moon. They just had them stacked up there in crates, potentially. So yeah, Samsung's been doing well over the last five days. Stock's up 11% during a time when the Nasdaq is down 2.2% and geopolitical tensions continue to rise. The compute bottleneck: we know it's important, we've been discussing this constantly, and it's going to be very constraining over the next few years. So every increase in capex in the supply chain is a step in the right direction. And so Samsung gets the first gong hit of the day. Congratulations, Douglas. Samsung making a big bet. Who else is making big bets? Cursor is making big bets. Before we talk about Cursor, let me tell you about Phantom Cash. Fund your wallet without exchanges or middlemen and spend with the Phantom card. And let me also tell you about Labelbox: RL environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams. So Cursor is out with Composer 2. Composer 2. It is frontier-level at coding, priced at 50 cents per million input tokens and $2.50 per million output tokens. They also have a fast version. They say, "We're able to significantly improve the model quality and cost to serve. These quality improvements come from our first continued pre-training run, providing a far stronger base to scale our RL." It performed quite well on, what is it, Cursor Bench? Yes. Which is a funny bench. Well, yeah, yeah. TVPN actually relevant. TVPN performs quite well on TVPN Bench, too. Yes. It's a little silly to design the bench and then publish the bench.
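For a sense of what those Composer 2 prices mean per request, here is a quick back-of-envelope calculator using the quoted rates; the 20k/2k token counts are just an assumed example workload, not Cursor's numbers.

```python
# Back-of-envelope cost at Composer 2's quoted prices:
# $0.50 per million input tokens, $2.50 per million output tokens.
INPUT_PER_M = 0.50
OUTPUT_PER_M = 2.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a coding request with a 20k-token context and a 2k-token diff
# (hypothetical sizes, just to make the arithmetic concrete):
cost = request_cost(20_000, 2_000)
print(f"${cost:.4f}")  # → $0.0150
```

So a fairly large coding request comes out to about a cent and a half, which is where the "10x cheaper than Opus" framing later in the discussion comes from.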
Your score on your own bench. But I mean, to be fair, they're putting GPT 5.4 High and Medium above them. So it's not one of these graphs that's just like, oh look, we made some arbitrary X and Y axes, and we're in the front top right corner, of course, because the axes are, like, good and cool. We're the only ones. TVPN Bench. Yes. Like, technology podcasts publish at least three hours of content every week. Yes, naturally. Exactly. Naturally. We are right at the top. Right at the top. Right at the top. And actually, there's no one else on there. Yes, but. But yeah, I mean, this seems fair. It is a little bit odd to read this because the cost is on the X axis and it's inverted. So the further you are to the right, the cheaper you are. Which makes sense, because people associate an X and Y graph with, you want to be in the top right quadrant, and they certainly are. And it does seem like, in terms of this Pareto frontier, you want to be on the frontier, you want to be pushing out across every single curve. Maybe if you are interested in sparing no expense, you'll go with the GPT 5.4 High or Medium model, and you can align Cursor to GPT. I'm sort of surprised that Opus is not doing as well on Cursor Bench. That feels surprising based on the general vibes around Opus 4.6. But Cursor has specific needs for specific customers. And I don't know, what else do you think is going on here? Yeah, I mean, the cost is really big. Like, this is basically 10x cheaper than Opus. Yeah. So I think also, you know, Cursor has kind of been, not really a dark horse, like, everyone knows about it, but in the coding race, everyone's like, okay, there's Codex versus Claude Code. Yep. But, like, you know, if you imagine that Claude Code and Codex are kind of these environments for getting a ton of really good data for training coding models.
Like, Cursor's had that for way, way longer than OpenAI or Anthropic. So you should imagine that, at least in the near term, they actually have really, really good data that they can train these good models on. And obviously this is a very specific model. They've said it: you're not going to write poems with this model. It's this very specific, almost point-solution model. Just, don't listen to them. Tyler, write a poem with the model. Poem Bench. Poem Bench. Yeah. I would be interested to know how many sacrifices were made, because at a certain point, I remember talking to an AI researcher, actually a semiconductor entrepreneur, who was saying that he actually does believe that importing, like, the Odyssey and the Homeric epics is key to humanoid robots learning to walk. Yeah. Well, I think if you look back at just the general history of machine learning and AI, the lesson is that big general models always beat these small specific models. Yes. But if you kind of zoom in on the timescale, you can still train, you know, GLM, some open-source model, on a very specific task like accounting or something, and you can hill-climb and you can actually make it better than the frontier models right now at that specific thing. But the question is, especially at cost. Yeah, especially at cost, yeah. Yes. Yeah, very much so. But in the long term, if you zoom out, what actually wins here, it seems like it's basically always going to be these big general models. And I wonder if that's true. I mean, we talk about this a lot, where the big general model outperforms the smaller model, but at the limit. If you were to think about a Python if statement, just flow control, that is truly deterministic. Yes. If you piped the same question of the if statement, like, is this number bigger than this number? You pipe that into 5.4, it's going to get it right all the time.
It's going to be very expensive compared to an if statement, which takes, like, no compute whatsoever. Right. But the if statement is 100% accurate. Unless there's some bit flip from cosmic radiation, it is deterministic. And so if you're in a world where the small model that you've built, the classifier that you've built, whatever machine learning pipeline or small model you've built, is actually functionally at 100%, well, then there's this upper bound where even the bigger, smarter model doesn't get you any benefit at all. So you're just purely in cost-control mode, I would imagine. But I don't. Yeah, that's reasonable. But I think this is not a great example, because coding is more open-ended. We're not at 100% saturation on coding. Yeah. I mean, look at the chart. The best-performing models are sitting at 65%, 60, 63, 64%. So there's clearly more room to saturate this particular bench. I'm actually super interested to know what goes into Cursor Bench at this point, because I feel like every benchmark I see is at, like, 100% now. But that's just from the old models. Legendary poster Senkalp says, all shits and giggles on that headline till Anthropic or OpenAI decide to cut off their access to Cursor. Referencing the Bloomberg article, "Cursor's taking on Anthropic and OpenAI with a new AI coding model." Would that matter? Like, at this point, if they have Composer 2 and it's a small model, but it's good at writing code and it performs well on Cursor Bench, and the Cursor users are satisfied with the Composer 2 model, and Cursor does get their access cut off. And when you install Cursor, you roll it out to your organization, you just get Composer 2. And you know what? Maybe there's taste that would pull you.
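The if-statement point above can be made concrete with a toy router: handle the deterministic comparison locally for free, and only escalate open-ended queries to a big model. The routing scheme and names here are our own illustration, not anything Cursor or OpenAI ships.

```python
# Toy illustration of the argument above: a deterministic comparison needs
# no model call, so a hybrid system might only escalate what a rule can't
# decide. Entirely hypothetical routing logic.

def is_bigger(a: float, b: float) -> bool:
    # 100% accurate, effectively free, fully deterministic
    return a > b

def route(query: dict) -> dict:
    """Handle deterministic queries locally; flag the rest for an LLM."""
    if query["kind"] == "compare":
        return {"source": "if_statement", "answer": is_bigger(query["a"], query["b"])}
    # anything open-ended would go to the big general model (not called here)
    return {"source": "llm", "answer": None}

print(route({"kind": "compare", "a": 7, "b": 3}))
# → {'source': 'if_statement', 'answer': True}
```

Once the local path is functionally at 100%, the bigger model adds cost but no accuracy on those queries, which is the upper bound being described.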
Yeah, we're just saying, at this point right now, I don't think we have any visibility into how much of Cursor's revenue right now is tied to using OpenAI or Anthropic models. Well, I think in some ways all their cost is. But is all their revenue? It depends on the perception of the user base, because the revenue might be, well, I pay for Cursor, I don't really care what they use under the hood. There's a lot of people. This was Ben Thompson's argument for a long time: for a lot of people, they just show up to ChatGPT, and they wouldn't care if the model was powered by Gemini, because they're just like, I just ask it a question and I get an answer. And so if you're a Cursor user, it's possible that you're in the same boat, where you don't really mind what happens under the fold. What else is going on here? George says, I'm hearing tons of complaints from Cursor customers at enterprise companies. A silent change put almost all models Cursor uses behind Max mode. Devs who used to manage to spread out monthly credits over a month see all of it used up in one to two days. Oh, interesting. They are furious and switched. It does feel like there's a little bit of an economic war here. Yeah. And this is what came up, you know, earlier this month around the labs sort of subsidizing. Yeah. They're not in an easy position, but they're such a talented team. Yeah. Well, you know who's great at Pareto-frontier-pushing models? Gemini 3.1 Pro. It's here and it has a more capable baseline. It's great for super complex tasks like visualizing difficult concepts, synthesizing data into a single view, or bringing creative projects to life. And let me also tell you about Graphite, which is owned by Cursor. Code review for the age of AI. Graphite helps teams on GitHub ship higher-quality software faster. Nikita says, we're rolling out summaries for articles now.
Just tap the summarize button if you want to know if it's worth your time to read it. And yeah, it's basically Grok. Turn this into a regular tweet. I am excited about the listen button. I've wanted this on my commute. There are so many moments where I'm like, I wish I could just have somebody read this article. I actually wound up doing this with a number of Will Manidis's long-form essays. I would copy them, put them into the ElevenReader app from ElevenLabs, and have it read to me in sort of a silly voice. A silly voice. It was a good time. Well, I was actually trying to use Grok in the X app to just take an article, paste it into Grok, and say, hey, can you read this to me? And it said, cannot find the post. It couldn't get the content. It is sort of crazy, reflecting on the fact that there is so much software out there. We talked about DoorDash: how can you completely reimagine the experience with agents on the platform? And there are so many different things that you can do to sort of re-architect what your product is if you've built a successful software company. But then there are all these little things where, yes, every product probably does need an LLM that can summarize text and expand it, and that's what Grok does. And also everyone wants text-to-speech, and everyone wants speech-to-text, and everyone wants, okay, if there's an image, I want you to have a text version so it's searchable. Like, I've complained about, if I see a funny meme on X and then I want to go search for it later: impossible. There's just no way you could ever do it, because that is stored in Twitter's database, or X's database, as, like, image number 762542. And someone's comment was just, "this changes everything," and I'm never going to remember what they said. But now you can run every image through an image model to understand what's actually going on and then potentially surface that in search.
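The caption-every-image idea being described can be sketched as a tiny pipeline: caption each image, then build a word-to-image index over the captions. The `caption_image` function below is a stub standing in for a real vision model, and the image ID and caption are invented for illustration.

```python
# Sketch of "run every image through an image model so memes become
# searchable." caption_image is a stub; a real system would call an
# image-understanding model here. All data below is made up.

def caption_image(image_id: str) -> str:
    fake_captions = {
        "img_762542": "dog in a hard hat pointing at a whiteboard",
    }
    return fake_captions.get(image_id, "")

def build_index(image_ids):
    """Inverted index: caption word -> set of image IDs containing it."""
    index = {}
    for image_id in image_ids:
        for word in caption_image(image_id).split():
            index.setdefault(word, set()).add(image_id)
    return index

def search(index, word):
    return sorted(index.get(word, set()))

index = build_index(["img_762542"])
print(search(index, "whiteboard"))  # → ['img_762542']
```

Production search would use embeddings rather than exact word matches, but the shape is the same: the opaque blob of pixels becomes text, and text is searchable.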
Now, that's going to be a long project to actually implement: make it fast, make it cheap, make it affordable, make it fit within the business model of X or whatever social platform is out there. And I mean, you can see Instagram struggling through these things right now as search becomes more important and as ranking becomes more important. Meta is already seeing the lift from machine learning applied to ad ranking and whatnot. But this is a response to, you know, every article people would post, people would always say, "Grok, summarize this." And now there's just a button. I wonder if this button will be gated by X Premium, because I recently learned that you can only ask Grok, like, at-Grok. Is this true? You can only do that if you're paying for X. And it's sort of underrated how well X has seemingly done here. I don't know how big the subscriber base is, but that was a crazy idea, to have a paid social network. Dalton Caldwell, formerly a YC partner, actually launched a competitor to Twitter back, like, maybe a decade ago that was paywalled only, and the whole pitch was better content, more of a Substack model, no ads. And he never really got it to perfect product-market fit. But it was an interesting idea. Yeah. I think it's because people are deeply addicted to X. It is very valuable to them to be on there, to participate. And the paid functionality, the way that it was marketed and the way that it generally worked, was that you were going to have a bad time on X if X was valuable to you and you didn't pay the $10 a month. Yeah, it was going to be significantly less valuable to you, probably. Depending on what kind of business you're running or what you use X for, it might be the equivalent of losing thousands of dollars a month of value. Or you could just pay the $10. Yeah, so it was a good trade. Yeah.
But it was also just weird how the targeting never seemingly got dialed to the point where you could actually target the CEOs of companies who were on X. Like, I mean, you see Travis Kalanick on X, like, replying to things. He's raising money, he's growing a business. There's a lot of value in advertising to him, because he's going to be picking a corporate card soon, or he probably already has, or might be in that market. He might be picking a payroll suite. There are all these things where, if you could deliver that to that audience, it would be incredibly valuable, and the CPM should be through the roof. But I think for privacy reasons, and for a variety of other reasons, really monetizing that long tail has been very difficult across every platform. So they've just gone with scale. And the products that have sold the most on social networks have been very broadly marketed. And the criticism that we saw from the Oscars is always, like, YouTube ads are generic. It's just, like, for a pillow, or an injury, or something that applies to every single person. But there's always this hyper-targeted opportunity there. Yeah. The other thing is, the paid program with X has seemingly worked, in that we know a lot of people that happily pay and have no plans to churn. But it would be a failure in the context of, like, Meta scale. Right. I think the last reported number that I saw was something like one to one and a half million paid subs at $10 a month. Oh, on X? Yeah. So you're talking about somewhere in the range of 100 to 200 million of ARR. Yeah. And if Zuck could launch a product like that, he would just wind it down. Right. Reels went from 0 to 50 billion of run rate in, like, a handful of years. Right. That's what a home run looks like. And so I think it makes sense for X, but it certainly is not a home run from a consumer application standpoint.
And they still need the, you know, the overall business. Yeah. Olivia Moore had some extra context there around monetization via ads versus subscriptions. So Neil Patel, who is the founder of NP Digital, a New York Times bestselling author, shared data on how ChatGPT ads are performing. He's like the SEO guy, I think. Okay. He said the data is only from five businesses, but these businesses also run Google and Meta ads. Compared to Meta, ChatGPT's lead quality is 256% higher. On the flip side, lead quality is 49% lower than Google. I mean, that seems like a miracle, to be in between two hyperscalers on day one, basically. But on the bright side, due to ad cost, it's substantially cheaper from a CPA perspective than Meta. And this was sort of what we were talking to the good folks over at Ridge about: at least in the early days, being early to a new ad platform that you can potentially scale on can drive a bunch of new conversions. But Olivia Moore said, a big story that most people are missing in the AI race for the consumer, ChatGPT versus Claude, is ads. Right now, most consumer AI revenue is coming from power users who are willing to pay high subscription costs. This currently skews positive for products like Claude, but this will not be the end state. Google makes $460 per user per year in the United States, mostly on ads. I didn't know that their ARPU was so high. Meta makes around 250. I mean, I guess those Google ads are really, really valuable, and it's so intent-driven that it makes sense. I would argue, or she would argue, that ChatGPT's ad-based ARPUs will be even higher, as they will ultimately have deeper, more frequent user engagement. Even at the $460 level, monetizing everyone in the US via ads is $152 billion in annual revenue. By contrast, if you're able to monetize even 5% of the population at a $200-a-month subscription, which is a stretch, that's only $40 billion.
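The ads-versus-subscriptions arithmetic above checks out if you assume a US population of roughly 330 million (our assumption; the quoted figures don't state it):

```python
# Checking the ARPU arithmetic above, assuming a US population of ~330M
# (the population figure is our assumption, not from the quoted thread).
US_POPULATION = 330e6

ads_revenue = US_POPULATION * 460                # everyone at Google-level ad ARPU
subs_revenue = US_POPULATION * 0.05 * 200 * 12   # 5% paying $200/month

print(f"ads:  ${ads_revenue / 1e9:.0f}B")   # → ads:  $152B
print(f"subs: ${subs_revenue / 1e9:.0f}B")  # → subs: $40B
```

So even a heroic subscription business (5% of the country at $200 a month) is under a third of the all-ads ceiling, which is the core of the argument.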
That's actually a crazy difference, because a $200-a-month subscription is super high. Like, you know, you're talking 20 times Netflix or something else that's premium and really important. Yeah, the $200 subscription at the time was crazy. Yeah. But even at that point, some of the people that were more AI-pilled generally were like, oh, it's actually possible that someday you could spend $20,000 a month. I was like, give me the $20,000-a-month plan. And it sort of came via API, but it was heavily subsidized. So she says, I suspect this will be even more drastic outside of the United States, where users are even less willing to pay, or directly pay, for subscriptions. And the earliest data from a very small rollout shows ChatGPT ads are already outperforming Meta in effectiveness. This just gets better over time. So interesting. The question about Will Manidis and the article summaries: should he move to Substack? He's threatening Nikita. He says, I'm defecting to sbstck. He won't even type it out. He says they pay more, and Nikita didn't reply. I think people would follow Will over there. I think people would read his articles anywhere, potentially. But if you're looking to advertise, why don't you head over to AppLovin? Profitable advertising made easy with Axon AI. Get access to over 1 billion daily active users and grow your business today. And let me also tell you about Gusto, the unified platform for payroll, benefits, and HR, built to evolve with modern small and medium-sized businesses. So Carl says, it's time to return to the place where I know I can have the most impact. I'm beyond excited rejoining Sequoia as a partner. Here is what I shared with the team on how I am approaching my next chapter. He wants to serve the ecosystem. Fire to win. Let's see what this means. Being a servant leader does not mean I have lost my edge. In fact, the fire in my belly burns brighter than ever.
The difference now is that I'm not using that fire to light my own path. I'm using it to light the spark in others so their fire burns brighter. Leading from behind: I have no interest in the view from the front of the room. I will leave that to our two great leaders, Pat and Alfred. I want to lead from behind, empowering each of you. Egoless impact, contagious energy, mentoring and building great leaders, always ready to serve. Carl's the man. We will have him on in just 30 minutes, so we can wait to cover more of that story. But first we got to go to his Allen & Co. photo shoot. Dual-wielding coffee, jump rope, and a faded Sequoia T-shirt. Andrew Reed says, immediate overwhelming response to VC push-up debacle. Welcome back to Sequoia, an all-time great partner. That's a great photo. He's looking great there. Let me tell you about Plaid. Plaid powers the apps you use to spend, save, borrow, and invest securely, connecting bank accounts to move money, fight fraud, and improve lending, now with AI. This is an interesting story. Apple is way behind in AI and still making a fortune from it. Let's see. Begs the question: are they actually behind in AI? Revenue is set to top $1 billion this year, reassuring investors wary of rivals' sky-high spending. And keep in mind, they have a chart here showing gross revenue from Gen AI apps as well as Apple's commission. Look at this. The beginning of 2025 was really the boom of Gen AI app growth. $400 million. Is this monthly App Store revenue? Wow, they're really cooking. And then sort of a flat line. Yeah, it's so interesting that it actually dropped. Yeah. Well, we did read that article a few days ago about how Apple has been pushing back against some of the vibe-coding apps. And there's this question about where the bounds are. Obviously Apple's had pretty strict App Store rules around adult content and what else you can do.
Even just the app reconstituting itself, pushing changes, because they want to review every line of code that goes in the App Store. If someone's pushing 10, 20, 30,000 lines of code a day, that's a lot of code for Apple to review. It's going to slow things down. So that could be a little bit of what we're seeing. Maybe they've capped out on their ability to review all the vibe-coded apps that are flooding the App Store. But let's go to the Wall Street Journal and dig into this. Apple is on pace to surpass $1 billion in AI revenue this year, a tidy sum that demonstrates the company's AI advantage even as it struggles to deliver an AI strategy of its own. Its Siri chatbot is still weak by modern AI standards. What Apple does have that the other AI players don't is a dominant position making devices. However fancy OpenAI, Google, Anthropic, and xAI make their chatbots, iPhones are still a primary way to deliver them to customers. That means they typically pay the App Store tax: roughly 30% of subscription fees in the first year and 15% a year thereafter, though rates vary. Gen AI apps paid Apple nearly $900 million in App Store fees in 2025. So almost a billion of revenue and very, very, very little capex. Three-fourths of the revenue Apple rakes in from Gen AI apps in its App Store comes from ChatGPT. Next, at about 5%, is xAI's Grok. There we go. Grok. I mean, there are so many different funnels. They did the essay competition, they did the video competition. And I mean, I've talked to people that are just, still, they're like, you know, people that are in the Apple ecosystem, they're in the Tesla ecosystem. And so they're like, yeah, I talk to Grok on my way to work. I'm not kidding. Grok in the iPhone App Store did $12 million last month. Yeah. And I know the true AI heads will be like, Grok's behind on this benchmark or whatever. Tyler, is that a correct characterization?
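The fee structure quoted above (roughly 30% of subscription revenue in a subscriber's first year, 15% thereafter, with rates varying by program) can be turned into a quick calculator. The $20/month subscription figure below is an assumed example, not a number from the article.

```python
# The App Store fee structure described above, as a quick calculator:
# roughly 30% of subscription revenue in a subscriber's first year,
# 15% in later years. Actual rates vary by program and app.

def apple_cut(gross: float, subscriber_year: int) -> float:
    """Apple's commission on one payment, given the subscriber's tenure."""
    rate = 0.30 if subscriber_year == 1 else 0.15
    return gross * rate

# Hypothetical $20/month subscription paid through the App Store:
year1 = sum(apple_cut(20, 1) for _ in range(12))  # → $72 to Apple
year2 = sum(apple_cut(20, 2) for _ in range(12))  # → $36 to Apple
print(year1, year2)
```

At those rates, roughly $900M in fees on the order of a few billion dollars of gross Gen AI subscription spend is plausible, which is the "do nothing, win" economics being described.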
Yeah, Grok did more revenue last month than Claude in the iPhone App Store. But, like, I've started having conversations with. I mean, I'm using ChatGPT, but I wanted to get up to speed on Taiwan and, just, what was the reason for the original civil war and stuff. And so I was just having a conversation back and forth, and at no point was I like, oh, I really need GPT 5.4 Pro. These are things that exist just, like, one search to Wikipedia away, one search to anything. It's probably baked into the weights of 3.5. So if I'm just going to be chatting with someone who's reasonably smart, I would say Grok is there. And so, what do you think? Yeah, but you could be talking to someone who's really, really smart. No, not if you're asking basic knowledge-retrieval questions that any model's gonna one-shot. Yeah, but you're just describing stuff that you could just actually Google. Yes, but I can't Google via voice in my car on the drive. And for someone who's driving a Tesla and has a Grok integration right there, they're just like, sure, this is great. Okay, yeah, that's fair. It's not like the frontier use case isn't important. That's where the action's happening. That's what's driving the next order of magnitude of growth. But there are plenty of people who are like, Google's search overviews are amazing, you know, and they're like, that's my level, and that's good. And they're like, yeah, but I don't think those people have actually tried GPT 5.4 Pro. So good. It is good, but it's slow. And truthfully, you can fire off the exact same query to 5.4 Pro and 5.4 Fast. And if the query is simple enough, the answer will be exactly the same.
Because if I ask 5.4, 5.4 extended thinking, what is the capital of California? And it thinks for 10 minutes and it just tells me, Sacramento. See, there you go. That's why you need to think. A lot of people. I told you, I run my life on GPT-2. I hallucinate a lot. But people have said I have the mind of GPT-2. It's true, it's true. But, so I think, for some use cases, you know, a smaller model, something a little faster, something that's not absolutely frontier, is fine. So I don't know. What do you think? So imagine that. Do you think there's something else going on? 5.4 Pro Spark. So it's on, you know, a Cerebras chip. Yes, yes, yes. Would you hit that every single time? When would you not use it? I would not use it if I was doing, like, a deep research report, necessarily, because I just want extended reasoning for certain things. No, but I'm saying you could have that. Oh, oh, oh. You can use, like, 5.4 Codex Spark, which has extended reasoning but is still super fast. It's on Cerebras chips. Right. If you had that. Yes. Money is no object. Absolutely. Like, you know, I'm happy to pour out the glass of water to get the best intelligence possible. I just think that even if you just care about speed, there are still better. I think, in my opinion, right now, there are better models than what you're using. Okay, so walk me through it. Like, 5% of App Store revenue seems really high. What's driving that? Yeah, not everyone is extremely tapped into, like, you know, the current model that came out two days ago. Use it. Like, do you agree with me? I think you agree with me about the fact that good-enough intelligence is still a good business, to the tune of 5% of App Store revenue from Gen AI apps. Yeah, but I'm saying that because you know about this stuff, there are better things that you can do.
I told you, I'm talking to ChatGPT. Look, don't shame me. I'm not the one. But I'm just saying, I don't even think you should be shaming someone who's talking to a model. I'm not shaming them. I'm saying your model choice could be better. Like, you could have a much better experience. Okay, okay. So you want to evangelize the frontier. You want to evangelize the frontier. But I mean, I'm just wondering if we need a new Turing test. We need to have random people come in, and they get to talk to 5.4 or 4.0 or something. Can they tell the difference? And which one do they prefer? They might prefer 4.0. They might prefer 4.0. This is the new New York Times writing test. We should put one of these out and be like, can you actually tell the difference? This is interesting, because I feel like a lot of people say they can, but unless they're really, really grinding and trying to do something that requires a really long reasoning chain, it's totally possible that they're just like, yeah, it gave me the right answer, looks good. That's a narrative violation. Anyway, let's continue. Apple's revenue from generative AI apps rose from about $35 million in January to a high of $100 million in August. Do nothing, win. Do nothing, win. They created an App Store over a decade ago and just keep reaping the rewards. They sowed, and now they're reaping. Sales have fallen from their peak, partly because ChatGPT downloads have declined, according to the data. As a proportion of Apple's total sales, $1 billion is small. Yet gen AI apps are the growth driver for Apple's services business, which investors have focused on in recent years because it has grown faster than device sales and boasts higher profit margins. Apple's dominant share at the top of the smartphone market affords it another luxury: time to get its own AI strategy right. So they're making money while they figure everything else out.
Apple's AI plan runs counter to the strategies of competitors that are spending hundreds of billions of dollars on chips and data centers to build frontier language models. Apple is spending a fraction of that, aiming instead to use all of the personal information people store on their iPhones, together with the chips that it designs itself, to power an on-device AI strategy. That strategy could prove a winner if, as some AI researchers have suggested, access to user data and strong privacy makes on-device AI the dominant way consumers access the technology. Apple investors want to see progress from Apple's own AI strategies, said Charles Reinhart, chief investment officer of Johnson Asset Management, an Apple shareholder. Quote, if they can act as a toll road for providers of AI, then they'll probably end up looking good long term for not having the big capex overhang. Now, I have to imagine that Apple is not capturing any revenue from enterprise developers: Claude Code, Codex, any of those. They're probably not. Even if those developers are winding up using, like, a ChatGPT subscription in Codex, they're probably setting that subscription up on desktop. So it's not a toll road on that side. Yeah, but it's a toll road on consumer, which is consumer sales. All the more reason to get into ads, honestly, because Apple does not tax those. And AI is exciting for Apple because they need a new product that they can just randomly bill you, like, $2.99 for. Yeah, anytime they need cash. What are you talking about, $2.99? Like, don't you just get random bills from Apple here and there? $2.99. Like, $2.99. Yeah. I feel like every time I check my email, it's like, Apple has charged you some random amount for something, for some subscription. No, I do get emails from Apple, but it's always, like, two days after I bought or rented a movie on Apple TV, and it just says, like, you rented this movie?
And I'm like, yeah, I know. I clicked the button. It's fine, you don't need to email me. In other news, Rolls-Royce has scrapped plans to go all-electric by 2030, as, quote, drivers prefer V12 engines. Would you look at that? I mean, and this is just total shock. Total shock. Yeah, total shock. Yeah. Drivers totally had to experience EVs being forced upon them for the last few years to know that they preferred combustion engines after all. Of course, I'm kidding. I think a lot of people were just sort of saying this over and over, and manufacturers were not listening. Everyone said Elon has been saying the Roadster reveal will blow your mind. If it has a V12, people are going to go crazy. If he drops a V12, that would completely break the Internet. Yeah, it would be incredible. Anyway, let me tell you about public.com, investing for those that take it seriously. Stocks, options, bonds, crypto, treasuries, and more, with great customer service. Let me also tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI, own the data platform. Let's talk about this Tesla that you were following yesterday. Oh yes. Did you drop this in the chat already? I sent it to you. Oh no, we shouldn't share the actual picture, but I saw a Tesla that was a very funny mix. It had the anti-Elon club on it, but also an 1199 license plate, and it was a Plaid, and it just mixed every possible political ideology, and it had a vanity plate that was very sci-fi. Sci-fi. So mixing, like, I do want to go to Mars. Yep. But not with Elon. Yep. The license plate basically said beam me up. Beam me up. So they want to go to Mars, but not with Elon. They support law enforcement. They have an incredible amount of disposable income. They enjoy high trim levels, but they do not agree with Elon's actions. Well, maybe they work for a rival AI lab or something.
And so they're extremely sci-fi-pilled, but they just feel like they're competing with xAI. California has now spent over $100 million on a new bridge to nowhere. It is a wildlife bridge which I've driven by hundreds of times. I've been seeing it. I've been experiencing the traffic that it causes. I'm not against the concept of a wildlife bridge. In fact, I think it's fantastic. But it is in a concrete jungle, and this is beautiful. Totally. It has a lot of opportunity to actually improve the visual aesthetics of this particular part of the state. Caleb Hammer says, bro, this state cannot be real. It's very real. Isn't Caleb Hammer, like, a finance guy? Yeah. He's like the one person you'd come to, to be like, should I spend $100 million on a bridge? And he'd say, and it's actually quite a bit more than $100 million at this point. And the funny thing is, it's just kind of a bridge, but it's lacking the entrances. I feel like even just a little bit of wood to smooth it out, so that it looks like there's at least going to be the start of a ramp to get on the bridge. The bridge looks solid, the actual center part looks solid. It doesn't feel that hard to finish this bridge. I'm optimistic that this gets done in the next hundred years. Apparently Colorado built a wildlife bridge for the low cost of $15 million. Oh, that's not bad. Functionally, something very, very similar. The interesting thing is, apparently the bridge is in some part for cougars. Cool. And the wild thing is, on one side of the bridge you have a bunch of residential homes, and on the other side you have a bunch of cougars. And so the cougars are now going to be able to basically hang in all the backyards. So we'll see how this goes. But I'm excited for this to be finished up.
As long as I have lived in Southern California, they've been working on this bridge, and it's about time. Well, former partner of TBPN, Fal, a generative AI model hosting service that you know and love, is in talks to raise $300 million to $350 million at an $8 billion valuation. Annualized revenue has hit $400 million, up from $200 million in October. That's insane. They've executed really, really well. So congratulations to them on the growth, and we look forward to having them on after they move past the advanced-talks phase. Yes, yes. What is Miles Brundage saying? He says, I'm a bit worried that Anthropic is an org-wide case of AI psychosis that makes them think Claude is good enough that they can ship random product features without breaking things. But they in fact do keep breaking things, and they're not online enough to notice people complaining. Yeah, I don't know about the last part. They seem very online. Yeah. And I don't know too much about the issues, but if they're truly competing in consumer, there is low-hanging fruit. Like, text-to-speech on deep research reports is not a feature that exists yet, which feels very obvious. It's so interesting, this dynamic where you can ship things so fast, and yet some obvious product improvements are just sort of stuck in the queue, because there's a lot to do and it's an exciting time. What else is Anthropic doing? They're hiring for a policy manager who will be in charge of chemical weapons and high-yield explosives. This reads like you're going to be building high-yield explosives, which sounds like an Andrew job posting, but it is in fact for a policy manager who will be, hopefully, stopping people from. No, no, no. I read this as somebody whose job it is to decide how Claude is used to create chemical weapons and high-yield explosives. I don't know, I think it's probably like, this person decides, like, where's the edge?
So if you're asking, like, okay, I have a firework and I want to make sure it doesn't go off, should I throw it in the trash or put it in the recycling or take it to a special place? Claude should answer that. But if you go to it and you ask it, like, how do I build C4 or something like that, there are all these policy edges where, if you're talking about Counter-Strike and you say, let's plant the bomb, it shouldn't flag that as, okay, you're actually trying to plant a bomb. It's like, no, you're asking about a video game. We know how to interpret that appropriately. But there needs to be a human in the loop to decide where that frontier is and where that particular line is. Anyway, Orf says, what terrifies me is, if AI were to cure cancer and save 50 million Americans, imagine the backlash from hard-working scientists who wanted to cure cancer themselves. They will be livid, for sure. Well, it's interesting, that was at least one person's response to the Australia dog story, where they were like, yeah, we've been able to do this for a while, but don't do this. Which was an interesting response to somebody who went on a multi-year journey to try to save their dog and seemingly is having some successful outcomes. Very, very odd. Let me tell you about Railway. Railway is the all-in-one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases, and more, while Railway automatically takes care of scaling, monitoring, and security. And let me also tell you about Vanta: automate compliance and security. Vanta is the leading AI trust management platform. So PG says anything made before 2028 is going to be valuable, and he's quoting an OpenAI employee who, he says, implicitly discloses their timetable. Anything made before 2028 is going to be valuable. That is such a vague post. Someone was hanging out with Paul Graham and was like, let me vaguepost IRL. I require content. I require content.
This is going to be very valuable. I would keep this up. We are working on something that will be incredibly valuable. One of our team members had a major breakthrough, and it could be the blockbuster product of the year. I'm very excited for this. This actually is. We were just messing around this morning with an existing product, and we had a breakthrough. We had a major breakthrough. We, and I had nothing to do with it; when I say we, I mean Ben had a breakthrough that I think will change one of America's pastimes forever. I think so. Actually, it'll be a before and after. I'm actually so confident in this that I'm willing to make a post about it. Patent it, for sure. Get this young man a patent. I'm on a patent. I have a patent. It's great. Yeah, when you get a patent, you can also, like, frame it, get a little tombstone. It's very nice. Regardless of what happens with the business, it's a good moment in your business career to have a patent. Do you have a tombstone for your patent? I don't. I need to order one. It has been issued. Like, my name's on it. I'm like the seventh name on the list or whatever, but I'm technically on it, and so I should get my plaque or whatever. I'm sure you can just buy them. There's probably something out there. What did PG give? He gave some more context. He said, this was after I mentioned the idea of buying rare old things as a hedge, since the one thing AI won't be able to do is go back in time. That we know of. I mean, that's the whole plot of Terminator. Just say you're not AGI-pilled. Everyone's saying it's going to be like Terminator. It's going to be like Terminator. Well, what happens in Terminator? They invent time travel. And so you're going to be able to go back? You're going to be able to go back? Yeah. I feel like after we get AGI, we can build incredible things. Those will be really valuable too, right? Yes. Yes. But. Yes.
I mean, I could buy that. He is correct that, like, images in ChatGPT and Veo 3 and Midjourney don't decrease the value of the Mona Lisa. That's just obvious, and everyone agrees on that. Except you. You're like, I would actually like to go to the Midjourney Louvre. Maybe that's why Banksy sort of intentionally revealed himself. I didn't follow that story. Like, how did that happen? Because I feel like most people would just. I think somebody just caught him with a. Hopefully Scott is watching and can fill in. But I think he was just kind of, like, caught in the act, really. But maybe he's confident enough. Hey, AGI is coming. AGI is here. My stuff's still going to be worth a lot. I don't want to be in the shadows anymore. That's very interesting. So anyway, Paul Graham clearly does not believe in the Terminator thesis of the AI future where time travel is possible. Imagine being in the Terminator future and creating the time machine and just being like, okay, I'm going to go back in time and paint new paintings that I can then acquire over time and have new Mona Lisas. He's like, no, we sent you back to save the human race. But I gotta hang out with Leonardo da Vinci. I gotta build my art collection. He said, I don't put too much weight on the specific year, but the shape of the idea is interesting. And I agree, it is an interesting thing to noodle on. Similarly, CrowdStrike. Interesting idea: your business is AI, their business is securing it. CrowdStrike secures AI and stops breaches. Let me tell you about TurboPuffer as well. Serverless vector and full-text search, built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. Martin Shkreli, what does he say? He's coming on Monday for the great debate, the great peptide debate. He says good music is the last mile of AI. And Lil Wayne has some thoughts on AI music. Let's play this clip.
Let's play this two-minute clip from Lil Wayne on a podcast. Let's see, here we go. How you handle AI in this business now? The challenge? I love it. AI is a beautiful thing. I love that AI is what it is. Yeah, because, man, I love to be able to stand right next to whoever AI is, he, she, they, whatever, or whatever AI is, stand right there, and I'm still better. Keep telling me what you do again. Yeah, run your list. I do this, I do that. I love it. I love the challenge of it. The first time I seen it, my friends was a little worried. They was like, man, bro, they got this AI stuff where you can just ask it to give you a verse like Lil Wayne. And so I did it. I said, let me have a verse like Lil Wayne. And it gave me its best shot. Yeah. On a couple devices, you know, not only a phone, a computer, even for a commercial I was shooting, for the Alexa thing. I want to hear it. And I have a thing called Proto at home, it's got its own little robot thing. Asked her to give me one, and they all, you suck. We gonna be okay. I messed with that. Beanie Sigel, I think he had to start using it because he likes it a little bit. Yeah, another rapper to mog, basically. That's his take. That's so funny. That's great. Well, how are designers feeling about AI these days? Samir says, bro, it's so over for designers. Google Stitch is insane. Google launched a new generative AI design tool where you can sketch something out on a piece of paper and it turns it right into an image. And there was a lot of back and forth over, you know, how this debate is playing out between Google and Figma. Hadley Harris says, 12 years later, the VC who passed on Figma's seed because Google could kill them is finally feeling seen. Lots of amazing work done, of course, in the interim. Very, very silly. It will be interesting to see how Google pipes this into the other tools. Apparently this is engagement bait.
Other people are testing similar prompts and getting much, much worse results. Yeah, I don't know. I was on ai.google, not even ai.google.com, looking at all the different Gemini features, and it feels like the next challenge is just integrating all these different things. They have so many great models and products: Nano Banana, Veo 3, NotebookLM, Gemini, Flow, AI Mode. There are so many. And actually piping a workflow from one to another is certainly going to be, like, the next question. Does this all live in the Google search box? Is it in Google apps or something? Either way, they're certainly investing in AI across the organization. And Ryan Petersen says, whoever named Gemini at Google really named it. In the mythology, the immortal twin gives up his immortality to save the life of his mortal twin. It's just like Google giving up 100% of its free cash flow to make sure DeepMind survives. That is not actually the reason for the name Gemini, but do you know why they picked Gemini? Because they had two internal teams that they brought together. Yeah, the twins. It's a good name. They were suffering from a bit of a naming crisis for a while with Bard and PaLM, and they were definitely shipping, not necessarily the org chart, but some of the internal naming schemes that were, like, abbreviations. Even Nano Banana Pro, the image models used to just be, like, Gemini 3.1 Image or whatever. They didn't get to 3.1, but there we go. There you go. Nano Banana was famously just the internal name, right? Yeah. And sometimes these internal names can really fly in a consumer context. I don't know what it is, but something about Nano Banana really sticks out. It's so funny. It doesn't sound like an AI product.
Like, when you have Siri and Alexa, and then you have Rufus, or Argus or something, and Sparky from Walmart, doing another human name can actually be a disadvantage, because you just get lost in the clutter. But no one was really using the Nano Banana name. I know Strawberry was used by OpenAI before, but no one had really used that as a public name. It certainly helped them break out. But Gemini has been an interesting name. It's good. It balances both: it sounds like a product, but it's just one word. It doesn't anthropomorphize too much, but a little bit, and it is a reference; there's a lot of deep knowledge there. Quickly, let me tell you about ElevenLabs. Build intelligent, real-time conversational agents. Reimagine human technology with ElevenLabs. And let me also tell you about Vibe.co, where D2C brands, B2B startups, and AI companies actually advertise on streaming TV: pick channels, target audiences, and measure sales, just like on Meta. So without further ado, we have our first guests of the show, Pat Grady and Carl Eschenbach from Sequoia Capital, in the studio. What's going on, guys? Carl, Pat, how are you guys doing? We're doing great, man. We're doing great. It's great to be with you guys. Yes, thank you so much for hopping on.