LIVE CLIPS
Episode 2-18-2026
Tend to go poorly is because the wealth leaves. Can you give us a white pill? Because I don't think, from a comms perspective, "don't tax me" is going to be effective for the billionaire class. But what should they be focused on in a world where a lot of people are seeing billions of dollars in wealth creation from what they see as slop, what they see as scams, what they see as a variety of water usage and energy price issues, blah blah blah? What are the white pills that you think tech has delivered, or is in the process of delivering, over the past few years and into the future? Yeah, I mean, that's the question of the hour, because inequality continues to get worse and kind of fuels the politics behind these things. I'm kind of a Gundo stan. I believe in this patriotic vision for tech and this idea of keeping regular people in mind and building businesses that last and create value and don't just addict people to things that they shouldn't be doing or shouldn't be spending their money on. And who knows what billionaires could do. There's a lot of ideas out there. Mike Solana actually just wrote a Christmas wish list for billionaires with like 20 of these really beautiful white pill ideas. One of the easiest things is building beautiful public works: libraries, statues, things that people can see with their own eyes and feel gratitude about. Whereas right now I feel like it's not that compelling to be like, don't hate me, I built this short-format thing, instead of, like, the public library that has the Rockefeller name on it or something. There's a lot of examples as you walk through New York City where you see a beautiful building and you're like, oh yeah, and that's free to the public. Now, I'm a big fan of Hearst Castle. William Randolph Hearst just built this castle for his entire life, spent all his money, died before it was even done. And then just California really ramping up something like an adopt-a-highway program. Driving around LA, the roads are just so, so, so bad. Yeah, the tangible thing that you can see is very, very impactful, as opposed to the anonymous donation that just kind of works its way through the economy, which might be more impactful but is certainly less grounded in reality anyway.
Teachers union is feeling kind of left out in this particular scenario. How has Reagan reacted to everyone leaving? Are they processing this? Because, play this out: you have a number in mind. You're like, I'm getting 5% of 60 people's wealth, and that's probably, like, a couple billion dollars. And then 20 leave, and you're like, okay, now I'm getting, like, 3 billion. And then another 20, and now I'm getting one. Yeah. There's also an insane power law. Yeah, right. Oh, yeah, sure. So the biggest ones leave. The people that stay might be the guy who's at 1.2, who's like, I guess I'm good for my 20 mil, you can take it. Yeah. I mean, so far, the folks behind the ballot measure have not acknowledged that people are actually, in fact, leaving. They continue to call wealth flight largely a myth, which is crazy. We're talking to these people on Twitter. Like, we literally spoke to 20 of them. They literally are leaving the state. It's not a myth. But they point to research mainly looking at the movement of millionaires and just generally wealthy people after taxes are increased. And it's true that in those cases there's not a lot of mobility, but I think they have made millionaires part of the strategy.
A lot of that work is kind of like the white pill case for why you should believe in someone. But then we also do a lot of work that I think centers original thought, maybe in the spirit of the contrarian. Yeah. We'll look at an issue, what everyone else is saying, and then kind of tell people, actually, this is what you should think. Yeah. And so lately we've been doing a lot of that with the proposed wealth tax in California. We've kind of flooded the zone there and been duking it out, actually, with the New York Times and the Wall Street Journal. So I kind of feel like I'm back at BI all of a sudden. Exactly. Well, yeah. What has been the response? I mean, you've interviewed a ton of billionaires in California. It feels like the base case is just that everyone leaves. Maybe they're not all loud about it, but Mark Zuckerberg's not out there. But, like, he did buy a place in Miami, and it seems like he might be a Miami resident soon. Yeah, there's kind of two waves. There's people that were like, I'm gonna get out in 2025, so I never have to pay the tax. And there's a second wave, which I think is the Mark Zuckerbergs, that are like, if this goes through, it'll get fought. I might not have to pay it. But then I'm still kind of hedging, potentially hedging and getting out. And so I think, again, we're kind of in the midst of the second wave of people that are like, I might have to pay it, but I'm certainly not going to pay it every year for the rest of my life. Yeah. Yeah. A couple weeks ago, Mike Solana published a piece where he interviewed more than 20 billionaires in the state of California, which, just, wow, might make him the most well-sourced tech journalist ever. But they literally all said that they were leaving or planned to leave. So that's really striking. That's a meaningful percentage of the total billionaires in California. And what's really interesting is they may be leaving because they don't want to pay the tax, but actually the tax is retroactive. So many of the folks who are leaving probably will still have to pay it if it goes through, but they're leaving anyway because of the kind of overarching political landscape of California. They're just kind of over it, and they find the lefty politics to be too risky, not only to build wealth but to build companies. Sure. There's language in the proposed ballot measure that has really thrown people for a loop, where, so, the government has this challenge of tallying people's net worth, which is not easy when it comes to founders of private companies. They're using kind of a shortcut where founders will be presumed to be owners of anything they control. You guys have probably seen the scary math with this on your timelines. But basically, since founders' shares often come with outsized voting rights, what this means is you could be presumed to be an owner of 10 times the value of your actual economic position. Yeah. And that's all subject to a rebuttal process and stuff. It's supposedly not meant to be the final word, but I think founders hear that and it's like, okay, this is a significant risk to the business and to my longevity in the state. How are you.
Intense fear-based marketing to justify, to kind of catalyze adoption, success with fundraising, et cetera. But then again, it's kind of coming back to bite, in the sense that everyone's saying, well, no, I don't want a data center in my backyard, I don't want my company to even be investing in this, et cetera. I mean, speaking of fear, you just mentioned nuclear war, right? And I just think that you can believe, as I do, that AI is going to be a very meaningful technology. But the fact that people are more scared of a robot apocalypse than nuclear war. Look, right now Russia has multiple Borei-class nuclear submarines off the East Coast of America that have the capability of raining nuclear fire down, thermonuclear fire down, all up and down the seaboard, right? The Eastern Seaboard. I mean, a single modern thermonuclear bomb detonated above Central Park would destroy 80-plus percent of the buildings in Manhattan and hit parts of New Jersey and Connecticut, et cetera. Right. And again, this is the motte-and-bailey thing, right? Which is, you might say, well, that's a very extreme scenario, but every day I am opening up my web browser and reading about, oh, AI is going to exterminate the human race, or AI is going to put us into this utopia where no one is ever going to die again, right? And part of what I'm trying to do is just claw out a normal space in this, right? To just say there is a very obvious future where these tools are meaningful, eliminate some jobs, have a lot of cultural importance, but where we're not suddenly faced with a fundamentally different version of human life. So if it's not nuclear war and it's not.
I was a laser cat for Halloween once. Oh, that's cool. Yeah. Talk about, like, what is the shape of actually working in the factory? Is this stuff risky, or does it all happen in a clean room? How much of this is like a TSMC-type fab? What is it like today? Yeah, the clean room is here. We have a clean room. We took some panels off to get the machines in, but yeah, we take the approach of: question the requirements and delete the part or process if we need to. And, you know, we start in like a semi-dirty environment because we don't want to go overboard. Right. You don't want to just go all in on a clean room when you don't have to. So actually, back at SpaceX, we started making the space lasers in a tent. And there's actually a tent behind me, or in front of me, sorry. So you don't want to go too all in on, like, it has to be in a clean room. You do end-of-line testing and qualification, and if it starts to impact your yield, then you implement procedures and processes to keep it cleaner. This is like a pretty good clean room, but not the best. And so we'll see how our yield looks with this one, and we'll implement stronger strategies if we need to. But yeah, all of our semiconductor packaging is happening here, and we get some dies from other foundries that are all outside of Asia, and then we bring them here and package them. Last question from me: what was the biggest lesson you learned from working at SpaceX? I mean, just what I said: question the requirements and delete the part and process. It's so simple, but it's so useful. When we're designing something, you want to try to know why each part is there. So in our design we've deleted quite a few parts that most people don't delete, and it ends up working. Anytime you delete a part, you delete a potential failure mode. And if you want a really reliable system, you need to delete as many parts or processes as possible, and it also helps you assemble things faster. And so that was one of the big lessons I learned. Yeah, that feels like, I don't know, easy to say, really hard to do in practice. It feels like you have to experience it. I've heard that a million times, and I'm sure that there's things that I could delete, even in my daily workflow, that I haven't figured out how to do. That's. I love that. Yeah. SpaceX is a company where you learn things the hard way in the most extreme way. Yeah, yeah. You ship a piece of software, it doesn't work, it's like, okay, let's patch it. You ship a rocket, it has some dependency that you didn't think was maybe that important, and it blows up and you've got to live with that, or the whole laser mesh doesn't work. So let's hope that doesn't happen. And putting a kid in charge of that is really a crazy thing that SpaceX does: they hand the baton to a really young person and bet the whole company on those people. And I think that forges things within people to be able to deliver, and then the delete-the-part-and-process helps them understand the whole system really well. I love it.
Well, we are not far from Gardenia. Yeah, come by, come by for your next appearance. I've got a feeling you'll be back on the show this year. Great. Yeah, I would love to be there. You guys are welcome to come anytime as well. Love it. Thank you so much. Lasers. Have a great rest of your day. We'll talk soon. Let me tell you about Plaid. Plaid powers the apps you use to spend, save, borrow and invest, securely connecting bank accounts to move money, fight fraud and improve lending, now with AI. And you saw it at the opener. Jordy, get those juggling balls ready, because it's time to tell you about Turbopuffer: serverless vector and full-text search built from first principles on object storage. Fast, 10x cheaper, and extremely scalable. They sent me these little puffers. They're incredible. Jordy's juggling is very fast. They're going to be hard to keep here at the studio. Well, without further ado, we have Evan Spiegel from Snap in the TBPN Ultradome. Good to see you again, Evan. Welcome to the show. Welcome back to the show. Great to have you. Thank you so much for coming by and stopping by the TBPN Ultradome. Always. You've outsuited me. You outsuited us. That is a beautiful suit. It's always a great excuse to wear a suit. My wife was like, where are you going today? I like the buttons that don't show through. I don't know what that's called, but that's a touch of something. It's very tasteful. Thank you. I appreciate that. Are you following the taste discourse? People are saying that taste is important. Do you have a stance on this in the context of building software? It's more of a post-AI thing: AI is going to be able to do everything, but not taste. And it's sort of always been obvious, but it's also fun to write about, fun to talk about. There's a whole bunch of interesting examples. It's funny you say that, because our designers are literally becoming engineers right now. I mean, if you think about 10 years ago, even the power dynamic in a company, the hard part was building things. Right now, the hard part is having a great idea. So I think taste is important. I guess the question I was joking about earlier was, there's two ways to build a new consumer product. One is you A/B test the color of the button and just look at the data, and that's more of the engineering mindset. And then there's the tasteful approach, which is maybe, I just know green's the right color for my brand. Did you engage in both throughout your journey? Is there a place for both? How do you see those two, the gut instinct interfacing with the engineering reality? Somebody pulls up the chart and tells you that it's got to be blue, but you know yellow's right.
I think for us there's a huge difference between generation and iteration. Right. If you're trying to come up with a new idea, it is really important that you can exercise your sort of creative opinion or judgment. I mean, the reason why we chose yellow is there were no other apps in that top 100 that were yellow. So that was an easy one to stand out. We didn't have to monopolize yellow. Verticalized yellow. Yeah. Us and McDonald's. Yeah. But I think what becomes very important very quickly is that you're able to iterate. So once you put something out there, A/B testing, experimenting, that really helps. Especially as you have a big organization, you don't want to bottleneck people's experiments, so anyone can experiment and learn. Yeah, it's interesting. One way to think about it is, I feel like executives thrive if they have great taste, because they have a lot of people coming to them with work or projects, and it's their job to decide what we're actually going to focus on and prioritize. And now anybody can create 20 different concepts for a website, and so they're having to choose from that to then go up the chain and decide, okay, which one of these should we actually implement? Taste is becoming more important because now it's so much faster to just create anything. So at every level of the organization, you have just more things to choose from. And taste is just choosing. Taste in your personal life is choosing, do I get this jacket or that jacket, or do I use this flooring in my house, or how many logos should be on my shirt? That's great. Give us the news. Massive milestone. What happened? Well, I literally came here for the gong. You know what I mean? It's a great strategy. We got a bigger gong since you were here last. Is this new? Is this for Year of the Horse? No, we were early. We were early. Yeah, yeah, we were early. It was really funny. It was so funny. So we were walking by the store, John and I, and I look inside. The store is closed, it's kind of dark in there, and there's this massive horse. And I was like, we need one of those. Immediately John's like, what do you mean, we're not going to be able to get a horse. And then we look on this website: for a few thousand dollars, you can get this horse. The hardest part of buying the horse was convincing the production team that we actually weren't joking, because Jordy sends it in the chat, hey, we need a horse statue. And they're like, oh, this is funny. And we were like, no, we're serious. Okay, he's joking. And we're like, no, actually, go figure it out. This is your job. Anyway, we're not here to talk about. Congratulations on the horse. We got a horse. You got a big milestone. The horse was here last time, but it was wrapped in Christmas lights and we had a massive Christmas tree and there were so many other things going on in the studio. You gave us that very nice Christmas ornament, which we love. You signed it, but it was distracting from the horse. But now the horse is front and center. But more importantly, your direct revenue is front and center. Give us the news. Yeah. So we've reached a billion dollar annual run rate on our. Oh, wow. Thank you. Incredible. Is the job finished? And 25 million subscribers, that's huge.
So I think that's like ESPN-size for subscribers, which is like, next stop Hulu. But it's really exciting for us as we work to diversify our revenue and create this whole new business line. It was a narrative violation too around social media, right? People will pay for entertainment products, but there hasn't been a bunch of scaled social media products with that kind of SaaS. Yeah. What was the reaction like when you initially launched subscriptions? Well, I think what's so cool is people are really passionate about Snapchat, and so they want all these new features, and they're asking us all the time, like, hey, can we have chat backgrounds, or can we have a Bitmoji pet, or whatever it is. And in the past we would be like, oh man, this is really a feature for power users. We can't build this for a billion people. So this gave us the justification and the resources: okay, fine, pay us two bucks a month, have your Bitmoji pets and your chat backgrounds and all this fun stuff. And so people just keep the requests coming, we keep building all sorts of fun stuff, and actually it's great for the team, because otherwise we never would have prioritized all these really fun features. How do people actually submit requests? Literally email. Email customer support, you know, and we do research and things. There's a lot of the tech companies, it feels like they're. They don't snap you. Well, it's evan@snap.com, so it's a little too easy to. It goes straight to my phone too. We love that. Dangerous. Dangerous to say that on the show. So yeah, what has been the key to scaling revenue there? Has it been just driving increases in ARPU, or just onboarding more and more people into the premium product, doing more top of funnel, like bringing in these prosumer users as net new users, re-engaging people? What's been working? Yeah, a big focus has just been continually dropping new features, letting people know about those features, creating new entry points into the subscription service through those features. One of the big things that we rolled out last year was memory storage. We've got people who are storing a lot of memories on Snapchat, so we give like 5 gigs free. But if people want more than 5 gigs of storage, then they can either pay just for the memory storage or they can join Snapchat Plus. It's so funny, because I remember when Snapchat started, everyone was like, oh, this is genius, they don't have to have any cloud storage costs in the business. And now it's like, oh, well, we're in that business, but we have a monetization scheme on top of it. Well, it's 10 years later. It turns out there's a lot of cloud storage costs. We were paying them. Yeah, I've seen the Google bill. Suffering. Have you seen demand for AI features? And what does demand for AI features in a consumer context look like? Absolutely. I think one of the really exciting things is people don't realize how widely used the Snapchat camera is for generative AI images and videos. I think in Q4, 700 million people used generative AI lenses. So those are our camera editing tools. And we have a feature called Lens Plus, which basically takes some of the most cutting-edge gen AI features, video generation, those sorts of things, and puts them behind a paywall. So if you want the most advanced gen AI image and video editing features, that's part of Lens Plus.
How have you thought about using the camera as the end-to-end editing suite versus sort of bifurcating them? Looking at how TikTok and CapCut are separate, Edits and Instagram are separate. When do you want to go separate? When do you want to consolidate everything? Generally for Snapchat, one of our strengths is how much content is actually created in our camera, because it's much more authentic. And what we find today, especially because everything is so overly edited and stylized, because everything is created with generative AI, is that what our community tells us all the time is they want authentic, original content. So for us, we really focus on stuff that's actually captured and made in our camera rather than uploaded. And even as we think about the types of content we distribute on Spotlight, or, you know, Boost and Spotlight for example, we're thinking a lot about what's actually made in the Snapchat camera. And this stuff is hilarious, but not edited in the same way that it might be on other platforms. Have you thought about how agents will take hold in a social media app? I've been trying to think about this, like the Manus acquisition. What does it look like if I have an agent that can go and work its way through my social network profile? I could kind of tell it to like every comment that comes in that's positive, and it could do some sentiment analysis. I don't know if I actually want that. How does that play out once you get to models that can actually run in the background? It's not just, I ask a question, I get an answer, or I say replace the background with a beautiful forest and it does it. Have you thought about any of that yet? We brought My AI into Snapchat early on, and that was a great proving ground to experiment with things like personality, memory, all those sorts of things, or being able to bring My AI into group chats or conversations. So I do think it's been useful in that regard. Certainly the My AI use case is very utilitarian. We see a lot of just questions, homework help, that kind of stuff. On the agentic side, I'm much more excited about what's happening inside the business. I think the potential for business transformation is off the charts. If you look at small and medium-sized companies over the last 10, 20 years, they've almost been left for dead. Everyone's been so excited about mega-cap companies. I think over the next 10 or 20 years, the efficiencies that small and medium-sized businesses can drive to grow using agents are going to be off the charts. So that, to me, is where I'm most excited. So is anyone writing code anymore? Are you vibe coding stuff now? We see every level of the spectrum. It's either the CEOs or no one's writing code. It's always at the extremes. What's your experience? It's not vibe coding anymore, it's agentic engineering. Yes, yes, yes. I think what's really interesting about what we're seeing at Snap is that, to some degree, because the company's been around for a while, it's operating at two speeds. There are team members who have fully embraced agentic engineering and who are essentially not writing code. And then there are other teams that are still operating in a more traditional way. So because this change is happening so quickly, one of the things we're very focused on is driving these tools through the company, really making sure that folks embrace this new way of working, helping train folks to do that.
Because, you know, certainly for quite a number of folks, they are not writing code the way they used to. What's the next development that you're most excited about? Is it just understanding the code at a deeper level, sort of a higher-IQ model, or thinking in systems and scalability? It's not some tiny app; there are so many users on here, and one tiny change can have massive ramifications across a database. The stakes are a lot higher. So what are you looking for as the AI tools advance to actually improve your business? Yeah, for us right now it's really about building agents across the enterprise. So whether it's somebody reports a bug, the agent goes out, figures out who else has reported the bug, actually tries to go figure it out, proposes a fix, those sorts of things. Or you look at our sales team and all the work they're doing, everything from really trying to understand a client's objectives, generating insights for them, putting that together into a presentation, mapping it to our advertising solutions. I think there's just a huge opportunity. Yeah. Even thinking, an advertiser that spends $20,000 a year with you guys should eventually, in the relatively near term, get the same type of presentation and almost high-touch experience that somebody spending $10 million gets. Right. And a lot of that is people making slide decks, really being on it, being timely with feedback, things like that. That feels very within reach with agents. Yeah. Walk me through the current pitch to advertisers. When you meet with a new big company, I'm sure everyone's used your advertising product at this point, but let's assume there's some new hot company that is growing, they have a physical product or something, and they want to grow their customer base, grow their reach. How are you positioning your offering on the advertising side? A nearly billion-user platform, including more than 110 million monthly active users right here in the United States, in this really, really important 13-to-34 demographic. And the reason why that demographic is so important is because they are forming lifelong relationships with brands and with products, not to mention they're making their first car purchase, their first home mortgage, even their first tube of toothpaste. Right. So I think those are the sorts of really important long-term brand relationships that are so critical. I'm sure I'm still using the same toothpaste brand I bought when I was 19 or something. I have not churned from that company. LTV is probably through the roof. So do you have to stress the importance of thinking not in ROAS, not in a one-year LTV payback, but about capturing a customer that could stick around for a decade or more because they're a younger audience? Is that something that's resonating? I think that fundamentally ROAS is critically important. It's something that we really optimize towards, and a lot of people use lower-funnel objectives on Snapchat. That's been a huge driver of growth for us, especially with the small and medium customer segment, because folks are very sensitive to their return on investment.
But when I talk to advertisers about why they love using us, it's always the new customer metric. That is why they're coming to Snapchat. They say, if I look at the percentage of new customers I'm getting when I spend on Snapchat, that really moves the needle for my business, and it moves the needle for me over the long term. And it feels less like a tax, which happens on some other platforms, where it's like, these people were already coming to me. They were looking for me and I had to pay for that. It's frustrating. Jordy. Yeah. What are you excited about in AI hardware broadly, and everything that you guys are working on? It feels like this will be a massive year for hardware across the board. You've got Apple; people have been reporting, I think this week, on multiple new hardware devices. There was that somewhat believable-looking OpenAI. Oh yeah, the OpenAI ad. But anyways, a lot of action going on, a lot of energy and excitement. You guys are in a good position because you've been working on it for the better part of a decade now. Yeah, I mean, it's a transformational year for Snap in this regard. We just spun out Specs into its own standalone subsidiary, so it's really going from R&D science project to real company after almost 12 years. So that is really important for us. I think it intersects with some of the things that we were just talking about in terms of the evolution of AI, because one of the biggest things I think folks have been concerned about when it comes to building a new computing platform is how to compete with the lock-in that the app stores have. How can you possibly compete with all these other app stores? And I think literally at the beginning of this year people realized software isn't a moat anymore, that having an app store isn't a moat anymore, because it's so easy to build software. You can even build software on the fly. And that to me is really exciting, and it's coming at an amazing moment. What can you do to lean in? I saw somebody had hacked a pair of smart glasses to work with OpenClaw. And is there anything that you would do on that front to really lean into the whole hacker movement around AI? Because you guys are building a bunch of experiences internally. We've used a lot of them, they're very cool. But at the same time, opening it up and saying, hey, this is a platform that anybody can build on. And we've seen this even with the Mac Mini movement; Mac Minis are starting to sell out in different points around the country. People are obviously willing to spend real money to experiment around all these products. I think the big thing that you're sort of circling is that the way people are using their computers is really changing, and they're really just supervising agents doing work for them. And that is a perfect fit for Specs, because the whole idea is to stop spending all this time hunched over your laptop or staring at this. We were at breakfast this morning, and this guy, I didn't take a picture because it would have been rude, a violation of his privacy, but his posture was literally, like, the most insane posture. He needed. He's getting ac one way or another. It's peak performance. He's getting fired. Calling it now. Like, I know we're going to see this guy. But yeah.
It feels like what you're imagining, which is that we can all just spend all of our day walking around being productive, monitoring agents in kind of a heads-up display, feels within reach. It does. Finally. Yeah. It's incredible, exciting. So it's cool that all of this stuff is coming together in this moment. And I think it is super important for us to be investing in hardware because, as we talked about, software's not a moat anymore. So it becomes even more important to do very hard things in the real world at a time when software is being disrupted in this way. Yeah. But at the same time, I feel like you guys are in a unique position, because, well, looking at the video game industry, I think that AI will present a pretty huge challenge for a lot of these smaller studios that were just in the business of spending a few years making a game and releasing it. If it becomes way easier to build a game, that's bad for them. But it might be great for a Fortnite or a Roblox or one of these platforms that have these big existing social networks. And so I feel like your core business of actually still being a social network where people are connecting with other real humans is in a good position. And so, yeah, the hardware is just a bonus. One of the things we've been thinking a lot about, both with the friend graph and in terms of our distribution, is how do we leverage all these amazing tools to just start building more apps? One of the things that we always love to do is come up with new ideas. In the past we've been like, oh, we've got a great idea, but we've got to build it. Oh man. Right now, I was just looking today, we have a really great. I mean, I shouldn't even be saying anything. Something coming soon. Anyway, we've got a wild new idea that we're working on. Well, yeah, I was gonna ask, have you been pitched on the vibe coding? Do people want to be able to send an app around, basically, that they just prompt on the fly? I think so. And I think what's gonna be really interesting about all these companies is that before, so many of their resources were dedicated to engineering. Now I think people are gonna be much more focused on marketing, on distribution. Right. And that's a big shift in the way that. Being able to send somebody an app that you just made for them, where historically you would have just sent a funny picture to your friend. This was my experience sending you Soras of me walking on the beach in the Volta-themed suit. I was able to make an in-joke with me and my three friends in a group chat. That would not do well on social media broadly; you have to have all this context. But because the cost of generating new content had dropped so low, I could do something that didn't require a costume department and cameras getting set up or all those different things. Talk a little bit more about gen AI. There's a lot of crazy video models releasing. It seems like a lot of companies are moving fast and breaking a lot of things, particularly in, like, Hollywood. Hollywood's not happy about this stuff. What does the responsible rollout of generative video features look like? What a great question. So I think for us, we have a lot of safeguards in place, both around basic things, like you shouldn't be able to make somebody nude or put them in a compromising position or that sort of thing, and you shouldn't reproduce copyrighted content, those sorts of things.
So as we look at even open-prompt experiences, where people can ask to create different sorts of photos and videos, we try to layer in safeguards to prevent that sort of thing from happening, and we do a lot of adversarial testing to make sure that it is unlikely to happen. And then in terms of lenses, sponsored lenses, different experiences that partners are bringing to the platform, I'm interested to know what the shape of the ecosystem is. Are there random developers out there who are making things and then earning some sort of rev share from that? Is there actually a flywheel there, or is it more like you're working with a brand, they're going to do a sponsored lens, and your team is developing that for them? What does that side of the business look like? Yeah, it absolutely spans the spectrum. So there's a hundred thousand, more than, well, gosh, maybe 400,000 developers now who have built lenses for Snapchat, and increasingly for Specs as well. Those developers can apply to include their lenses in Lens Plus and earn a revenue share from that if people are engaging with their lens, that kind of thing. And then we've got a whole internal studio as well, so we can work with advertisers if they want to build a unique experience, or we have tons of partner studios who we can connect them with. But, you know, increasingly there's a tool called Easy Lens. It's pretty fun if you pull it up and just play with it. You can build a lens from a prompt. That's what I was going to ask. I imagine that has to be acceleratory for you, right? It's huge for us, yes. And again, we're thinking a lot about how to connect them. You know, we have Lens Studio, which is more of the pro tool, and then we have Easy Lens, which allows anyone to create with a prompt. But I think even Lens Studio itself is going to become much more oriented around agentic engineering. Yeah, because you just prompt it and it's wiring up whatever your domain-specific language is. Very interesting. Yeah. Are you starting to see an actual kink in the graph of lenses being deployed yet, or do you think that's something that comes once people realize that it's actually easier to get into? Yeah, it's so interesting. There's some dynamic on the Internet that I feel like is somewhat real, where the funnier you think something is, the less likely it is to actually be viral. Yeah. I jokingly call this the Hayes paradox, which is also a Hayes paradox. It doesn't work. But yeah, it's this idea that something that is actually the funniest thing you see on the Internet for an entire year might have a TAM of close to one, or your group chat. But that's great if you're talking about giving people tools that allow them to generate hyper-personalized things that are only funny to them or a handful of their closest friends. It actually gives you the ability to create a lot more joy on your platform. And that's exactly what we see with gen AI: people are using it for these more communication-oriented use cases. But on the content consumption side, it's the authentic, original, unedited, non-AI content that does super well. So definitely a major contrast there. And as it pertains to Easy Lens, when we started rolling that out, we saw a huge step change in the lenses that were being created. So that's become a big focus for us.
But I also think you can imagine, in a not-so-distant future, I mean, right now, the way the models work, it's very hard to scale real-time image transformations, which are one of the reasons why I think people love lenses, because it almost feels like you're looking in a mirror and transforming what you look like. I think in the not-so-distant future, a lot of lenses will just be prompts, and those prompts are going to be shareable. And we have a whole feature right now around the Imagine lens and some of our other generative lenses where we have trending prompts, and you can share prompts with your friends and iterate on them. And I think, again, kind of tying back to the importance of the friend graph, you see that intersection of people creating these inside jokes but then also being able to really easily share them with their friends, have people create their own content inspired by those prompts. It's pretty cool. Yeah, that seems really, really important. I mean, whenever one of these new image models goes viral, there's always some ground-truth meme, human element that's underlying it. I think of the Studio Ghibli moment. It's like, I've seen cartoons, I could just go look at a cartoon, but I haven't seen a cartoon of me. And so there's a little bit of that in there. So the lenses make perfect sense. In that case, let's get the California update. What's on your mind broadly in the state? Going better than ever, right? It seems to just get better and better. I don't know. I mean, if we didn't have this weather, we'd really be in a tough spot. I mean, it's incredible what we get away with. It really is. It's got to be a big piece of it. The weather is fantastic, although yesterday was a little rainy. Yeah, yeah. Basically California has, like, karma for forever flexing the weather on the rest of the country. Yes. Like, how many times have I messaged a friend and they'll say, oh, it's like five degrees out right now. And I'm like, oh, it's like 71, and it's gonna be that way all week. And so in exchange, in exchange, we get, like, the gnarliest political environment. It's a little chilly today. It's 56. That's sweatshirt weather. It is freezing. It's absolutely freezing. Stay indoors. This is crazy. But, yeah, broadly, how do you think California's going? Yeah, I'm concerned. What gives me some optimism is that it looks like more and more people are increasingly concerned. What I was most worried about, even six to nine months ago, was the number of people that thought things were going really well in California because they were contrasting it with what they were seeing at the federal level and feeling like, oh, California is better, it seems like there's less chaos here or whatever, because we've got a single-party state, essentially. And so I think now more and more people are hearing that we're number one in terms of homelessness, number one in terms of poverty, number one in terms of unemployment. And they're like, whoa, that doesn't line up with the California that I love, that I want to be a part of. How can we change that and fix that? So I think the awareness is really important, because in a democracy, without awareness, you're not going to get change. And so I think, hopefully, that will continue to build.
I think, you know, if Newsom decides to run for president, that's going to, I think, raise even more awareness of California and the challenges that we're facing here, which, again, I think will be healthy. So right now, to me, it's really an awareness game of helping make sure Californians understand, this is not going in a direction that I think we want, and if we want change, then we're going to have to ask for that and advocate for that. But I think that's happening more and more, which gives me some hope. Yeah, makes a lot of sense. I want to ask about live streaming. How do you think about it? It feels like it's having a little bit of a moment with the Clavicular stuff. I don't know if you even tracked any of this. Clavicular. I love this. So locked in. So locked in. You basically got frame-mogged. It's a big deal, no? Yeah. So you're running, you know, a billion users on the Internet, but there's basically a number of creators on Kick that have generated probably 100 billion views, like some absurd number. The one John referenced is from the looksmaxxing community, which is effectively guys that try to be as good-looking as they possibly can. So it's basically this whole drama, kind of like WWE brought to something very Internet-native. There's all these different characters. One of them is running effectively a 24/7 stream. Whenever he's sleeping, he's streaming. So it feels like IRL live streaming is just having a really big moment right now. How have you thought about it historically? Does it play any type of role in Snap's future, or is it not something that the users actually want? We decided to step our way there, essentially, with creator subscriptions. So we just started testing creator subscriptions with a small group of creators. What we find on Snapchat is that people have very, very loyal relationships with creators. So once they subscribe, they want to come back every day and see what's new on their story and message with them and really get to know them better and build this deeper relationship. So we thought creator subscriptions was a good extension of what we're already seeing happen on Snapchat. We're going to test that out and see how that goes. But that will give us some of the infrastructure to start thinking about stepping from there into. And that would be an individual creator just going live, talking with their existing follower base. Yeah, potentially to start with their existing subscriber base, not necessarily even their followers more broadly, but their existing subscriber base, and then layering in some of the replying and gifting and those sorts of things before opening it up more broadly. How have you. Yeah, I feel like my personal stance is that Twitch as a platform, since landing at Amazon, has just not gotten the attention that I think it potentially deserves. It created an opportunity for the Kicks of the world to step in. Well, I mean, I haven't seen Andy Jassy live streaming on Twitch once. Yeah. And so driving the channel. I would love to see that. I would love to see earnings on Twitch, obviously. Just do it. That would be cool. Yeah. So you guys should fire up your. And do just an earnings call on the platform. It would be. I want the CFO there explaining the whole financial model, what happened. Keep it wonky. It's fine. That's what Twitch is.
Every social media platform is like a flourishing of niches, and you find your niche, and there's going to be someone there who's like, yeah, this is amazing. Switching gears a little bit, you mentioned subscribers are getting close to Hulu numbers. How have you been processing the somewhat polished, produced, vertical short-form trend that's sort of happening? I see them in the App Store. I haven't been a user really, but you're familiar with what I'm talking about. ReelShort is one of them. They seem to be popular. There's obviously some organic creation that's happening on a fully UGC platform. But how have you thought about that? It feels like sort of the revenge of Quibi in some ways. But where do you think that goes? How important is that? Is there a role for you to play in that ecosystem? Yeah, I know that we've got a lot of advertising partners who are marketing the short-form videos on Snapchat. And I mean, those ads are so, sometimes I find myself watching one for like a minute and I'm like, oh my God, who killed her? That's always what it is. So that seems to be working. But when we experimented with shows many, many years ago, I think for us we just found that the volume of creativity, the billions of snaps being created in the Snapchat camera, meant that Snapchat is at its heart about UGC, fundamentally. And in the places where we've really allowed UGC and creators to flourish, and actually where we didn't do as many shows or as many publisher stories, we really built a very vibrant, organic creator ecosystem. So in the ensuing few years we've pulled back from doing that sort of more premium content, because what we see our community love is connecting with the folks that feel like they're next door. Yeah. And I mean, YouTube did the same thing with YouTube Red. They had a whole bunch of produced things, and you'd see it. I'd be like, my favorite creator got paid a bunch of money to do something produced and it's getting fewer views, because I actually just want him to turn on the camera and just talk. I don't need all that other stuff, because that's not what I'm there for. And I think recognizing the desire of the user, the context, all of that really matters; the medium is the message. Right. How do you allocate your time in 2026? Where are you spending time, and how has that kind of changed over the years? I mean, at a high level, it's probably 80/20 Snapchat and Specs. I think that's going to have to start shifting this year, maybe not totally 50/50, but certainly close to it, just as we ramp up and bring that product out into the world. My happy place is making new stuff with our team. That's what I love to do. Our design reviews, whether that's with Specs or with the Snapchat team, that's really what I love to do. But a lot of what I've had to do over the past couple years is really work closely with the team to rebuild the ad platform and to create this totally new small and medium customer segment of our advertising business. I mean, our business three or four years ago was almost all large customers and highly concentrated in the United States. And when your business is concentrated on a small number of very big customers, it just creates a lot of unhelpful volatility.
And so what we've done since then is, you know, build out a platform that can deliver lower-funnel goals, you know, especially for the small and medium customers, and then really diversify. You don't get to work on fun stuff if you don't have the economic engine. And yeah, last question from me. Virtual reality. You've obviously looked at this, haven't gone super deep in it. Over the weekend I watched The Matrix in VR. I watched it on the Apple Vision Pro. You watched the whole Matrix in VR? No way. No way. Yes. They said no one had ever done that before. I also watched. You made history. It's incredible. You should hit the gong for that. I mean, that's unreal. That's incredible. But I'll do you one better. At, like, five minutes in he, like, takes it off. I know. I didn't, I didn't. I watched the whole thing. I did. I did. I did. I did. But. But I went further. On Saturday night I watched Terminator 2 in VR. I watched back-to-back films. I watched two. And you don't have, like, a ring around here? It's not, it's temporary. It goes away after about an hour. And yes, my wife did say something about it, but what was she doing? Was she there? It was a rare situation where she was out of the house with the kids. And so I just had free time I never have. I just plugged in. But truly, truly, around family, like, you can't use it because it's so antisocial. Even with the eyes, it just doesn't work. So, like, I actually finished watching Terminator 2 just on my phone because it was less antisocial than putting this VR headset back on once the family came home. Anyway, I did successfully watch a full movie in VR. Let it be done. And I did. But I can't tell, and I think I already know the answer now that you're laughing at me. But am I, like, one year early? Am I 10 years early to watching movies in VR, or am I just weird? And it's never gonna happen. You're gonna watch movies in glasses for sure. Okay, right. And I think, like, what. You know, it's so funny. A couple of my buddies, you know, are big into finance, stuff like that. So, like, if we travel together, you know, they'll bring their monitor to, like. Oh, yeah, yeah. Come on. You can't get stuff done without your. You just got the new Dell one. It's like six feet long. That's amazing. You got to have your. You got to have your setup. Yeah. So, yeah, so they travel with like a big monitor or, you know, even two monitors. So I think, like, a lot of the early stuff you're going to see with glasses are people who just want the full setup, but, like, do not want to ship, you know. Sure. A monitor to wherever they're going. So I think, like, if you're traveling or you're, you know, on a plane or something, you want to really get work done. It's so hard to do that on a laptop. So I think you're. I think you're right on time, actually. Right on time. Or a pioneer. Next time you come on, we'll ask, have you watched a full movie in any VR product? And we'll see. We'll see if I'm early or just weird. What do you think about timelines for watching movies in glasses? That's this year. Yeah. And then what about glasses? Is it that you still have some element of the real world? It's not as antisocial. You know, you're not getting the, like. You know, it's not this huge, heavy, closed headset with a screen, you know, right in front of your eyeballs. So, I mean, sorry, it seems like you love it. So the trick is that I was lying perfectly flat in a fully bright room. Yeah. Oh, yeah. This is the other thing. This is the other thing. It needs tracking.
So you have to leave the lights on. I'm not kidding. I'm not kidding. No, no, seriously. So you can't turn off the lights. And so if I get home and my wife wants to go to bed and turn off the lights, I'm like, oh, okay. It doesn't work. No, it doesn't work because it loses the tracking. No way. But if you rest it right on your head just perfectly. And then also the VR headset, it wants to be world-locked. So initially the screen is down here and you have to look like this, and then you have to recenter it up here, and then it appears above you in beautiful 4K, and it's amazing. And you sit there and you watch The Matrix from start to finish. And it's actually a great experience. But it is weird. I don't know if it'll ever happen. But I think the idea of, you know, being able to watch something on a giant display in a lightweight pair of glasses is compelling. Oh, yeah, if it's lightweight. But, like, truly, you actually can't have a VR headset on your face for more than 10 minutes. It's bad. I love this. Super bright. It has to be bright. Like, the ISO on these cameras is so low that even if I just have my lamp on, it's like all fuzzy and noisy. You know, get a warehouse and get a bunch of, like, yoga mats and have the Vision Pros and it's a VR movie theater. You go and you just lay in the bright, super bright. It really is like the most dystopian, antisocial thing you can possibly do. But The Matrix is a great movie, so it was worth it. I sacrificed. Worth it for the Borg. And you broke the record. I did. I did call Guinness. Right now, when you all aren't live, you just move the table, put some mats down, and then you've got this. Yeah, yeah. It's perfectly lit. Anyway, thank you so much for coming on the show. Great to catch up. Thanks for having me. Good to see you. Congrats to the whole team on that. Thank you. Thank you. Leave us five stars on Apple Podcasts and Spotify. Subscribe to our newsletter@tvpn.com and we will see you tomorrow at 11am Pacific. Are you sure you gotta get out of here? Oh, yeah. You wanna keep going? I kinda wanna keep going. I don't know what else is in the news. If we got news, we can do it. We can get to it tomorrow. Okay, we can get to it tomorrow. Thanks for hanging out with us, folks. Thanks for hanging out with us. I love you.
Come get up. We are surrounded by. Gentlemen, hold your position. Strike 1, Strike 2. Activate. Go, go. Retriever mode. Market clearing order inbound. I see multiple journalists on the horizon. Stand by. Founder, you're watching TBPN. Jordy's juggling. It's Wednesday, February 18, 2026. We are live. We are the temple of technology, the fortress of finance, the capital of capital. Let me tell you about ramp.com, baby. Time is money. Save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more all in one place. We have a great show for you today, folks. Specifically, Tyler Cosgrove has been on a little bit of a tear with the market maps. He dropped the final market map. The final market map. We don't need any more market maps because Tyler made a market map that has every company on it. Let's pull up his latest market map. It looks like there was some VC associate out there that was making a market map and was just devastated. All the companies I was going to put on the market map are now on this market map, which is in the timeline, by the way, and it's very blue. And you did select Google. Can you walk us through how you built this particular market map with every company on it? It's every company that has a Wikipedia article, correct? Correct. Yeah. So the one we're showing. That's the wrong one. That's for later. So basically over winter break, actually, I was interested in this thing where, like. Okay, on Wikipedia there's, like, all sorts of, like. Wikipedia, I think, is like a very underrated data source. And there's, like, all sorts of cool things I think you can do. Right, you mean Grokipedia. Right. Well, so Grokipedia is a little different because it's, like, generated on the fly. Right. But basically, whatever. What I ended up doing is I took every Wikipedia article. There's like seven, seven and a half million English ones. And I ran them through an embedding model. It was Qwen3 Embedding 4B, I think. You speak Chinese? Yeah. Wo shuo Zhongwen. He's got it. Okay. But I got an embedding for every single article, right? So it's like, basically every article has a vector. It's like 2,500 dimensions. You did this a while ago, right? The whole Wikipedia embedding, or did you re-embed? This was like a month ago. Yeah, I remember. So then basically I took all the articles, I found all the ones that are about companies, enterprises. Right? Which is, basically you can find some direction in the embedding space that, like, corresponds to how much, like, company-ness something has. Right. So you just find all the ones at the end, really. Oh, you don't filter by, like, Wikipedia's categorization? So I use that. But that's not inclusive of every single company. So it's like a little bit blurry, because some things are like, well, is it a company? Is it not? Yeah, I noticed some, like, railroads on here that looked like maybe they're companies, but they're, like, state-owned and. Yeah. Where is that? It's kind of a blurry thing. So you can't just use just what Wikipedia says, but you can basically find things that are companies, and then you have an embedding for every single one. Right. So it's this big vector, super high-dimensional space. If you map it down to 2D. Okay. You can have this, like, cool 2D map, which is basically what I did. Yeah. So you can see there's these big clusters. Right. So it's like, in the top left, it's all these theater companies, or there's space companies. I noticed the aviation companies were pretty far away from the train companies. Is that intentional?
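For anyone who wants to see roughly what Tyler is describing, here is a minimal sketch of that pipeline: embed every article, estimate a "company-ness" direction from a few labeled examples, filter on it, then squash everything down to 2D. The episode doesn't say which projection method he used; UMAP, the model checkpoint string, and the helper functions below are illustrative assumptions, not his actual code.

```python
# Sketch of the Wikipedia-map pipeline described above. Assumptions (not from the
# episode): sentence-transformers for the embedding model, UMAP for the 2D
# projection, and a handful of labeled seed articles to define "company-ness".
import numpy as np
from sentence_transformers import SentenceTransformer
import umap

model = SentenceTransformer("Qwen/Qwen3-Embedding-4B")  # assumed checkpoint name

def embed_articles(texts: list[str]) -> np.ndarray:
    # One normalized vector per article (a few thousand dimensions at this size).
    return model.encode(texts, normalize_embeddings=True)

def company_direction(company_vecs: np.ndarray, other_vecs: np.ndarray) -> np.ndarray:
    # One simple way to get a "company-ness" direction: the difference between the
    # mean embedding of known companies and the mean embedding of everything else.
    d = company_vecs.mean(axis=0) - other_vecs.mean(axis=0)
    return d / np.linalg.norm(d)

def filter_companies(vecs: np.ndarray, direction: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    # Keep the articles whose projection onto the direction clears a threshold.
    return np.where(vecs @ direction > threshold)[0]

def project_to_2d(vecs: np.ndarray) -> np.ndarray:
    # Collapse the high-dimensional vectors down to 2D for plotting the map.
    return umap.UMAP(n_components=2, metric="cosine").fit_transform(vecs)
```

At seven and a half million articles you would batch the encoding and probably run the projection on the filtered subset only, but the shape of the thing is the same.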
Yeah, I mean, I knew there was kind of like. Yeah, conflict. Rivalry. Yeah, rivalry. They need to be. You got to keep those apart or they'll just start fighting. Like, when you map something down from, like, you know, 2,000 dimensions down to 2D, it's like very hard to keep, yeah, like, a ton of things. And it just randomly looked like the United States. Yeah. That has nothing to do with. So that was totally random, because I looked at it, I was like, oh, okay, there's a lot of companies in Florida, a lot of companies in the Northeast. Yeah. I didn't even, like, realize. I was like, oh, it kind of looks like. And then I was like, what is this? What is this enclave in Canada? Why does that. Is that Alaska or something? But in fact, it has nothing to do with the United States. It just happens to look like the United States. Yeah, but this. So it's actually interactive, so you can, like, look up a company and you can find where it is and stuff. Tylercosgrove.com Wikipedia map.HTML. Wow. Really a wordsmith with the URLs there. Tyler couldn't use a TLD list domain. There are some fun ones in here anyway. That's a fun project. All the links take you to Wikipedia, go check it out. And market maps are basically done. But a lot of the neolabs are not on this market map. Quickly, let me tell you about Restream. One livestream, 30-plus destinations. If you want to multistream, go to restream.com. And let's click over to Tyler's market map of the neolabs, because we've been tracking the neolab boom. We've had a lot of these founders on the show. We came out of the world where we were like, okay, there's DeepMind, there's Google, there's OpenAI. Now we got Anthropic, there's Thinking Machines, and there's a couple different companies. But the neolabs have exploded. They have a term that's been coined. Sarah Guo was actually, I think, the first person that came on the show and sort of broke it down for us around the Christmas episode. But since then, the neolab, like, taxonomy has evolved, and so we needed to build a market map. So, Tyler, take us through what's going on in the world of neolabs these days. Yeah, so neolab is kind of this interesting term. Like, it's very broad. People say, like, neolab. It's not very clear what they mean, because there's, like, broadly. I think it generally. And this will make it clearer. Yes. I think after this, it'll be pretty obvious, like, what, you know, what you should be looking at, how to think about these different companies. Yeah. I don't want to be more confused at the end of this. Yeah, that would be a disaster if that happened. Yeah, that's not going to happen. This is going to be easy. Okay, got it, got it, got it. Cool. Okay, so let's just start. Okay. So you have neolab, right? Yes. So neo, this prefix. Okay. That has to be relative to something. Yes. So neo is relative to, like, your trad lab. This is your big lab. Traditional. This is your. Yeah, this is. You're opening it up for the big labs. Yeah. They don't get enough credit today. Building data centers. Spiking capex. So this is going to be your OpenAI, your DeepMind, your Anthropic, kind of your big lab. Yeah. xAI. xAI kind of fits in there too. Even though it's a newer trad lab, it fits in with the big labs. A lot of money. Dario. I think on Dwarkesh, he was like, yeah, three, maybe four labs. Right. So the fourth is probably xAI. Yep. I think you can also kind of throw Mistral in there. Okay. Oh, yeah. Mistral's a little bit older. Yeah. Yeah. I mean, Mistral.
There's a bunch of these labs that were basically founded in the, like, two or three years before ChatGPT and then in the, like, six months after. Yeah. So I think xAI's in there. And these specifically, these, I feel like those trad labs, it's like they did a transformer-based pre-training run. They have their own base pre-train. Maybe it's not at the frontier, but at least they're playing that game. They're not doing fine-tuning, they're not doing something else. So that's sort of like, you're in the trad lab world when you're thinking about, like, a big pre-train run. Loosely. Yeah. I mean, especially if you're talking about these big pre-trains. It's really just these four. No one else is really at that scale. Yep. But then. Okay, so Mistral kind of brings us down into what I call the sovereign labs. Okay. So I mean, you know, if you kind of look at this, it's basically just labs that are not in America. But I think also that there actually is some meaning to this. So, like, Mistral. You've seen Mistral become kind of the leader in European AI. Right. So I think. Was it Sweden? Maybe they're building a new data center. Yeah, Sweden. So they're kind of becoming, like. Stuff going on in France too. Macron is always talking about Mistral. It's the big leader. Cohere is also kind of. I think that's, like, a very, you know, Canadian. It's a Canadian company. Yeah. Yes. But also has done their own pre-trains. No ties to the curling team, though. Oh, okay. Okay. Okay. Completely no ties. So I don't want them. Yeah, yeah. It's important to put some distance between that scandal. Yeah. And then you can go down, you can kind of see all your Chinese open-source labs. You see your Qwen, DeepSeek, Kimi. Unitree is also in there, right? Unitree. I think so. As we'll see later, there's also, I have a section for, like, robotics labs. Sure. But this is very clearly, like, you know, this is the Chinese. Yeah. Take us back in time now. What was going on before the trad labs broke out? Yeah. So here I have this section. Legacy labs. Okay. So these are ones that are kind of more entrenched in these big enterprises. Yep. So you have stuff like Microsoft Research, AT&T or Bell Labs. Right. Oh, Bell Labs. Yeah. I forgot about Bell Labs after. You know why they call it Bell Labs? Why do they call it Bell Labs? Alexander Graham Bell. Yeah, it was founded by him. Yeah, Bell Labs. Okay. But also you have stuff like, you have FAIR, Facebook AI Research. This was, like, I mean, there's so many, like, OG research papers that came out of FAIR. Yann LeCun used to be head of it before it transitioned into MSL. Okay, so then I think, let's move up here around your trad lab. You also have the poast lab, right? P-O-A-S-T. Yes. These are poasters. Yeah, these are labs where you get a lot of poasters. Right. So obviously this is OpenAI. You got roon. Anthropic, a lot of Sholto posts over there. Poasters. Prime Intellect. They're great poasters. A bunch of anons at Prime Intellect. Doing great stuff over there for sure. And then you kind of get into the proper neolab. Yeah, the proper neolab. Okay. So this is also a bit hard to identify, because, like, what is actually the core of a neolab? What are these different kind of offshoots? I think Prime Intellect is kind of the prototypical, like, quintessential neolab when you think of it, where you basically have. It's like fairly recent. Yeah. It's still very much research focused. Okay. Like, sure, they have enterprise, like, you know. Yeah.
Think about different stuff, but at the core of it, you're still, like, trying to find these, like, new novel approaches. It's research, you're hiring researchers. It's not just, like, engineers, sales guys, et cetera. So let's. Wouldn't Sakana be more of, like, a sovereign lab? Yeah, yeah. I mean, so a lot of these can fit in all different places. Would be. Yeah, Japanese, maybe. Okay. And you put MSL in here because it's a new project. Yeah. This one was also a bit hard. It doesn't feel like a trad lab, because, I mean, maybe it has the scale, but it's just, it's newer, they haven't shipped yet. Neo. New lab. I mean, it's so recent. Definitionally. Thinking Machines is my classic go-to neolab. Yeah. I feel like it's post, post OpenAI exodus, and sort of, OpenAI is nothing without, without its people. You know, you get the spin-outs, and I think Thinking Machines and SSI are two of, like, the first case studies that sort of set the tempo for, okay, it's possible to do some research outside of the big trad labs, and so that's where you get the neolab boom from. And then a lot of the other companies, I feel like, are saying, okay, we're going to do something similar to Thinking Machines or SSI. We're going to commercialize earlier or later. But we're following in that and we're benchmarking to that. Oh, they raised 2 billion, we're raising 200 million. It's easier. There's a 10% chance that we are at their scale. So you can underwrite it that way. Yeah. So Thinking Machines also brings us to what I call the trad SaaS lab. SaaS lab. You have the trad SaaS lab. So I think the way I think about this is the trad SaaS labs are trying to basically use the data that's inside these big enterprises, pull it out with AI. Okay, so this is Thinking Machines. Right. The rumored idea is they're doing RL for enterprise. A bunch of these are doing fairly similar things, where it's kind of chatting with your data, using the data that's very valuable to a company, but it's going to be inside the company. You can't really pull it out anyway besides having the AI be, like, internal. So you have Applied Compute, your Poolside, doing all kinds of similar things in this, in this, like, enterprise LLM field. Yeah. And then that brings us to neo SaaS. Not full base pre-trains for those companies. Mostly fine-tuning or RL on top of a particular company's. Yes. Use case. Yeah. And then I have the neo SaaS lab. This is different than trad SaaS. These are different in that they're not really pulling, they're not going enterprise specific. I think that's one way to look at it. Also much more, like, startup focused. But they're making a product that is sold effectively as SaaS. Yes. So Cursor, Cognition, Windsurf. I have Ramp Labs. Ramp Labs. These are seat-based, sort of consumption-based, but it's a product that's vended into a company. And the product is what you get, and then it sort of customizes as you integrate it. But it's not. The conversation doesn't start with a business development relationship. Yeah. And of course, I mean, these lines are pretty blurry. But then. Okay, let's go down to the post lab. Okay. Post lab. This is after the lab. Yes. So that means, like, basically they train the models, and then these labs are working on top of those models. That's how I think of it. So you have METR, you have Epoch, these are going to do evals, you have Pangram. They're seeing, is the model producing slop? Yes. Or is it producing text that you're using in some way? These are purely eval. They don't necessarily have AI products themselves.
They don't necessarily sell to big business. But they could still be training models. Right. Like, Pangram is training models that sit on top of the lab. That's true. So it counts as a lab. Makes sense. Okay, what else we got? Maybe that brings us down to the safety labs. Yes. So these are pretty interesting. Anthropic kind of fits in this. Right. Because they have a big safety team. They're doing a lot of mechanistic interpretability. You have Goodfire. I think they just raised at like 1.25 billion, and they're just doing mechanistic interpretability. Let's go. Very interesting. EleutherAI is a similar kind of lab. I know. Yeah, yeah. A lot of these are also kind of in the open-source space. Yeah. I think Stable Diffusion came out of EleutherAI. Yeah. This is another label that I think I could have put on. But it's so hard to get everything to, like, open source together. But a lot of these are also, like, the core of the company is doing open source. Sure, sure, sure. Right. So Prime Intellect was a good example. Almost like OpenAI back in the day. But a lot of these have bled together, where OpenAI has an OSS model but also a lot of consumer and enterprise. Yeah, makes sense. Okay, so then in contrast to the SaaS labs. Yeah, we have the consumer labs. Okay. Consumer. These are focused on consumers. Right. So you have Eureka Labs. This is Andrej Karpathy's project. I don't think anything's been released from it yet. Education, though. But yeah, education makes sense for people. You have Humans. Oh, it's four. Four people. Not four individuals working there. It's four. It might be four people. It might be one person. Who knows? He's pretty good. Yeah. You have Humans and. Okay, right. This is the. I think their phrase is, it's like humanity focused. You're going to turn humans into sand. Human sand. Human sand. Yeah. We got to hang out with the founders at the Super Bowl. But they're. But yeah, focused on creating models that work better alongside people. Sure. You have a lot of, like, companions, these kind of ideas. Right. You have Character AI also. Oh, yeah. Do they really own c.ai? What a great domain. If that's true. I don't know. We'll have to fact-check it. Anyway, so then that brings us down to the visual labs. Visual labs, right. So there's a lot of either multimodal. Yeah. Models. Or they're actually, like, producing video or images. Right. You talked to a lot of these founders. Yeah, I feel like almost all of them have come on. Yeah. I mean, World Labs raising today, or fundraise announcement today. Yeah. Midjourney, et cetera. These are pretty obvious. You have your neo auditory lab. Midjourney is the sailboat logo, correct? It's a good logo. Yeah. Okay. You have Meta Reality Labs on there too. Oh, okay. Yeah, yeah, yeah, yeah, yeah. That makes sense. They're visual. Not fully AI yet, but they're getting there. Yep. Okay. You have the neo auditory lab. Okay. Right. So this is going to be anything that has to do with vocals or voice or music? Yes, ElevenLabs. ElevenLabs. Of course. Sponsor of TBPN. Thank you. Suno. Right. Making music. Gemini also released a new model. Yes. Today, Lyria 3. I didn't even know there was a one or two. It's a trilogy already. They just got secret models that they're hiding from us. Yeah. So this is a very interesting field. And then you have your legacy auditory as opposed to your neo auditory. So this is your old ones. This is. Well, John, do you want to talk about Nuance? Nuance Dragon NaturallySpeaking. This is the original boxed software.
You buy it, install it on a Windows computer. You can talk into a microphone and it will write down what you say, dictate it. Yeah. Using some AI, not a large language model at the time, not a transformer-based architecture, but it became a very large company. I think it's part of Microsoft now or something. I think it's been acquired a few times, but yeah, very, very interesting company. A lot of really solid Fruity Loops. Yeah, that's. You're in the lab making beats, I guess. Okay, so now moving up. I think this is really a very interesting section. So this is the neo trad lab. Yes. So I think. What is a neo trad lab? This is the simple. This is a simple definition, clearly. Yeah. Does it even need explaining? I think everyone gets it. Watch your head, by the way. It's coming really close. You might want to be on the other side. To the team. Okay, so neo trad lab. It's a neolab. Yes. But it's traditional. Okay. Okay. So what does that mean? So basically the way I think about a lot of these labs is that they're extremely research focused. Okay. They're also largely. They're focused on, like, kind of a single idea. Yeah. So if you think of, like, OpenAI. Very research focused, obviously, but they're doing a lot of different things. Yeah. Right. So they have consumer. Yeah, they have consumer. But it's even, like, on the product or on the research side. Right. They're doing their video, images. Sora, images. Yeah. But even within language models, I'm sure they have a continual learning team or all these weird moonshot things, where I think a lot of these neo trad labs are basically focused on one single moonshot idea. Okay, so example, flapping airplanes. Right. They just came on. They're talking about data fitness. This is kind of the one kind of moonshot idea. Right. Obviously it's, like, a very general problem. A bunch of different ways you could tackle it, but they're like, that's the problem that we're going after. But it's one specific thing they're working on. Yep. And I mean, they talk about, oh, you know, if we figure it out, there'll be some value, but we're not exactly sure how it's going to come out, like, right now, and we're not sure how we're going to productize it necessarily, but we have. Really? Yeah. So the idea is, like, if these labs can figure out, like, the core research idea, then the value will appear. Right. So you also heard this out of Ilya with SSI. Right. Not sure how they're going to get revenue, but it'll come if they figure out a breakthrough in continual learning. If you build it, they will come. Yes. Yeah. A lot of interesting things here. So we can look at, like, okay, General Intuition. Yes. They're basically doing a lot of multimodal training where they can basically take video game data and try to figure out how to map it onto LLMs or world models or these types of things. Okay, you have Inception. I believe they're doing dream tech. Wait, okay, I'm thinking of Logical Intelligence. They're doing, like, diffusion models. Okay. Right. So diffusion, but for LLMs, but for text. Yeah, we've seen a demo from Google on that too. Okay. Inception is doing, I think they're doing the energy-based models, which is kind of this weird thing. Okay, wait, I have both those companies flipped again. So Yann LeCun is into this. It's simple. I mean, I don't know why you're flipping stuff around. This is literally just neolab 101 and you're doing a basic breakdown.
The point is that they're doing these kind of weird architectures, where, like, an energy-based model is kind of different than a normal LLM, where you have this normal backprop stuff, like this. But the point is that these are all, like, very kind of weird, like, architectures that they're working on. So maybe the big labs have, like, small teams that are working on this stuff. But basically these people go out of the big lab. A lot of them are coming out of the big labs. Sure. And they're starting these new projects, like. Coming out of a trad lab or a. Or a neolab, or a neo or legacy lab. A neo SaaS lab. Exactly. Okay, got it. Yeah. Okay, so now let's move up a little bit. Yeah. What is Neolab Lab? Neolab Lab. Okay, so this is. Yeah, I like this one. So these are a lot of companies that are focusing on. They're also, like, very research focused, but the point of the research is to build essentially, like, a researcher. So they're recursive. Right. Okay, so you have recursive and recursive. Yeah. You have actually two that are recursive and recursive. You have Richard Socher, you have Periodic Labs, where they're a little bit more focused on the hardware. But the whole point is that they have this kind of closed loop where you can basically build a lab within the lab. Right. That's the whole point. Lab lab. You're building a lab. Got it. Unconventional AI. Similar thing. I think the product will be a lab. They're in the lab manufacturing business. Correct. Got it. Yes. Okay, moving up. We have math lab. Yes. So these are pretty interesting. Axiom and Harmonic. Yes. And then you have MATLAB. Yes. But these are pretty cool. There's been a lot of good breakthroughs recently. I think there's a bunch of Erdős problems that are being solved, or maybe they're just being proven in some ways, but there's a lot of, like, interesting research coming out of these. Harmonic is Vlad Tenev, the founder of Robinhood. Yes. Correct. Yes, yes. Wet labs. Yeah, wet labs. Okay, so these are your biolabs. Oh, you got LabCorp. Yeah, I'm familiar with LabCorp. LabCorp. But there's a lot of biology-focused labs. It's actually, like, I didn't know a lot of. I didn't know a lot about a lot of these, but there's all sorts of interesting research. So Isomorphic Labs, this was spun out of, I believe, DeepMind, or at least Google. Yeah, that's right. They're working on longevity and just drug development, almost. So some of these are very focused on specific forms of drug development. Some of them are just, like, broader, where they're very focused on longevity stuff. Yeah, cool. And then. Yeah, let's go to. Yeah. What's going on up. Oh, yeah, up top we have Labrador. Oh, that's really important if you want to understand labs. So you got these, you got your. You got the foundation, the white Lab. Your black Lab, your chocolate Lab. Chocolate Lab. Yeah, chocolate Labs are important. Yeah. If you want to understand labs. Broadly. Yeah. Okay. Then moving back down, we have the neo kinetic lab. Okay. So these are going to be your labs that are more focused on robotics. Yes. So you have a bunch. You have Project Prometheus. Yes. This is Bezos's lab. It's still kind of in stealth; there's not even a logo for it. Yeah. You have Figure, you have Skild AI. Skild AI is the Luke Metro project. Yes, got it. Yes. Physical Intelligence, Sunday. Right. These, these are all your kind of neo kinetic labs. Right. These are started fairly recently, in the past, like, maybe four or five years. Broadly, the Neo NEO Lab. Neo NEO Lab. Right. Okay.
So 1X is building NEO robots. So they're the Neo NEO Lab. Makes sense. Yeah. And then Legacy Kinetic is the previous. Legacy Kinetic is kind of the old gen. Yeah. But cooking, they're cooking. Waymo's cooking. Yeah. Cruise, Boston Dynamics have been a little bit behind. Zoox, also another self-driving car. There's a bunch in here that I. Could have another one in stealth, I think, that never really hit inflection. Okay. Yeah. And then you have your mostly vehicle focused. You have your dark lab. Yes. So this is working with the government. Yeah, I have Shield AI. I also have DARPA. DARPA is a lab. Yeah. They invented the Internet, right? GPS. Yeah. DARPAnet. Yes. That's good. And then simulation lab. I think that. Simulation lab. Yes. So Simile, we just had them on. SpaceX you could put up there. Aren't they working on this Pentagon thing? Yeah. Where's Rocket Lab? Rocket Lab needs to be on there. That's a lab. There's a lot of labs. There's a lot of labs. I mean, yeah, lab is very, very broad. Very broad term. Well, at least it's crystal clear now for everyone. Yeah. So I think this should be pretty obvious to anyone who's thinking about neolabs, like, how should we think about them now? If you've been paying attention, this is all second nature to you. Yeah. Did you add up how much all the companies have raised? It's got to be north of 200 billion. Yeah, it's a lot. I mean, so I didn't do that. But for a while I was trying to figure out how to do the valuations on the map. It was too complicated. You didn't feel like you could do the math? No, we don't know how. Well, it's also, a lot of them are rumored. It's actually kind of hard to find out, because a lot of these are still really in stealth. A lot of these neo trad labs, they basically. Because the whole point is that they're doing this research stuff. Yeah. They're not going to, like, productize early. Yeah. And also, how much do you put in the DeepMind bucket? That's a huge amount of investment and it's not exactly disclosed. Do you count the TPUs, do you count Google Cloud? Like, different allocations. You can go really deep in the stack to understand the impact of, like, the broad AI build-out. But yeah, I mean, if you just total this up, you can really just do xAI, OpenAI, Anthropic and get, like, 90% of the way there, and it's probably like 200 billion. It's also hard because it's, like, evolving so fast. Right. So David Silver's lab, he used to be at DeepMind. Oh, I like Ineffable. Ineffable Intelligence. Yeah. I think that was rumored today. Yeah, it's indescribable. Yeah. But these things are coming out, like, every day. Right. You put the typos in just to prove that humans made it. So, like, Sovereign Lab and then Ineffable Intelligence also has a typo, and so I just want to make sure. I wanted to make sure. Yeah, you put the typos in so that it was proof that you made it. Yeah, yeah, I don't want. Well, yeah, whatever you built this in doesn't have spellcheck, I guess. Anyway, fantastic report. Thanks for breaking it down. Great stuff. I learned a lot and I hope you did too. And let me tell you about a lab, Gemini 3 Pro. It's Google's most intelligent model yet. State-of-the-art reasoning, next-level vibe coding and deep multimodal understanding. And I'm also going to tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. One show, two maps. One show, two maps. Strong start.
Should we break down five wildly obvious fixes that will explode consumer LLM adoption? They don't want you to know this over at the big labs, but I have some ideas. Basically everyone's been really focused on agentic coding and the SaaSpocalypse and what's happening in the business-to-business world and the enterprise world. I've just been sort of, like, thinking back on, you know, basic improvements to the chat apps that I use all the time, because there's some really obvious stuff that I think is in the works and I think it's coming, but I wanted to just sort of, like, get it all down in one place to think about what the next iteration and the next breakout moment would look like, when people are like, oh, I'm using them even more, I'm having a better experience. So the first thing is that I realized that I've asked ChatGPT just, when was OpenAI founded? Three different times. It's the exact same query. Like, it doesn't need to light the GPUs on fire for that question. The answer literally never changes. You can cache the result. And that's what Google does with those knowledge queries, knowledge panels. And there's a whole bunch of different ways to deliver results that are sort of pre-cached. And if you look at when an LLM launches, basically every question has never been asked before, but now there's a lot of people that are just showing up with the exact same question. Give me the history of the Roman Empire, give me the history of this company. And you might not be the first person to ever ask that question, exactly. But also, if you do a little bit of fuzzy search over it, there's probably, like, hundreds of thousands of people that have asked the exact same thing. So cache those results, give them to the user instantly. And I think this, like, instantaneous feeling of LLMs. Like, they felt slow for a really long time. They actually got slower. Like, it was always sort of slow. You watch the token stream in, but then once the reasoning models and the thinking models and the deep research and the o3 Pro came out, it was like really slow. It was like, close your phone and come back in 20 minutes. That doesn't have to be the end state, and I don't think it will be. And I have no better example than the number two on my list, which is Cerebras inference. So ChatGPT currently has a model called 5.2 Instant and it is not instant at all. I fired off a prompt to 5.2 Instant and I said, no reasoning, tell me the history of LLMs. It took 38 seconds to deliver the full response, for all the tokens to stream in. And it does a good job. It shows you images and stuff and it is a cool illustration. But then I went over to Codex Desktop and I fired up GPT 5.3 Codex Spark Low, which is a crazy name which we'll get to, and it responded in under two seconds, because, from what we know, Spark is incredibly quick, it's Cerebras, and it's very, very fast. And so everyone's obsessed with the fast models in the agentic coding world, because you're waiting a half an hour for something to get back to you, you're waiting five minutes for something to get back to you, and you're actually losing your train of thought. But I think that applies in consumer as well. And I think the interaction of sending a message and then just immediately getting a response, before you actually think, oh, well, it's waiting. I'll close the app, I'll check my messages. Oh, I got an Instagram notification. Let me go over there.
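Since the first and third items on this list are really one system, here is a minimal sketch of the fuzzy-cache idea, plus a toy version of the router that comes up a bit further down. The similarity threshold, the embedding model, and the answer_with_llm / run_deep_research stubs are all assumptions for illustration, not any lab's actual serving stack.

```python
# Minimal semantic cache for repeated consumer queries, plus a toy router.
# Everything here is illustrative: the threshold, the model choice, and the
# stubbed-out model calls are assumptions, not a real product's internals.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small and fast; assumed choice

class SemanticCache:
    def __init__(self, threshold: float = 0.92):
        self.threshold = threshold
        self.keys: list[np.ndarray] = []   # query embeddings
        self.values: list[str] = []        # cached answers

    def get(self, query: str) -> str | None:
        if not self.keys:
            return None
        q = embedder.encode(query, normalize_embeddings=True)
        sims = np.stack(self.keys) @ q
        best = int(np.argmax(sims))
        # Fuzzy match: near-duplicate phrasings of the same question hit the cache.
        return self.values[best] if sims[best] >= self.threshold else None

    def put(self, query: str, answer: str) -> None:
        self.keys.append(embedder.encode(query, normalize_embeddings=True))
        self.values.append(answer)

def answer_with_llm(query: str, tier: str) -> str:
    # Stand-in for a real model call against the chosen tier.
    return f"[{tier}] answer to: {query}"

def run_deep_research(query: str) -> str:
    # Stand-in for the slow, multi-step research workflow.
    return f"[deep research] long report on: {query}"

def route(query: str, cache: SemanticCache) -> str:
    # Cached answer if a near-duplicate exists; otherwise guess the effort level.
    cached = cache.get(query)
    if cached is not None:
        return cached
    if any(k in query.lower() for k in ("research", "thorough report", "compare in depth")):
        answer = run_deep_research(query)
    elif len(query.split()) < 12:
        answer = answer_with_llm(query, tier="instant")
    else:
        answer = answer_with_llm(query, tier="thinking")
    cache.put(query, answer)
    return answer
```

So "When was OpenAI founded?" burns GPU time exactly once; the second and third time it comes straight back from the cache.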
Instant responses will keep people in the apps longer, and user minutes will actually increase once that rolls out. So, pretty simple implementation for, I think, most companies. My big question is, I know Google has a huge advantage with TPUs, but I don't know if they have an answer to Cerebras specifically. And Nvidia just bought Groq, which can do, I think, some of the same things. So I'm curious to know how every lab solves the fast-responses question, because that feels like an important piece of the puzzle. It's not the only piece of the puzzle, but it's an important feature, and I think we're going to see it rolling out to consumer LLMs very, very soon. And I do think it'll be an interesting moment for people to both ask a question and just, boom, it's as fast as going to Wikipedia, and just seeing, like, okay, everything's rendered, it's thoughtful, it's what you want. And on the flip side, I think that it could make people a lot more chatty with them, like, actually asking follow-up questions, because you don't feel that cost of, like, oh, if I ask it to follow up and tell me more, or go a different direction, like, I have to wait, I have to wait. So I might just close the app. Speaking of GPT 5.3 Codex Spark Low. No more model names, like, truly, no more model names in consumer AI LLM chat apps, like, ever. Just bury them so deep in the UI that you never see them, and people will complain. People will be like, I wanted it to be easier to pick. I like picking between pro and thinking and fast and instant. I know what I want for everything. People will complain, but it will all inspire the model routing team to grind harder. And the model routing team has a hard job to do, but they will eventually figure it out, and eventually you should be able to just talk to the model. You can already do this in ChatGPT. You can say, hey, think really hard about this question and give me a really thorough answer. And it'll switch from instant to thinking. I don't know that you can trigger Pro from that. I haven't actually experienced that. I did try and trigger a deep research report. I said, hey, please, deep research the Roman Empire for me. And it does not fire off a deep research report. Deep research is buried under, like, a plus button. And you have to select it and say, okay, I actually want you to do this thing. And then it takes you down the deep research, like, workflow, which I understand is, like, for inference reasons. They don't just want you firing off deep research reports all the time. But I think in the future, like, the model router should be very intelligent about, okay, this is a question that people have asked thousands of times. Let's just go get it from a database. Which is crazy to think, in the age of AI, you wouldn't even be hitting a GPU. But I think that's going to be real. And then I think on the other side, like, it should detect, okay, this person wants something that's far beyond anything that we've ever worked on before. We got to go search the Internet. We got to write some code. We got to do a whole ton of stuff. I'm going to need 10 minutes. Fire up deep research. Right. Fourth, ads. We've talked about this, but we got to get them in the LLMs. We got to get them everywhere. Because I was thinking about the death of Google Reader. I don't think you were ever a Google Reader guy, were you? But it was amazing. You could take all these RSS feeds from all these different blogs during the blogosphere.
You could put Marginal Revolution, Tyler Cowen's blog, all these different things in there, and just kind of scroll through them really quickly. And Google wound up killing it. And everyone was really upset. And the reason was, I think, because they never really got on the Google ads flywheel where there was real revenue generation. Yeah. Was it just that it didn't hit a scale that enabled it? Makes sense. The failure of every Google project that has failed is always a question of, like, was it because they weren't making money from it, or was it because they hadn't monetized it yet? Or it just never got big enough? Never got big enough to monetize. A million, yeah, weekly actives is probably not worth keeping around. Totally, totally. But my takeaway from Google's surface area of products that are successful and loved. Google Search, YouTube, Google Maps, Chrome, Android. Like, these are all direct funnels for the ads flywheel. And so you can see that they're driving the bottom line. There's a whole bunch of folks on the team that are getting excited when they're hitting their numbers, when they're making more money for the business. And so they just get more and more resources, more and more engineering effort. Everything gets better. And I think that not only are ads the best way to deliver high-quality products to the broadest possible audience, but they just make products better top to bottom. And yes, there's the stated versus revealed preference thing. And yes, you might want to pay to not have ads, like you do on YouTube. Many people do. But I do think that ads flywheel is going to be really, really important as the inference gets, really. And right on time: Perplexity ends ads experiment. I saw that. This was the news from this morning in The Information, from Catherine. It says Perplexity is no longer offering ads. An executive told the Financial Times the AI search startup is pulling back from this line of business as rival OpenAI starts showing its users ads in ChatGPT. Earlier this month, the company said it worried ads would undermine users' trust in their platform, with an executive saying the challenge with ads is that a user would just start doubting everything. I don't buy this at all. Aravind has a history of kind of just, like, trying to provoke OpenAI at every turn, and so Perplexity coming out like this is, in my view, just, like, somewhat bearish. Right. They're trying to serve as many people as possible all over the world. The best way to do that is going to be to have an ad-supported tier. Kind of bailing on this, on this moment. I don't know, maybe it's not worth reading too much into it, but a little bit early to throw in the towel on the economic engine that has driven the Internet for its entire history. Yeah, I mean, we talked to a lot of founders who have brands and they love advertising. And I think that's another side of this, which is that a lot of entrepreneurs and also people who work at businesses want to grow their businesses, and they have fond memories or affiliations with Facebook and Google because that's how they grew their companies. And when you talk to somebody like Sean Frank at Ridge, he's like, I'm going to be first in line to advertise on ChatGPT. I can't wait for that. It's converting so well already. I want more of that business. And we didn't really hear that with the Perplexity ad product. We didn't hear people lining up to buy ads in that product. So maybe it was not going as well. Yeah, the other thing is...
According to The Information, Perplexity started testing advertising in 2024. Less than a year into its test, Taz Patel, the executive leading the ads effort, left the company. Perplexity only let in less than half a percent of the brands that wanted to advertise on it. So there was, like, a bunch of demand. They barely let anybody use it, and then they bailed on it. And so, the last one is somewhat related to OpenClaw, but I think way down the funnel, beyond the 20-minute deep research project, you probably want to be able to fire off something that looks like Claude Code or OpenClaw or Codex to write lots and lots of code and solve a really, really hard problem. And so many reasoning models can already write some Python and execute it. But it's clear that everyone wants to go further. Hence the Mac mini boom. And I'm not actually sure how important access to the local file system is to most consumers. Like, when I think about it, most of the data in an average Internet user's life is mirrored in the cloud. I think they care about their camera roll, they care about their email, their messages, and almost everything's in the cloud. I've noticed this when I move from one computer to the next or I move from a phone. I'm like, wait. There was a time when it was like, oh, you're moving computers, get an external hard drive, make sure you drag all your files over. Now most of the stuff's mirrored to iCloud, and that can be accessed via an API. It requires a business development deal, probably, but it does seem feasible. And a lot of the LLMs have hooks into Gmail already. I think all three major LLM apps have Gmail integrations already, and more integrations are coming, clearly. And so I'm not sure that you need to replicate OpenClaw and have it running on a dedicated piece of hardware, even, like, cloud-hosted. But I do think people will want to be able to fire off something that writes tons of lines of code to solve a particular problem, even if it's something as mundane as, like, getting you a restaurant reservation at a place that doesn't have an API. Like, if there's a restaurant that just has a web form and you basically want to deploy, like, agent mode, that might look like writing a web scraper, or writing something that actually drives a headless Chromium browser and clicks through it. And that might be generated from something that looks a lot more like OpenClaw or Claude Code than something that is just a couple lines of Python in a reasoning model. Anyway, there are also a bunch of nice-to-haves. These aren't really on the list, but these apps, they still occasionally fail to return results when you're in areas with patchy cell phone service. There's, like, little UI things. Some of them botch text-to-speech requests, where you'll fire off a deep research report and be like, read this to me, and then it'll read for, like, a minute and then it just stops. Some of the apps don't let you listen to the deep research reports, but they let you listen to the normal reports. So there's all these little fine details in the UI that I think are causing more churn and people can just chop away at. It's unclear if what is required to make an amazing product is just A/B testing all of these things and just optimizing, or is it taste? I have no idea. But if you do wind up working on this stuff, here is my recommendation.
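A rough sketch of the "no API, just a web form" case from a moment ago: the kind of throwaway script an agent might generate and run in a headless browser. This uses Playwright for brevity rather than raw Chromium, and the URL and form selectors are invented placeholders, not a real restaurant's site.

```python
# Illustrative only: a generated script that fills out a reservation web form in a
# headless browser. The URL and selectors are made-up placeholders.
from playwright.sync_api import sync_playwright

def request_reservation(name: str, party_size: int, date: str, time: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example-restaurant.test/reservations")  # placeholder URL
        # Fill out the form the way a person would.
        page.fill("#name", name)
        page.select_option("#party-size", str(party_size))
        page.fill("#date", date)
        page.fill("#time", time)
        page.click("button[type=submit]")
        # Wait for some confirmation text before calling it done.
        page.wait_for_selector("text=Reservation requested")
        browser.close()

if __name__ == "__main__":
    request_reservation("Jordi", 2, "2026-02-20", "19:30")
```

The interesting part isn't the script itself, it's that a Claude Code or OpenClaw-style agent could write, run, and debug something like this on the fly for a site it has never seen.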
If you're just going to run an A/B test to figure out what is the correct user interface, and you run the A/B test, you find out that the button should be blue instead of green. Don't tell your boss you ran the A/B test. Tell them it was taste. Say that it's all about taste. Good call. And that you have taste. It's all about taste. Because then you'll have a job forever. Yeah, but. But if you say I'm just the guy who runs A/B tests really well, probably. Probably. Taste is king. It's true. The AI models can't taste. They can't taste. They can't taste A5 Wagyu. They can't taste a Cabernet Sauvignon. Only you can do that. So make that dinner reservation and enjoy a nice glass of red wine, because the models can't. They just can't. There's just no way. There's no way. Alpha. Alpha. Alpha. Anyway, let me tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI, own the data platform that powers it. And let me also tell you about Lambda. Lambda is the superintelligent cloud, building the AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Robinhood says historically investing in private markets was limited to institutions and the elite, but not anymore. With Robinhood Ventures you can now get exposure to private companies like the ones listed below. They have a new fund that has Databricks, Mercor, Revolut, Airwallex, Boom Supersonic, Ramp, Aura and Stripe, which is pending close. Very curious which of these companies, if any, were actually on board and excited about being part of this lineup. I think Ramp was. I saw Fax Herbert from Ramp posting about it, and he is, but that. But that doesn't mean the company. Okay, he said: We are excited to partner with Sarah Shiv Chan and the broader Robinhood Ventures team on their inaugural fund. On a personal note, I'm relieved to finally have an answer for family and friends who have been asking how do I get exposure to Ramp equity. And so if this is coming out from your head of investor relations, it's not exactly a Matt Grimm-style response. So I think most of the companies that are in the press release, at least, and saying, hey, you can use our logos, are cool. We'll see where it goes. There are folks that might get funneled in there and they don't want to be, and there might be a whole bunch of different debates and back-and-forths. What is on shield shared, kind of some of the cost basis from the prospectus: they bought Databricks at $150 per share, now trading at 204. Ramp at 90, now trading at 98. Airwallex at $21, it's now trading at 18.8. And then Mercor at 714, now trading. So already seen a little uptick. Anchor came in and was sharing some of his insights. Friend of the show. He says, a single closed-end fund that gives you exposure to some of the top private startups. My thoughts: people want access to private markets, of course. So much wealth creation in America happens in startups and people desperately want access. You can see this with the insane, silly fees people are paying for Anthropic, SpaceX and OpenAI SPVs. He says, though, the structure of this fund is broken. As a closed-end fund, the price here can diverge very significantly from the net asset value of the underlying assets. With FOMO from access, this could easily trade at a very high multiple to NAV, leading to a lot of retail investors getting their faces ripped off. It ends up being less of a venture fund and more of a speculative product to ride private market sentiment.
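To make the closed-end fund concern concrete: the premium or discount is just the share price over NAV per share minus one, and nothing in the structure forces that gap toward zero. The numbers in this tiny example are made up.

```python
# Made-up numbers illustrating premium/discount to NAV for a closed-end fund.
def premium_to_nav(market_price: float, nav_per_share: float) -> float:
    return market_price / nav_per_share - 1.0

print(f"{premium_to_nav(25.00, 10.00):+.0%}")  # +150%: price is 2.5x the underlying NAV
print(f"{premium_to_nav(8.50, 10.00):+.0%}")   # -15%: trading below NAV
```

Buying at a 150% premium means the underlying startups have to appreciate 2.5x just for the buyer to break even at NAV, which is the face-ripped-off scenario in the post.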
It's a great disclosure: long Robinhood, but will not be long Robinhood. But he's like, I don't like that. Yeah. So we actually have the founder of Destiny coming on the show today, Sohail Prasad. He's coming on at 1 p.m. and they're sharing their Q4 results. They have exposure to Anthropic, Chaos Industries, Hermeus, positioning Destiny as a New York Stock Exchange-listed vehicle democratizing retail access to high growth. Yeah, Destiny has suffered from this same problem. They were super early. They got this fund out almost two years ago. Exactly, close. And immediately it spiked. Right. There's a lot of demand to get exposure to these assets, and it's sort of come back down to earth since then. But excited to get the update from him and understand. And let's pull up the rest of the Linear lineup to show you who's coming on the show today, because we got Blake Dodge from Pirate Wires, Freddie deBoer from Substack, Sohail, as we mentioned, from Destiny, Travis from Mesh Optical, and then Evan Spiegel, the co-founder and CEO of Snap. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. Moving on. Elon Musk announced that xAI is moving away from traditional academic benchmarks like Humanity's Last Exam to focus Grok on maximal utility for real-world engineering and software development. He said, actually, I don't think HLE is a great measure of usefulness. We're moving away from these benchmarks. Andy Scott says, so it's bad? Question marks. I think it's totally fair to just focus on real-world utility. But of course people are still going to ask. Well, I still want to know how it does. Yeah, it's interesting. I mean, Tyler, give us the update on 4.2 that came out today. So Grok 4 has already been out. This is a minor revision. And 4.1. 4.1. So now we're at 4.2. And is it, is it focused on benchmarks, or have they carved out a particular, particular niche yet? Yeah. So I think historically, especially when Grok 4 came out, people were, like, very, very quick to say, oh, this is so benchmaxxed or whatever. I think they've definitely retreated from that path, at least. With 4.2 it doesn't look outrageously benchmaxxed or anything. They did this kind of interesting thing where, it's still not fully out, it's still in beta, if you go on the Grok interface, they did this kind of thing where there's four agents every time you actually do a prompt. There's four agents, and the agents specifically have distinct roles, where it's almost like you have four instances of the same model but they have different system prompts. So you can try to get, like, okay, this one is focused on doing, like, qualitative things. Instead of mixture of experts, mixture of agents. Yes, but mixture of experts is, like, that's in the architecture of the model, within the architecture, where this is, like, you train the model and then you kind of add this as almost like a harness-type thing. Yes. So it's kind of an interesting path. We'll see. Yeah. Again, this is still not, like, the actual 4.2 full release, I believe, but we'll see. Yeah. I wonder what the bull case is here for xAI. There's a world where they carve out some sort of niche. Anthropic's, like, focused on coding very specifically and had some major, major gains there. What else is there to be? I think with Macrohard they're going very hard on computer use. Okay, computer use, yeah.
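Going only off the behavior Tyler describes, four instances of the same model with different system prompts whose drafts get merged, a toy harness might look like the following. The role prompts, the call_model stub, and the final merge step are assumptions for illustration, not xAI's actual implementation.

```python
# Toy "four agents, one prompt" harness: same base model, different system prompts,
# drafts merged at the end. Roles, stub, and merge strategy are all assumed.
ROLES = {
    "researcher": "Gather the relevant facts and note where they came from.",
    "quant": "Focus on numbers, units, and back-of-the-envelope checks.",
    "critic": "Look for errors or unsupported claims in the draft answers.",
    "writer": "Produce the final answer for the user, clearly and concisely.",
}

def call_model(system_prompt: str, user_prompt: str) -> str:
    # Stand-in for a real chat call made with the given system prompt.
    return f"({system_prompt.split()[0].lower()}) draft for: {user_prompt}"

def answer(user_prompt: str) -> str:
    # Fan out: one draft per role, each from the same model with a different system prompt.
    drafts = {role: call_model(prompt, user_prompt) for role, prompt in ROLES.items()}
    # Fan in: hand all drafts to the "writer" role for a final synthesis pass.
    combined = "\n".join(f"[{role}] {text}" for role, text in drafts.items())
    return call_model(ROLES["writer"], f"Synthesize these drafts:\n{combined}")
```

The point of the comparison in the conversation is that this sits on top of a trained model as a harness, whereas mixture of experts is routing inside the model's architecture.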
See that would be an interesting thing where they could like jump to the front of that and if that's the important technology for a couple months that could be really good vibes. Also it is interesting to think about with the Cerebras news and with the value of like high speed inference on one, the whole model on one chip. Is that something that, that Tesla's chip team can iterate towards on a faster time horizon than other chip companies? I don't really know but they do custom silicon and they've done it for a long time and they got an entire self driving model that runs on a car so they have some experience there. They obviously design and fab or they don't fab it themselves but they design it themselves and so will be interesting to see how they, how they carve that out. Let me tell you about Vanta Automate Compliance and Security. Vanta is the leading AI trust management platform. Let me tell you about Applovin. Profitable advertising made Easy with Action AI. Get access to over 1 billion daily active users and grow your business. Today Tariq says. I'm proud to share that humane has invested 3 billion into XAI Series E round just prior to its historic acquisition by SpaceX. Through this transaction, Humane became a significant a significant minority shareholder in Xai. The investment builds on our previously announced 500 megawatt AI infrastructure partnership with Xai in Saudi Arabia, reinforcing Humane's role as a strategic development partner. So yeah, interesting. Maybe would have wanted to get this out before the SpaceX acquisition, but better late. Wait, they said they got in before the acquisition. I know, but in the news. But like, you know, this round got announced a while ago. Yeah. So maybe they would. They're. They're coming out with this news today. Yeah, but they're saying, hey, we got in before the acquisition, so we got, we got SpaceX shares. Yeah. I don't know, it is odd that. They'Re saying, better late than never on. Yeah, you mean on like a comms front, but like, from a financial perspective like that, that was the right time to invest, right? Yeah, I think that's, I think that's what's going on. I wonder why the announcement was delayed. Maybe it's like regulatory approval because it's an international investment. Let's play this clip from Jeff Bezos. His space company, Blue Origin, will move heaven and earth to get to the moon before rival SpaceX. The CEO Dave Limp said recently, Jeff. Bezos, who never tweets, this was his first tweet of 2026, posted a photo of this, like, black tortoise, which goes along with Blue. Vague posting motif of slow and ferocious, methodical. A lot of people have viewed it as a warning shot to Elon Musk, which really was focused on SpaceX going to Mars, and now he's saying, we're going to focus on the moon. What do you make of that tweet? And what is the competition right now? Do you think you're going to be the first? Well, it gives me an opportunity to put on a T shirt for you. So there you go. That's that. Nothing else. Let me do that. I get to keep this. Yeah, that's all yours. And that's the first one off the presses too, by the way. I think everybody's going to want one. Of those to lose. For Blue to succeed, what the US needs is it needs to SpaceX. It needs to launch companies that are competing vigorously against each other to try to give us the most capabilities as a country, commercially, civilly, from a defense perspective, because our adversaries aren't standing still. 
And so we need, we need to be moving very quickly, healthy competition. But I think a lot of people read into that as the tortoise being Blue Origin and the hair being Elon Musk. And in Space X, because it also comes after Secretary Duffy had said that Space X is behind, so they were opening up for everyone in terms of Artemis. And Jared Isaac man, who's now the administrator, also said, essentially, yeah, whoever can get there first is going to get the Contracts. So do you think you're going to get there first? I think if asked, we will make it. We will give it a run for our money. I like our architecture. I like our odds of getting there very quickly. I don't, I don't have a crystal ball into what SpaceX is doing. I think again, Gwen and Elon are competent and they show it every day by launching rockets. But I love the fact that the US would compete us against each other. They are for sustainability on lunar. We're talking about who could get there in 2028. If asked, we will step up and we will move heaven and earth to get to the moon first. Move heaven and Earth. Powerful line. The moon race is gonna be fun. I think it's, it's shaping up. Shaping up. Well, I mean, yeah, a little bit of a come tortoise in the hare story. A little bit of come back, come from behind. I'm not buying the tortoise as ferocious. Yeah, I don't love, I don't really love the, I don't really love the analogy. Like I don't, I don't, I don't, I don't think it's the best calm strategy. Like I like the vague posting out of Jeff. It gets, it gets the people going but at the same time just imagining SpaceX as the hare. Just like running, running a bunch of laps around the tortoise just kind of. They need to take this way further. Elon needs to wear tortoiseshell glasses. Be like, I turned your tortoise into my glasses. And Bezos needs to start carrying a rabbit's foot for good luck. That would be the hair. Like I got your foot. You know, I want, I want much more, I want more battles here. This is great. Well, let me tell you about Okta first. Sorry. Octa helps you assign every AI agent a trusted identity. So you get the power of AI without the risk. Secure every agent. Secure any agent. According to Kalshee. According to Kalshee, Blue Origin. Will Blue Origin land on the moon before SpaceX? So if blue Origin lands an uncrewed moon lander on the Moon before SpaceX before 01.03.31, 2030, the market will resolve the. Yes. So currently it's at a 70%. 70% so they think 29% in March. Like the race isn't over. Like the finish line is not get, get one lander to the moon. It's like develop an economy on the moon and get lots of people there. So you know, it would just one read on this market. But it is interesting and certainly, I mean, you can see the market was not pricing this a year ago, and I don't think anyone was. I think everyone thought that Blue Origin was kind of just a side project that was sort of just like doing space tourism. And now it seems like they might be going to the moon, which is pretty cool. We have some breaking news. What's that? Claude Oauth is officially not allowed in Openclaw. So anthropic is responding to the Open Claw OpenAI news. And Andrew Warner shares that this would be a great time for Sam Altman to step in and let us use OpenAI subscriptions with OpenClaw. 
So in the Claude code docs, OAuth and OAuth authentication, which is used with the Free Pro and MAX plans, is intended exclusively for CLAUDE code and Claude AI using OAuth tokens obtained through Claude, Free Pro or MAX accounts in any other product, tool or service, including the Agent SDK, is not permitted and constitutes a violation of the consumer terms of service. So if you're on the consumer plan, if you're on the consumer plan with anthropic Claude, like you just signed up for a normal plan on your app and then you get excited, you want to sign up for. You want to set up OpenClaw on your Mac Mini, you do that and then when you're in the login flow, you say, hey, I'd like to use my CLAUDE tokens over here. It's going to say no, you got to set up an enterprise plan, you. Got to set up a private API. Correct, yeah. This is not news, though. This was like a couple weeks ago, I think, like a week after OpenClaw got like super big, they stopped because you can still. I mean, I'm pretty sure you can still use API. Like an API key. An API key, yeah. And will that use your right now consumer? So how it used to be was you could have a CLAUDE subscription. Okay. And then with that you get a certain amount of like, basically Claude code tokens. Yeah, yeah, but they're like massively subsidized versus the API. It's like 10x. Yeah, yeah. For the cloud code tokens. Got it. Then they were basically using those in. Openclaw in other agents. Yeah, that was for open code. Yeah, actually openclaw. Yeah, yeah, yeah. Sorry, I'm getting. Yeah, no, I know, but I think It's a similar. 25 different names. Yeah, yeah. So the chat is saying that this is news that like, the particular Open Claw integration maybe broke today. Peter from OpenClaw has responded and says that OpenAI has already publicly said that OpenAI subscriptions will work and continue to work in OpenClaw. And so it's a little odd because like, yeah, I mean, you can just use the API. That's not that. Like, if you're technical, that's not a problem. But for the sort of pseudo technical folks who are setting up OpenClaw instances on their Mac Minis, they might be a lot more encouraged to set up the system if they're able to just log in with OAuth with their Claude accounts. Because they're like, yeah, I already have the app and I use the app and I have some extra tokens. Why don't I use them over here? Yeah. Thomas says the news is that they're applying it to the SDK. Yes. So there we go. Anyway, moving on. Let me tell you about consul. Consul builds AI agents that automate 70% of it. HR and finance support giving employees instant resolution for access requests and password resets. Out of the Journal. Yes, the fossil fuel tycoon teaming up with the Rockefellers to fight energy poverty. I'm sure the the online conspiracy community will love this one, but we love tycoon. We were trying to bring the word tycoon back, so we're happy to see the Journal using this EQT Chief Executive Toby Rice is starting a nonprofit to tackle a lack of access to to modern energy infrastructure in poor countries. Toby Rice made his fortune unlocking a gusher of natural gas in Appalachia. He has a bold new ambition, bringing energy to millions of people in impoverished nations. Rice, The Chief Executive EQT, one of the largest natural gas producers in the U.S. is a co founder of Energy Corps, a nonprofit Energy Corp. A nonprofit that helps developing nations such as Ghana, Zambia and Burundi build out their energy infrastructure and prosper. 
Unlike other philanthropic incentives that emphasize renewables to energize impoverished societies, Energy Corp. Sees a role for a broader spectrum of solutions, from fossil fuels to solar panels and nuclear plants. Notably, this approach has been endorsed by the Rockefeller foundation, one of the oldest and richest foundations. Really opened up the checks with the floodgates with this. The Rockefellers. You know, wasn't John D. Rockefeller the richest person in human history? You see how much he's putting in this project? 200 GS. 200 K. Go solve it. Go solve energy globally. 200 K. Here you go. Best I can do is 200 bucks. I got you. I'm super excited about this. I think Macron deserves a victory lap at this point. I mean his McCrone size is looking. Yeah, it's size. It's size compared to the. No, no, obviously they have a lot of other donors. The Rockefeller is just a fancy name because Toby and his wife have personally contributed $3 million. And the initiative is raising 10 million this year from energy companies, family offices and private individuals. And from his perch at Pittsburgh based EQT, a company with a market cap of 36 billion, Toby Rice has preached the benefits of selling more American natural gas across the the globe to reduce emissions and strengthen security of the US and its allies. Now he's wading into a debate. Should impoverished societies be encouraged to rely on polluting fossil fuels to improve their fortunes or leapfrog to intermittent renewables? There was this question about should Brazil be allowed to clear cut the Amazon rainforest to pull forward industrialization. It's the world's lungs. Everyone suffers if that happens, but they would certainly benefit in the short term. So there's a hot debate here and he is engaging in it anyway. Let me tell you about Cisco. Critical infrastructure for the AI era. Unlock seamless real time experiences and new value. With Cisco, David Holz has hit the timeline. He says 5 million humanoid robots working 24,7 can build Manhattan in six months. Now just imagine what the world looks like when we have 10 billion of them by 2045. Now imagine the year 2100. Dyson Sphere. Dyson Sphere. Dyson sphere by 2100. Is the, is the correct like debate? Like is it before, is it after? But it's like around there. I feel like I keep going back to my land thesis. Yeah, it's like when, when armies of robots can build anything anytime. What, what is actually scarce? In this case, I think with 10 billion of them, I don't even think land will be scarce anymore. It's like, hey, we're making, we're gonna build an island. We're gonna build another moon. We're building the moon. New moon alert. There's no moon alert. Just build another Earth and just throw it on the other side of the solar system. Yeah, yeah. I mean it's, it's, you know, right now we're talking about what businesses are unsloppable. Yeah. The next meta will obviously be unclankable. Unclankable. What's actually unclankable when you send, you know, an army. Well, figure out what's unslobable. Figure out what's unclankable and then go invest in it. On public.com investing for those who take it Seriously. Stocks, options, bonds, crypto, treasuries and more with great customer service. Richard says SF Guy eating a delicious blueberry. In 18 months, everything will be blueberries. This is a perfect contrast to the other post. Just the hot dog. The hot dog one discourse. No, no, no. Of David Holes. David Holes is like, he's. Because David's seen humanoid robots. Like he sees. 
He's lived in SF and been around this stuff. Like he's, he's, he's a true believer and he's. And he's sort of saying like, I've seen what they can do and I understand the exponential here and now. Imagine 10 billion of them in 100 years. Like it's going to be crazy. Then you have Richard on the other side. Everything will be blueberries. I thought you were talking about the delicious tacos post. He said, I'm the CEO of a hot dog company. I've worked on hot dogs for 10 years and I wasn't prepared for what I've just seen. Your life is about to change, so what can you do? Buy as many hot dogs as you can. Buy stock in hot dog companies. It's a good idea. I am long hot dog. I like hot dogs. Hot dog market map. Good with the kids. Everyone loves a hot dog. Hot dog markets all American. There's nothing better than a hot dog at a ball game. Except for Fin AI. That's better than a hot dog. It's the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. And oral fundraising shows. Defense tech is still red hot. Pretty crazy. Katie Roof, one of the Scoop athletes. Scoop. There it is. Scoop, she says. In case you missed it, on Friday, we broke the news at Anduril and talks to double its valuation to around 60 billion in a new funding round. So if you were buying triple layered SPVs into Anduril at you're going to make 45. You might make it assuming you didn't pay three levels of 10% one time. And assuming that the guy you bought them from actually scorned and is now in custody of the Feds. That's a bad. The round is notable for more than just its price. While Andrew Will technically, technically has both A and I in its name, it's not the AI centric type of startup that typically gets all the investor attention in the current cycle. Very unsloppable. Right? You're not going to vibe code. A drone. You're not going to vibe code. Yeah, I don't know. I think when you think about who's going to unlock the potential of AI for the government. Think of Palantir. Think of Anduril. No, no. Yeah. No, I just mean in terms of like AI disruption. Like it's not something that you can vibe code. A Fury drone that takes a lot of hardware, a lot of testing. You gotta blow a bunch of stuff up. You need a test range, you need government. Yeah, I just don't relationship over decades. I think all these defense oriented businesses, even if they are building software, are quite a bit more insulated just because of the trust factor. And if Anduril sells a product for one price and you have a small team coming together saying we're 10 people, we can build you the same thing for half the cost, there's not quite as much pricing pressure because the government wants reliability. They want to set something up and use it for a really long time. They don't want to really take risks, et cetera, et cetera. Defense tech's on a tear shield. AI, a drone business that can also tap the AI interest thanks to its autonomous office, is in talks to raise a $12 billion valuation, Bloomberg reported. And several other younger startups will likely raise money in the next few months. Paul Kwon, managing director at General Catalyst, said that part of the reason the firm is so optimistic about defense tech is because there are very few trillion dollar markets that are critical for global resilience, that are dominated by legacy vendors and which are experiencing both tech and geopolitical transformation. Yeah, the number of companies that fall in that bucket is pretty small. 
General Catalyst has invested in Anduril as well as other defense related businesses such as Saronik and Helsing, a European rival to Anduril. As the world unfortunately braces for more wars, increased government spending has led to high prices, high priced contracts for defense tech. Kwon said that the U.S. department is realizing that defense tech is critical for deterrence. Kwon said he has been he has also seen a shift among entrepreneurs believing that many of the most talented founders are choosing to build for the defense industrial base. And you can check out the rest of the story on the information. A lot of attention has been focused on open router. If you go on open router and look at the rankings, you can see that Chinese open source models are completely dominating charts. Minimax I saw DHH talking about Kimmy K2 is now a daily driver for squashing bugs at 37 signals. Very interesting data point since a lot of this can be I think the open router stuff can be a little Hard to contextualize because there's some amount of volume that doesn't get captured in open router. Obviously it's the majority of the volume is not captured. You think so? Yeah, yeah. According to Zephyr, who's very on it, it's 1 to 2% globally. You think that's about right? Yeah, definitely. I mean if you just compare like the like. If you look at the actual like token count, it's like in the billions for like over a week or something. Where I remember Demis over the summer, he was like, we're doing, you know, quadrillion tokens every month or something. So it's like the scale is completely different quad. And also it's like no one is going to be using the big Labs models on this because they would just hit the actual API. It's just easier. So if I'm going to be calling Anthropic, I'm probably just going to use the Anthropic API. I'm not going to go through openratter. So you should expect it to be the open source models because one of the good things about OpenRouter is that it has all the different inference providers together. So there's a ton of companies that host the different open models. So it aggregates them all together. Yeah. And also this doesn't count for token generation in consumer LLMs, and that's a huge thing. Like Google AI overviews is I think the most used LLM product in the world, something like that. And that's technically generating tokens. When you just hit Google Search and it answers with an LLM query, that's token generation. And then there's stuff that's happening in Gemini app, Claude App directly, not even coding. Like no one's using Open Router within their consumer app unless it's like some third party thing. But most of them are going anyway. Let me tell you about Cognition. They're the makers of Devon, the AI software engineer. Crush your backlog with your personal AI engineering team. Let's continue with the timeline. Jacob Rintomaki has a post here. He says, unfortunately it is not seen as cool to say, but the beatings will continue until more people internalize this. He's talking about Rune saying, I don't think this is remotely true, but it's hard to fight open source Copium because people act like you shot a dog or something. Something. Because Anton two years ago, back in December 21st of 2023 said AGI is more likely to come out of someone's basement. Some mega merge Hermes 4000 than a giant data center. And I think everyone agrees with this now, but it was very unpopular to say at the time. 
I remember John Ludig had a post about open source AI not being on the critical path to AGI because of the scaling laws and a whole bunch of other economic factors. He sort of predicted that Meta would stop being so focused on open source because it just doesn't make sense to spend a trillion dollars or $100 billion on infrastructure that then you capture so little of the value of. And I think that's why Anthropic has not been very pushing like hard on open source and even even the other superintelligence initiatives, not many of them have been open source. The open source question has been a very different a very different business model. But it is very important in terms of a model commoditization and terminal economic equilibriums in the AI lab battle Orin. Hoffman is sharing that Ozempic is bad for business. Yes. A few months ago someone told me they had heard a rumor that a banker hedge fund had banned its traders from taking Ozempic when Wegovian other GLP1 weight loss drugs. The theory, as I understood it, was something like traders need to make quick decisions based on gut instinct and GLP ones mess with your gut instincts, you're not hungry for snacks, you're not hungry for profits, you lose your edge. It is funny. Warren says GLP is getting banned by hedge funds.
I'm having a better experience. What would that look like? So the first thing is that I realized that I've asked ChatGPT just when was OpenAI founded? Three different times. It's the exact same query. Like, it doesn't need to light the GPUs on fire for that question. The answer literally never changes. You can cache the result. And that's what Google does with those knowledge queries, knowledge panels. And there's a whole bunch of different, there's a whole bunch of different ways to deliver results that are sort of pre cached. And so if you just look down when an LLM launches, basically every question has never been asked before. But now there's a lot of people that are just showing up with the exact same question. Give me the history of the Roman Empire, give me the history of this company. And you might not be the first person to ever ask that question. Exactly. But also, if you do a little bit of fuzzy, if you do a little bit of fuzzy search over it, you. There's probably like hundreds of thousands of people that have asked the exact same thing. So cache those results, give them to the user instantly. And I think this instantaneous feeling of LLMs, they felt slow for a really long time. They actually got slower. It was always sort of slow. You watch the token stream in, but then once the reasoning models and the thinking models and the deep research and the O3 Pro came out, it was like really slow. It was like, close your phone and come back in 20 minutes. That does. It doesn't have to be the end state and I don't think it will be. And I have no better example than the number two on my list, which is cerebrous inference. So ChatGPT currently has a model called 5.2 Instant and it is not instant at all. I fired off a prompt to 5.2 instant and I said, no reasoning. Tell me the history of LLMs. It took 38 seconds to deliver the full response. So for all the tokens to stream in. And it does a good job. It shows you images and stuff and it is a cool illustration. But then I went over to Codex Desktop and I fired up GPT 5.3 Codex Spark Low, which is a crazy name which we'll get to, and it responds.
Have a lot of cultural importance, but where we're not suddenly faced with a fundamentally different version of human life. So if it's not nuclear war and it's not fire and it's not electricity, it's also not the fax machine. Are we talking about mobile, cloud, the Internet? Like, how big is this thing? What does your world model look like for how AI progresses and diffuses through society? Sure. We have to understand there's different kinds of importance and different kinds of influence. So you mentioned the Internet and the mobile phone. Okay. Obviously the Internet and specifically the smartphone, the iPhone, have had massive cultural and social impacts on the United States. It would have shocked people in the mid-1990s to learn that we have about the same productivity growth and about the same GDP growth in this country now that we did back then. Right. Like many, many people were invested in this idea that this sort of. This missing GDP growth, you know, we're at half of what we were in the mid-1960s. A lot of people thought, okay, the Internet's the thing that's going to restore us. The Internet is very meaningful and it's very influential. Right. And yet economically, it hasn't had the effects that are expected. And that's just like. That's how history works. You know, that's. That's just like there is a. You always have to bake into percentage to the degree to which, like, there's regression to the mean. Right. Like, we always seem to find ourselves way back to this sort of mundane reality. And I look at things like when everybody got so depressed and disappointed after ChatGPT5 was released because they thought it was going to be AGI, and you had all these lonely guys who were like, oh, this is just going to change life forever. And now everything's going to change. It's like, no, it's. Things are going to change, but slowly and in a distributed fashion. And you have to keep planning for normal life. Counterpoint. Maybe this time is different. Maybe this time is different, but absolutely. So show me. I mean, here's the beauty of all this. The beauty of all this is, like, if this, if the real stuff happens, you're not going to have to convince me. Right. Like, if we really have AGI the way people think we are, no one's going to disagree because the effects are going to be so profound. There's going to be nothing to disagree about. Okay. How did you interpret this latest piece in the Financial Times from Eric Brian Falson? I can't pronounce.
The reason to do it. Yeah, we'll use anything as air cover. Yeah. And so I just. In general, I caution people to say, look, it's like, I've said this before, you know, when I was in high school, a very distinguished scientist came to my science class, and he was on, like, the board on, like, the National Science. Some sort of board of the National Science Foundation. He was like a geneticist. And he came and he said, like, that he envied and also felt bad for us because the Human Genome Project was going to so radically change human life that we were going to see things that he couldn't imagine. But also, the job of doctor wouldn't exist in 10 years. And this. I was in high school in, like, 1998. This probably happened. So, you know, studying medicine. Right, right. And you quit. Right, Right. You know, like, if you can actually. But this is. This is an exercise that people can do at home, which is to go back, like, just to Google and look at the. Predict what people thought the Human Genome Project would do. Obviously, genetic research in general is very important, but there was a real belief among very intelligent and highly credentialed people that we were on the verge of something absolutely humanity changing, and life's more complicated than that. And again, I want AI boosters to do more showing and less predicting. Show me. Show me the change instead of predicting the change. Yeah. Well, thank you so much for taking the time to come chat with us. This is really fun. Yeah, we have this. We have this conversation. We have these debate. We have these, you know, kind of debates and conversations all the time. Specifically, the. There's a popular influencer on Instagram that every time a tech company does a bunch of layoffs and says, we did this because of AI, he takes that and, like, makes this crazy story up around how you AI is just immediately causing all this job loss. And I'm just looking at it as, like, I know the company, they had a lot of bloat, they're getting some efficiency, efficiency increase because of AI, but certainly it wasn't like the.
Lot of mobility. But I think they have made. Well, is part of the strategy. Billionaires are the same. Yeah, is part of the strategy that this is the best possible branding for attacks like this. And this type of like campaign focused on billionaires one time is how you get something like this passed. Then once it's passed, you'll have the ability to kind of like reduce the set of requirements to qualify for it and you can eventually get down and be eating off of the plate of all Californians, middle class, etc. Yeah, I'm actually, I'm staring at a quote that I have here in my last story. There's a couple of academics whose work, like heavily influenced this wealth tax and pretty much all wealth taxes globally. One of those men is advocating for a 2% globally coordinated billionaire tax where all of the countries kind of get together and agree they're going to do this. Sort of like a one world government. I literally, in this quote, have a screenshot of Michael Gibson's summary of the teal Antichrist lectures. So, yeah, he says it's clearly far from enough. But also what history shows is that what's most difficult is to move from zero to something positive. And once you have something positive, Even if it's 2%, then it opens up a realm of possibilities. And so they say it's one time, they say it's an emergency, but clearly. What's the takeaway from the Netherlands? They passed this 36% unrealized capital gains tax. It excludes, I think, real estate and startup equity. But what's the thought process there? It seems super bearish for the country as a whole, but.
You know, has enabled me to buy a house and, you know, so it's, you know, it's, it's a pretty good living dream. So tell me about your wager. How did this happen? What is, how, how do you come to define the, the, the actual bet and what's at stake? Yeah, so I, I'm frustrated by the AI conversation. I think that it is, I don't know if you guys are familiar with the concept of a Mott and Bailey argument. Mahmot and Bailey, Yeah. So, yeah, for those at home, it's just like you are, you make a very sort of extravagant argument and when challenged, you retreat to a simpler and easier to defend argument. So you might say the Christian God is real and built the universe and he rules over everything. And then when you're challenged you say, oh well, God's just a feeling and God's in the, in the wind and whatever. Right. That's like a mutton, Bailey. I just think that that's all over AI, where the CEO of Google is saying that this is bigger than fire and electricity and people are saying it's going to end death, et cetera. But then when challenged, it's like, hey, these LLMs, they might make going through legal documents of much more efficient process. There's this constant sort of back and forth as far as the wager goes. The people in the AI world kind of come from this sort of rationalist, Silicon Valley sort of culture. And they say you should be very sort of objective and specific in your predictions and you should put money on them. And so Scott Alexander is a guy I've known for a long time, the blogger of Slate Star Codex and now of Astral Codex 10. He is a AI enthusiast. He was a signatory on the AI 2027 document. So I just, I challenged Scott and said I believe that three years from now we'll be in a more or less normal economy. And, and that was chosen because, you know, AI 2027, you know, this is like going to 2029. So I felt like it was giving him enough sort of wiggle room. And I just defined a bunch of economic indicators and said that if any one of these indicators are violated, he'll win the bet and I'll lose. Wow. And the reason to do that is just I'm, I'm looking for someone to put their money where their mouth is about, like, is this actually going to cause a white collar apocalypse and all these economic sort of things. And I mean, he said no and would prefer to do a 10 year version. So we're kind of looking at that right now. Okay, so I have some of these. Unemployment must stay under 18%. What are you at now? 4%? That feels like. But this is the point.
Partnered with the New York Stock Exchange. Do you want to change the world? Get a haircut and then go raise the capital at the New York Stock Exchange. Great call, John. Duane says, what is going on at Anthropic? They're going after people with multiple paid max accounts. You're paying full price multiple times and they're treating you like a criminal. Not sure what Dario is trying to speedrun here. And on Reddit, on Claude code, it says Claude just banned having multiple max accounts. Since around a few hours ago, signing into another account has stopped working. I think some people do need to have their multiple max accounts banned. They're just. They're not building anything useful. They're wasting tokens and they're just creating endless setups and tool chains and MD files. And unless you're actually shipping something that's going to drive business value and be used by more than one person, you only get one account. I'm with Claude on this. Stop wasting tokens on your silly thing. I was reflecting on. I was texting you this last night. Am I dumb and out of ideas or is all the software I want just illegal? Because I was like, all the things that I want are things that could exist, but they can't exist for business reasons. Yeah, I want an Apple TV app. Give the example. So the example was, yeah, I want. An Apple TV app that has Netflix installed. It's like, why doesn't Netflix integrate with Apple tv? Because Netflix doesn't want to get aggregated. They're an aggregator. They want you to open that app on the Apple tv. So the Apple TV app doesn't have Netflix content, even though you have the Netflix app installed on Apple tv, the device. And I was saying, like, I subscribe to all these different news sources. I want like an Apple News that aggregates them all, a Google Reader that aggregates them all. I pay, but I'm still logged out of million things. Like I'm in some social app and then it opens in a Safari web browser. I'm not logged in. There's a paywall. It's annoying to log back in. I want something that just like, aggregates all my news sources and jumps the paywall or. Well, that's not a coding issue. That's a business issue. They want you to log in for a reason and they have a decision to that. So I don't know. I think we're still early in the broad distribution of people building custom software and experimenting with things. But at the same time, we've had great writing models for a long time. And anyone we know, Everyone could write their own books. Everyone could write a better ending to the end of Game of Thrones and send it to you as a text file right now with the current models. And I haven't read anything that I've been like, oh, yeah, this is really good. I gotta read this AI generated book. So I don't know, there's like some weird bottleneck there that it's like. It's not a barricade for anything. Look what Gabe is offering. Sounds very illegal. Yeah, this is Popcorn TV. It's called uTorrent. I know, I know, I know. Xbox Media Center. Trying to play by the rules. Xbox Media Center. But that's the thing is that, yes, that streaming site, like, yes, you can vibe code that. But that can't actually get to scale. It can't have an impact in the economy because it's breaking the rules. And there's a lot of AI stuff that feels magical. And you see this with C Dance from bytedance where it's in the Journal today.
And deep multimodal understanding. And I'm also going to tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. One show two maps. One show two maps. Strong start. Should we break down five wildly obvious fixes that will explode consumer LLM adoption? They don't want you to know this over a at the big labs, but I have some ideas. Basically everyone's been really focused on agentic coding and the SaaS apocalypse and what's happening in the business to business world and the enterprise world. I've just been sort of like thinking back on, you know, basic improvements to the chat apps that I use all the time, because there's some really obvious stuff that I think, I think is in the works and I think it's coming, but I wanted to just sort of like get it all down in one place to think about what the next is iteration and the next breakout moment when people are like, oh, I'm using them even more. I'm having a better experience. What would that look like? So the first thing is that I realized that I've asked ChatGPT just when was OpenAI founded? Three different times. It's the exact same query. Like, it doesn't need to light the GPUs on fire for that question. The answer literally never changes. You can cache the result. And that's what Google does with those knowledge queries, knowledge panels. And there's a whole bunch of different. There's a whole bunch of different ways to deliver results that are sort of pre cached. And so if you just look down when an LLM launches, basically every question has never been asked before. But now there's a lot of people that are just showing up with the exact same question. Give me the history of the Roman Empire, give me the history of this company. And you might not be the first person to ever ask that question. Exactly. But also, if you do a little bit of fuzzy, if you do a little bit of fuzzy search over it, there's probably like hundreds of thousands of people that have asked the exact same thing. So cache those results, give them to the user instantly. And I think this instantaneous feeling of LLMs, they felt slow for a really long time. They actually got slower. It was always sort of slow. You watch the token stream in, but then once the reasoning models and the thinking models and the deep research and the O3 Pro came out, it was like really, really slow. It was like, close your phone and come back in 20 minutes. That doesn't have to be the end state and I don't think it will be. And I have no better example than the number two on my list, which is cerebras inference. So ChatGPT currently has a model called 5.2 Instant and it is not instant at all. I fired off a prompt to 5.2 instant and I said, no reasoning. Tell me the history of LLMs. It took 38 seconds to deliver the full response for like all the tokens to stream in. And it does a good job. It shows you images and stuff and it is a cool illustration. But then I went over to Codex Desktop and I fired up GPT 5.3 Codex Spark Low, which is a crazy name which we'll get to, and it responded in under two seconds because from what we know, Spark is incredibly quick, cerebrous and it, it's very, very fast. And so everyone's obsessed with the fast models in the agentic coding world because you're waiting a half an hour for something to get back to you, you're waiting five minutes for something to get back to you, and you're actually losing your train of thought. But I think that applies in consumer as well. 
And I think the interaction of sending a message and then just immediately getting a response before you actually think, oh well, it's waiting. I'll close the app, I'll check my messages. Oh, I got an Instagram notification. Let me go over there. Instant responses will keep people in the apps longer and user minutes will actually increase once that rolls out. So pretty simple implementation for I think most companies. My big question is, I know Google has a huge advantage with tpu, but I don't know if they have an answer to Cerebras specifically. And Nvidia just brought Grok, which can do, I think some of the same things. I'm curious to know how every lab solves the fast responses question because that feels like an important piece of the puzzle. It's not the only piece of the puzzle, but it's an important feature and I think we're going to see it rolling out to Consumer LLMs very, very soon. And I do think it'll be an interesting moment for people to both ask a question and just boom, it's as fast as going to Wikipedia and just seeing like, okay, everything's rendered, it's thoughtful, it's what you want. And on the flip side, I think that it could make people a lot more chatty with them, like actually asking follow up questions because you don't feel that cost of like, oh, if I ask you to follow up and. Or go a different direction, like, I have to wait. I have to wait. So I might just close the app. Speaking of GPT 5.3 codec spark low. No more model names, like, truly, no more model names in consumer AI LLM chat apps, like, ever. Like, just bury them so deep in the UI that you never see them. And people will complain. People will be like, I wanted it to be easier to pick. I like picking between pro and thinking and fast and instant. I know what I want for everything. People complain. But it will all inspire the model routing team to grind harder. And the model routing team has a hard job to do, but they will eventually figure it out. And eventually you should be able to just talk to the model. You can already do this in ChatGPT. You can say, hey, think really hard about this question and give me a really thorough answer. And it'll go. It'll switch from instant to thinking. I don't know that you can trigger pro from. From that. I haven't actually experienced that. I did try and trigger a deep research report. I said, hey, please, Deep research the Roman Empire for me. And it does not fire off a deep research report. Deep research is buried under like a plus button. And you have to select it and say, okay, I actually want you to do this thing. And then it takes you down the deep research workflow, which I understand is like, for inference reasons. They don't just want you firing off deep research reports all the time. But I think in the future, the model router should be very intelligent about, okay, this is a question that people have asked thousands of times. Let's just go get it from a database. Which is crazy to think in the age of AI you wouldn't even be hitting a gpu. But I think that's going to be real. And then I think on the other side, like, you should. It should detect, like, okay, this person wants something that's far beyond anything that we've ever worked on before. We got to go search the Internet. We got to write some code, we got to do a whole ton of stuff. I'm going to need 10 minutes. Fire up deep research. Right. Fourth ads. We've talked about this, but we got to get them in the LLMs. We got to get them everywhere. 
Because I was thinking about the death of Google Reader. I don't think you were ever a Google Reader guy, were you? But it was amazing. You could take all these RSS feeds from all these different blogs during the blogosphere. You could put Marginal Revolution, Tyler Cowen's blog, all these different things in there and just kind of scroll through them really quickly. And Google wound up killing it. And everyone was really upset. And the reason was, I think because they never really got on the Google Ad flywheel where there was real revenue generation. Yeah. Was that just. It didn't hit a scale that enabled it make sense? The failure of every Google project that has failed is always a question of like, was it because they weren't making money from it or was it because they hadn't monetized it yet or it. Just never got big enough. Never got big enough to monetize a million. Yeah. Weekly active. It's not probably worth keeping around. Totally, totally. But my takeaway from Google's surface area of products that are successful and loved Google Search, YouTube, Google Maps, Chrome, Android, like these are all direct funnels for the ads flywheel. And so you can see that they're driving the bottom line. There's a whole bunch of folks on the team that are getting excited when they're hitting their numbers, when they're making more money for the business. And so they just get more and more resources, more and more engineering effort. Everything gets better. And I think that not only are ads the best way to deliver high quality products to the broadest possible audience, but they just make products better top to bottom. And yes, there's the stated versus revealed preference thing. And yes, you might want to pay to not have ads like you do on YouTube. Many people do. But I do think that that ads flywheel is going to be really, really. Important as the inference gets really and right on time. Perplexity ends ads experiment. I saw that this was the news from this morning in the information from Catherine. He says perplexity is no longer offering ads. An executive told the Financial Times the AI search startup is pulling back from this line of business as rival OpenAI starts showing its users ads in ChatGPT. Earlier this month, the company said it worried ads would undermine users trust in their platform. With an executive saying the challenge with ads is that a user would just start doubting everything. I don't buy this at all. Arvind has a history of kind of just like trying to provoke OpenAI at every turn and so coming out perplexity in my view, like this is just like somewhat bearish. Right. They're trying to serve as many people as possible all over the world. The best way to do that is going to have an ad supported tier kind of bailing on this, on this moment. I don't know, maybe it's not worth reading too much into it, but a little bit early to throw in the towel on the economic engine that has driven the Internet for its entire history. Yeah, I mean, we talked to a lot of founders who have brands and they love advertising. And I think that's another side of this, which is that when a lot of entrepreneurs and also people who work at businesses want to grow their businesses and they have fond memories or affiliations with Facebook and Google because that's how they grew their companies. And when you talk to somebody like Sean Frank at the Ridge, he's like, I'm going to be first in line to advertise on ChatGPT. I can't wait for that. It's converting so well already. I want more of that business. 
And we didn't really hear that with the Perplexity ad product. We didn't hear people lining up to buy ads in that product. So maybe it was not going as well. Yeah, the other thing is they thought so. According to the information, Perplexity started testing advertising in 2014. Less than a year into its test, Taz Patel, the executive leading the ads effort, left the company in perplexity. It only let in less than half a percent of the brands that wanted to advertise. So there was like a bunch of demand. They barely let anybody use it and then they bailed on it. Interesting, interesting. Well, the last one is somewhat related to OpenClaw, but I think way down the funnel, beyond the 20 minute deep research project, you probably want to be able to fire off something that looks like Claude code or openclaw or codecs to write lots and lots of code and solve a really, really hard problem. And so many reasoning models can already write some Python and execute it. But it's clear that everyone wants to go further. Hence the MA mini boom. And I'm not actually sure how important access to the local file system is to most consumers. Like, when I. When I think about what's like, most of the data in an average Internet user's life is mirrored in the cloud. I think they care about their camera roll, they care about their email, their messages, and almost everything's in the cloud. I've noticed this when I move from one computer to the next or I move from a phone, I'm like, wait, I didn't actually, there was a time when it was like, oh, you're moving computers. Get an external hard drive, make sure you drag all your files over. Most of the stuff's mirrored to iCloud that can be accessed via an API. It requires a business development deal probably, but it does seem feasible. And a lot of the LLMs have hooks into Gmail already. I think all three major LLM apps have Gmail integrations already and more integrations are coming clearly. And so I'm not sure that you need to replicate openclaw and have it running on a dedicated piece of hardware, even like cloud hosted. But I do think people will want to be able to fire off something that writes tons of lines of code to solve a particular problem, even if it's something as mundane as like getting you a restaurant reservation at a place that doesn't have an API. Like if there's a restaurant that just has a web form and you basically want to deploy like agent mode, that might look like writing a web scraper and writing something that actually does like a headless Chromium browser and like clicks it and that might be generated from something that looks a lot more like open Claw or cloud code than something that is just a couple lines of python in a reasoning model. Anyway, there are also a bunch of nice to haves. These aren't really on the list, but you know, these, these apps, they still occasionally fail to return results. When you're in areas with patchy cell phone service, there's like little UI things. Some of them botch text to speech requests when you'll fire off a deep research report and they'd be like, read this to me. And then it'll read for like a minute and then it just stops. Some of the apps don't let you listen to the deep research reports, but they let you listen to the normal reports. So there's all these little fine details in the UI that I think are causing more churn and people can just chop away at. It's unclear if what is required to make an amazing product is just AB testing all of these things and just optimizing. Or is it taste? 
I have no idea. But if you wind up doing this is my recommendation for anyone who's working on this stuff. If you're just going to run an A B test to figure out what is the correct user interface and you run the A B test, you find out that the button should be blue instead of green. Don't tell your boss you ran the AB test. Tell them it was taste. Say that it's all about taste. Good call. And that you have taste. It's all about taste because then you'll have a job forever. Yeah, but, but if you say I'm just the guy who runs a B test. Really? Well, probably. Probably. Taste is king. It's true. The AI models can't taste. They can't taste. They can't Taste A5 Wagyu. They can't taste a Cabernet Sauvignon. Only you can do that. So make that dinner reservation and enjoy a nice glass of red wine. Because the models can't. They just can't. There's just no way. There's no way. Alpha. Alpha. Alpha. Anyway, let me tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB? Don't just build AI, own the data platform that powers it. And let me also tell you about Lambda Lambda is the super intelligent cloud building the AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Robinhood says historically investing in private markets was limited to institutions and the elite, but not anymore. With Robinhood Ventures, you can now get exposure to private companies like the ones listed below. They have a new fund that has databricks, Mercore, Revolut, Airwallex, Boom, Supersonic, Ramp, Aura and Stripe, which is signed and pending close. Very curious which of these companies, if any, were actually on board and excited about being part of this lineup. I think Ramp was. I saw Fax Herbert from Ramp posting about it and he is.
Companies, but the neolabs have exploded. They have a term that's been coined. Sarah Guo was actually, I think, the first person that came on the show and sort of broke it down for us around the Christmas episode. But since then, the neolab, like, taxonomy has evolved and so we needed to build a market map. So, Tyler, take us through what's going on in the world of neolabs these days. Yeah, so neolab is kind of this interesting term. Like, it's very broad. People say, like neolab. It's not very clear what they mean because there's like broadly. I think it generally. And this will make it clearer. Yes. I think after this it'll be pretty obvious, like what, you know, what you should be looking at, how to think about these companies. Yeah, I don't want to be more confused at the end of this. Yeah, that would be a disaster if that happened. Yeah. So that's not gonna happen. This is gonna be easy. Okay, got it, got it, got it. Cool. Okay, so let's just start. Okay, so you have neolab, right? Yes. So neo, this prefix. Okay. That's be relative to something. Yes. So neo is relative to like your trad lab. This is your big lab. Traditional. This is your. Yeah. This is your opening. Let's give it up for the big labs. Yeah. They don't get enough credit today. The open data centers spike in capex. So this is gonna be your OpenAI, your DeepMind, your anthropic kind of your big lab. Yeah. Xai. Xai kind of fits in there too. Even though it's a newer trad lab, it fits in with the big lab. They got a lot of money. Dario. I think on Torkesh, he was like, yeah, three, maybe four labs. Right. So the Force is probably Xai. Yep. I think you can also kind of throw in Mistral in there. Okay. Oh, yeah. Mistral is a little bit older. Yeah, Yeah. I mean, Mistral, there's much of these labs that were basically founded in the like two or three years before ChatGPT and then in the like six months after. Yeah. So I think Xai's in there, Mistral's in there. And these specifically these, I feel like those trad labs, it's like they did a transformer based pre training run. They have their own base, pre trained. Maybe it's not at the frontier, but at least they're playing that game. They're not doing fine tuning. They're not doing something else. So that's sort of like you're in the trad lab world. When you're thinking about like a big pre train run. Loosely. Yeah. I mean, especially if you're talking about these big pre trains. It's really just these four. No one else is really at that scale. But then. Okay, so Mistral kind of brings us down into what I call the sovereign labs. Okay. So, I mean, you know, if you kind of look at this, it's basically just labs that are not in America. But I think also that there actually is some meaning to this. So, like, Mistral. You've seen Mistral become kind of the leader in European AI. Right. So I think. Was it Sweden? Maybe they're bringing a new data center. Yeah, Sweden. So they're kind of becoming like, stuff. Going on in France too. Macron is always talking about Mistral. It's a big leader. Cohere is also kind of. I think that's like a very, you know, Canadian. It's a Canadian company. Yeah. Yes. But also has done their own pre trains. No ties to the curling team, though. Oh, okay, okay, okay. Complete no ties. So I don't want them. Yeah, yeah. It's important to put some distance between that scandal. Yeah. And then you can go down. You can kind of see all your Chinese open source labs and see your Quinn Deep Sea. Kimi. Unitree is also in there, right? Unitree. I think so. 
As we'll see later. There's also. I have section for like robotics labs. Sure. But this is very clearly like, you know, this is the Chinese. Yeah. Take us back in time now what was going on before the trad labs broke out. Yeah. So here I have this section. Legacy labs. Okay. So these are ones that are kind of more entrenched in these big enterprises. Yeah. So you have stuff like Microsoft Research AT&T or Bell Labs, Right? Oh, Bell Labs. Yeah. I forgot about Bell Labs after. You know what? You know how. You know why they call it Bell Labs? Why do they call it Bell Labs? Alexander Graham Bell. Yeah, it was founded by him. Yeah. Bell Labs. Okay. But also you have stuff like you have fair, Facebook, AI research. This was like. I mean, there's so many, like, OG research papers that came out of fair. Yeah. This is what Yann Lecun used to be head of before it transitioned to msl. To msl. Okay. So then I think let's move up here around your trad lab. You also have post lab, right? Yes. P O A S T. Yes. These are posters. Yeah, These are labs where you get a lot of posters. Right. So obviously this is OpenAI. You got roon, anthropic a lot of Sholto, et cetera, posters. Prime Intellect. They're great posters. A bunch of anons at Prime Intellect. Doing great stuff over there for sure. And then you kind of get into the proper neolab. Yeah, the proper Neo Lab. So this is also a bit hard to identify because like what is actually the core of a new lab? What are these different kind of offshoots? I think Prime Intellect is kind of the prototypical, like quintessential neolab when you think of it where you basically have. It's like fairly recent. Yeah. It's still very much research focused. Okay. Like sure, they have enterprise, like you know, think about different stuff but at the core of it, you're still like trying to find these like new novel approaches. It's research, you're hiring researchers. It's not just like engineers, sales guys, et cetera. So let's. Is it. Wouldn't Sakana be more of like a sovereign lab? Yeah, yeah. I mean, so a lot of these can fit in all different places would be. Yeah, Japanese maybe. Okay. And you put MSL in here because it's a new project. Yeah. This one was also a bit hard. It doesn't feel like a trad lab because I mean maybe it has the scale but it's just, it's newer, they haven't shipped yet. Neo New Lab. Right. I mean it's so recent. Definitionally. Thinking Machines is my classic go to Neo Lab. I feel like it's post OpenAI exodus and sort of OpenAI is nothing without its people. You know, you get the spin outs and you think Thinking Machines and SSI are two of like the first case studies that sort of set the tempo for, okay, it's possible to do some research outside of the big trad labs. And so that's where you get the neolab boom from. And then a lot of the other companies I feel like are saying, okay, we're going to do something similar to Thinking Machines or ssi. We're going to commercialize early or late, but we're following in that and we're benchmarking to that. Oh, they raised 2 billion, we're raising 200 million. It's easier. There's a 10% chance that we, you know, are at their scale. So you can underwrite it that way. Yeah. So Thing Machines also brings us to what I call the trad SAS lab. SAS Lab. You've trad SAS lab. So I think the way I think about this is the trad SAS labs are trying to basically use the data that's Inside these big enterprises, pull them out with AI. Okay, so this is thing machines. Right. The rumored idea. Right. 
is that they're doing RL for enterprise. A bunch of these are doing fairly similar things, where it's kind of chatting with your data, using the data that's very valuable to a company but is going to stay inside the company. You can't really pull it out anyway besides having the AI be internal. So you have Applied Compute, you have Poolside, doing all kind of similar things in this enterprise LLM field. And then that brings us to neo SaaS. Not full base pre-trains for those companies, mostly fine-tuning or RL on top of a particular company's use case. Yeah. And then I have the neo SaaS lab. This is different from trad SaaS. I think these are different in that they're not going enterprise-specific. That's one way to look at it. Also much more startup focused. But they're making a product that is sold effectively as SaaS. Yes. So Cursor, Cognition, Windsurf. I have Ramp Labs. Ramp Labs. These are seat-based, sort of consumption-based, but it's a product that's vended into a business, and the product is what you get, and then it sort of customizes as you integrate it. But the conversation doesn't start with a business development relationship. Yeah. And of course, these lines are pretty blurry. But then, okay, let's go down to the post lab. Okay. Post lab. This is after the lab. Yes. So that means basically they train the models, and then these labs are working on top of those models. That's how I think of it. So you have METR, you have Epoch. These are going to do evals. You have Pangram. They're checking: is the model producing slop, or is it producing text that you're using in some way? These are purely evals. They don't necessarily have AI products themselves. They don't necessarily sell to big business. But they could still be training models. Right. Like Pangram is training models that sit on top of the labs. That's true. So it counts as a lab. Makes sense. Okay, what else we got? Maybe that brings us down to the safety labs. Yes. So these are pretty interesting. Anthropic kind of fits in this, right? Because they have a big safety team. They're doing a lot of mechanistic interpretability. You have Goodfire. I think they just raised at like 1.25 billion, and they're just doing mechanistic interpretability. Let's go. Very interesting. EleutherAI is similar, kind of. I know, yeah, very cool. A lot of these are also kind of in the open-source space. Yeah. I think Stable Diffusion came out of EleutherAI. Yeah. This is another label that I think I could have put on, but it's so hard to get everything to work together. But for a lot of these, the core of the company is doing open source. Sure, sure. Right. So Prime Intellect. FAIR was a good example. Almost OpenAI back in the day. But a lot of these have bled together, where OpenAI has an OSS model, but also a lot of consumer and enterprise. Yeah, makes sense. Okay, so then in contrast to the SaaS labs, we have the consumer labs. Okay. Consumer labs. These are focused on consumers. Right. So you have Eureka Labs. This is Andrej Karpathy's project. Yeah. I don't think anything's been released from it yet. Education, though. But yeah, education makes sense. Four people. You have Humans&. Oh, it's four. Four people. Not four individuals working there, it's four people. Yeah, it might be four people. It might be one person. Who knows? He's pretty good. Yeah. You have Humans&, and this is, I think, that phrase, it's like humanity focused.
You're going to turn humans into sand. Human sand. Human sand. Yeah. We got to hang out with the founders at the Super Bowl. But yeah, they're focused on creating models that work better alongside people. Sure. Yeah. You have a lot of, like, companions, these kinds of ideas. Right. You have Character.AI also. Oh, yeah. Do they really own c.ai? What a great domain, if that's true. I don't know. We'll have to check. Anyway, so then that brings us down to the visual labs. Visual labs. Right. These are a lot of either multimodal models, or they're actually producing video or images. Right. We've talked to a lot of these founders. Yeah. I feel like almost all of them have been on. Yeah. I mean, World Labs raising today. Yeah. Or fundraise announcements today. Yeah. Midjourney, et cetera. These are pretty obvious. You have your neo auditory lab. Midjourney. Is the sailboat logo correct? It's a good logo. Yeah. Okay. You have Meta Reality Labs on there too. Okay. Yeah, yeah, that makes sense. They're visual, not fully AI yet, but they're getting there. Yep. Okay. You have the neo auditory lab. Okay. Right. So this is going to be anything that has to do with vocals or voice or music. Yes. ElevenLabs. ElevenLabs, sponsor of TBPN. Thank you. Suno. Right. Making music. Gemini also released a new model. Yes. Today, Lyria 3. I didn't even know there was a one or two. It's a trilogy already. They've just got secret models that they're hiding from us. Yeah. So this is a very interesting field. And then you have your legacy auditory as opposed to your neo auditory. Right. So these are your old ones. This is. Well, John, do you want to talk about Nuance? Nuance Dragon NaturallySpeaking. This is the original boxed software. You buy it, install it on a Windows computer, you can talk into a microphone, and it will write down what you say. Dictate it. Yeah. Using some AI. Not a large language model at the time, not a transformer-based architecture, but it became a very large company. I think it's part of Microsoft now or something. I think it's been acquired a few times. But yeah, very, very interesting company. A lot of really solid Fruity Loops. Yeah. That's, you know, you're in the lab making beats, I guess. Okay, so now moving up, I think this is really a very interesting section. So this is the neo trad lab. Yes. So what is a neo trad lab? This is a simple definition, clearly. Yeah. Does it even need explaining? I think everyone gets it. Watch your head, by the way. We're coming really close. You might want to be on the other side. Okay, so neo trad lab. It's a neolab. Yes, but it's trad. Okay. Okay. So what does that mean? So basically, the way I think about a lot of these labs is that they're extremely research focused. Okay. They're also largely focused on a single idea. So if you think of OpenAI, very research focused, obviously, but they're doing a lot of different things. Right. They have consumer and they have enterprise. But it's even on the product or on the research side, right? They're doing video, images. Sora, images. Yeah. But even within language models, I'm sure they have a continual learning team or all these weird moonshot things, whereas I think a lot of these neo trad labs are basically focused on one single moonshot idea. Okay, so, example: flapping airplanes. Right. They just came on. They're talking about data efficiency. That's kind of the one moonshot idea. Right. Obviously it's a very general problem.
There's a bunch of different ways you tackle it, but they're like, that's the problem that we're going after. It's one specific thing they're working on. Yep. And, I mean, they talk about, oh, you know, if we figure it out, there'll be some value, but we're not exactly sure how it's going to come out right now, and we're not sure how we're going to productize it necessarily. Yeah. So the idea is, if these labs can figure out the core research idea, then the value will appear. Right. You also heard this out of Ilya with SSI. Right. Not sure how they're going to get revenue, but it'll come if they figure out a breakthrough. Continual learning. If you build it, they will come. Yes. Yeah. So a lot of interesting things here. So we can look at, okay, General Intuition. Yes. They're basically doing a lot of multimodal training, where they can take video game data and try to figure out how to map that onto LLMs or world models, these types of things. Okay, you have Inception. I believe they're doing dream tech. Wait, okay, I'm thinking of Logical Intelligence. They're doing, like, diffusion models. Okay, right, so diffusion. But for LLMs. But for text. Yeah, we've seen a demo from Google on that too. Okay. Inception is doing, I think, the energy-based models, which is kind of this weird thing. Okay, wait, I have both of those companies flipped again. So Yann LeCun is into this. It's simple. I mean, I don't know why you're flipping stuff around. This is literally just Neolab 101. You're doing a basic breakdown. The point is that they're doing these kind of weird architectures, where an energy-based model is kind of different from a normal LLM, where you have the normal backprop stuff. But the point is that these are all very weird architectures that they're working on. So maybe the big labs have small teams that are working on this stuff, but basically these people go out of the big lab. A lot of them are coming out of the big labs and starting these new projects. Coming out of a trad lab or a neolab or a legacy lab. Or a neo SaaS lab. Exactly. Okay, got it. Yeah. Okay, so now let's move up a little bit. Yeah. What is the neolab lab? Neolab lab. Okay, so this is. Yeah, I like this one. So these are a lot of companies that are also very research focused, but the point of the research is to build essentially a researcher. So they're recursive. Right, okay, so you have recursive and recursive. Yeah, you actually have two that are recursive and recursive. Richard Silker. So you have Periodic Labs, where they're a little bit more focused on the hardware, but the whole point is that they have this kind of closed loop where you can basically build a lab within the lab. Right. That's the whole point. Lab lab, building a lab. Got it. Unconventional AI, similar thing. I think the product will be a lab. They're in the lab manufacturing business. Correct. Got it, yes. Okay, moving up, we have the math lab. Yes. So these are pretty interesting: Axiom and Harmonic. Yes. And then you have MATLAB. Yes. But these are pretty cool. There have been a lot of good breakthroughs recently. I think there are a bunch of Erdős problems that are being solved, or maybe they're just being proven in some ways. But there's a lot of interesting research coming out of these. Harmonic is Vlad Tenev, the founder of Robinhood. Yes, correct. Yes. Wet labs. Yeah, wet labs.
Okay, so these are your biolabs. Oh, you got LabCorp. Yeah, I'm familiar with LabCorp. LabCorp. But there are a lot of biology-focused labs. I didn't know a lot about a lot of these, but there's all sorts of interesting research. So Isomorphic Labs, this was spun out of, I believe, DeepMind, or at least Google. Yeah, that's right. They're working on longevity and drug development, almost. So some of these are very focused on specific forms of drug development. Some of them are broader, where they're very focused on longevity stuff. Yeah. Cool. And then, yeah, let's go to. Yeah, what's going on? Oh, yeah, up top we have Labrador. Oh, that's really important if you want to understand labs. You've got your foundation: the white lab, your black lab, your chocolate lab. Chocolate lab. Yeah, chocolate labs are important. Yeah, if you want to understand labs broadly. Yeah. Okay. Then moving back down, we have the neo kinetic lab. Okay, so these are going to be your labs that are more focused on robotics. Yes. So you have a bunch. You have Project Prometheus. Yes. This is Bezos's lab. It's still kind of in stealth, which is why there's not even a logo for it. Yeah. You have Figure, you have Skild AI. Skild AI is the Luke Metro project. Yes, got it. Yes. Physical Intelligence, Sunday. Right. These are all your kind of neo kinetic labs. Right. These were started fairly recently, in the past maybe four or five years. Yes, broadly. The neo NEO lab. Neo NEO lab. Right. Okay. So 1X is building NEO robots, so they're the neo NEO lab. Yeah. Yep. And then legacy kinetic is the previous gen. Legacy kinetic is kind of the old gen. Yeah. But cooking, they're cooking. Waymo's cooking. Yeah. Cruise. Cruise, Boston Dynamics have been a little bit behind. Zoox, also another self-driving car company. There's a bunch in here that I kind of. There's another one in stealth, I think, that never really hit an inflection. Okay. Yeah. And then you have your mostly vehicle-focused ones. You have your dark lab. Yes. So this is working with the government. Yeah, I have Shield AI. I also have DARPA. DARPA is a lab. Yeah. They invented the Internet, right? GPS. Yeah. ARPANET. Yes. That's good. And then the simulation lab. Simulation lab, yes. So Simile, we just had them on. SpaceX you could put up there, because aren't they working with the Pentagon? Yeah. Where's Rocket Lab? Rocket Lab needs to be on there. That's a lab. There's a lot of labs. There's a lot of labs. I mean, yeah, lab is a very, very broad term. Very broad term. Well, at least it's crystal clear now for everyone. Yeah. So I think this should be pretty obvious to anyone who's thinking about neolabs, like how we should be thinking about them now. If you've been paying attention, this is all second nature to you. Yeah. Did you add up how much all the companies have raised? It's gotta be north of 200 billion. Yeah, it's a lot. I mean, so I didn't do that, but I was trying to figure out how to include valuations on the map. Yeah. You didn't want it. You didn't feel like you could do the math? No, we don't know how. Well, also, a lot of them are rumored. It's actually kind of hard to find out, because a lot of these are still really in stealth. Like a lot of these neo trad labs, because the whole point is that they're doing this research stuff, they're not going to productize early. Yeah. And also, how much do you put in the DeepMind bucket?
That's a huge amount of investment, and it's not exactly disclosed. Do you count the TPU, do you count Google Cloud? Different allocations. You can go really deep in the stack to understand the impact of the broad AI build-out. But yeah, I mean, if you just total this up, you can really just do OpenAI and Anthropic and get like 90% of the way there, and it's probably like 200 billion. It's also hard because it's evolving so fast. Right. So David Silver's lab; he used to be at DeepMind. Well, I like Ineffable. Ineffable Intelligence, good word. Yeah, I think that was rumored today. Yeah, it's indescribable. Yeah, it's good. But these things are coming out, like, every day. You put the typos in just to prove that. What typos? Humans& is, like, in sovereign lab. And then Ineffable Intelligence also has a typo. So I just want to make sure. I wanted to make sure. Yeah, you put the typos in so that it was proof that you made it. Yeah. Well, whatever you built this in doesn't have spell check, I guess. Anyway, fantastic report. Thanks for breaking it down. I learned a lot, and I hope you did too.
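For anyone who wants the market map at a glance, here is a minimal sketch of the taxonomy as a plain data structure, using only the category names and example companies mentioned in the conversation above. The one-line notes, the bucket selection, and the helper function are illustrative assumptions for readability, not an official or exhaustive version of Tyler's map.

from dataclasses import dataclass

@dataclass
class LabCategory:
    """One bucket on the neolab market map: a one-line note plus example companies."""
    note: str
    examples: list[str]

# Categories and examples as named in the conversation; many companies
# arguably fit several buckets, as discussed above.
NEOLAB_MAP: dict[str, LabCategory] = {
    "trad lab": LabCategory("did their own big transformer pre-training run",
                            ["OpenAI", "DeepMind", "Anthropic", "xAI"]),
    "sovereign lab": LabCategory("frontier-ish labs anchored outside the US",
                                 ["Mistral", "Cohere", "Qwen", "DeepSeek", "Kimi"]),
    "legacy lab": LabCategory("research arms of big enterprises",
                              ["Microsoft Research", "Bell Labs", "FAIR"]),
    "neolab": LabCategory("recent, research-first startups",
                          ["Prime Intellect", "Thinking Machines", "SSI"]),
    "trad SaaS lab": LabCategory("RL / fine-tuning on data inside big enterprises",
                                 ["Thinking Machines", "Applied Compute", "Poolside"]),
    "neo SaaS lab": LabCategory("products sold as SaaS, not enterprise-specific",
                                ["Cursor", "Cognition", "Windsurf"]),
    "post lab": LabCategory("evals and tooling on top of others' models",
                            ["METR", "Epoch", "Pangram"]),
    "safety lab": LabCategory("safety and mechanistic interpretability",
                              ["Anthropic", "Goodfire", "EleutherAI"]),
    "consumer lab": LabCategory("consumer-facing models and companions",
                                ["Eureka Labs", "Humans&", "Character.AI"]),
    "visual lab": LabCategory("video and image models",
                              ["World Labs", "Midjourney", "Meta Reality Labs"]),
    "neo auditory lab": LabCategory("voice and music models",
                                    ["ElevenLabs", "Suno"]),
    "neo trad lab": LabCategory("research-first, one moonshot idea",
                                ["General Intuition", "Inception", "Logical Intelligence"]),
    "neolab lab": LabCategory("research whose product is itself a researcher or a lab",
                              ["Periodic Labs", "Unconventional AI"]),
    "math lab": LabCategory("formal math and theorem proving",
                            ["Axiom", "Harmonic"]),
    "wet lab": LabCategory("biology and drug development",
                           ["Isomorphic Labs", "LabCorp"]),
    "neo kinetic lab": LabCategory("robotics started in the last ~5 years",
                                   ["Figure", "Skild AI", "Physical Intelligence", "1X"]),
    "legacy kinetic lab": LabCategory("older robotics and self-driving",
                                      ["Waymo", "Cruise", "Boston Dynamics", "Zoox"]),
    "dark lab": LabCategory("working with the government",
                            ["Shield AI", "DARPA"]),
}

def buckets_for(company: str) -> list[str]:
    """Return every category a given company appears under (the lines are blurry)."""
    return [name for name, cat in NEOLAB_MAP.items() if company in cat.examples]

if __name__ == "__main__":
    for name, cat in NEOLAB_MAP.items():
        print(f"{name:18s} {cat.note}: {', '.join(cat.examples)}")
    print("Thinking Machines appears under:", buckets_for("Thinking Machines"))

Running this prints each bucket with its examples, and the buckets_for helper makes the "a lot of these can fit in different places" point concrete: Thinking Machines, for instance, shows up as both a neolab and a trad SaaS lab.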