LIVE CLIPS
Episode 11-7-2025
That makes sense. Yeah, yeah. I have some silly photos of old that I'm sure will resurface when people want to dunk on me. Break this post down for me actually. What was your read on this? So, technological innovation. Do you think he used AI for it? Well, I think the worst thing about AI is that people are saying you can't use em dashes anymore. I agree. I love the em dash. Okay, okay, so I don't love the em dash. It's not that I hate it. I'm just indifferent to it. I don't even know where it is on the keyboard. I think you do minus sign, minus sign and then space to generate one. I mean, I know it's Shift-Option-Dash. So isn't it just double tap space? Let me go here and double tap.
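For what it's worth, the shortcut being fumbled here is checkable: the em dash is a distinct Unicode character from the keyboard hyphen, and the double-minus trick works because many editors auto-convert it. A quick sketch in Python:

```python
# The three dash-like characters being confused in this exchange.
hyphen = "-"        # U+002D hyphen-minus, the key on every keyboard
en_dash = "\u2013"  # U+2013 en dash (Option+Hyphen on macOS)
em_dash = "\u2014"  # U+2014 em dash (Shift+Option+Hyphen on macOS)

# "Minus sign, minus sign" works because many editors' smart-punctuation
# feature auto-replaces a double hyphen with an em dash:
text = "AI--assisted".replace("--", em_dash)
assert text == "AI\u2014assisted"
print(ord(em_dash))  # 8212, i.e. 0x2014
```

(Double-tapping the space bar, on iOS at least, inserts a period rather than any kind of dash, which may be the source of the confusion.)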
83. Oh, 83. Okay. Rough one, rough one. Well, we have breaking news from the Pope. He chimed in on the AI bubble. Some bub talk from the papacy. A papal bull. Is that a Pope signal on the bub talk? No, no, it's not on the bubble. It's not on the bubble. It is from the Pope, though, who I should be following. Why am I not following the Pope? "Technological innovation can be a form of participation in the divine creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of AI to cultivate moral discernment as a fundamental part of their work. To develop systems that reflect justice, solidarity, and a genuine reverence for life." I like that. Good line. Liquidity thinks it's AI-generated slop. Very bearish. No. Yes. He used one em dash. The em dash is not a telltale sign. You can use just one if you're hammering on the keyboard. Or you can write a note and then in the edit, someone can add an em dash. I don't know. Is the Pope one-shotted? We'll never know. I think this is fine. I think this is a nice little call to action. What do you think, Tyler? You put this in here, right? Yeah. I mean, what are you guys doing? If the Pope comes out and says, like, I actually don't care about AI margins, what are you doing? I would be extremely bullish on that. What I'm worried about is the Pope coming out and being like, actually, I've discovered some novel physics. I've been talking to Grok, and I've been talking to ChatGPT for just, like, thousands of hours. People don't think about the recursiveness in the Bible. Yeah, no, yeah. Recursiveness in the Bible, actually, it's crazy. You know, I'm the Pope of the Catholic Church, but I've been getting into polytheism recently. I went down a rabbit hole, and I think there might be more than one God. I think there's multiple gods. AB in the chat says, gotta get the Pope on Vanta.
Totally agree, 100%. I think it'd be interesting if the Pope started coming out and said, we're exploring taking equity stakes in technology companies, like Fannie Mae and Freddie Mac. Yeah, well, we'll see if we can get the Pope on Bezel. Because your Bezel concierge is available now to source you any watch on the planet. Seriously? Any watch. Yeah.
Was just basically a straightforward pro-lifting essay. So that had more of that. Sure, sure, sure. That's awesome. My takeaway from that: I was very anti-gym growing up because I was a skateboarder and a surfer. The idea of going inside, being indoors, lifting stuff up and putting it down for whatever the reason was, it wasn't appealing to me as a kid. Because I was like, well, I could just go to the beach and surf and be out in nature. And so I had a generally negative sentiment around the gym. And I was also super scrawny growing up. I've got long arms. In high school I was probably 140 pounds. Getting under one plate on bench was like, big, I'm gonna die. At first I never really got into it. I didn't play football or anything like that. And then I got really into it in college, and it was the biggest unlock for mental health. Prior to that, I was a teenager, and teenagers, I think, are just naturally kind of moody and figuring out who they are and how they fit into the world and all that stuff. And then I found lifting, and a lot of that stuff just went away entirely. Because I had a purpose. Even if that purpose was so simple: I woke up, I was going to eat well, I was going to go to the gym, I was going to lift as heavy as I could and go on with my day. It ended up being like a drug almost immediately.
Well, no, no, no. There are 10. There's 10, there's 10. We'll see. Well, one of them maybe shouldn't be there. No, no, no, no, no. There's actually 10 because it's the Mag 7. Oh, Oracle. I forgot Oracle. Yes, you did. So it is the TBPN top 10. The 10 most important companies in AI. Loosely, the Mag 7, plus a few bonus ones. And we're going to try and go through the various companies and rank them based on how AGI-pilled they are and how much they need AGI. Is that right? Yeah, basically. So let's pull up the actual scales. There's two axes. Let's go to the next slide. So basically on the horizontal we have how AGI-pilled they are. Sure. So I feel like that's fairly self-explanatory. You kind of believe that AI will become something that can produce the median economic output of a person. Yes. And there's a few different ways to understand if somebody believes in AGI. It could be the rhetoric of the CEO or the founder. It could be the actions of the company. Actions speak louder than words. And is there anything else that could lead someone to show that they believe in AGI? I guess it's mostly just the actions and the words. Right. I mean, those are the main things you can kind of do as a person. Okay, well, we have it. We will be judging them by both their actions and their words. Yes. So then on the vertical axis we have how much they need AGI. Okay. So I think this is maybe a little harder, so I want to qualify this. Yeah. So this doesn't necessarily mean you believe in some kind of sentient AI that's as good as a person. Yeah. In this context it more so just means that AI will continue to become more and more economically valuable. Yes, yes. To where you can kind of sustain building more and more data centers. You can do more and more capex.
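The two-axis framework described here (belief on the horizontal, dependence on the vertical) is simple to formalize. A minimal sketch, with illustrative made-up coordinates on a -1 to 1 scale rather than the show's actual chart positions:

```python
# x = how AGI-pilled (belief), y = how much they need AGI (exposure if
# progress stalls). Coordinates are illustrative guesses, not the show's chart.
grid = {
    "Sam Altman":      (0.6, 0.8),
    "Dario Amodei":    (0.9, 0.9),
    "Larry Ellison":   (-0.3, 0.7),
    "Satya Nadella":   (0.1, -0.2),
    "Jensen Huang":    (-0.4, 0.3),
    "Sundar Pichai":   (0.4, -0.5),
    "Mark Zuckerberg": (0.5, -0.4),
    "Elon Musk":       (0.8, 0.0),
    "Andy Jassy":      (-0.6, -0.3),
    "Tim Cook":        (-0.8, -0.7),
}

def quadrant(x, y):
    """Label a position the way the hosts do: believes/doesn't, needs/fine without."""
    belief = "believes in AGI" if x >= 0 else "doesn't believe in AGI"
    need = "needs AGI" if y >= 0 else "fine without AGI"
    return f"{belief}, {need}"

for name, (x, y) in grid.items():
    print(f"{name}: {quadrant(x, y)}")
```

The interesting quadrant, as the discussion of Larry Ellison below suggests, is the off-diagonal one: doesn't believe in AGI but needs it anyway.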
Yeah, there's a little bit of: how much will this company be transformed if the AI wave plays out well? And if AI doesn't play out well, how chopped are they? How cooked are they? Exactly. That's a great framework. Yes. And then also, yeah, if we flash forward and nothing really changes, total plateau, total decline in token generation or something, is the business just continuing business as usual? Okay, so who are we starting with? Let's start with Sam Altman. Okay. Sam Altman. Where is he on this? So, Sam Altman, I think this is a pretty reasonable spot. He believes in AGI. Right. He runs kind of the biggest AI company. He also needs AGI, because if the models stagnate, they have a lot of capex they need to fulfill. If models stagnate, what are they going to do? Maybe the margins somehow work out, but you're probably not in a good spot if models get worse or if people start using AI less, if you're Sam Altman. But he's also not, you know, in the top, top corner. Okay. Right. And I think you can justify this through a lot of OpenAI's actions. You see stuff like Sora, you see maybe the erotica. Who's running the laser pointer? I want to go down here. I'd maybe put them down here. Okay, explain that. Explain that. I think that more and more, at least in the short term, OpenAI looks like a hyperscaler. They're kind of a junior hyperscaler. A lot of people want to say that they're bearish on OpenAI at current levels, but ultimately, when you look at how their business is evolving, they seem to me like they'd be fine if the models plateaued. Yes. But, yeah, I feel like the mood on the timeline was much more "slide Sam to the left, he doesn't believe in AGI." There was that post about, like, if OpenAI really believed in AGI, they wouldn't be doing Sora or they wouldn't be doing the erotica thing.
All of those were very much like: needs it, but kind of accepts that it's not coming, and so stopped believing in it. In response to Jordy: yeah, because the vertical axis is also just about whether it's going to continue to be economically useful. So if people just stop using AI in general, or if the revenue stops accelerating or any of this stuff, I think OpenAI will be in a bad spot regardless of the models actually getting much better. Like, if they just make the models much more efficient to run, you could say that's not very AGI-pilled because the models aren't getting a lot better. Yeah, but I'm just saying, if there was no more progress at all, like we never got a new model from any of the labs, I think OpenAI would add ads. They would add commerce. So they might be fine without AGI. They would make agents a lot better. But, I mean, the 1.4 trillion, the 1.4 trillion in commitments, that is hard to justify if it's just the business today growing, no crazy breakthroughs. You're laying out the bull case. You're laying out the bull case saying, oh, if they just add ads, they're going to be able to hit the 1.4 trillion, no problem. That's. I'm not saying that. I'm saying they can pull back on a lot of these commitments. Sure, sure, sure. I don't think these are going to end up being, like, real liabilities where the business is just cooked and they can't hit them. Right. So I'm just saying, I think there's a shot they have, when they talk about having success in consumer electronics. Right. Which is something you talked about yesterday.
Like, I think they can probably build a really cool device. If it can be at all competitive with an iPhone, they could be fine without AGI. We're also getting into, I guess, let's get some other people on board so we can see where. Yeah. I think it's useful to be relative because these are not quantitative numbers. Yeah. So next we have Dario. Okay, so Dario is up here. When you listen to what Dario is saying, he is extremely AGI-pilled. Yes. Right. This is kind of the reasoning why he's so anti-China. Yeah. Right. Because he sees it as an actual race. It is a national security issue. Totally. It's a problem if China gets there first. What is that new sound cue? I don't think Tyler has a sound effect. What is it? UAV online. UAV online. Okay. I like that. This is the UAV. We should give this UAV aesthetics. For sure. This is good. Okay, continue. But there's also a sense that he needs AGI. Yeah. Because if AI stops growing as fast, and you imagine that things kind of settle where they are now, OpenAI is definitely in the lead. So you need a lot of continued growth for Anthropic to keep making sense economically, I think. Okay, yeah, yeah, sure. Who else is on here? So I think next is Larry. Larry Ellison. Larry. Larry's in kind of an interesting spot. So this is kind of a weird place to be, where you don't believe in AGI, but you need it. Okay. How did you wind up there? So I think you're probably wondering how I ended up in this situation. Record scratch, freeze frame. So I think this is how you factor it in. There's the personal rhetoric and then there's the actions of his company. Sure. So when you listen to Larry speak, he doesn't seem the type that believes in some kind of superintelligent God that is going to come, that's going to birth this new thing.
Humanity will rise. But then you look at Oracle. Has anyone found his LessWrong username? Is he on there? Exactly. I don't think Larry's reading Gwern. Okay, got it. But if you look at Oracle, they need AI to work. Okay. Right. They are maybe levering up a little too much, or maybe not enough, depending on how AGI-pilled you are. But, you know, he's not very AGI-pilled, so. Exactly. It's kind of hard to square, but it's a bold bet. Yeah, this is a unique spot. I think it is. This is a unique spot. He's off the grid. Okay. Yeah. Who's next? So let's see who's next. Who did I. Okay, Satya. Satya. Okay. I think this is a fairly reasonable spot. Yeah. Obviously there's some sense where he is slightly AGI-pilled, or maybe more than slightly. Slightly believes in the power of the technology. Yeah, I mean, he was very early on OpenAI. He thinks that AI in general will become very useful, but maybe it won't become superintelligent. Maybe it's not going to replace every person. It's just a useful tool. The quote I always go back to is him saying, my definition of AGI is just greater economic growth. So show me the economic numbers. It's a very practical definition. I think people see him as very reasonable. He's not getting over his skis. Yeah. I like him. I like him in the center of the grid somewhere. He's also, you know, if AI doesn't work out, I think Microsoft is in a very good spot. Going to stick around. They're not crazy over-investing. If OpenAI works out, he'll do very well. If they don't work out, he's also fine. He's hedged. He's doing quite well. The man's hedged. He's happy to be a lessor. Yeah. That's a good quote, too. Okay. Yeah. You wouldn't be leasing if you were super AGI-pilled.
Right. You're hoovering up everything, you keep it all for yourself. Exactly. Let's actually talk about that with, I think it's Jensen next. Okay. Yes, Jensen. So Jensen, he doesn't believe in AGI. He's the whole reason why we're here. Explain that. So, yeah, this is maybe my personal take. Please. If Jensen was very AGI-pilled. Yes. I mean, he is kind of the rock on which this all is built. Yes. He has the chips. Yes. He has the appeal. He would not be giving out those chips. He would keep them all to himself and he would be training his own model. Okay. So that's why I think he's more on the doesn't-believe-in-AGI side. But if there's any kind of downturn in the economy, could you put Sam, like, potentially closer in this direction too? Because he's talking about getting into the compute cloud. Yes. Yeah, yeah. Reselling, for sure. So if there's so much demand, and the models are progressing so quickly, wouldn't you want to just hold on to all of that? I think this would also be interesting to see over time. Like, Sam has definitely shifted leftwards. Yes. Over time he's moving. Basically, this summer, you've seen a lot of the actions of OpenAI seem less and less AGI-pilled. Pretty much everyone has been moving the AGI timelines outward, which you could translate into no longer believing in AGI. It's more just that the timelines have gotten longer this year, broadly, for pretty much everyone. Dwarkesh. There was a new blog post yesterday. It was basically AI 2027, but the new one was AI 2032. So is it basically the same team? Different team. But the team of AI 2027 was promoting it. Yeah. AI 2027 should be like, it was actually AI 3027, we missed by a thousand, typo. We couldn't get that domain. We were just off by a thousand years. Yeah. But there's definitely the sense where if there's a downturn, Jensen, the stock is going to go down. Yeah.
But it's not going to go down as far as Larry's. Right. Because they don't have insane financial obligations. Yes. You know, they don't have completely unreasonable capex. They're not levered up. Sure, sure, sure. Oh yeah. Their Z-score is through the roof. Their Altman Z-score is through the roof. They're looking pretty safe. Yes. Okay, let's see the next one, I believe. Okay. Sundar. Sundar believes in AGI more than Satya. You think? Yeah, well, I think you can see this in kind of. They were even earlier in some sense than Satya. Right. With DeepMind. So I think AI has played a fairly big part in the Google story. Basically for the past 10 years they've been trying to get into AI. Maybe their actions didn't show it that much, but, you know, they wrote the transformer paper. They actually applied AI. They built self-driving cars. Discovery and core. Google Search is just an AI product. It's just an index on information they're organizing. Also, compared to Satya, Gemini is a frontier model. Totally. Gemini 3 is supposed to be this incredible model. Everyone's very excited. Microsoft is not training their own base model yet. Yeah. And also, if you really believe in AGI, the actions that we'd see are you squirming and saying, I've got to get in. It doesn't matter if I'm 1% behind or 10% behind or 80% behind, I've got to get in. And I think we know someone who's doing that. Gabe in the chat says, is this AGI or AG1 from Athletic Greens? Needs AG1. Fine without AG1. Believes in AG1. Now, we are the only podcast that is not partnered with AG1. But we do like green. Green is our hero color. Let's go to the next person. Okay. But also before that, I think Sundar is also definitely below this line because Google has been doing very well.
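The "Altman Z-score" joked about here is a real metric: Edward Altman's 1968 bankruptcy predictor for public manufacturers, a weighted sum of five balance-sheet ratios (Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5), with roughly Z above 2.99 read as safe and below 1.81 as distressed. A minimal sketch with hypothetical inputs, not any company's actual financials:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, sales, total_assets, total_liabilities):
    """Original (1968) Altman Z-score for public manufacturing firms."""
    x1 = working_capital / total_assets     # liquidity
    x2 = retained_earnings / total_assets   # accumulated profitability
    x3 = ebit / total_assets                # operating efficiency
    x4 = market_cap / total_liabilities     # market leverage
    x5 = sales / total_assets               # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Conventional cutoffs from Altman's original paper.
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"

# Hypothetical, illustrative numbers (say, in $B).
z = altman_z(working_capital=50, retained_earnings=120, ebit=40,
             market_cap=900, sales=300, total_assets=400, total_liabilities=150)
print(round(z, 2), zone(z))  # 5.25 safe
```

A huge market cap relative to liabilities (the X4 term) is what sends the score "through the roof" for a company like Nvidia, which is the joke being made.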
AI was at first, I mean, people thought of AI as, oh, this is going to destroy Google. This is bad for Google. So if AI doesn't work, then Google is just in the spot they were before, which is doing very well. If AI does work, then, I mean, Gemini is one of the best models. They'll do very well too. Speaking of Gemini: Google AI Studio, create an AI-powered app faster than ever. Gemini understands the capabilities you need and automatically wires them up for you. Get started at AI Studio. Build. Yeah, continue. Okay, so I think next is Zuck. Yes. So Zuck is also in kind of an interesting spot. Yes. I think Zuck is actually someone who has shifted rightward. It's fascinating. Yeah. You've seen this. I mean, for a while he's been traveling. Yes. Yeah. So for a while he was doing open source, which in some sense is very AGI-pilled because you're training a model, you're moving the frontier forward. But it's also open source, which you can think of as trying to commoditize everything. You don't think that there's going to be some superintelligence that will take all the value, that you need to hold on to. You can kind of give it out. And now he's moved towards closed source. We're going to get the best people, we're going to train the best model. It felt like Zuck was sort of like, oh yeah, AI, it's this cool thing. I'm going to check the box. I got my team. We did this fun little side project. It's this open source model. We kind of found our own little lane. But we're not competing in the big cosmic battle between OpenAI, Anthropic, DeepMind. Do you think that was just a counter-positioning way to try to win the AI war? To say, hey, we're just going to commoditize this market. Commoditize your complement. The Chinese approach. Yeah. Commoditizing your complement, I think, is a good strategy. Yeah, that makes sense.
And then maybe you could say that once the Chinese models have been getting better, he can kind of step out of that position. Sure. Move towards closed source. Yeah. But now it feels like he's going way harder on the AGI vision, paying up for it, investing so much money in it. And depending on how much he invests, you could see him pushing up. It's true. But right now the business is just insane. It's so phenomenal that even if he winds up spending all this money and they don't even get a frontier model, and all the researchers are like, yeah, we didn't discover anything, and we're just gone, the business is fine because it's such a behemoth. So that's why he's there. After earnings, they took a fairly big hit, so maybe he should be a little bit higher. Yeah, maybe a little bit. Meta broadly is still a very safe play if AI doesn't work out. And it was the same thing during the metaverse. He believed in the metaverse, he invested in the metaverse, but he never needed the metaverse. And so after the stock sold off like crazy, it went right back up because everyone realized, wait a minute. But he's also raising debt off the balance sheet, which could push him up into this zone. Right. If you believe in AGI, why don't you just carry it yourself? Why don't you carry it yourself? You believe in AGI. Just let it ride. Yeah. Okay. Who else are we missing? Okay, so most of the Mag 7 now. Is it Jassy? Oh, Elon. Elon. Elon. Yeah. Okay. Elon. So Elon, I mean, he's been AGI-pilled, I think, for a very long time. Super AGI-pilled. I agree with that. OpenAI co-founder. Even before that, I think he was fairly big in the safety space. Totally. You see him even on Joe Rogan, he was talking about AI safety. He still believes in it. Backed off a bit. And AI safety is important because it's going to become superintelligent.
It's going to take over the world. Yeah. Even this was part of the dialogue around the new comp package: wanting the voting power to be able to be part of securing his robot army. Oh, yeah. Interesting. Yeah. I think robots are very underrepresented on this board. Totally. And Elon is kind of the main player in that space right now. Yeah. So he talked yesterday about humanoids being sort of an infinite money glitch. And I feel like you kind of need AGI in order to unlock the infinite money glitch. Yes. But at the same time, very strong core business. The cars don't need AGI, the rockets don't need AGI, Starlink doesn't need AGI. So he's not entirely indexed on it the way the foundation labs are. Right. Yeah. I mean, there's definitely a significant part of the Tesla market cap that is basically betting on robots. Totally. But there's also a big part of it that's just the core business. That's why he's in the middle of the needs-it versus fine-without axis. Tesla needs AGI more than SpaceX. Yes. SpaceX seems very uncorrelated with the AI trade. Totally, totally. Unless you have the data. He's happy to be a telecom baron. He's happy to be the new Verizon. Who else? Okay, so I believe now it's Andy Jassy. Jassy, yeah. So he, I think, broadly does not believe in AGI, although he has a fairly major stake in Anthropic, which maybe is very AGI-pilled. He's hedging with Anthropic. Yeah, yeah, I think that's basically what you can say. He doesn't seem at all worried about the core business. AWS seems to be fine. And is Google building their position in Anthropic? I saw some headline about that. Yeah, that was wild. Imagine. Sundar. Such a beast. Sundar and Satya, really both of them. They just have stakes everywhere. Broadly, Andy Jassy seems very quantitative. He's focused on the numbers. Realistic.
He's not making these grandiose statements about the future of intelligence or what humanity is going to be like. Yeah. When you look at the AWS earnings, it feels like they invest when the demand shows up, and they build data centers when there is demand. They are not in the business of doing a 10-year hypothetical sci-fi forecast based on breakthroughs. So Ben Thompson, on the recent Sharp Tech, we were listening to it on the way in, was talking about how Amazon has repeatedly said, hey, we're supply-constrained. We have a lot more demand than we can fulfill. With this new deal with OpenAI, the $38 billion, which only got announced Monday, which again feels like forever ago at this point, Ben was kind of hypothesizing that they let OpenAI jump the line, because that $38 billion deal was starting effectively immediately, versus some of the others. Like, Larry's deal with OpenAI was announced as this massive backlog, revenue that cannot be generated in a meaningful way today because they have to build the data centers that can actually deliver the compute. Yeah. I mean, Amazon has been building data centers. They basically built all the data centers for Anthropic, but it still seems very restrained. It's not overly ambitious. Anthropic has had issues with capacity, and it's probably because Andy Jassy doesn't want to get over his skis. He doesn't want to build too much. So I think that's why he's on the left side. And also, if you think AI doesn't work and we're going to be in the same spot as web 2.0, you need AWS, you need your EC2 server. I think he's very well positioned there. And then I think the last one is Tim Cook. Tim Cook. Let's hear it for Tim Cook. Criminally underpaid, but has done a fantastic job not getting over his skis. Yeah, this one I think is GOAT contention. Fairly self-explanatory.
I mean, he doesn't believe in AGI, he doesn't believe in LLMs, he doesn't believe in chatbots, apparently. I don't know, the new Gemini deal. As of two years ago, Apple was signaling, yeah, we're not doing that stuff. Yeah, there was that famous quote three years ago. But I think what's under-discussed, my takeaway from our conversation with Mark Gurman about Apple's ambitions around their new LLM experience with Gemini, is that I think there's a very real scenario where Apple wants to compete for the same user base as OpenAI. They want that transaction-based revenue, that commerce revenue. Yeah. I just disagree on this point. I'm not putting their odds of winning that market at over, like, 10%, but I think they would be right to realize that it's potentially an opportunity. Yeah. Is it a billion dollars a year that they're paying Google for? I believe that's the number. Yeah. That feels really low, doesn't it? I don't know, it just feels super low to me, among all the different deals that are going on. When I just think about the value of AI on the iPhone, and you're at $1 billion. When I think about what the market broadly is putting on AI for the enterprise. But at the same time, whatever. When Sundar on the Google earnings call was talking about their top, I think, 10 customers and how many trillions of tokens they were using, it really was netting out to, like, $150 million of actual revenue. Yeah, yeah, yeah. And so this will be their biggest customer for Gemini on day one. Until one of our listeners gets on, of course. Yeah. And I think, even with the Gemini news, Apple still seems very reactive to AI. Yeah. They're not seeing, oh, this future where it's going to do everything, and moving there right now.
Yeah, they're seeing where the demand is, seeing where the users are, and then moving, which is classically how business works. It's not very AGI-pilled. Also, on that point of $1 billion: I have no idea how they can possibly know the amount of inference that's going to happen in Siri plus Gemini over a one-year period. There's just no way to predict that. Or can they? I feel like if the integration's good, there will be a ton of queries. Chat can correct me if I'm wrong, but I didn't read the $1 billion headline as inference. I felt like that was a technology license, not necessarily inference, and then there might be consumption on top of that. Yeah, okay. Or maybe they're licensing Gemini and then they pay per query for the energy and the capex, the full cost. Couldn't a lot of the stuff be done on device? For sure. Yeah. Because Gemma has baked-down models that could be smaller, so that's possible. Chat says, where's Tyler on the chart? Feels like Tyler isn't AGI-pilled anymore. Let's go to Tyler. I actually am on the chart here. I am over here. So, yeah, I think I'm very AGI-pilled. Right. You know, I'm ready for the Dyson sphere. Yes. I think it's only a matter of years. Handful. Only a matter of years. It's a couple thousand days away. You need to, you need to publish by 2026. AGI 2025? We still have a month left. Still a month left. Yeah, we could get AGI on Christmas. This Christmas, this holiday season. But why do you need AGI? I think I also need AGI. Why do you need AGI? Well, I mean, look, if you look at the current jobs data for college-age students, it's looking pretty bad. It's bad. So I think you kind of need AGI to really boost the economy. If AI does not work, the macro economy is looking not good. Oh, sure, sure, sure.
So I feel pretty bad about my job outlook without AGI. Sure, sure, sure. Even though you're already employed? That's hilarious. Well, thank you for taking us through that. Chat wants to know: where would you put Lisa Su, AMD? Yeah, so Lisa Su, I think she's probably been quite reactive, actually. You've really only seen AMD start. You don't get credit for being reactive, is what you're saying. Yeah. I think it's really only over the past year, maybe, that she's been making any kind of deals. Right. You saw George Hotz maybe a year or two ago.
Some reason. Yeah. Look, if you're on a billionaire's PR team, step one: hire a trainer. Yes. Yeah, yeah. Bring the Tren Twins in. Get jacked. And, you know, then the public opinion follows. Why do you think health is so political? Oh, man. I did an event two nights ago here, and someone asked the same question, and this guy in the audience stood up, and. Because I actually don't know. I mean, one reason is that I think that when you're jacked and you're strong or you're competent, it acts as a judgment, actually, because if you're fat and you're lazy, then you feel sort of implicitly judged. The other thing is that going to the gym and getting jacked implies a certain value system. If you lift 145, you know that's less than 225. Sure. And if you're in the gym trying to get stronger, the implication is that it's better to lift 225. Yes. Seriously. And so a lot of people who are dogmatically committed to a certain kind of egalitarianism don't like that. But the guy at the event stood up. This guy, he looked just like Joe Rogan. His name's Anthony, actually. Shout out Anthony. He said that, generally speaking, the left tries to impose their ideas onto nature, and the right starts with nature and discerns their ideas from there. And so there's something just sort of fundamentally different about that. And I think lifting falls into the latter camp. I have a question. Yeah. I'm always surprised at how much there seems to be more.
Do this jokey, like, gigachadification of these people. Is there something where, like, it would actually be in their best interest for the tech elite to get a lot. Look at Bezos. Look at Zuck. I was gonna say like that. You know, I used to think Zuck was a full creep. Yeah, he was like a beady-eyed sort of bug. When Zuck would look in the camera, you could almost, like, hear him blinking, you know what I'm saying? And then he got jacked and threw on a chain and I was like, this guy's not so bad after all. You know what I mean? So maybe they would all benefit from that. PR teams hate this one simple trick. Well, PR teams should love it. They maybe aren't awake to it for some reason.
Full name Jordan. Yeah. Wow, brother. Make it make sense what's going on here. But, yeah, I just remember, like, at the time I was driving a Prius with like 200,000 miles on it, and I'd just be like, driving to the gym, just blasting Future, drinking protein shakes. And that's, like, when I think I became a man. Incredible things. And so I think it's super important. And like, when I hear that a young person is getting into going to the gym, I'm like, great. Totally. You're gonna find yourself through that. And my only thing is, like, I think at some point you have to evolve beyond that and take the learnings and that mentality that you get from going to the gym and apply it to other areas. I think, like, the big takeaway that I learned early on was, early on, I was obsessed with, like, how many sets am I doing? How many reps am I doing? How am I, you know, approaching different lifts? How many different exercises should I be doing in a lift? And I very quickly realized that there were really two factors, maybe three factors. It was like, was I sleeping a lot? Yep. Was I eating a lot? And was I, like, hyper consistent and doing a lot of volume? And those are the same lessons that I've brought into my real life outside of the gym. It's like, am I sleeping a lot? Am I, like, nourishing myself? Am I getting the right amount of calories? Am I getting the right kinds of food? And then am I being consistent in the work? And so we apply that every day. We come in and do the show. You also go to the gym. I mean, behind the scenes, look, the whole squad was at the gym this morning, which I think is a great thing that you guys do. A lot of people that were on that chart, a lot of the tech elite, are deeply unpopular people.
It looks like now this is real change that's about to happen, and it's going to be transformative for all of the startup community. What it really says is that the best products will win. Yeah. Who are the losers here? Well, I think the major loser from this speech is the primes. I mean, he said, I don't want to pick on anyone, I'm not going to call out companies, but you have to be investing in research and technology. The fact that the primes only spend 2% on research and development and are focused on constantly kind of maintaining the status quo, that is not going to stand on his watch. I think it was 2% on R&D. Yes. And they haven't historically been acquiring venture-backed companies, so this is also the problem where they're really just on a much slower timeline than is needed by the department. And I'd say that the takeaway from the speech, and I think the sound bite you'll be hearing more and more about, is that the new process is.
Trillion. And Tesla's already worth a trillion. So it's within, you know, striking distance. The market cap is now around 1.5 trillion, actually. So my two questions were: one, like, it's going to be weird to live in the world of the trillionaire. But we are getting close. Like, that's going to happen not just within our lifetime, like, definitely within the next decade. This sets him up to be the first one, but it's going to happen. And I wonder how that's going to reshape our culture, like, the world in America. Because I had this realization that when billionaires became so prevalent and prominent, there was a lot of heat that was taken off the millionaire. Like, if you're just, like, a guy. Yeah. The billionaire became the heat shield. Yeah, exactly, exactly. Like, yeah, I'm a millionaire. I have a boat. I go to Bass Pro Shops, but I'm not getting protested because I have a million dollars in my house and boat. And isn't it, like, practically one out of ten Americans is a millionaire? Yeah, yeah. So the millionaire became more accessible, and the billionaire became the thing that society scapegoats for all the problems. Approximately 9.4 to 9.5% of American adults are millionaires. Yeah. But my question was, what happens to the billionaire when trillionaires come in? Because, you know, like, Bernie Sanders and a whole crew say, like, billionaires shouldn't exist, every billionaire is a policy failure. Well, what happens when you have to say, like, trillionaires are the real policy failure, but billionaires are also a policy failure, and millionaires we're, like, kind of okay with? It becomes much more complicated. But at the same time, if Elon is the only trillionaire, it's going to be really, really easy to target him and be like, he's bad. He's a trillionaire. Is he any easier to target?
I don't know. Yeah, maybe he maxed it out already, but I thought that was interesting. And then the flip side was, what does this mean for other companies?
Job outlook without AGI. Sure, sure, sure. Even though you're already employed? That's hilarious. Well, thank you for taking us through Chat. Wanted to know, where would you put Lisa Su? AMD? Yeah, so Lisa Su, I think she's probably been quite reactive, actually. You've really only seen AMD start. You don't get credit for being reactive, is what you're saying. Yeah, I think it's really only over the past year, maybe, that she's been making any kind of deals. Right. You saw George Hotz maybe a year or two ago basically trashing AMD chips for how bad they were. But maybe she's given so much of the company away. Maybe she thinks that shares won't have very much value in the future. She's, like, happy to just give away 10% of the company. Yeah. So if I had to pick somewhere, I would say that she's honestly getting a little close to Larry, in that AMD is very much, like, an AI play still. Right. They're a chip company, obviously, but you don't get the feeling that she's a true believer. Yeah, that tracks. Okay, that tracks. The new Siri. I'm reading through some of Mark Gurman's reporting. Speaking of Apple and AI, the new Siri is planned for March, give or take. As has been the case for months, Apple has been saying for nine months it's coming in 2026. Apple simply reiterated that. And then on the actual deal: it's a $1 billion deal for a 1.2 trillion-parameter artificial intelligence model developed by Google to help power and overhaul the Siri AI voice assistant. The two companies are finalizing an agreement that would see Apple pay roughly $1 billion annually for access to Google's technology, with the Google model handling Siri's summarizer and planner functions. Apple intends to use the Google model as an interim solution until its own models are powerful enough, and is working on a 1 trillion-parameter cloud-based model that it hopes to have ready for consumer applications as soon as next year. That is interesting.
I feel like they're going to have to pay a lot more to actually run all the queries, generate all the tokens. But I'm sure we'll find out more as Google reports earnings and Apple reports earnings. Before we get to that, let me tell you about graphite.dev, code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. What else is in the news today? We have some breaking news coming through the timeline. What's the breaking news? TJ Parker has finally found a new car that he likes. Huge. Guess what it is. Guess. A new car that he likes. It's not the Sterrato? It's not the Lamborghini Huracan Sterrato. No, it is the Raptor R. Look at this. Look at this. Finally found a new vehicle I quite like, and great gas mileage to boot. This was the problem with your Ford Raptor. It wasn't the R. It wasn't the R. You only had 600 horsepower instead of 720. That was the main problem. I also needed to park it in Park City. Oh yeah. Just absolutely brutal. Yeah. What are the specs on the Raptor R again? It's pretty wild. How much horsepower? Raptor R: zero to 60 in 3.9, 720 horsepower, 5.2-liter supercharged V8. Fantastic. It's kind of the Julius AI of trucks. It really is. AI data analyst. Connect your data, ask questions in plain English, get insights in seconds. No coding required. Trying to go through the timeline. We got Will Manidis talking about some other stuff. Let's pull up this post from Pegasus since it's relevant to the overview we just did. Pegasus says: Altman today, of course he's talking about yesterday: we were looking at selling compute, but we need as much as possible. Zuck last week: we could sell compute. Are we in a compute shortage or not? Because both are saying they're buying as much of it as they can and thinking about selling it. That's very interesting. Yeah. I mean, the supply and demand dynamics. I like that story about Jensen where he understands the dynamic here.
But there's still. You get kind of crushed if there's a sell off in terms of demand. You just have to go back and study what was going on during the crypto boom for Nvidia. During the crypto boom, it was like, buy as many as possible. Just ramp, ramp, ramp. And then they literally had a glut. Yeah. And they took huge write offs. The steel man here: if you're a large tech company and you have the ability and the financing capital to buy a lot of GPUs, it's not the worst idea to buy as much as you can so that you have preferred access to it, and then resell some if you have more than you need. Right? Yeah. But the question is, historically, that is not how things have been done. I understand the pitch there, but I mean, we just talked to David Baszucki from Roblox, and he was saying that, look, we had our own on-prem, but then we had spikes of demand, and so we went to the hyperscalers for that, because they can load balance across, while people are playing Roblox here and then maybe they watch some Netflix over there, and they are storing all sorts of different data, and there's different workloads that happen at different times. And so traditionally the hyperscalers have been able to service multiple users across them. Jumping straight to selling compute, I think that the timelines are a little bit funky on this one. It seems odd, it seems rushed. It seems rushed especially when your core product is growing so quickly. Google Cloud Platform was released what, seven years after Google launched? Same thing with AWS, same thing with Azure. These were very mature businesses, very stable businesses with cash flow that were able to justify the investment. It was not something that was done in the growth phase as much. I don't know. Speaking of other wild new business lines, Elon apparently confirmed that Tesla is going to build a semiconductor fab. Yep. I've been advocating for this for a long time.
I'd love to hear what Elon had to say. But first let me tell you about Fal, the generative media platform for developers. The world's best generative image, video and audio models, all in one place. Develop and fine tune models with serverless GPUs and on demand clusters. Let's play Elon. What do we got here? Let's play Elon. From our suppliers, it's still not enough. So I think we may have to do a Tesla Terafab. Whoa, great name. That is a bombshell. It's like Giga but way bigger. Terabytes. He's feeling good right now. I can't see any other way to get to the volume of chips that we're looking for. So I think we're probably going to have to build a gigantic chip fab. Morris Chang at TSMC is like, no, actually, I'm fine. I will supply you. Samsung is like, I'm good. I will do it. Pull up this other video of Optimus. That was crazy because there were some new moves of Optimus getting jiggy. Let's see. It is at the bottom of the Tyler app. Look at this. Okay, so this was shown in contrast to the first Optimus, which was just a guy in a suit. That was so fun. This thing has motion. Yeah, it's not bad. This feels like it's getting pretty close to Unitree level. Definitely at that level. I don't know. Imagine you're working late at the office one night and this thing just walks out onto the floor and starts looking at you and doing these moves. Yeah. Weird. It's so weird how these new projects, like, they seem. I understand why a lot of people look at this with skepticism, but at the same time, it just doesn't seem that complicated to just manufacture that thing and ship it. Like, they do it, you know, in China. They do it all over the place. This doesn't seem that insane. And at the same time, like, Apple couldn't ship a car. Elon's shipped some cars, though. Yeah, yeah. I meant it more as, like, if you're doing one thing, it can sometimes be hard to branch out into the other thing.
I remember a few weeks ago there was a headline around Tesla ordering like $600 million worth of actuators. Yeah. Like, it seems like they're going into some sort of production. Yeah. I wonder who they're gonna sell to. You don't need $600 million just for testing. It's just, the car analogy is so tricky, 'cause it's like, most people that bought Teslas already had cars. Right. And so it's just like a one for one swap. Who are you replacing with this? You know, it's tricky. What do you think? Well, with this you can replace, like, an exotic dancer. Yes, that's true. I don't know if many people have an exotic dancer in their life who they're ready to replace. It'll just be very interesting to see where these things actually diffuse. Because if you sell them for one price to consumers to do their laundry, but they work in an industrial capacity, people will just buy them and use them for that. But when we talk to people who are in industrial environments, they're like, I definitely don't want a humanoid robot for that. I'd rather just have a big massive robotic arm that's, like, bolted to the ground and can actually lift, like, 10,000 pounds, as opposed to one that can only lift, like, 50 pounds. I don't know. Ryan says they're gonna sell them to the police. We'll see. Maybe. They certainly would be, I don't know, backup. I think they would be a pretty effective deterrent, maybe. I like Vittorio having some fun on the timeline, going viral, getting community noted. Vittorio said Elon Musk now has $1 trillion in his bank account. That's a thousand times $1 billion. He could give every single human on Earth 1 billion and still be left with 992 billion. Let that sink in. People love this funny math whenever it drops. What else is going on? The humanoid. Yes. China has a similar humanoid robotic project, although it's way, way scarier because it went full Terminator mode on this.
Of course this one is hooked up to power, but yes. So this video was in response to earlier this week. There was a, I think it was a Unitree presentation, and they brought out this new robot, and it was walking with such, like, swagger, a natural gait. Yeah. Basically, people thought it was actually just a person. Really. And it was not. And so they replied with it fully stripped down. That is remarkable. Yeah. Oh, because they put, like, the suit on the robot. Yeah, it was like a Neo, kind of like the 1X robot. We're getting in spooky, spooky territory. This is pretty crazy. Oh, wow. Yeah. It does look human. It does look human. I'm looking at the other video. Sorry. Let me tell you about Turbopuffer. Search every byte. Serverless vector and full text search built from first principles on object stores. You saw these logos. Cheaper. You see these new logos. Oh wow. Stacked, stacked. Absolutely stacked. They're cooking. Chris Bakke says: In the last 10 months, three very talented friends have joined separate hot early stage startups in senior roles and quit after realizing that the company's actual revenue was significantly, significantly less than what the founder had told them during the interview process and shared online. A few things to clarify. I'm not joking. In two of the three cases, it would be hard to tell that you were joining a bad actor company. On the surface: solid investors, founders worked at unicorn companies. The third was maybe more obvious. The only ones who really lose here are the employees. The investors have 200 portcos and can afford some losses. And our industry doesn't really pursue founders over fraud or misleading information up to a certain point. So if you're thinking about joining an early stage startup, ask to see their Stripe and/or signed contracts before you join. Not bad advice. Yeah. Again, this going back to spring. Right.
It just felt like every founder was feeling this insane pressure to show, like, a 1 to 10 million ramp. That was just insane. Yeah. And the weird dynamic is that as that pressure ramps up, you just get more and more incentive to fake it, with community-adjusted ARR and contracts that don't actually stick and all sorts of different twists on: is it actually cash? Is it actually people coming in? Are they on long contracts? The quality of revenue has been maybe degrading, but also it's just been easier to game than ever, and there's been more incentive to game it than ever. So stay safe out there, folks. Kazakhstan has signed an MOU to buy up to $2 billion of advanced chips from Nvidia. Let's hit the gong for Kazakhstan. Warm it up, warm it up, warm it up. Boom. Great hit. Great hit for our friends in Kazakhstan. Good to see them getting into the game. Does Borat take place in Kazakhstan? Is that the whole thing? Maybe they should do Borat 3 about data centers. He goes to a data center. Sacha Baron Cohen could reprise his role, all to bring AI to Kazakhstan. Sacha Baron Cohen doing a doc, like, yeah, just on AI. Incredible. It'd be amazing. So I can use it to do better web search. So I can use it as an autocomplete. You're not doing the Borat voice, brother. I'm not gonna do the Borat voice. I will do the Profound voice. Get your brand mentioned in ChatGPT. Reach millions of consumers who are using AI to discover new products and brands. Go get a demo. The Kobeissi Letter says: breaking, Nvidia's losses accelerated to negative 5% on the day, now down 16% since Monday's high. That marks a drop of $800 billion since Monday. Wow, that is a wild sell off. Didn't Tyler quote this and say, is this bullish, or something like that? No, that was on the DoorDash. They went down 20%. Everyone's down 10 or 20%.
Whether you beat or miss, you're going down. Wait, is this true? Ari Emanuel said he's working with Elon to bring Optimus to the UFC. That would be incredible. Please don't be fake news. Please don't be fake news. We need robots in the UFC. I mean, Rogan and Palmer Luckey were talking about it on their podcast, and it certainly seems like a hilarious and wild thing, even just as an exhibition match before the real UFC. Here's a question, though: would Elon want a human to thrash on Optimus? Is that good for his brand? Or would he want Optimus to win? And if Optimus wins, then is that actually entertaining? I don't know. I think putting Optimus up against the best MMA fighters in the world is fine. It's impossible though, right? Because if you just kick metal, you'll just break your foot. And if you punch. This was Ari Emanuel at the All-In Summit. Yeah, it was released, I guess, one day ago. Oh, interesting. He was talking about wanting. Wait, wait. What crazy timing. So he says: I saw what he's creating. The man's a genius. I want to do a UFC fight with his robots. Yeah, yeah. Very cool. Heisenberg sharing a little bit of red here. Microsoft down 10% in the last eight days. Nvidia down 12% in the last four days. Palantir down 16%. Meta down 18%. Losses have accelerated. Compound248 was sharing yesterday. He's like, when I clipped this, I didn't expect to crash the markets globally. But that clip certainly seems to have played a part in this little sell off. Always take a victory lap for having world historic consequences. I actually think that that particular clip was, like, less than 5% of the total views, because there were other people that clipped it. Oh sure, it went everywhere. But always claim victory. That's the lesson. That's right. Anyway, we have our first guest of the show, Katherine Boyle from Andreessen Horowitz joining us.
She's in the Restream waiting room. Welcome to the TBPN Ultradome. Katherine, how are you doing? It's great to be here. Good to see you guys. Why are you playing the night vision goggle sound? Jordan got a new soundboard today. I felt a little tactical. I felt a little tactical. It's great to see you. Good to see you. Should the Department of War get a soundboard during these speeches? I heard Hegseth was giving a big speech. Are they sticking mostly to the cheers and the clapping, or are we firing off muskets? Are we firing off cannons? Is there pageantry? Is pageantry alive in the Department of War? Bring the pageantry back to the Department of War. That is important. You guys should definitely have your own sound for them, for sure. You guys are getting the first look at a major speech. Yes. So it is just ending right now. I just hopped off the live stream, and I can tell you my phone is blowing up with just how many people are enthusiastic, not only in the venture community, but just across Washington, about what has been said in the speech. And it's going to have dramatic repercussions for defense tech and for Silicon Valley and the broader defense innovation base. So it's an extraordinary speech. You'll be hearing a lot more about it. But you're getting the first look. Fantastic. Incredible. Take us through it. And it is definitely, I'd say.
Review. Let's do it. We got the laser pointer. We got the laser pointer. Our good friend Tyler Cosgrove has put together a slide deck for us that tries to help map the Mag 7. Really? I call it the TBPN top 10. The top 10. Well, technically 9. There's 9. Well, no, no, no. There are 10. There's 10, there's 10. We'll see. Well, one of them maybe shouldn't be there. No, no, no, no, no. There's actually 10 because it's the Mag 7. Oh, Oracle. I forgot Oracle. Yes, you did. So it is the TBPN top 10. The 10 most important companies in AI. Loosely the Mag 7, plus a few bonus ones. And we're going to try and go through the various companies and rank them based on how AGI pilled they are and how much they need AGI. Is that right? Yeah, basically. So let's pull up the actual scales. There's two axes. Let's go to the next slide. So basically on the horizontal we have how AGI pilled they are. Sure. So I feel like that's fairly self explanatory. You kind of believe that AI will become something like, it can produce the median economic output of a person. Yes, yes. And there's a few different ways to understand if somebody believes in AGI. It could be the rhetoric of the CEO or the founder, it could be the actions of the company. Like, actions speak louder than words. And is there anything else that could lead someone to show that they believe in AGI? I guess it's mostly just the actions and the words. Right. I mean, those are the main things that you can kind of do as a person. You say things and you do things. Well, we will be judging them by both their actions and their words. Yeah. So then on the vertical axis we have how much they need AGI. So I think this is maybe a little harder, so I want to qualify this. Yeah. So this doesn't necessarily mean believing that you get this kind of sentient, you know, AI that's as good as a person.
But I think in this context it more so just means that AI will continue to become more and more economically valuable. Yes. To where you can kind of sustain building more and more data centers, you can do more and more capex. Yeah. There's a little bit of, how much will this company be transformed if the AI wave plays out well, or AI doesn't play out well? How chopped are they? How cooked are they? Exactly. That's a great framework. Yes. And then also, yeah, if we flash forward and nothing really changes, total plateau, total decline in token generation or something, is the business just continuing business as usual? Okay, so who are we starting with? So let's start with Sam Altman. Okay, Sam Altman, where is he on this? So, Sam Altman, I think this is a pretty reasonable spot. He believes in AGI, right? He runs kind of the biggest AI company. He also needs AGI, because if the models stagnate, they have a lot of capex they need to fulfill. If models stagnate, what are they going to do? Maybe the margins somehow work out, but you're probably not in a good spot if models get worse or if people start using AI less, if you're Sam Altman. But he's also not, you know, in the top, top corner. Okay, right. And I think this is. You can justify this through a lot of OpenAI's actions. You see stuff like Sora, you see maybe the erotica. This is not very. Who's running the laser pointer? I'm running the laser pointer. I want you down here. Maybe put them down here. Okay, explain that. Explain that. I think that more and more, at least in the short term, OpenAI looks like a hyperscaler. They're kind of a junior hyperscaler. A lot of people, you know, want to say that they're bearish on OpenAI at current levels, but ultimately, when you look at how their business is evolving, they seem to me like they'd be fine if the models plateaued. Yes.
But, yeah, I feel like the mood on the timeline was much more: slide Sam to the left, doesn't believe in AGI. There was that post about, like, if OpenAI really believed in AGI, they wouldn't be doing Sora, or they wouldn't be doing the erotica thing. Like, all of those were very much: needs it, but kind of accepts that it's not coming, and so stopped believing. Yeah. So I think, yeah, the vertical axis is also just about whether it's going to continue to be economically useful. So if people just stop using AI in general, or if the kind of revenue stops accelerating or any of this stuff, I think OpenAI will be in a bad spot regardless of the models actually getting much better. Like, if they just make the models much more efficient to run, you could say that's not very AGI pilled, because the models aren't getting a lot better. Yeah, but I'm just saying, like, if there was no more progress at all, like, we never got a new model from any of the labs, I think OpenAI would add ads, they would add commerce. Yeah. So they might be fine without AGI. They would make agents a lot better. But, I mean, the $1.4 trillion in commitments, that is hard to justify if it's just the business today growing because it's good, no crazy breakthroughs. You're laying out the bull case, saying, oh, if they just add ads, they're going to be able to hit the 1.4 trillion, no problem. I'm not saying that. They can pull back on a lot of these commitments. Sure, sure. I don't think these are going to end up being, like, real liabilities. Unless the business is just cooked. Okay.
They can't hit them. Right. So I'm just saying, I think there's a shot. When they talk about having success in consumer electronics, right, which is something you talked about yesterday, I think they can probably build a really cool device. If it can be at all competitive with an iPhone, they could be fine without AGI. We're also getting into, I guess, let's get some other people on the board so we can see where. Yeah. I think it's useful to be relative, because these are not quantitative numbers. Yeah. So next we have Dario. Okay, well, Dario is up here. So I think Dario is kind of. When you listen to what Dario is saying, he is extremely AGI pilled. Yes. Right. This is kind of the reasoning why he's so anti-China. Yeah. Right. Because he sees it as an actual race. It's like nuclear weapons. It is a national security issue. It's a problem if China gets there first. What is that new sound cue? I don't think Tyler has sound effects. What is it? UAV online. UAV online. Okay, I like that. This is the UAV. We should give this UAV aesthetics for sure. This is good. Okay, continue. But there's also a sense that he needs AGI, because if AI stops growing as fast and you imagine that things kind of settle where they are now, OpenAI is definitely in the lead. Sure. So you need a lot of continued growth for Anthropic to keep kind of making sense economically, I think. Okay, yeah, yeah, sure. Who else is on here? So I think next is Larry. Larry Ellison. Larry's in kind of an interesting spot. So this is kind of a weird place to be, where you don't believe in AGI but you need it. Okay. How did you wind up there? So I think you're probably wondering how I ended up in this corner. Record scratch, freeze frame. So I think this is how you factor it in.
There's kind of the personal rhetoric, and then there's the actions of his company. Sure. So when you listen to Larry speak, he doesn't seem the type that believes in some kind of superintelligent God that is going to come, that's going to birth this new thing, and humanity will rise. But then you look at Oracle. Has anyone found his LessWrong username? Is he on there regularly? I don't think Larry's reading gwern. Okay, got it. But then. But he's investing, and they are, you know, they need AI to work. Okay. Right. They are maybe levering up a little too much, or maybe not enough, depending on how AGI pilled you are. But, you know. AGI pilled. Exactly. It's kind of hard to square, but. It's a bold bet. Yeah, this is a unique spot. I think it is a yes spot. He's off the grid. Okay. Yeah. So who's next? So let's see who's next. Who did I. Okay. Okay. So I think this is a fairly reasonable spot. Yeah. Obviously there's, you know, there's some sense where he is slightly AGI pilled, or maybe more than slightly. Slightly believes in the power of the technology. Yeah, I mean, he was very early on OpenAI. He thinks that AI in general will become very useful, but maybe it won't become, you know, superintelligent. Maybe it's not gonna replace every person. It's just a useful tool. It's the next. The quote I always go back to is him saying, like, my definition of AGI is just greater economic growth. So show me the economic numbers, and that will be. It's a very practical definition. Yeah. I think people see him as very reasonable. He's not getting over his skis. Yeah. I like him. I like him in the center of the grid somewhere. It seems like. He's also, you know, if AI doesn't work out. Yeah. I think Microsoft is in a very good spot to stick around. You know, they're not crazy over-investing in OpenAI. They have a nice. Yeah, they'll write some code. If OpenAI works out, he'll do very well. If they don't work out, I think he's also. He's hedged.
He's doing quite well. He's hedged. Yeah. The man's hedged. He's happy to be a leaser. Yeah. That's a good quote, too. Okay. Yeah. Yeah. You wouldn't be leasing if you were super AGI pilled. Right. You'd be hoovering up everything, keeping it all for yourself. Exactly. I think. Let's actually talk about that with. I think it's Jensen next. Okay. Yes, Jensen. So Jensen, he doesn't believe in AGI. He's the whole reason why we're here. Explain that. So, yeah, this is maybe my personal take. Please. If Jensen was very AGI pilled. Yes. I mean, he is the kind of. He's the rock on which this all is built. Yes. He has the chips. Yes, he has. If he was AGI pilled, he would not be giving out those chips. He would keep them all to himself and he would be training his own model. Okay. So that's why I think he's more on the 'doesn't believe in AGI' side. Yes. But if there's any kind of downturn in the economy. Could you put Sam, like, potentially closer in this direction too? Because he's talking about getting into the. Yes. Like, compute cloud reselling, for sure. So, yeah. If there's so much demand. Yeah, yeah. And the models are progressing so quickly, wouldn't you want to just hold on to all that? I think this would also be interesting to see over time. Like, Sam has definitely shifted leftwards. Yes. Over time he's moving. He's basically. This summer, you've seen a lot of the actions of OpenAI. They seem less and less AGI pilled. Pretty much everyone has been, like, moving the AGI timelines outward, which you could interpret as no longer believing in AGI. It's more just, like, the timelines have gotten longer this year, broadly, for pretty much everyone. To our Cash app, there was a new blog post yesterday. It was basically AI 2027. There was a new one. It was AI 2032. So is it basically the same team? Different team. But the team of AI 2027 was promoting it. It did, yeah. AI 2027 should be, like. It was actually AI 3027. We made a typo. We couldn't get that domain.
We were just off by a thousand years. Yeah. But there's definitely the sense where, if there's a downturn, Jensen, the stock is going to go down. Yeah, but it's not going to go down as far as Larry. Right. Because they're still not. They don't have insane. This is not financial advice. Yes. You know, they don't have, you know, completely unreasonable capex. They're not levered up. Sure, sure, sure. That I know. Oh yeah. Their Z-score is through the roof. Their Altman Z-score. The Altman Z-score is through the roof. They're looking pretty safe. Yes. Okay, let's see the next one, I believe. Okay. Sundar. Sundar. He believes in AGI more than Satya. You think? Yeah, well, I think you can see this in kind of. They were even earlier in some sense than Satya. Right. With DeepMind. So I think AI has played a fairly big part in the Google story. Very recently, they've always. I think basically for the past 10 years they've been trying to get into AI. Maybe their actions didn't actually do much, but, you know, they wrote the Transformer paper. They, you know, applied AI. Like, they're actually applying it. They built self-driving cars. And core Google search is just an AI product. It's just an index on information they're organizing. Also, I mean, compared to Satya. I mean, Gemini is a frontier model. Totally. Gemini 3 is supposed to be this incredible model. Everyone's very excited. Satya is not. Microsoft is not training their own base model yet. Yeah. And also, like, there is a little bit of, like, if you really believe in AGI, the actions that we see. Are you, like, squirming and being, I got to get in. It doesn't matter if I'm 1% behind or 10% behind or 80% behind. I got to get in. And I think we know someone who's. Doing that. Gabe in the chat says, is this AGI or AG1 from Athletic Greens? Needs AG1. Fine without AG1. Believes in AG1. Now, we are the only podcast that is not partnered with AG1. But we do like green. Green is our hero color.
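For anyone curious, the Altman Z-score they're riffing on is a real bankruptcy-risk metric from Edward Altman (no relation to Sam). Here's a minimal sketch of the classic public-manufacturer formula, with entirely made-up illustrative inputs rather than any company's actual financials:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, sales, total_assets, total_liabilities):
    """Classic Altman Z-score for public manufacturing firms.

    Z > 2.99 is conventionally the 'safe' zone; Z < 1.81 signals distress.
    """
    a = working_capital / total_assets       # liquidity
    b = retained_earnings / total_assets     # accumulated profitability
    c = ebit / total_assets                  # operating efficiency
    d = market_cap / total_liabilities       # market leverage cushion
    e = sales / total_assets                 # asset turnover
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

# Hypothetical numbers (in $B), purely for illustration:
z = altman_z(working_capital=60, retained_earnings=50, ebit=80,
             market_cap=4500, sales=130, total_assets=120,
             total_liabilities=30)
print(round(z, 1))  # -> 94.5
```

The point of the joke holds: when market cap dwarfs total liabilities, the D term dominates and the score is "through the roof."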
Let's go to the next person. Okay. But also, before that, I think Sundar is also definitely below this line, because Google has been doing very well. At first, I mean, people thought of AI as, like, oh, this is going to destroy Google. This is bad for Google. So if AI doesn't work, then Google is just in the spot they were in before, which is doing very well. If AI does work, then, I mean, Gemini is one of the best models. They'll do very well too. Speaking of Gemini: Google AI Studio. Create an AI-powered app faster than ever. Gemini understands the capabilities you need and automatically wires them up for you. Get started at AI Studio. Build. Yeah, continue. Okay, so I think, yeah, next is Zuck. Yes. So Zuck is also kind of in an interesting spot. Yes. I think Zuck is actually someone who has shifted rightward. It's fascinating. Yeah. You've seen this. I mean, for a while he's been traveling. Yes. Yeah. So for a while he was doing open source, which in some sense is very AGI pilled, because, you know, you're building it, you're training a model. Right. It's like you're moving the frontier forward. But it's also open source, which you can kind of think of as: you're trying to commoditize everything. You don't think that there's going to be some superintelligence that will take all the value, that you need to hold on to it. You can kind of give it out. And now he's kind of moved towards closed source. We're going to get the best people, we're going to train the best model. It felt like Zuck was sort of, like, oh, yeah, AI. Like, it's this cool thing. I'm going to check the box. I got my team, we did this fun little side project. It's this open source model. We kind of found our own, like, little lane. But we're not, like, competing in the big cosmic battle between OpenAI, Anthropic, DeepMind. Like, well, don't. Do you think that was just a counter-positioning way to, like, try to win the AI war?
Go say, hey, we're just going to try to commoditize this market. Commodify. The Chinese approach. Yeah. Commoditizing your complement. I think that is a good strategy. Yeah, that makes sense. And then maybe you could say that, oh, once the Chinese models have been getting better, he can kind of step out of that position. Sure. And he's moved towards closed source. Yeah. But now it feels like he's going way, way harder on the AGI vision. Paying up for it, investing so much money in it. Yeah. You know, it's like. And depending on how much he invests, you could see him pushing up. But right now, the business is just insane. It's so phenomenal that even if he winds up spending all this money and, you know, they don't even get a frontier model, and, like, all the researchers go, yeah, we didn't discover anything, and we're just gone. Like, the business is fine, because it's such a behemoth. So. Yeah, that's why he's. After earnings, they took a fairly big hit. Yeah. So maybe he should be a little bit higher. Yeah. Maybe a little bit. But broadly it's still a very safe play if it doesn't work out. And it was the same thing during the metaverse. It was like, he believed in the metaverse, he invested in the metaverse, but he never needed the metaverse. And so after the stock sold off like crazy, it went right back up, because everyone realized, wait. He's also raising debt off the balance sheet, which could kind of push him up into this zone. Right. If he's, like, you know. If you're worried about it, why don't you just carry it yourself? Yeah, yeah, yeah. Why don't you carry it yourself? Yeah. What do you really. You believe in AGI? Just let it ride. Yeah. Okay. Who else are we missing? Okay, so we have most of the Mag 7. Now is Jassy? Oh, Elon. Elon. Elon. Yeah. Yeah. Okay, Elon. So Elon, I mean, he's been AGI pilled, I think, for a very long time. Super AGI pilled. I agree with that. OpenAI co-founder. Even before that, he.
I think he was fairly big in the safety space. Totally. You see him even on Joe Rogan, he was talking about AI safety. He still believes in it. And AI safety is important because it's going to become superintelligent. It's going to take over the world. Yeah. Even this was part of the dialogue around the new comp package: wanting the voting power to be able to be part of securing his robot army. Oh, yeah. Interesting. Yeah. I think robots are very underrepresented on this board. Totally. And Elon is kind of the main player in that space right now. Yes. Yeah. So he talked yesterday about humanoids being sort of an infinite money glitch. And I feel like you kind of need AGI in order to kind of unlock the infinite money glitch. Yes. But at the same time, very strong core business. The cars don't need AGI, the rockets don't need AGI, Starlink doesn't need AGI. So he's not entirely indexed on it in the way the foundation labs are. Right. Yeah. I mean, there's definitely a significant part of the Tesla market cap that is basically betting on robots. Totally. But there's also a big part of it that's just. But that's why he's in the middle of the 'needs it' and 'fine without it'. It's like. Yeah, everything on it is needs. Yeah, but that's just one piece of it. Tesla needs AGI more than SpaceX. SpaceX, yes. SpaceX seems very uncorrelated with the AI trade. Totally, totally. Unless you have the data. He's happy to be a telecom baron. He's happy to. Yeah, he's happy to be the new Verizon. Who else? Okay, so I believe now is Andy Jassy. Jassy, yeah. So he, I think, broadly does not believe in AGI, although he has, you know, a fairly major stake in Anthropic, which maybe is very AGI pilled, but they have not. He's hedging with Anthropic. Yeah, yeah, I think that's basically what you can say. He doesn't seem at all worried about kind of the core business. AWS seems to be. And is Google building their position in Anthropic?
I saw some headline about that. That was wild. Imagine. Like, Sundar, such a beast. So yeah, really both of them, they just have huge stakes broadly. Andy Jassy seems very quantitative. He's focused on the numbers, realistic. He's not making these grandiose statements about the future of intelligence or what humanity is going to be like. Yeah. When you look at the AWS earnings, it feels like they invest when the demand shows up, and they build data centers when there is demand. And they are not in the business of doing a 10-year hypothetical sci-fi forecast based on breakthroughs. Ben Thompson, on the recent Sharp Tech. We were listening to it on the way in. He was talking about how Amazon has repeatedly said, hey, we're supply constrained. We have a lot more demand than we can fulfill. With this new deal with OpenAI, the $38 billion, which only got announced Monday, which again feels like forever ago at this point. But Ben was kind of hypothesizing that they kind of let OpenAI jump the line, because that $38 billion deal was starting effectively immediately, versus some of, like, Larry's deals with OpenAI, where what was announced is this massive backlog, revenue that cannot be generated in a meaningful way today, because they have to build the data centers that can actually deliver the compute. Yeah. I mean, and Amazon has been building data centers. They're building data centers. Basically, they built all the data centers for Anthropic. But it still seems very kind of restrained. It's not overly ambitious. Anthropic has had issues with capacity, and it's probably because, you know, Andy Jassy doesn't want to get over his skis. He doesn't want to build too much. So I think that's why he's kind of on the left side. And also, I mean, if you think AI doesn't work and we're going to kind of be in this, you know, the same spot as web 2.0, you need AWS, you need your EC2 server. I think he's very well positioned there. And then I think the last one is Tim Cook. Tim Cook.
Let's hear it for Tim Cook. Criminally underpaid, but has done a fantastic job not getting over his skis. Yeah. This one I think is fairly self-explanatory. I mean, he seems. He doesn't believe in AGI, he doesn't believe in LLMs, doesn't believe in chatbots, apparently. I don't know, the new Gemini deal is signaling. I said two years ago, Apple was like, yeah, we're not doing that stuff. Yeah, there was that famous. Three years. But I think what's under-discussed, my takeaway from our conversation with Mark Gurman, and Apple's ambitions around their new LLM experience with Gemini, is that I think that there's a very real scenario where Apple, like, wants to compete for the same user base as OpenAI. They want that, like, transaction-based revenue, that commerce revenue. Yeah. I disagree on this point, but. I'm not saying I'm putting their odds of, like, winning that market at over, like, 10%, but I think that they would be right to realize that it's potentially an opportunity. Yeah. Is it a billion dollars a year that they're paying Google for? I believe that's the number. Yeah. That feels really low, doesn't it? I don't know, it just feels super low to me. Among all the different deals that are going on, when I just think about, like, the value of AI on the iPhone, and you're like, $1 billion? Like, when I think about, like, what is the value that the market broadly is putting on, like, AI for the enterprise. At the same time, whatever. When Sundar, on the Google earnings call, was talking about their top, I think, 10 customers and how many, like, trillions of tokens they were using, but it really was netting out to, like, $150 million of, like, actual revenue. And so this will be their biggest customer for Gemini on day one. Until one of our listeners gets on, of course. Yeah. And I think, I mean, even with the Gemini news, Apple still seems very kind of reactive to AI. Yeah.
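The back-of-the-envelope behind that tokens-versus-revenue point is worth making explicit. With round, hypothetical numbers (not Google's reported figures), trillions of tokens net out to surprisingly little revenue per token:

```python
# Hypothetical figures for illustration only, not Google's actual numbers.
revenue_usd = 150e6   # ~$150M of revenue, the figure floated on the show
tokens = 1e15         # one quadrillion tokens served, a round assumption

# Implied price per million tokens served
per_million_tokens = revenue_usd / (tokens / 1e6)
print(per_million_tokens)  # 0.15 -> about 15 cents per million tokens
```

At that implied rate, even "trillions of tokens" for a single customer is a rounding error next to iPhone-scale revenue, which is the hosts' point about the $1 billion Gemini number feeling low.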
They're not kind of seeing, oh, this future where AI is going to do everything, and moving there right now. Yeah, they're kind of seeing where the demand is, seeing where the users are, and then moving. Which I think is very kind of, you know, classically how businesses work. It's not very AGI pilled. Also, on that point of the $1 billion: like, I have no idea how they can possibly know the amount of inference that's going to happen in Siri plus Gemini over a year's period. Like, there's just no way to predict that. Or can they? I feel like if the integration's good, there will be a ton of queries. There will. I didn't read the 1 billion headline. Chat can correct me if I'm wrong, but I didn't read the 1 billion headline and feel like that was. I felt like that was a technology license, not necessarily inference, and then there might be consumption on top of that. Yeah, okay. Or maybe. Yeah, maybe they're licensing Gemini and then they pay per query for the energy and the capex. Couldn't a lot of this stuff be done on device? For sure. Yeah. Because Gemini. Gemma has, like, baked-down models that could be smaller, so that's possible. Chat says, where's Tyler on the chart? Feels like Tyler isn't AGI pilled anymore. Let's figure it out. I actually am on the chart here. Let's go to Tyler. I am over here. So, yeah, I think I'm very AGI pilled. Right. I'm ready for the Dyson sphere. Yes. I think it's, you know, only a matter of years. A handful. Only a matter of years. It's a couple hundred thousand days away. You need to publish about 100,000. Yeah. 2025. We still have a month left. We still have a month left. Yeah. We got AGI on Christmas. This Christmas. It's coming this Christmas, this holiday season. But why do you need AGI? I think I also need AGI. Why do you need AGI? Well, I mean, look, if you look at the current jobs data for, like, college-age students. Oh yeah, it's looking pretty bad. It's bad. So I think you kind of need AGI to really boost the economy.
If AI does not work, the macro economy is looking not good. Oh sure, sure, sure. So I feel pretty bad about my job outlook without AGI. Sure, sure, sure. Even though you're already employed. That's hilarious. Well, thank you for taking us through. Chat wanted to know: where would you put Lisa Su? Yeah, so Lisa Su, I think she's probably been quite reactive, actually. She. You've really only seen AMD start. You don't get credit for being reactive, is what you're saying. Yeah. I think it's really only over the past year, maybe, that she's been making any kind of deals. Right. You saw George Hotz maybe a year or two ago basically trashing AMD chips for how bad they were. But maybe she's given so much of the company away. Maybe she thinks the shares won't have very much value in the future. She's, like, happy to just give away 10% of the company. Yeah. So I think, if I had to pick somewhere, I would say that she's honestly getting a little close to Larry, in that AMD is very much, like, an AI play still. Right. They're a chip company, obviously. But you don't get the feeling that she's a true believer. Yeah, yeah, that tracks. Okay. The new Siri. I'm reading through some of Mark Gurman's reporting, speaking of Apple and AI. The new Siri is planned for March, give or take. As has been the case for months, Apple has been saying for nine months it's coming in 2026. Apple simply reiterated on that. And then, on the actual deal: it's a $1 billion deal for a 1.2 trillion parameter artificial intelligence model developed by Google to help power its overhaul of the Siri voice assistant. The two companies are finalizing an agreement that would see Apple pay roughly 1 billion annually for access to Google's technology, with the Google model handling Siri's summarizer and planner functions.
Apple intends to use the Google model as an interim solution until its own models are powerful enough, and is working on a 1 trillion parameter cloud-based model that it hopes to have ready for consumer applications as soon as next year. That is interesting. I feel like they're going to have to pay a lot more to actually run all the queries, generate all the tokens. But I'm sure we'll find out more as Google reports earnings and Apple reports earnings. Before we get to that, let me tell you about Graphite: graphite.dev, code review for the age of AI.
You're watching TBPN. Today is Friday, November 7th, 2025. We are live from the TBPN Ultradome: the temple of technology, the fortress of finance, the capital of capital. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. Ramp.com, baby. The timeline is in turmoil over, again, Sam Altman. Again. What were we calling it? Backstop gate. Backstop gate continues unabated. John, timelines in turmoil. Turmoil over Sam Altman. You're telling me this? The first time? Every single day this week and last week, it is nonstop OpenAI. What will happen? Is it over? Is it the end of the global economy? Or will we, you know, live to fight another day? The main point of debate is over, you know, Sarah Friar's comments, that she used the word backstop. She backtracked on her backstop comment and said, I wasn't asking for a backstop of OpenAI equity, I was advocating for this American manufacturing plan. Then Semper Satoshi is going pretty hard. He says, here's an OpenAI document submitted one week ago where they advocate for including data center spend within the American manufacturing umbrella. They specifically advocate for federal loan guarantees. And Semper Satoshi says Sam lied to everyone. Let's read the specifics. Yes, the specifics are AI server production and AI data centers. Broadening coverage of the AMIC, which is the American Advanced Manufacturing Investment Credit, will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the United States. Counter the PRC by de-risking US manufacturing expansion to provide manufacturers with the certainty and capital they need to scale production quickly. The federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience. Okay, so loan guarantees. So.
So what's happening here is, OpenAI, a week ago, and everyone can go and read this letter, which is publicly available on their website, was making the recommendation that the government should effectively treat data centers, or AI, or token factories, put them in the manufacturing bucket, which would qualify them for similar incentives to traditional manufacturing, defense, defense tech, etc. And I don't have a problem with asking the government for a handout. I think that that's actually, like, best practice. It's actually in your shareholders' interest. Like, you have a fiduciary duty to ask the government for as much help as possible. I think that everyone should go to the government right now: hey, if I'm paying 50% tax, how about we take that down to 20%? How about we take it down to 0%? Like, you have every incentive to ask your person in Congress and your senator, the people in Washington, to do everything they can to support your mission. This has worked out in the past with Elon and Tesla. It didn't work out in the case of Solyndra. But, like, the game on the field is: if there are taxpayer dollars that are moving around the board, you want to get those into the industries that are aligned with you. And so the thing that people are taking issue with is that, in the opening of his message yesterday, he said, we do not have or want government guarantees for OpenAI data centers. Yes. And that seems to conflict with the letter that they wrote a week ago, that is still up on their website. Yes. So if it's. What is it? Loan guarantees to expand industrial base capacity. Noah in the chat is taking the initiative: just sent the admin a SAFE to sign. An uncapped note. Uncapped. You literally can do this now. There's a sovereign wealth fund. Like, it sounds crazy, but, like, they're ripping checks. Like, this is the new economy. And, you know, you can be upset about it, but you also have to understand, like, what is the game on the field?
You can always advocate for, like, we should change the game. Like, we shouldn't be doing this. Like, I would prefer more of a free market economy. But in the world where we're not in a free market economy, you want to have your company win. Right. That's just rational. That's just actually playing the game on the field. Now, it is weird optics to talk about the game on the field. That's something most people don't like doing. And that's very odd, because when you say, oh yeah, this is a one-hand-washes-the-other situation, or, oh yeah, this is a situation where, you know, a backstop will allow us to be more aggressive, that feels like the banker saying, oh yeah, I knew that the government was gonna bail us out in '08, so I was intentionally underwriting loans where it was somebody's fifth house and I knew that they couldn't pay it. I wasn't asking about their job, I wasn't asking about their income, I wasn't asking about their assets. And so I pushed it way further, and I made a lot of money, and I got out at the top. That's what's really upsetting to Americans, because the bailout comes in, the backstop comes in. Yes, it makes sense to rationally do the backstop in that moment. But if some people get out early and then other people get cooked, that's really bad optics. That's really, really bad optics. Which is why I kind of expect a lot more of the narrative to shift towards subsidizing and incentivizing bringing on new energy. You were talking about that yesterday. Which directly benefits the labs and anybody building a data center, but it also feels very much in America's interests, broadly. Right. And it theoretically would benefit the average American too. We were listening to Ben Thompson this morning. It wasn't on Restream. One livestream, 30-plus destinations. Multi-stream and reach your audience wherever they are. We were listening to the Stratechery RSS feed. Hopefully Ben goes live at some point on Restream. That'd be awesome.
By the way, I think the chat has discovered that. Yes, this is a Turbopuffer. This is one of two. I sent the other one to Simon. Turbopuffer. But you can see it. Looks great, especially with the green background, the color grade. The production team is on point today. Let's hear it for the production team. So, yes, I agree with you on the energy front. I think Ben Thompson, his point today. And fabs. Yeah, and fabs too. Yeah. His point today was, like, OpenAI needs to be crystal clear about the position that they're in, which is that they are the hottest company in the world. There is unlimited demand for their shares. They could be a public company. They could go raise more private capital. They need to be on the opposite end of the risk curve from the stuff that, like, no one really wants to invest in: an American fab that might lose money for a decade. There is truly much less appetite for, yeah, let's go and build a nuclear power plant that might take a decade, and who knows. When you think about the things that make money in the short term, it's SaaS, right? AI for SaaS. Just go and transform the legacy business with some AI sprinkled on top and just start printing money. Like, this is what works. You know, people's concern with government-backed data center lending is that you're lending against chips, which have a really fast depreciation schedule. Yep. We don't know. Right? There was some pushback. I made the comment, hey, maybe these things don't depreciate as fast. There was some pushback in the comments. I think everyone kind of agrees that these things depreciate quickly. And so energy infrastructure is the place. Yeah. And if you look at right now, CoreWeave's credit default swaps are now sitting around 500 basis points. They jumped up dramatically. And so this is one of the leading neoclouds. 500 basis points. 5%. Yeah. They jumped from, like, I think, 2 to 5.
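On the units they're converting there: a credit default swap spread is quoted in basis points, where 100 bps = 1%, paid annually on the protected notional. A toy sketch with a hypothetical notional, not an actual CoreWeave trade:

```python
def cds_annual_premium(notional_usd: float, spread_bps: float) -> float:
    """Annual cost of CDS protection at a spread quoted in basis points."""
    return notional_usd * spread_bps / 10_000  # 10,000 bps == 100%

# Hypothetical: protecting $10M of debt.
# At 200 bps it costs $200k/year; at 500 bps, $500k/year --
# the jump from "like 2 to 5" percent the hosts are describing.
print(cds_annual_premium(10_000_000, 200))  # 200000.0
print(cds_annual_premium(10_000_000, 500))  # 500000.0
```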
Try to find somewhere in the stack. Yeah, yeah, yeah, yeah. But sorry, I don't mean to, like, ask for particular stats, but clearly, like, people are worried about this. Yeah. So, and again, this is for a leading neocloud, right? The SemiAnalysis ClusterMax, the updated version, came out yesterday. They're the only neocloud in the platinum tier. And people are worried about them, right? Yep. Potentially, you know, having some bankruptcy risk. And so if you start doing, you know, basically government-guaranteed data center lending, you could get in a situation where there's a bunch of new data centers that come online that really don't have a clear pathway to ROI. Yeah. And it just incentivizes the entire stack to just get, you know, really exuberant. Right. And again, going back to Sarah Friar's interview on Wednesday: she felt the market was not exuberant enough. And I think a lot of people disagree with that. Right. There's been a lot of insanity this year. Silliness. Maybe we don't need more. But we will see. The other news that was interesting out of today: the director of the Federal Housing Finance Agency, William Pulte, says Fannie and Freddie are eyeing stakes in tech firms. Bill Pulte, the director of the Federal Housing Finance Agency, said that Fannie Mae and Freddie Mac are looking at ways to take equity stakes in technology companies. We have some of the biggest technology and public companies offering equity to Fannie and Freddie in exchange for Fannie and Freddie partnering with them in our business, Pulte said Friday during an interview at a housing conference. We're looking at taking stakes in companies that are willing to give it to us, because of how much power Fannie and Freddie have over the whole ecosystem. So yeah, this Wall Street Journal event is just. So many articles came out of this. The Wall Street Journal did a fantastic job bringing a ton of people together.
This is where that Sarah Friar quote came from. It's also where the CoreWeave CEO was on stage. You were mentioning CoreWeave earlier, and the CoreWeave CEO. The headline in the Wall Street Journal is: CoreWeave CEO plays down concerns about AI spending bubbles. And the quote is from Michael: if you're building something that accelerates the economy and has fundamental value to the world, the world will find ways to finance an enormous amount of business. And he went on and said, if the economy doubles in size, it's not a lot of money to build all those data centers. And so there's a lot of folks, you know, addressing bubble concerns right now. He says: it's very hard for me to worry about a bubble as one of the narratives when you have buyers of infrastructure that are changing the economics of their company. They are building the future. And so if the products are, you know, effective in growing the economy, then all of the investment is worth it. There's a fascinating Mansion story we should get to that's actually related to this. Good. So there is a. Before we do, let me tell you about Privy: wallet infrastructure for every bank. Privy makes it easy to build on crypto rails. Securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure, all through one API. Civil aviation. Did you get a whole new soundboard? What's going on with the soundboard? I did. Thank you. Wait, really? Like, they swapped them all out? Michael, do you still have the horse? I still got all the classics. Working with some new material. Working with some new material. I like the. Sheesh. Can we get the Vine boom on there? Is that on there? I don't know. Boom. The thud. That's a big one. Anyway, Casa Encantada is a 1930s estate in Los Angeles. It's one of the most important homes of the 20th century.
In 2023, it was also, briefly, the most expensive home for sale in the United States, with an ambitious asking price of $250 million. This is in Bel Air. So it's a Bel Air property. It was sold in a foreclosure auction after the death of its longtime owner, financier Gary Winnick. So it's an 8.5-acre property. He bought it in 2000 for $94 million. He bought it in the year 2000? No way. Do you know who this guy is? Gary Winnick. He had something to do with the last bub. He did have something to do with the last bub. It's one of the craziest stories. So it was sold to satisfy the debt, which has now grown. Blah, blah, blah, blah, blah. So basically, it's a 40,000 square foot house built in the 1930s. It counts hotelier Conrad Hilton and Dole Food billionaire David Murdock among its former owners. It's located right next to the Bel-Air Country Club golf course. It has seven bedrooms, a swimming pool, a tennis court. It's awesome. Anyway, Gary is perhaps best known as the founder of Global Crossing, which built a fiber optic cable network across the world. The company made him a billionaire, but it imploded. Did they have a little bit of dark fiber? A little bit of dark fiber. It imploded in the early 2000s under the weight of massive debt. Casa Encantada, Gary's primary home, went on the market in June of 2023, five months before his death. Now it's asking 190 million, and they kind of move on from there. But is that Global Crossing? He was somewhat of a global businessman. He was. The headquarters were in Bermuda. You know this. I didn't know that. I wonder why. I wonder why. I mean, it might literally be because it insulates the assets in America. Like, that is one thing that you can do if you don't want to have to give up your lovely home post-bankruptcy. You want to be able to get liquidity before the bubble pops. That's the lesson here. Sell the shares before the top. Buy low, sell high.
That's the phrase that we live by here. Was there anything else, Tyler, that came out of your deep research report on Gary Winnick? Did you get a chance to. I mean, so let's see. Tell us. He also started Pacific Capital Group in 1985. Oh, wow. That was kind of the precursor. So he was set up to actually marshal all the debt to do the big infrastructure. He had been a global businessman for a while before this. Nice. Let's give it up for global businessmen. Let's give it up. Yeah. What a wild, wild story. We're hyper-local businessmen. We like to do our business right here in the Ultradome. But we do got to give it up for them. Also in the mansion section, which we should continue on. But first, let me tell you about Cognition, the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. You have a new neighbor. Jordi, you have a new neighbor. So Tom Petty apparently lived in your neighborhood in Malibu, California. The late Free Fallin' rocker had a personal music studio. Some deep lore, please. I saw Tom Petty. I believe he headlined the first ever Outside Lands. Was it the first or second? He was living in Malibu in an $11.2 million house. 8,744 square feet, seven bedrooms, and a music studio. The buyer is Steven Slade Tien, a psychoanalyst and author, according to people with knowledge of the deal. Tien didn't respond to a request for comment. I wonder if he'd want to, you know, get beer sometime, hang out, maybe go surfing, go for a walk on the beach. We should reach out to him about coming on. Okay. So I was at the first ever Outside Lands. Really? How old are you? I thought. I thought you were. It was basically my first ever major concert experience. I didn't realize that Tom Petty headlined on Saturday. This was in August of 2008. And that was my first time smelling cannabis. And I kept.
I was there with my friend and his parents, and I kept asking his parents, what is that stinky smell? Stinky smell. We can't get away from it. And very, very memorable. Very memorable. That's amazing. Do you know where this is? 2.6 acres above Escondido Beach. Is that close to you? The Escondido area? It's a few minutes away, but a gated property. Escondido Beach is the most underrated beach. It was Petty's home for decades before his death in 2017. The lead singer of the Heartbreakers purchased the property for about $3.75 million in 1998. Petty turned his guest house into his personal music studio with soundproof rooms for recording music, said Levi Freeman, who's putting it up. There's a one bedroom guest suite. Seems like a very nice spot. He was from Florida. He shunned the spotlight off stage. He's a member of the Rock and Roll Hall of Fame, best known for songs like American Girl and I Won't Back Down. What a lovely little house. Well, that's always fun. Anyway, we should move on to our top story. Do you want to go through the Elon pay package thing a little bit? I had two questions for you on that, and then we can go into our Mag 7 review. Does that sound good? Let's do it. Okay. First, let me tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together. I really enjoy this graphic package we got. This is great. So Elon's trillion dollar pay package is done. It's signed, it's approved. I'm sure it will be contested in the courts. It's always contested in the courts. But the Wall Street Journal has a very nice little breakdown of how it works. They have a nice little infographic here I can share. This is what technology podcasting is. Pull that up. Holding. Pull that newspaper up. Yeah, yeah. So basically, Elon could get $1 trillion in Tesla stock if he hits all these different tranches. And so it's actually not that many shares.
So he's worth half a trillion now, but he also owns 414 million Tesla shares outright, got another award in 2018 of 300 million shares, and this next award is 424 million across 12 tranches. So it's not like they're giving him twice as much as he already has; they're basically giving him the same amount again. And there's a bunch of things that he has to do. He has to get the market cap really, really high. And then there are also these operational goals, or I guess they're quantitative: $50 billion in EBITDA, 20 million cars delivered, 1 million robots sold, 1 million robotaxis in operation, 10 million full self-driving subscriptions. Now, some of those are obviously more gameable than others. What's the definition of a robot? If he comes out with a really cheap robot and he sells a bunch of those because it's more of a toy, does that really fulfill the goal? Or, like, a million robotaxis in operation, what's the definition of a robotaxi? Yeah, does it qualify if it's, like, a Tesla that is enabled and I turn on FSD and my friend rides in it for one ride? Is there anything for actual rides? There's the 10 million full self-driving subscriptions. Yeah. And so some of these are more gameable than others, but the market really isn't. How many full self-driving subscriptions are there today? I looked that up; it's somewhere between, like, 1 and 3 million right now. So he definitely has to, like, triple the size at least. Robotaxis obviously goes from, like, 0 to 1 million, because there's barely any on the road. He hasn't sold any robots, so a million would be entirely new robots. He's obviously delivering a lot of cars. And on the EBITDA front, $50 billion in EBITDA: the company did, like, 13 last year. So that's a huge increase in EBITDA. I mean, $50 billion in EBITDA is a lot of money, but, you know, it's not 20x where he is right now.
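The back-of-the-envelope math in that exchange can be sketched in a few lines of Python. The milestone targets are the ones discussed above; the "current" figures are the rough numbers mentioned on air, and the cumulative-deliveries estimate is a placeholder assumption, so treat this as an illustration, not data.

```python
# Operational milestones from the pay package, as discussed on the show.
MILESTONES = {
    "ebitda_usd_bn": 50,         # $50B in EBITDA
    "cars_delivered_mm": 20,     # 20M cars delivered
    "robots_sold_mm": 1,         # 1M robots sold
    "robotaxis_mm": 1,           # 1M robotaxis in operation
    "fsd_subscriptions_mm": 10,  # 10M full self-driving subscriptions
}

# Rough current figures: EBITDA ("did like 13 last year") and FSD subs
# ("somewhere between 1 and 3 million") are from the conversation; the
# cumulative-deliveries number is a made-up placeholder.
CURRENT_ESTIMATES = {
    "ebitda_usd_bn": 13,
    "cars_delivered_mm": 8,
    "robots_sold_mm": 0,
    "robotaxis_mm": 0,
    "fsd_subscriptions_mm": 2,
}

def multiple_needed(current: float, target: float) -> float:
    """How many times the current figure must grow to hit the target."""
    return float("inf") if current == 0 else target / current

gaps = {k: multiple_needed(CURRENT_ESTIMATES[k], MILESTONES[k]) for k in MILESTONES}
```

On these assumptions, FSD subscriptions need to grow roughly 5x, EBITDA close to 4x, and robots and robotaxis start from zero, which matches the "some are more gameable than others" point above.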
Review. Let's do it. We got the laser pointer. We got the laser pointer. Our good friend Tyler Cosgrove has put together a slide deck for us that tries to help map the Mag 7. Really? I call it the TVPN Top Ten. The Top Ten. Well, technically nine. There's nine. Well, no, no, no. There are ten. There's ten. There's ten. We'll see. Well, one of them maybe shouldn't be there. No, no, no, no, no. There's actually ten, because it's the Mag 7. Oh, Oracle. I forgot Oracle. Yes, you did. So it is the TVPN Top Ten. The ten most important companies in AI. Loosely, the Mag 7, plus a few bonus ones. And we're going to try and. Watch your head. Look, you're a horse. We're going to try and go through the various companies and rank them based on how AGI pilled they are and how much they need AGI. Is that right? Yeah, basically. So let's pull up the actual scales. There's two axes. Let's go to the next slide. So basically, on the horizontal, we have how AGI pilled they are. Sure. So I feel like that's fairly self-explanatory. You kind of believe that AI will become something like it can produce the median economic output of a person. Yes, yes. And there's a few different ways to understand if somebody believes in AGI. It could be the rhetoric of the CEO or the founder. It could be the actions of the company. Like, actions speak louder than words. And is there anything else that could lead someone to show that they believe in AGI? I guess it's mostly just the actions and the words. Right. I mean, those are the main things that you can kind of do as a person. You say things and you do things. Well, we will be judging them by both their actions and their words. Yeah. So then, on the vertical axis, we have how much they need AGI. So I think this is maybe a little harder. So I want to qualify this. Yeah. So, I mean, this doesn't necessarily mean believing that you have this kind of sentient, you know, AI that's as good as a person.
But I think it more so, in this context, just means that AI will continue to become more and more economically valuable. Yes. To where you can kind of sustain, you know, building more and more data centers. You can do more and more capex, you know. Yeah. There's a little bit of, like, how much will this company be transformed if the AI wave plays out well? If AI doesn't play out well, how chopped are they? How cooked are they? Exactly. That's a great framework. Yes. And then also, yeah, if we flash forward and nothing really changes, total plateau, total decline in token generation or something, is the business just continuing business as usual? Okay, so who are we starting with? So let's start with Sam Altman. Okay, Sam Altman, where is he on this? So, Sam Altman, I think this is a pretty reasonable spot. He believes in AGI, right? He runs kind of the biggest AI company. He also needs AGI, because if you imagine that the models stagnate, they have a lot of capex they need to fulfill. If models stagnate, what are they going to do? Maybe the margins somehow work out, but you're probably not in a good spot if models get worse or if people start using AI less, if you're Sam Altman. But he's also not, you know, in the top, top corner. Okay. Right. And I think you can justify this through a lot of OpenAI's actions. You see stuff like Sora, you see maybe the erotica. This is not very. Who's running the laser pointer? I want him down here. Maybe put him down here. Okay, explain that. Explain that. I think that, more and more, at least in the short term, OpenAI looks like a hyperscaler. They're kind of a junior hyperscaler. And I think their actions are more, you know. I think that a lot of people, you know, want to say that they're bearish on OpenAI at current levels, but ultimately, when you look at how their business is evolving, they seem to me like they'd be fine if the models plateaued. Yes.
But, yeah, I feel like the mood on the timeline was much more: slide Sam to the left, doesn't believe in AGI. There was that post about, like, if OpenAI really believed in AGI, they wouldn't be doing Sora, or they wouldn't be doing the erotica thing. Like, all of those were very much: needs it, but kind of accepts that it's not coming, and so stopped believing. Yeah. So, in response, I think, yeah, because it's not like this. The vertical axis is also just about, like, whether it's going to continue to be economically, you know, economically useful. So if people just stop using AI in general, or if the kind of, you know, revenue stops accelerating or any of this stuff, I think OpenAI will be in a bad spot regardless of the models actually getting much better. Like, if they just make the models much more efficient to run, you could say that's not very, like, AGI pilled, because the models aren't getting a lot better. Yeah, but that's still, like. But I'm just saying, like, if there was no more progress at all. Yeah. Like, we never got a new model from any of the labs. I think that OpenAI would add ads, they would add commerce, they would increase. Yeah. So they might be fine without AGI. They would make agents a lot better. But, I mean, the 1.4 trillion, the 1.4 trillion in commitments, like, that is hard to justify if it's just the business today just growing. Like, it's growing because it's just, like, it's good. No crazy breakthroughs. Like, just. Yeah, but they're. You're laying out the bull case. They're playing crazy. You're laying out the bull case saying, oh, if they just add ads, they're going to be able to hit the 1.4 trillion, no problem. I'm not saying. I'm not saying they can't pull back on a lot of these commitments. Sure. They don't, like. Okay. I don't think these are, like, going to end up being, like, real liabilities if the business is just cooked. Okay.
They can't hit them. Right. So I'm just saying, like, I think there's a shot they have. They could, you know. When they talk about having success in consumer electronics, right, which is something you talked about yesterday. Like, I think they can probably build a really cool device. If it can be at all competitive with an iPhone, like, they could be fine without AGI. We're also getting into, I guess, let's get some other people on the board so we can see where. Yeah. I think it's useful to be relative, because these are not quantitative numbers. Yeah. So next we have Dario. Okay, well, Dario is up here. So I think Dario is kind of. When you listen to what Dario is saying, he is, you know, he's extremely AGI pilled. Yes. Right. This is kind of the reasoning why he's so anti-China. Yeah. Right. Because he sees it as an actual race. This is akin to nuclear weapons. It is a national, you know, security issue. It's a problem if China gets there first. What is that new sound cue? I don't think Tyler has sound effects. What is it? UAV online. UAV online. Okay, I like that. This is the UAV. We should give this UAV aesthetics for sure. This is good. Okay, continue. But there's also a sense that he needs AGI, because if AI stops growing as fast and you imagine that things kind of settle where they are now, OpenAI is definitely in the lead. Sure. So you need a lot of continued growth for Anthropic to keep kind of making sense economically, I think. Okay, yeah, yeah, sure. Who else is on here? So I think next is Larry. Larry Ellison. Larry. Larry's in kind of an interesting spot. So this is kind of a weird place to be, where you don't believe in AGI, but you need it. Okay. How did you wind up there? So I think you're probably wondering how I ended up in this corner. Record scratch, freeze frame. So I think this is how you factor it in.
There's kind of the personal rhetoric, and then there's the actions of his company. Sure. So when you listen to Larry speak, he doesn't seem the type that believes in some kind of superintelligent god that is going to come, that's going to birth this new thing, and humanity will rise. But then you look at Oracle. Has anyone found his LessWrong username? Is he on there regularly? I don't think Larry's reading Gwern. Okay, got it. But then. But he's investing, and they are, you know, they need AI to work. Okay. Right. They are maybe committing a little too much, or maybe not enough, depending on how AGI pilled you are. But, you know. AGI pilled. Exactly. It's kind of hard to square, but it's a bold bet. Yeah, this is a unique spot. I think it is a yes spot. He's off the grid. Okay. Yeah. So who's next? So let's see who's next. Who did I. Okay. Okay. So I think this is a fairly reasonable spot. Yeah. Obviously, there's, you know, there's some sense where he is slightly AGI pilled, or maybe more than slightly. Slightly believes in the power of the technology. Yeah, I mean, he was very early on OpenAI. He thinks that AI in general will become very useful, but maybe it won't become, you know, superintelligent. Maybe it's not gonna replace every person. It's just a useful tool. It's the next. The quote I always go back to is him saying, like, my definition of AGI is just greater economic growth. So show me the economic numbers, and that will be it. It's a very practical definition. Yeah. I think people see him as very reasonable. He's not getting over his skis. Yeah. I like him. I like him in the center of the grid somewhere. That seems like. He's also, you know, if AI doesn't work out. Yeah. I think Microsoft is in a very good spot to stick around. You know, they're not crazy over-investing in OpenAI. They have a nice. Yeah, they'll write some code. If OpenAI works out, he'll do very well. If they don't work out, I think he's also. He's hedged.
He's doing quite well. He's hedged. Yeah. The man's hedged. He's happy to be a leaser. Yeah. That's a good quote, too. Okay. Yeah. You wouldn't be leasing if you were super AGI pilled. Right. You'd be hoovering up everything. You'd keep it all for yourself. Exactly. I think. Let's actually talk about that with. I think it's Jensen next. Okay. Yes, Jensen. So Jensen, he doesn't believe in AGI. He's the whole reason why we're here. Explain that. So, yeah, this is maybe my personal take. Please. If Jensen was very AGI pilled. Yes. I mean, he is the kind of. He's the rock on which this all is built. Yes. He has the chips. Yes, he has. If he was AGI pilled, he would not be giving out those chips. He would keep them all to himself, and he would be training his own model. Okay. So that's why I think he's more on the doesn't-believe-in-AGI side. Yes. But if there's any kind of downturn in the economy. Could you put Sam, like, potentially closer in this direction too? Because he's talking about getting into the. Yes. Like, compute cloud reselling, for sure. So, yeah. If there's so much demand. Yeah, yeah. And the models are progressing so quickly, wouldn't you want to just hold on to all that? I think this would also be interesting to see over time. Like, Sam has definitely shifted leftwards. Yes. Over time, he's moving. He's basically. This summer, you've seen a lot of the actions of OpenAI seem less and less AGI pilled. Pretty much everyone has been, like, moving the AGI timelines outward, which you could translate into no longer believing in AGI. It's more just that the timelines have gotten longer this year, broadly, for pretty much everyone. There was a new blog post yesterday. It was basically AI 2027. There was a new one. It was AI 2032. So is it basically the same team? Different team. But the team of AI 2027 was promoting it. Yeah. AI 2027 should have been. It was actually AI 3027. We missed a typo.
We couldn't get that domain. We were just off by a thousand years. Yeah. But there's definitely the sense where, if there's a downturn, Jensen, the stock is going to go down. Yeah, but it's not going to go down as far as Larry. Right. Because they're still not. They don't have insane. This is not financial advice. Yes. You know, they don't have, you know, completely unreasonable capex. They're not levered up. Sure, sure, sure. Oh yeah. Their Altman Z-score is through the roof. The Altman Z-score is through the roof. They're looking pretty safe. Yes. Okay, let's see the next one, I believe. Okay. Sundar. Sundar. He believes in AGI more than Satya. You think? Yeah, well, I think you can see this in kind of. They were even earlier, in some sense, than Satya. Right. With DeepMind. So I think AI has played a fairly big part in the Google story. Very recently. They've always. I think, basically, for the past 10 years, they've been trying to get into AI. Maybe their actions didn't actually do much, but, you know, there was the transformer paper. They, you know, applied AI. Like, they're actually applying it. They built self-driving cars, drug discovery. And core Google Search is just an AI product. It's just an index on information they're organizing. Also, I mean, compared to Satya. I mean, Gemini is a frontier model. Totally. Gemini 3 is supposed to be this incredible model. Everyone's very excited. Satya is not. Microsoft is not training their own base model yet. Yeah. And also, like, there is a little bit of, like, if you really believe in AGI, the actions that we see. Are you, like, squirming and being, I gotta get in? It doesn't matter if I'm 1% behind or 10% behind or 80% behind. I gotta get in. And I think we know someone who's doing that. Gabe in the chat says, is this AGI or AG1 from Athletic Greens? Needs AG1. Fine without AG1. Believes in AG1. Now, we are the only podcast that is not partnered with AG1. But we do like green.
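Since the Altman Z-score comes up here as shorthand for balance-sheet safety, here is a minimal sketch of the classic 1968 formulation for public manufacturers. The coefficients and the usual zone cutoffs (roughly, above 2.99 is "safe," below 1.81 signals distress) are the standard ones; the sample inputs are made up for illustration and are not Nvidia's actual financials.

```python
def altman_z(working_capital: float, retained_earnings: float, ebit: float,
             market_cap: float, total_liabilities: float, sales: float,
             total_assets: float) -> float:
    """Classic Altman Z-score (1968 formulation for public manufacturers)."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_cap / total_liabilities
            + 1.0 * sales / total_assets)

# Made-up illustrative inputs (in billions): a company whose market cap
# dwarfs its liabilities lands comfortably in the safe zone (> 2.99).
z = altman_z(working_capital=50, retained_earnings=100, ebit=30,
             market_cap=400, total_liabilities=100, sales=200,
             total_assets=300)
```

The market-cap-to-liabilities term dominates here, which is why a highly valued, lightly levered company "looks pretty safe" on this measure.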
Green is our hero color. Let's go to the next person. Okay. But also, before that, I think Sundar is also definitely below this line, because Google has been doing very well. At first, I mean, people thought of AI as, like, oh, this is going to destroy Google. This is bad for Google. So if AI doesn't work, then Google is just in the spot they were before, which is doing very well. If AI does work, then, I mean, Gemini is one of the best models. They'll do very well too. Speaking of Gemini: Google AI Studio. Create an AI-powered app faster than ever. Gemini understands the capabilities you need and automatically wires them up for you. Get started at ai.studio/build. Yeah, continue. Okay, so I think, yeah, next is Zuck. Yes. So Zuck is also kind of in an interesting spot. Yes. I think Zuck is actually someone who has shifted rightward. It's fascinating. Yeah. You've seen this. I mean, for a while he's been traveling. Yes. Yeah. So for a while he was doing open source, which, in some sense, is very AGI pilled, because, you know, you're building it, you're training a model. Right. It's like you're moving the frontier forward. But it's also open source, which you can kind of think of as: you're trying to commoditize everything. You don't think that there's going to be some superintelligence that will take all the value, that you need to hold on to it. You can kind of give it out. And now he's kind of moved towards closed source. We're going to get the best people, we're going to train the best model. It felt like Zuck was sort of, like, oh, yeah, AI. Like, it's this cool thing. I'm going to check the box. I got my team, we did this fun little side project. It's this open source model. We kind of found our own, like, little lane. But we're not, like, competing in the big cosmic battle between OpenAI, Anthropic, DeepMind. Like. Well, don't. Do you think that was just a counter-positioning way to, like, try to win the AI war?
Go say, hey, we're just going to try to commoditize this market. The Chinese approach. Yeah. Commoditizing your complement. I think that is a good strategy. Yeah, that makes sense. And then maybe you could say that, oh, once the Chinese models have been getting better, he can kind of step out of that position. Sure. And he's moved towards closed source. Yeah. But now it feels like he's going way, way harder on the AGI vision, paying up for it, investing so much money in it. Yeah. You know, it's like. And depending on how much he invests, you could see him pushing up. But right now, the business is just insane. It's so phenomenal that even if he winds up spending all this money and, you know, they don't even get a frontier model, and, like, all the researchers are, like, yeah, we didn't discover anything, and we're just gone, like, the business is fine, because it's such a behemoth. So. Yeah. That said, after earnings, they took a fairly big hit. Yeah. So maybe he should be a little bit higher. Yeah. Maybe a little bit. But broadly, it's still a very safe play if it doesn't work out. And it was the same thing during the metaverse. It was like, he believed in the metaverse, he invested in the metaverse, but he never needed the metaverse. And so after the stock sold off like crazy, it went right back up, because everyone realized, wait. He's also raising debt off the balance sheet, which kind of could push him up into this zone. Right. It's like, you know, if you're not worried about it, why don't you just carry it yourself? Yeah. Why don't you carry it yourself? If you really believe in AGI, just let it ride. Yeah. Okay. Who else are we missing? Okay, so we have most of the Mag 7. Now is it Jassy? Oh, Elon. Elon. Elon. Yeah. Yeah. Okay, Elon. So Elon, I mean, he's been AGI pilled, I think, for a very long time. Super AGI pilled. I agree with that. OpenAI co-founder. Even before that, he, I think, was fairly big in the.
In the safety space. Totally. You see him even on Joe Rogan, he was talking about AI safety. He still believes in it. And AI safety is important because it's going to become superintelligent. It's going to take over the world. Yeah. Even this was part of the dialogue around the new comp package: wanting the voting power to be able to be part of securing his robot army. Oh, yeah. Interesting. Yeah. I think robots are very underrepresented on this board. Totally. And Elon is kind of the main player in that space right now. Yes. Yeah. So he talked yesterday about humanoids being sort of an infinite money glitch. And I feel like you kind of need AGI in order to kind of unlock the infinite money glitch. Yes. But at the same time, very strong core business. The cars don't need AGI, the rockets don't need AGI, Starlink doesn't need AGI. So he's not entirely indexed on it in the way the foundation labs are. Right. Yeah. I mean, there's definitely a significant part of the Tesla market cap that is basically betting on robots. Totally. But there's also a big part of it that's just. But that's why he's in the middle of needs AGI versus fine without it. It's like. Yeah. Tesla needs AGI more than SpaceX. SpaceX, yes. SpaceX seems very uncorrelated with the AI trade. Totally, totally. Unless you have the data. He's happy to be a telecom baron. He's happy to. Yeah, he's happy to be the new Verizon. Who else? Okay, so I believe now it's Andy Jassy. Jassy, yeah. So he, I think, broadly does not believe in AGI, although he, you know, has a fairly major stake in Anthropic, which maybe is very AGI pilled, but they have not. He's hedging with Anthropic. Yeah, yeah, I think that's basically what you can say. He doesn't seem at all worried about kind of the core business. AWS seems to be. And is Google building their position in Anthropic? I saw some headline about that. That was wild.
Imagine. Sundar is such a beast. So, yeah, really, both of them, they just have huge stakes. Broadly, Andy Jassy seems very quantitative. He's focused on the numbers, realistic. He's not making these grandiose statements about the future of intelligence or what humanity is going to be like. Yeah. When you look at the AWS earnings, it feels like they invest when the demand shows up, and they build data centers when there is demand. And they are not in the business of doing a 10 year hypothetical sci-fi forecast based on breakthroughs. Ben Thompson, on the recent Sharp Tech. We were listening to it on the way in. He was talking about how Amazon has repeatedly said, hey, we're supply constrained. We have a lot more demand than we can fulfill. With this new deal with OpenAI, the $38 billion, which only got announced Monday, which again feels like forever ago at this point. Ben was kind of hypothesizing that they kind of let OpenAI jump the line, because that $38 billion deal was starting effectively immediately, versus some of, like, Larry's deal with OpenAI, where what was announced is this massive backlog, revenue that cannot be generated in a meaningful way today, because they have to build the data centers that can actually deliver the compute. Yeah, I mean, Amazon has been building data centers. They're building data centers. Basically, they built all the data centers for Anthropic. But it still seems very kind of restrained. It's not overly ambitious. Anthropic has had issues with capacity, and it's probably because, you know, Andy Jassy doesn't want to get over his skis. He doesn't want to build too much. So I think that's why he's kind of on the left side. And also, I mean, if you think AI doesn't work and we're going to kind of be in this, you know, the same spot as web 2.0, you need AWS, you need your EC2 server. I think he's very well positioned there. And then I think the last one is Tim Cook. Tim Cook. Let's hear it for Tim Cook.
Criminally underpaid, but has done a fantastic job not getting over his skis. Yeah. This one, I think, is fairly self-explanatory. I mean, he seems. He doesn't believe in AGI, he doesn't believe in LLMs, doesn't believe in chatbots, apparently. I don't know. The new Gemini deal is signaling. Like, two years ago, Apple was like, yeah, we're not doing that stuff. Yeah, there was that famous. But I think what's under-discussed, my takeaway from our conversation with Mark Gurman about Apple's ambitions around their new LLM experience with Gemini, is that I think there's a very real scenario where Apple, like, wants to compete for the same user base as OpenAI. They want that, like, transaction-based revenue, that commerce revenue. Yeah. I just. I disagree on this point. But I'm not saying that I'm putting their odds of, like, winning that market at over, like, 10%, but I think that they would be right to realize that it's potentially an opportunity. Yeah. Is it a billion dollars a year that they're paying Google for? I believe that's the number. Yeah. That feels really low, doesn't it? I don't know. It just feels super low to me, among all the different deals that are going on. When I just think about, like, the value of AI on the iPhone, and you're like, $1 billion? Like, when I think about, like, what the market broadly is putting on, like, AI for the enterprise. At the same time, whatever. When Sundar, on the Google earnings call, was talking about their top, I think, 10 customers and how many, like, trillions of tokens they were using, it really was netting out to, like, $150 million of, like, actual revenue. And so this will be their biggest customer for Gemini on day one. Until one of our listeners gets on, of course. Yeah. And I think, I mean, even with the Gemini news, Apple still seems very kind of reactive to AI. Yeah.
They're not kind of seeing, oh, this future where AI is going to do everything, and moving there right now. Yeah, they're kind of seeing where the demand is, seeing where the users are, and then moving, which I think is very kind of, you know, classically how business works. It's not very AGI pilled. Also, on that point of $1 billion: like, I have no idea how they can possibly know the amount of inference that's going to happen in Siri plus Gemini over a year period. Like, there's just no way to predict that. Or can they? I feel like if the integration's good, there will be a ton of queries. There will. I didn't read the $1 billion headline. Chat can correct me if I'm wrong, but I didn't read the $1 billion headline. I felt like that was a technology license, not necessarily inference, and then there might be consumption on top of that. Yeah, okay. Or maybe. Yeah, maybe they're licensing Gemini, and then they pay per query for the energy and the capex. Couldn't a lot of this stuff be done on device? For sure. Yeah. Because Gemma has, like, baked-down models that could be smaller, so that's possible. Chat says, where's Tyler on the chart? Feels like Tyler isn't AGI pilled anymore. Let's figure it out. I actually am on the chart here. Let's go to Tyler. I am over here. So, yeah, I think I'm very AGI pilled. Right. You know, I'm ready for the Dyson sphere. Yes. I think it's, you know, only a matter of years. A handful. Only a matter of years. It's a couple thousand days away. 2025. We still have a month left. We still have a month left. Yeah. We got AGI on Christmas. This Christmas. It's coming this Christmas, this holiday season. But why do you need AGI? I think I also need AGI. Why do you need AGI? Well, I mean, look, if you look at the current jobs data for, like, college-age students. Oh yeah, it's looking pretty bad. It's bad.
So I think you kind of need AGI to really boost the economy. If AI does not work, the macro economy is looking not good. Oh, sure, sure, sure. So I feel pretty bad about my job outlook without AGI. Sure, sure, sure. Even though you're already employed. That's hilarious. Well, thank you for taking us through that. Chat wanted to know: where would you put Lisa Su? Yeah, so Lisa Su, I think she's probably been quite reactive, actually. You've really only seen AMD start. You don't get credit for being reactive is what you're saying. Yeah. I think it's really only over the past year, maybe, that she's been making any kind of deals. Right. You saw George Hotz maybe a year or two ago basically trashing AMD chips for how bad they were. But maybe she's given so much of the company away, maybe she thinks that the shares won't have very much value in the future. She's, like, happy to just give away 10% of the company. Yeah. So I think, if I had to pick somewhere, I would say that she's honestly getting a little close to Larry, in that AMD is very much an AI play. Still, right, they're a chip company, obviously. But you don't get the feeling that she's a true believer. Yeah, yeah, that tracks. Okay. On the new Siri: I'm reading through some of Mark Gurman's reporting, speaking of Apple and AI. The new Siri is planned for March, give or take, as has been the case for months. Apple has been saying for nine months it's coming in 2026. Apple simply reiterated on that. And then, on the actual deal: it's a $1 billion deal for a 1.2 trillion parameter artificial intelligence model developed by Google to help power its overhauled Siri voice assistant. The two companies are finalizing an agreement that would see Apple pay roughly $1 billion annually for access to Google's technology, with the Google model handling Siri's summarizer and planner functions.
Apple intends to use the Google model as an interim solution until its own models are powerful enough, and is working on a 1 trillion parameter cloud-based model that it hopes to have ready for consumer applications as soon as next year. That is interesting. I feel like they're going to have to pay a lot more to actually run all the queries and generate all the tokens. But I'm sure we'll find out more as Google reports earnings and Apple reports earnings. Before we get to that, let me tell you about Graphite.dev, code review for the age of AI.
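The hosts' question about how anyone could forecast a year of Siri-plus-Gemini inference can at least be framed as a back-of-envelope calculation. Every input below (user count, queries per day, tokens per query, cost per million tokens) is an illustrative assumption for the sketch, not a figure from Gurman's reporting:

```python
# Back-of-envelope: what might a year of Siri+Gemini inference cost?
# All inputs are made-up assumptions for illustration, not reported numbers.
users = 500e6            # assumed Siri users hitting the Gemini-backed path
queries_per_day = 2      # assumed average queries per user per day
tokens_per_query = 2000  # assumed input + output tokens per query
cost_per_mtok = 0.50     # assumed blended cost in $ per 1M tokens

tokens_per_year = users * queries_per_day * tokens_per_query * 365
annual_cost = tokens_per_year / 1e6 * cost_per_mtok
print(f"{tokens_per_year:.2e} tokens/yr -> ${annual_cost / 1e9:.2f}B/yr")
```

Under these made-up inputs, inference lands in the hundreds of millions of dollars per year, the same order of magnitude as the reported $1 billion fee, which is consistent with the hosts' read that the headline number could be a technology license with consumption billed on top.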
You're watching TVPN. Today is Friday, November 7, 2025. We are live from the TVPN Ultradome. The temple of technology, the fortress of finance, the capital of capital. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. Ramp.com, baby. The timeline is in turmoil again. Sam Altman again. What were we calling it? Backstop gate. Backstop gate continues unabated. John, timelines in turmoil over Sam Altman every single day. For the first time? Every single day this week and last week, it is nonstop OpenAI. What will happen? Is it over? Is it the end of the global economy? Or will we, you know, live to fight another day? The main point of debate is over, you know, Sarah Friar's comments, that she used the word backstop. She backtracked on her backstop comment and said, I wasn't asking for a backstop of OpenAI equity, I was advocating for this American manufacturing plan. Then Semper Satoshi is going pretty hard. He says, here's an OpenAI document submitted one week ago where they advocate for including data center spend within the American manufacturing umbrella. They specifically advocate for federal loan guarantees, and Semper Satoshi says Sam lied to everyone. Let's read the specifics. Yes, the specifics are AI server production and AI data centers. Broadening coverage of the AMIC, which is the American Advanced Manufacturing Investment Credit, will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the United States. Counter the PRC by de-risking US manufacturing expansion to provide manufacturers with the certainty and capital they need to scale production quickly. The federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience. Okay, so loan guarantees. 
So what's happening here is OpenAI, a week ago, and everyone can go and read this letter, which is publicly available on their website, was making the recommendation that the government should effectively treat data centers, or AI, or token factories, put them in the manufacturing bucket, which would qualify them for incentives similar to what traditional manufacturing, defense tech, et cetera, receive. And I don't have a problem with asking the government for a handout. I think that that's actually, like, best practice. It's actually your responsibility to shareholders; you have a fiduciary duty to ask the government for as much help as possible. I think that everyone should go to the government right now: hey, if I'm paying 50% tax, how about we take that down to 20%? How about we take it down to 0%? Like, you have every incentive to ask your person in Congress and your senator, the people in Washington, to do everything they can to support your mission. This has worked out in the past with Elon and Tesla. It didn't work out in the case of Solyndra. But the game on the field is, if there are taxpayer dollars that are moving around the board, you want to get those into the industries that are aligned with you. And so the thing that people are taking issue with is that in the opening of his message yesterday, he said, we do not have or want government guarantees for OpenAI data centers. Yes. And that seems to conflict with the letter that they wrote a week ago, which is still up on their website. Yes. So, what is it? Loan guarantees to expand industrial base capacity. Noah in the chat is taking the initiative, just sent the admin a SAFE to sign, an uncapped note. You literally can do this now. There's a sovereign wealth fund. It sounds crazy, but they're ripping checks. This is the new economy, and you can be upset about it, but you also have to understand what is the game on the field. You can always advocate for, we should change the game. 
We shouldn't be doing this. I would prefer more of a free-market economy. But in the world where we're not in a free-market economy, you want to have your company win. Right. That's just rational. That's just actually playing the game on the field. Now, it is weird optics to talk about the game on the field. That's something most people don't like doing. And that's very odd, because when you say, oh yeah, this is a one-hand-washes-the-other situation, or, oh yeah, this is a situation where a backstop will allow us to be more aggressive, that feels like the banker saying, oh yeah, I knew that the government was going to bail us out in '08, so I was intentionally underwriting loans where it was somebody's fifth house and I knew that they couldn't pay it. I wasn't asking about their job, I wasn't asking about their income, I wasn't asking about their assets. And so I pushed it way further, and I made a lot of money, and I got out at the top. That's what's really upsetting to Americans, because the bailout comes in, the backstop comes in. Yes, it makes sense to rationally do the backstop in that moment, but if some people get out early and then other people get cooked, that's really bad optics. That's really, really bad optics. Which is why I kind of expect a lot more of the narrative to shift towards subsidizing and incentivizing bringing on new energy. We were talking about that yesterday. Which directly benefits the labs and anybody building a data center, but it also feels very much in America's interest, broadly. Right. And it theoretically would benefit the average American too. We were listening to Ben Thompson this morning. It wasn't on Restream. One livestream, 30-plus destinations. Multistream, reach your audience wherever they are. We were listening to the Stratechery RSS feed. Hopefully Ben goes live at some point on Restream. That'd be awesome. By the way, I think the chat has discovered that, yes, this is a turbopuffer. 
This is one of two. I sent the other one to Simon. Oh, that's amazing. But you can see it looks great, especially with the green background, the color grade. The production team is on point today. Let's hear it from the production team. So yes, I agree with you on the energy front. I think Ben Thompson, his point today. And fabs. Yeah, and fabs too. Yeah. His point today was, like, OpenAI needs to be crystal clear about the position that they're in, which is that they are the hottest company in the world. There is unlimited demand for their shares. They could be a public company. They could go raise more private capital. They need to be on the opposite end of the risk curve from the stuff that no one really wants to invest in, like an American fab that might lose money for a decade. There is truly much less appetite for, yeah, let's go and build a nuclear power plant that might take a decade, and who knows. When you think about the things that make money in the short term, it's SaaS. Right. AI for SaaS. Just go and transform the legacy business with some AI sprinkled on top and just start printing money. This is what works. People's concern with government-backed data center lending is that you're lending against chips, which have a really fast depreciation schedule. We don't know. Right. There was some pushback. I made the comment, hey, maybe these things don't depreciate as fast. There was some pushback in the comments. I think everyone kind of agrees that these things depreciate quickly. And so energy infrastructure is the place. Yeah. And if you look at it right now, it's CoreWeave's. Credit default swaps are now sitting around 500 basis points, jumped up dramatically, and this is one of the leading neoclouds. 500 basis points. 5%. Yeah, they jumped to 5%. They jumped from, I think, like, 2 to 5. I'll try to find it somewhere in the stack. Yeah, yeah, yeah. 
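For anyone unfamiliar with the jargon, a credit default swap spread quoted in basis points maps directly onto an annual insurance cost on the underlying debt. A minimal sketch, with a made-up notional purely for illustration (these are not CoreWeave's actual terms):

```python
# What a CDS spread means in dollars. 500 basis points = 5.00% of
# notional per year to buy default protection on that debt.
def annual_protection_cost(notional: float, spread_bps: float) -> float:
    """Yearly cost to insure `notional` of debt at a CDS spread in bps."""
    return notional * spread_bps / 10_000

# Illustrative only: insuring $10M of debt before and after the move
at_200bp = annual_protection_cost(10_000_000, 200)
at_500bp = annual_protection_cost(10_000_000, 500)
print(at_200bp, at_500bp)  # 200000.0 500000.0
```

So a move from roughly 200 to 500 basis points means the market's price for insuring the same debt has roughly 2.5x'd, which is why the jump reads as a worry signal.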
But sorry, I don't mean to, like, ask for particular stats, but clearly people are, are worried about this. Yeah. So, and again, this is for a leading neocloud, right? SemiAnalysis ClusterMax, the updated version came out yesterday. They're the only neocloud in the Platinum tier. And people are worried about them, right? Yep. Potentially, you know, having some bankruptcy risk. And so if you start doing, you know, basically government-guaranteed data center lending, you could get in a situation where there's a bunch of new data centers that come online that really don't have a clear pathway to ROI. Yeah. And it just incentivizes the entire stack to get, you know, really exuberant. Right. And again, going back to Sarah Friar's interview on Wednesday, she felt the market was not exuberant enough. And I think a lot of people disagree with that. Right. There's been a lot of insanity this year. Silliness. Maybe we don't need more. But we will see. The other news that was interesting out of today: the director of the Federal Housing Finance Agency, Bill Pulte, says Fannie and Freddie are eyeing stakes in tech firms. Pulte, the director of the Federal Housing Finance Agency, said that Fannie Mae and Freddie Mac are looking at ways to take equity stakes in technology companies. We have some of the biggest technology and public companies offering equity to Fannie and Freddie in exchange for Fannie and Freddie partnering with them in our business, Pulte said Friday during an interview at a housing conference. We're looking at taking stakes in companies that are willing to give it to us because of how much power Fannie and Freddie have over the whole ecosystem. So yeah, this Wall Street Journal event, so many articles came out of this. The Wall Street Journal did a fantastic job bringing a ton of people together. This is where that Sarah Friar quote came from. 
It's also where the CoreWeave CEO was on stage. You were mentioning CoreWeave earlier, and the CoreWeave CEO. The headline in the Wall Street Journal is CoreWeave CEO Plays Down Concerns About AI Spending Bubble. And the quote is from Michael: if you're building something that accelerates the economy and has fundamental value to the world, the world will find ways to finance an enormous amount of business. And he went on and said, if the economy doubles in size, it's not a lot of money to build all those data centers. And so there are a lot of folks, you know, addressing bubble concerns right now. He says it's very hard for me to worry about a bubble as one of the narratives when you have buyers of infrastructure that are changing the economics of their company. They are building the future. And so if the products are, you know, effective in growing the economy, then all of the investment is worth it. There's a fascinating mansion story we should get to that's actually related to this. Good. Before we do, let me tell you about Privy, wallet infrastructure for every bank. Privy makes it easy to build on crypto rails, securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure, all through one simple API. Did you get a whole new soundboard? What's going on with the soundboard? I did. Thank you. Wait, really? Like, they swapped them all out? Michael, do you still have the horse? I still got all the classics. Working with some new material. Working with some new material. I like the sheesh. Can we get the Vine boom on there? Is that on there? I don't know. Boom. The thud. That's a big one. Anyway. Casa Encantada is a 1930s estate in Los Angeles. It's one of the most important homes of the 20th century. In 2023, it was also briefly the most expensive home for sale in the United States, with an ambitious asking price of $250 million. This is in Bel Air. 
So it's a Bel Air property. It was sold in a foreclosure auction after the death of its longtime owner, financier Gary Winnick. So it's an 8.5-acre property. He bought it in 2000 for $94 million. He bought it in the year 2000? No way. Do you know who this guy is? Gary Winnick. He had something to do with the last bub. He did have something to do with the last bub. It's one of the craziest stories. So it is to satisfy the debt, which has now grown, blah, blah, blah, blah, blah. So basically, it's a 40,000-square-foot house built in the 1930s. It counts hotelier Conrad Hilton and Dole Food billionaire David Murdock among its former owners. It's located right next to the Bel Air Country Club golf course. It has seven bedrooms, a swimming pool, a tennis court. It's awesome. Anyway, Gary is perhaps best known as the founder of Global Crossing, which built a fiber-optic cable network across the world. The company made him a billionaire, but it imploded. Did they have a little bit of dark fiber? A little bit of dark fiber. It imploded in the early 2000s under the weight of massive debt. Casa Encantada, Gary's primary home, went on the market in June of 2023, five months before his death. Now it's asking $190 million, and they kind of move on from there. But Global Crossing, he was somewhat of a global businessman. He was. The headquarters were in Bermuda. You know this. I didn't know that. I wonder why. I wonder why. I mean, it might literally be because it insulates the assets in America. That is one thing that you can do if you don't want to have to give up your lovely home post-bankruptcy. You want to be able to get liquidity before the bubble pops. That's the lesson here. Sell the shares before the top. Buy low, sell high. That's the phrase that we live by here. Was there anything else, Tyler, that came out of your deep research report on Gary Winnick? Did you get a chance? 
I mean, so let's see. How about you tell us? He also started Pacific Capital Group in 1985. That was kind of the precursor. So he was set up to actually marshal all the debt to do the big infrastructure. He had been a global businessman for a while before this. Okay, nice. Let's give it up for the global businessman. Let's give it up. Yeah. What a wild, wild story. We're hyper-local businessmen. We like to do our business right here in the Ultradome.
But we do have to give it up for them. Also in the mansion section, which we should continue on. But first, let me tell you about Cognition, the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. You have a new neighbor. Jordi, you have a new neighbor. So Tom Petty apparently lived in your neighborhood in Malibu, California. The late Free Fallin' rocker had a personal music studio. Some deep lore, please. I saw Tom Petty. I believe he headlined the first ever Outside Lands. Was it the first or second? He was living in Malibu in an $11.2 million house, 8,744 square feet, seven bedrooms, with a music studio. The buyer is Steven Slade Tien, a psychoanalyst and author, according to people with knowledge of the deal. Tien didn't respond to a request for comment. I wonder if he'd want to, you know, get a beer sometime, hang out, maybe go surfing, go for a walk on the beach. We should reach out to him about coming on. Okay. So I was at the first ever Outside Lands. Really? How old are you? I thought you were. It was basically my first ever major concert experience. I didn't realize that Tom Petty headlined on Saturday. This was in August of 2008, and that was my first time smelling cannabis. I was there with my friend and his parents, and I kept asking my friend's parents, what is that stinky smell? Stinky smell. We can't get away from it. Very, very memorable. Very memorable. That's amazing. Do you know where this is? 
2.6 acres above Escondido Beach. Is that close to you? The Escondido Beach area? It's a few minutes away, but Escondido Beach is the most underrated beach. It was Petty's home for decades before his death in 2017. The lead singer of the Heartbreakers purchased the property for about $3.75 million in 1998. Petty turned his guest house into his personal music studio with soundproof rooms for recording music, said Levi Freeman, who's putting it up. There's a one-bedroom guest suite. Seems like he was a very nice star. He was from Florida. He shunned the spotlight offstage. He's a member of the Rock and Roll Hall of Fame, best known for songs like American Girl and I Won't Back Down. What a lovely little house. Well, that's always fun. Anyway, we should move on to our top story. Do you want to go through the Elon pay package thing a little bit? I had two questions for you on that, and then we can go into our Mag 7 review. Does that sound good? Let's do it. Okay. First, let me tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together. I really enjoy this graphic package we got. This is great. So Elon's trillion-dollar pay package is done. It's signed, it's approved. I'm sure it will be contested in the courts. It's always contested in the courts. But the Wall Street Journal has a very nice little breakdown of how it works. They have a nice little infographic here I can share. This is what technology podcasting is: holding up the newspaper. Pull that up. Yeah. So basically, Elon could get $1 trillion in Tesla stock if he hits all these different tranches, and it's actually not that many shares. So he's worth half a trillion now, but he also owns 414 million Tesla shares outright, got another award in 2018 of 300 million shares, and this next award is 424 million shares across 12 tranches. 
So it's not like they're giving him twice as much as he already has. They're basically giving him the same amount again. And there's a bunch of things that he has to do. He has to get the market cap really, really high. And then there are also these qualitative operational goals, or I guess they're quantitative: $50 billion in EBITDA, 20 million cars delivered, 1 million robots sold, 1 million robotaxis in operation, 10 million full self-driving subscriptions. Now, some of those are obviously more gameable than others. What's the definition of a robot? If he comes out with a really cheap robot and he sells a bunch of those because it's more of a toy, does that really fulfill the goal? Or what's the definition of a robotaxi? Does it qualify if it's a Tesla that has it enabled? Yep. And I turn on FSD and my friend rides in it for one ride? Is there anything for actual rides? There's the 10 million full self-driving subscriptions. Yeah. And so some of these are more gameable than others, but the market cap really isn't. How many full self-driving subscriptions are there today? I looked that up. It's somewhere between 1 and 3 million right now. So he definitely has to at least triple that. Robotaxis obviously go from 0 to 1 million, because there's barely any on the road. He hasn't sold any robots, so a million would be entirely new robots. He's obviously delivering a lot of cars. And on the EBITDA front, $50 billion in EBITDA; the company did like 13 last year. So that's a huge increase in EBITDA. I mean, $50 billion in EBITDA is a lot of money, but it's not 20x where he is right now, and neither is the market cap. He only has to take the market cap to 8.5 trillion, and Tesla's already worth a trillion. So it's within striking distance.
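For what it's worth, the "striking distance" framing can be sketched with back-of-the-envelope arithmetic. This is a minimal script using only the approximate figures quoted on the show; the current market cap of roughly $1.5 trillion is mentioned a moment later, and all of these numbers are rough talk-show figures, not audited financials:

```python
# Back-of-the-envelope math on the pay-package milestones discussed above.
# All figures are the rough numbers quoted in the conversation; the milestone
# definitions themselves are simplified assumptions.

current = {
    "market_cap_usd": 1.5e12,   # "now around 1.5 trillion"
    "ebitda_usd": 13e9,         # "the company did like 13 last year"
    "fsd_subscriptions": 3e6,   # "somewhere between 1 and 3 million"
}

targets = {
    "market_cap_usd": 8.5e12,   # top market-cap tranche
    "ebitda_usd": 50e9,         # "$50 billion in EBITDA"
    "fsd_subscriptions": 10e6,  # "10 million full self-driving subscriptions"
}

for key in targets:
    print(f"{key}: needs ~{targets[key] / current[key]:.1f}x from here")

# The dilution trade described later in the segment: roughly $1T of new
# grants against an $8.5T market cap is about 1/8.5, call it 12%, dilution,
# yet an existing holder who rode from $1.5T still roughly 5x'es.
holder_multiple = (8.5 / 1.5) * (1 - 1 / 8.5)
print(f"existing holder multiple after dilution: ~{holder_multiple:.1f}x")
```

The multiples come out to roughly 5.7x on market cap, 3.8x on EBITDA, and 3.3x on FSD subscriptions, which is consistent with the "not 20x, within striking distance" read.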
The market cap is now around 1.5 trillion, actually. So my two questions were: one, it's going to be weird to live in the world of the trillionaire. But we are getting close. That's going to happen not just within our lifetime, but definitely within the next decade. This sets him up to be the first one, but it's going to happen. And I wonder how that's going to reshape our culture and the world in America. Because I had this realization that when billionaires became so prevalent and prominent, there was a lot of heat that was taken off the millionaire. Like, if you're just a guy with an HVAC business. Billionaires are the heat shield. Yeah, exactly. Like, yeah, I'm a millionaire, I have a boat, I go to Bass Pro Shops, but I'm not getting protested because I have a million dollars in my house and boat. And isn't it, like, practically one out of ten Americans is a millionaire? Yeah, yeah. So the millionaire became more accessible, and the billionaire became the thing that society scapegoats for all the problems. Approximately 9.4 to 9.5% of American adults are millionaires. Yeah. But my question was: what happens to the billionaire when trillionaires come in? Because, you know, Bernie Sanders and a whole crew say that billionaires shouldn't exist, that every billionaire is a policy failure. What happens when you have to say, well, trillionaires are the real policy failure, but billionaires are also a policy failure, and millionaires we're kind of okay with, but it's not great? It becomes much more complicated. But at the same time, if Elon is the only trillionaire, it's going to be really, really easy to target him and be like, he's bad, he's a trillionaire. Can he get any more targeted? I don't know. Yeah, maybe he's maxed that out already. But I thought that was interesting.
And then the flip side was: what does this mean for other companies? What does it mean for Sam Altman at OpenAI? Can he run a similar playbook? It's clear that he had a ton of soft power during the OpenAI coup. Could he go to the OpenAI for-profit board and say, hey, if OpenAI IPOs at 2 trillion, I want 20%; if OpenAI moons to 10 trillion, I want 50%? How extreme can Sam get? We know that Sam runs a bit of an Elon playbook. They were in business together; they co-founded OpenAI together. So clearly they learn from each other. I wonder what Sam Altman can do similarly. And then I also wonder what will happen at the garden-variety unicorn. If you're just the CEO of a $5 billion company and you're kind of hanging out there, and you say, yeah, I had 30% of the company when I started, I've been diluted down to 5 or 10%, but I like this company and I want to run it up. What if I go to the board and say, okay, we're at 5 billion now; if I get us to 50, will you double my equity position? And how would shareholders treat that? How would Sequoia treat that, or Founders Fund, or Kleiner, or a16z? How would the growth-stage venture capital firms feel about that? So, I don't know, any reactions to that stuff? I think there's a sentiment where any venture-backed founder is going to be hyper-conscious of dilution, right? There's a sense that it's one-way: it just goes down and down and down and down. And I think the right way for founders to think about that is, okay, no one's actually taking your shares unless you decide to sell them. Yes. Your job is just to make the share price go up, and there are going to be more shares issued over time. But if you just make the share price go up forever, it doesn't really matter. But then there are also buybacks.
You can buy back, like Drew Houston. Yeah, you know, Drew Houston. Or create new shares, if you want to get your percentage ownership way down. Yeah, if you want to go to zero. More like if you want to bail. I mean, yeah, we need to do an analysis of the most diluted CEOs in the public markets. Because if you look at some of the IPOs from the last 10 years, some of the guys that are still hanging around running these companies have sold down so much of their stake. And this is what makes Larry so admirable. Yeah. Because it's just buyback, buyback, buyback. And he's the second richest person. 300 billion. It's remarkable. By the way, I think Oracle has, like, fully round-tripped now. It happens. Well, so, yeah, the question is: on what timescale do you think this happens? Jordy, do you think we'd actually hear the story of somebody, a CEO founder, maybe past their vesting cliff?
Maybe one of their co-founders has left. Because I think about that a lot, where it's like, okay, there were two or three people, they were basically equal, they did their full four-year earn-out, but then there's clearly one that's still there grinding for the next decade. They kind of do deserve more. It's not that crazy. And yes, there is the just-do-buybacks-and-don't-sell path. But is there a world where someone like Drew Houston goes to the board and just says, I think I can 5x this, and I want the pay package to do it, and I'm going to be in the office nonstop? I think that happens all the time. Not to this degree. No way. Okay, not to that degree. Not even the headline number? Not even the headline number. Just in the sense of: we're going to dilute everyone 5 or 10% if this happens. A trillion on 8.5 is serious dilution for the rest of the shareholders. But if I'm holding at 1.5 trillion and you're going to take me to 8.5 trillion, I'm totally in for 10% dilution. You're going to 5x my shares? I'm in. But what's interesting is that we just haven't seen other CEOs pull that from the Elon playbook. And this is why, was it Kimball or someone else saying it?
Almost no other CEOs would take a deal like this, because it's so ambitious. Yep. And so I think it's healthy. People are offended by the headline number. But obviously I'm very pro this, I think we're all pro this; that's not the discussion. The question is: how widespread does this become? Does it become memetic? Is it every CEO? Because there are a lot of people that are just like, oh, Elon did it this way, I want to do it that way. And I'm wondering how much it actually spreads. Anyway, we'll have to keep monitoring it. We'll also have to tell you about Vanta. Automate compliance, manage risk, and accelerate trust with artificial intelligence. Vanta helps you get compliant fast, and we don't stop there: our AI and automation powers everything from evidence collection to continuous monitoring to security reviews and vendor risk. What else is in the timeline? There's so much in the timeline. Should we do our Mag 7 review? Let's do it. We got the laser pointer. Our good friend Tyler Cosgrove has put together a slide deck for us that tries to help map the Mag 7. Really? I call it the TVPN Top Ten. The top ten. Well, technically, nine. There's nine? Well, no, no, no, there are ten. We'll see. Well, one of them maybe shouldn't be there. No, no, there are actually 10, because it's the Mag 7 plus. Oh, Oracle. I forgot Oracle. Yes, you did. Okay, so it is the TVPN Top 10: the 10 most important companies in AI, loosely the Mag 7 plus a few bonus ones. And, watch your head and look, you got a hook, you got a hoof. We're going to try to go through the various companies and rank them based on how AGI-pilled they are and how much they need AGI. Is that right? Yeah, basically. So let's pull up the actual scales. There are two axes. Let's go to the next slide.
So basically, on the horizontal axis we have how AGI-pilled they are. Sure. I feel like that's fairly self-explanatory: you believe that AI will become something that can produce the median economic output of a person. And there are a few different ways to tell whether somebody believes in AGI. It could be the rhetoric of the CEO or the founder, or it could be the actions of the company, which speak louder than words. Is there anything else that could show that someone believes in AGI? I guess it's mostly just the actions and the words. Right. Those are the main things you can do as a person. Well then, we will be judging them by both their actions and their words. Yeah. Then on the vertical axis we have how much they need AGI. I think this is maybe a little harder, so I want to qualify it. This doesn't necessarily mean you believe in a kind of sentient AI that's as good as a person. In this context it more so means that AI will continue to become more and more economically valuable, to where you can sustain building more and more data centers, doing more and more capex. Yeah, there's a little bit of: how much will this company be transformed if the AI wave plays out well? And if AI doesn't play out well, how chopped are they? How cooked are they? Exactly. That's a great framework. And then also, if we flash forward and nothing really changes, total plateau, total decline in token generation or something, is the business just continuing business as usual? Okay, so who are we starting with? Let's start with Sam Altman. Okay, Sam Altman, where is he on this? So I think this is a pretty reasonable spot. Sure. He believes in AGI, right? He runs kind of the biggest AI company. He also needs AGI, because if the models stagnate, they have a lot of capex they need to fulfill.
If models stagnate, what are they going to do? Maybe the margins somehow work out, but you're probably not in a good spot if models get worse or if people start using AI less, if you're Sam Altman. But he's also not in the top, top corner, right? And you can justify this through a lot of OpenAI's actions: you see stuff like Sora, you see maybe the erotica. I'm running the laser pointer. You want him down here? Maybe put him down here. Okay, explain that. I think that, more and more, at least in the short term, OpenAI looks like a hyperscaler. They're kind of a junior hyperscaler. A lot of people want to say they're bearish on OpenAI at the current level, but ultimately, when you look at how their business is evolving, they seem to me like they'd be fine if the models plateaued. Yes. But I feel like the mood on the timeline was much more: slide Sam to the left, doesn't believe in AGI. There was that post about how, if OpenAI really believed in AGI, they wouldn't be doing Sora, or they wouldn't be doing the erotica thing. All of those read as: needs it, but kind of accepts that it's not coming, and so stopped believing in it. Yeah. In response to that: the vertical axis is also just about whether AI is going to continue to be economically useful. So if people just stop using AI in general, or if the revenue stops accelerating, or any of this stuff, I think OpenAI will be in a bad spot regardless of whether the models actually get much better. If they just make the models much more efficient to run, you could say that's not very AGI-pilled, because the models aren't getting a lot better. Yeah, but that's still...
But I'm just saying, if there was no more progress at all, if we never got a new model from any of the labs, I think that OpenAI would add ads, they would add commerce. Yeah. So they might be fine without AGI. They would make agents a lot better. But I mean, the 1.4 trillion in commitments, that is hard to justify if it's just the business today growing because it's good, with no crazy breakthroughs. You're laying out the bull case, saying, oh, if they just add ads, they're going to be able to hit the 1.4 trillion, no problem. I'm not saying they can't pull back on a lot of these commitments. Sure. I don't think these are going to end up being real liabilities where the business is just cooked and they can't hit them. Right. I'm just saying I think there's a shot. When they talk about having success in consumer electronics, which is something you talked about yesterday, I think they can probably build a really cool device. If it can be at all competitive with an iPhone, they could be fine without AGI. We're also getting into... I guess let's get some other people on the board so we can see where everyone lands. Yeah. I think it's useful to be relative, because these are not quantitative numbers. So next we have Dario. Okay, well, Dario is up here. When you listen to what Dario is saying, he's extremely AGI-pilled. Yes. This is kind of the reasoning for why he's so anti-China, right? Because he sees it as an actual race. This is going to happen. It is a national security issue. Totally.
It's a problem if China gets there first. Wait, what is that new sound cue? I don't think Tyler has sound effects. What is it? UAV online. UAV online. Okay, I like that. This is the UAV. We should give this UAV aesthetics. For sure. This is good. Okay, continue. But there's also a sense that he needs AGI, because if AI stops growing as fast and you imagine that things kind of settle where they are now, OpenAI is definitely in the lead. Sure. So you need a lot of continued growth for Anthropic to keep making sense economically, I think. Okay. Who else is on here? So I think next is Larry. Larry Ellison. Larry's in kind of an interesting spot. Yeah. This is kind of a weird place to be, where you don't believe in AGI but you need it. Okay, how did he wind up there? You're probably wondering how I ended up in this corner. Record scratch, freeze frame. So this is how you factor it in: there's the personal rhetoric, and then there are the actions of his company. Sure. When you listen to Larry speak, he doesn't seem the type that believes some kind of superintelligent god is going to come, that's going to birth this new thing, and humanity will rise. But then you look at Oracle. Has anyone found his LessWrong username on there? I don't think Larry's reading Gwern. Okay, got it. But he's investing, and they need AI to work. Okay, right. They are maybe levering up a little too much, or maybe not enough, depending on how AGI-pilled you are. Exactly. It's kind of hard to square, but it's a bold bet. Yeah. This is a unique spot. It is a unique spot. He's off the grid. Okay. So who's next? Okay, so we've got Satya. I think this is a fairly reasonable spot. Yeah. Obviously there's some sense in which he is slightly AGI-pilled, or maybe...
More than slightly believes in the power of the technology. I mean, he was very early on OpenAI. Yeah. He thinks that AI in general will become very useful, but maybe it won't become superintelligent, maybe it's not going to replace every person; it's just a useful tool. The quote I always go back to is him saying, my definition of AGI is just greater economic growth. So show me the economic numbers. It's a very practical definition. I think people see him as very reasonable. He's not getting over his skis. Yeah, I like him. I like him in the center of the grid somewhere. And if AI doesn't work out, I think Microsoft is in a very good spot to stick around. They're not crazy overinvesting; they have a nice stake in OpenAI. If OpenAI works out, he'll do very well. If they don't work out, he's also hedged. He's doing quite well. The man's hedged. He's happy to be a lessor. Yeah, that's a good quote too. You wouldn't be leasing if you were super AGI-pilled, right? You'd be hoovering up everything, keeping it all for yourself. Exactly. Let's actually talk about that with, I think it's Jensen next. Okay, yes, Jensen. So Jensen, he doesn't believe in AGI. He's the whole reason why we're here. Explain that. So, yeah, this is maybe my personal take. Please. He is the rock on which this all is built. He has the chips. Yes, he has the chips. If he were AGI-pilled, he would not be giving out those chips. He would keep them all to himself, and he would be training his own model. Okay. So that's why I think he's more on the doesn't-believe-in-AGI side. But if there's any kind of downturn in the economy... Could you put Sam potentially closer in this direction too?
Because he's talking about getting into compute cloud reselling, for sure. So if there's so much demand, and the models are progressing so quickly, wouldn't you want to just hold on to all that? I think this would also be interesting to see over time. Sam has definitely shifted leftwards over time. This summer you've seen a lot of the actions of OpenAI, and they seem less and less AGI-pilled. Pretty much everyone has been moving the AGI timelines outward, which you could translate into no longer believing in AGI. It's more just that the timelines have gotten longer this year, broadly, for pretty much everyone. Dwarkesh's timelines. There was a new blog post yesterday. It was basically AI 2027, but there was a new one: AI 2032. Is it based on the same team? A different team, but the team behind AI 2027 was promoting it. Yeah. It was actually AI 3027; we missed it, typo. AI 3020. We couldn't get that domain. We were just off by a thousand years. Yeah. But there's definitely the sense that if there's a downturn, Jensen, the stock is going to go down. Yeah, but it's not going to go down as far as Larry, right? Because they don't have insane... This is not financial advice. Yes. They don't have completely unreasonable capex. They're not levered up, that I know of. Oh yeah, their Altman Z-score is through the roof. Altman's Z-score is through the roof. They're looking pretty safe. Yes. Okay, let's see the next one. Okay, Sundar. Sundar believes in AGI more than Satya. You think? Yeah, well, I think you can see this in that they were even earlier in some sense than Satya, right, with DeepMind. Yeah. So AI has played a fairly big part in the Google story. Basically, for the past 10 years they've been trying to get into AI.
Maybe their actions didn't actually do much at first, but they wrote the Transformer paper, they actually applied AI, they built self-driving cars. Core Google Search is just an AI product; it's just an index on the information they're organizing. Also, compared to Satya: Gemini is a frontier model. Totally. Gemini 3 is supposed to be this incredible model; everyone's very excited. Microsoft is not training their own base model yet. Yeah. And there's a little bit of: if you really believe in AGI, the actions we'd see are you squirming and saying, I've got to get in. It doesn't matter if I'm 1% behind or 10% behind or 80% behind, I've got to get in. And I think we know someone who's doing that. Gabe in the chat says, is this AGI or AG1 from Athletic Greens? Needs AG1, fine without AG1, believes in AG1. Now, we are the only podcast that is not partnered with AG1, but we do like green. Green is our hero color. Let's go to the next person. Okay. But before then: I think Sundar is also definitely below this line, because Google has been doing very well. At first, people thought of AI as, oh, this is going to destroy Google, this is bad for Google. So if AI doesn't work, then Google is just in the spot they were in before, which is doing very well. If AI does work, then, I mean, Gemini is one of the best models; they'll do very well too. Speaking of Gemini: Google AI Studio. Create an AI-powered app faster than ever. Gemini understands the capabilities you need and automatically wires them up for you. Get started at AI Studio build. Yeah, continue. Okay, so next is Zuck. Yes. So Zuck is also in kind of an interesting spot. I think Zuck is actually someone who has shifted rightward, which is fascinating. For a while he was doing open source.
Which in some sense is very AGI-pilled, because you're building it, you're training a model, you're moving the frontier forward. But it's also open source, which you can think of as trying to commoditize everything. You don't think there's going to be some superintelligence that will take all the value, that you need to hold on to; you can give it out. And now he's moved towards closed source: we're going to get the best people, we're going to train the best model. It felt like Zuck was sort of, oh yeah, it's this cool thing, I'm going to check the box, I got my team, we did this fun little side project, it's this open-source model, we found our own little lane. But we're not competing in the big cosmic battle between OpenAI, Anthropic, DeepMind. Do you think that was just a counter-positioning way to try to win the AI war? To say, hey, we're just going to try to commodify this market? Commoditizing your complements is a good strategy. Yeah, that makes sense. And then maybe you could say that once the Chinese models have been getting better, he can step out of that position. Sure. And move towards closed source. Yeah. But now it feels like he's going way, way harder on the AGI vision, paying up for it, investing so much money in it. And depending on how much he invests, you could see him pushing up. It's true. But right now the business is just insane. It's so phenomenal that even if he winds up spending all this money and they don't even get a frontier model, and all the researchers are like, yeah, we didn't discover anything, and they're just gone, the business is fine, because it's such a behemoth. So, yeah, that's why he's there. After earnings, they took a fairly big hit. Yep.
So maybe he should be a little bit higher. Yeah, maybe a little bit of Broadly is still a very safe play if AI doesn't work out. And it was the same thing during the metaverse. It was like he believed in the metaverse, he invested in the metaverse, but he never needed the metaverse. And so after the stock sold off like crazy, it went right back up. Because everyone realized, wait, he's also raising debt off the balance sheet, which would kind of could push him up into this zone. Right. If he's like, you know, if you're worried about, why don't you just carry it yourself? Yeah, yeah, yeah. How do you carry yourself? Yeah. What do you, what do you. You believe in AGI? Just. Just let it ride. Yeah. Okay. Who else are we missing? Okay, so we have Most of the Mag 7 now is Jassy. Oh, Elon. Elon. Elon. Yeah. Yeah. Okay. Elon. So Elon, I mean, he's been AGI pilled, I think, for a very long time. Super AGI pill. I agree with that. OpenAI co founder, even before that, he. I think he was fairly big in the. In the safety space. Totally. You see him even on. On Joe Rogan, he was talking about AI safety. He still believes in it. And AI safety is important. Going to become super intelligent. It's going to take over the world. Yeah. Even this was part of the dialogue around the new comp package. Wanting the voting power to be able to be part of securing his robot army. Oh, yeah. Interesting. Yeah. I think robots are very underrepresented on this board. Totally. And Elon is kind of the main player in that space right now. Yeah. So he talked yesterday about. About humanoids being sort of an infinite money glitch. And I feel like you kind of need AGI in order to kind of unlock the infinite money glitch. Yes. But at the same time, very strong core business. The cars don't need AGI, the rockets don't need AGI, starlink doesn't need AGI. So he's not entirely indexed on it in the way the foundation labs are. Right. Yeah. 
I mean, there's definitely a significant part of the Tesla market cap that is basically betting on robots. Totally. But there's also a big part of it that's just. But that's why he's. He's in the middle of. The needs are fine without. It's like. Yeah. Everything on it is needs. Yeah. That's just one piece of the AGI more than SpaceX. SpaceX? Yes. SpaceX is very. Seems very uncorrelated with the. Totally. Totally, totally. Unless you have the data, he's happy. To be a telecom baron. He's happy to. Yeah, he's happy to be the new Verizon. Who else? Okay, so I believe now is Andy Jassy. Jassy. So he is, I think broadly does not believe in AGI, although he, you know, fairly major stake in Anthropic. Yeah, yeah. Which maybe is very AGI playing, but. But they have not. He's hedging with Anthropic. Yeah. Yeah. I think that's basically what you can say. He doesn't seem at all worried about kind of the core business. AWS seems to be. And is Google building their position in Anthropic. I saw some headline about that. That was to imagine Sundar such a beast. Sundar and Satya, really, both of them, they just huge stakes, broadly. Andy Jassy seems very quantitative. He's focused on the numbers realistic. He's.
Like this because it's so ambitious. Yep. And so I think it's healthy. Yeah, no, no, I think people are gonna. People are offended by the headline number. Totally. Yeah. So I mean, obviously I'm very pro this. I think we're all pro this. That's not the discussion. The question is like, how widespread does this become? Does it become mimetic? Is it like every CEO? Because there's a lot of people that are just like, oh, Elon did it this way, I wanna do it that way. And I'm wondering how much it actually spreads. Anyway, we'll have to keep monitoring it. We'll also have to tell you about Vanta. Automate compliance, manage risk, and accelerate trust with artificial intelligence. Vanta helps you get compliant fast. And we don't stop there. Our AI and automation powers everything from evidence collection to continuous monitoring to security reviews and vendor risk. What else is in the timeline? There's so much in the timeline. Should we do our mega cycle review? Let's do it. We got the laser pointer. We got the laser pointer. Our good friend Tyler Cosgrove has put together a slide deck for us that tries to help map the Mag 7. Really? I call it the TVPN top 10. The top 10? Well, technically 9. There's 9? Well, no, no, there are 10. There's 10, there's 10. We'll see. Well, one of them maybe shouldn't be there. No, no, no, no, no. There's actually 10 because it's the Mag 7 plus Oracle. I forgot Oracle. Yes, you did. Okay, so it is the TVPN top 10. The 10 most important companies in AI. Loosely, the Mag 7 plus a few bonus ones. And we're going to try and (watch your head, and look, you got a hoof) go through the various companies and rank them based on how AGI pilled they are and how much they need AGI. Is that right? Yeah, basically. So let's pull up the actual scales. There's two axes. Let's go to the next slide. So basically on the horizontal we have how AGI pilled they are. Sure. So I feel like that's fairly self explanatory.
You kind of believe that AI will become something like it can produce the median economic output of a person. Yes. And there's a few different ways to understand if somebody believes in AGI. It could be the rhetoric of the CEO or the founder. It could be the actions of the company. Like, actions speak louder than words. And is there anything else that could lead someone to show that they believe in AGI? I guess it's mostly just the actions and the words. Right. I mean, those are the main things that you can kind of do as a person. Say or do things. Well, we have it. We will be judging them by both their actions and their words. Yeah. So then on the vertical axis, we have how much they need AGI. So I think this is maybe a little harder. So I want to qualify this. Yeah. So, I mean, this doesn't necessarily mean believing that you have this kind of sentient, you know, AI that's as good as a person. Yeah. But I think it more so in this context just means that AI will continue to become more and more economically valuable. Yes. To where you can kind of sustain, you know, building more and more data centers. You can do more and more capex, you know. Yeah. There's a little bit of, like, how much will this company be transformed if the AI wave plays out? And if AI doesn't play out well, how chopped are they? How cooked are they? Exactly. That's a great framework. Yes. And then also, yeah, if we flash forward and nothing really changes, total plateau, total decline in token generation or something, is the business just continuing business as usual? Okay, so who are we starting with? So let's start with Sam Altman. Okay, Sam Altman. Where is he on this? So, Sam Altman, I think this is a pretty reasonable spot. Sure. He believes in AGI. Right. He runs kind of the biggest AI company. He also needs AGI, because if you imagine that the models stagnate, they have a lot of capex they need to fulfill. If models stagnate, what are they going to do?
Maybe the margins somehow work out, but you're probably not in a good spot if models get worse or if people start using AI less, if you're Sam Altman. But he's also not in the top, top corner. Right. And I think this is. You can justify this through a lot of OpenAI's actions. You see stuff like Sora, you see maybe the erotica. This is not very. Who's running the laser pointer? I'm running the laser pointer. You want him down here? Maybe? Put him down here. Okay, explain that. Explain that. I think that more and more, at least in the short term, OpenAI looks like a hyperscaler. They're kind of a junior hyperscaler. And I think their actions are more, you know. I think that OpenAI, a lot of people, you know, want to say that they're. They're bearish on. On OpenAI at current levels. But ultimately, when you look at how their business is evolving, they seem to me like they'd be fine if the models plateaued. Yes. But, yeah, I feel like the mood on the timeline was much more "slide Sam to the left." Doesn't believe in AGI. There was that post about, like, if OpenAI really believed in AGI, they wouldn't be doing Sora or they wouldn't be doing the erotica thing. Like, all of those were very much like, needs it, but kind of accepts that it's not coming, and so stopped believing. Yeah. So I think, in response to Jordy, yeah, because it's not like this. The vertical axis is also just about, like, if it's going to continue to be economically, you know, economically useful. So if people just stop using AI in general, or if the kind of, you know, revenue stops accelerating or any of this stuff, I think OpenAI will be in a bad spot regardless of the models, like, actually getting much better. Like, if they just make the models much more efficient to run, you could say that's not very, like, AGI pilled because the models aren't getting a lot better. Yeah, but that's still. But I'm just saying, like, if there was no more progress at all.
Yeah. Like, we never got a new model from any of the labs. I think that OpenAI would add ads, they would add commerce, they would increase. Yeah. So they might be fine without AGI. They would make agents a lot better. But. But I mean, the $1.4 trillion. The $1.4 trillion in commitments, like, that is hard to justify if it's just the business today just growing. Like, it's growing because it's just, like, it's good. No crazy breakthroughs. Like, just. Yeah, but they're. You're laying out the bull case. They're playing. They're playing crazy. You're laying out the bull case saying, oh, if they just add ads, they're going to be able to hit the 1.4 trillion, no problem. I'm not saying. I'm not saying they can't pull back on a lot of these commitments. Sure, sure. They don't, like. Okay. I don't think these are, like, going to end up being, like, real liabilities if the business is just cooked. Okay. They can't hit them. Right. So I'm just saying, like, I think there's a shot they have. They could, you know, when they talk about having success in consumer electronics. Right. Which is something you talked about yesterday. Like, I think they can probably build a really cool device. Maybe it could be competitive with, you know. If it can be at all competitive with an iPhone, like, they could be fine without AGI. We're also getting into. I guess let's get some other people on board so we can see where. Yeah, I think it's useful to be relative because these are not quantitative numbers. Yeah. So next we have Dario. Okay, so Dario is up here. So I think Dario is kind of. When you listen to what Dario is saying, he is, you know, he's extremely AGI pilled. Yes. Right. He. He. This is kind of the reasoning why he's so anti China. Yeah. Right. Because he sees it as an actual race. This is going to superintelligence. It is a national, you know, security issue. Totally. It's a problem if China gets there first.
Wait, what is that new sound cue? I don't think Tyler has sound effects. What is it? UAV online. UAV online. Okay. I like that we got. This is the UAV. We should give this UAV aesthetics. For sure. This is good. Okay, continue. But there's also a sense that he needs AGI, because if AI stops becoming. If it stops growing as fast and you imagine that things kind of settle where they are now, OpenAI is definitely in the lead. Sure. So you need a lot of continued growth for Anthropic to keep kind of making sense economically, I think. Okay. Yeah, yeah, sure. Who else is on here? So I think next is Larry. Larry Ellison. Larry. Larry's in kind of an interesting spot. So this is kind of a weird place to be, where you don't believe in AGI but you need it. Okay. How did you wind up there? So I think you're probably wondering how I ended up in this corner. Record scratch, freeze frame. So I think this is how you factor in. There's kind of the personal rhetoric and then there's the actions of his company. Sure. So when you listen to Larry speak, he doesn't seem the type that believes in some kind of superintelligent God that is going to come, that's going to birth this new thing and humanity will rise. But then you look at Oracle. Has anyone found his LessWrong username? Is he on there regularly? I don't think Larry's reading gwern. Okay, got it. But then. But he's investing, and they are, you know, they need AI to work. Okay. Right. They are maybe levering up a little too much or maybe not enough, depending on how AGI pulled you are. But, you know. AGI pilled. Exactly. It's kind of hard to square, but. It's a bold bet. Yeah, this is. This is a unique spot. He's off the grid. Okay. Yeah. So who's next? So let's see who's next. Who did I. Okay, Satya. So I think this is a fairly reasonable spot. Yeah. Obviously there's, you know, there's some sense where he is slightly AGI pilled, or maybe more than slightly. Believes in the power of the technology.
Yeah. I mean, he was very early on OpenAI. He thinks that AI in general will become very useful, but maybe it won't become, you know, superintelligent. Maybe it's not gonna replace every person. It's just a useful tool. It's the next. The quote I always go back to is him saying, like, my definition of AGI is just greater economic growth. So show me the economic numbers, and that will be. It's a very practical definition. Yeah. I think people see him as very reasonable. He's not getting over his skis. Yeah. I like him. I like him in the center of the grid somewhere. That seems like he's also, you know, if AI doesn't work out. Yeah. I think Microsoft is in a very good spot to stick around. You know, they're not crazy overinvesting in OpenAI. They have a nice stake. Yeah. If OpenAI works out, he'll do very well. If they don't work out, I think he's also. He's hedged. He's doing quite well. He's hedged. Yeah. The man's hedged. He's happy to be a lessor. Yeah. That's a good quote, too. Okay. Yeah. Yeah. You wouldn't be leasing if you were super AGI pilled. Right. You'd be hoovering up everything, you'd keep it all for yourself. Exactly. I think. Let's actually talk about that with. I think it's Jensen next. Okay. Yes, Jensen. So Jensen, he doesn't believe in AGI. He's the whole reason why we're here. Explain that. So, yeah, this is maybe my personal take. Please. If Jensen was very AGI pilled. Yes. I mean, he is the kind of. He's the rock on which this is all built. Yes, he has the chips. Yes, he has. If he was AGI pilled, he would not be giving out those chips. He would keep them all to himself and he would be training his own model. Okay. So that's why I think he's more on the doesn't-believe-in-AGI side. But if there's any kind of downturn in the economy, could you put. Could you put Sam, like, potentially closer in this direction too? Because he's talking about getting into the. Yes. Like, compute cloud. Yeah, yeah. Reselling, for sure.
So if there's so much demand. Yeah, yeah. And the models are progressing so quickly, wouldn't you want to just hold on to all that? I think this would also be interesting to see over time. Like, Sam has definitely shifted leftwards. Yes. Over time he's moving. He's basically. This summer you've seen a lot of the actions of OpenAI. They seem less and less AGI pilled. Pretty much everyone has been, like, moving the AGI timelines outward, which you could translate into no longer believing in AGI. It's more just, like, the timelines have gotten longer this year. Broadly. Pretty much everyone. Dwarkesh. There was a new blog post yesterday. It was basically AI 2027. There was a new one. It was AI 2032. So it's basically the same team. Different team. But the team of AI 2027 was promoting it. It did. Yeah. AI 2027 should be like, it was actually AI 3027. We missed by a thousand. Typo. We couldn't get that domain. We were just off by a thousand years. Yeah. But there's definitely the sense where, if there's a downturn, Jensen, the stock is going to go down. Yeah. But it's not going to go down as far as Larry. Right. Because they're still not. They don't have insane financing. Yes. You know, they don't have, you know, completely unreasonable capex. They're not levered up. Sure, sure, sure. That I know. Oh yeah. Their Z-score is through the roof. Their Altman Z-score. The Altman Z-score is through the roof. They're looking pretty safe. Yes. Okay, let's see the next one, I believe. Okay. Sundar. Sundar. He believes in AGI more than Satya. You think? Yeah, well, I think you can see this in kind of. They were even earlier in some sense than Satya. Right. With DeepMind. So I think AI has played a fairly big part in the Google story very recently. They've always, I think basically for the past 10 years, they've been trying to get into AI. Maybe their actions didn't actually do much, but, you know, they wrote the Transformer paper.
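For listeners who want the reference: the Altman Z-score joked about here is a real solvency metric from Edward Altman's 1968 bankruptcy-prediction work, a weighted sum of five balance-sheet ratios. A minimal sketch in Python, using the classic public-manufacturer weights; the input figures below are made-up round numbers for illustration, not any company's actual financials:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, sales, total_assets, total_liabilities):
    """Classic (1968) Altman Z-score for public manufacturers.

    Conventional reading: Z > 2.99 is the "safe" zone,
    Z < 1.81 signals distress risk.
    """
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_cap / total_liabilities
            + 1.0 * sales / total_assets)

# Illustrative, made-up inputs (in billions of dollars): a very profitable,
# barely levered company with a huge market cap relative to liabilities.
z = altman_z(working_capital=40, retained_earnings=60, ebit=80,
             market_cap=3000, sales=130, total_assets=110,
             total_liabilities=30)
print(round(z, 1))  # → 64.8
```

The market-cap-to-liabilities term dominates for a lightly levered mega-cap, which is why a score like this ends up "through the roof" far above the 2.99 safe threshold.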
They, you know, applied AI, like, they're actually applying it. They built self-driving cars. Core Google search is just an AI product. It's just an index on information they're organizing. Also, I mean, compared to Satya. I mean, Gemini is a frontier model. Totally. Gemini 3 is supposed to be this incredible model. Everyone's very excited. Satya is not. Microsoft is not training their own base model yet. Yeah. And also, like, there is a little bit of, like, if you really believe in AGI, the actions that we'd see are you, like, squirming and being, I got to get in. It doesn't matter if I'm 1% behind or 10% behind or 80% behind. I got to get in. And I think we know someone who's doing that. Gabe in the chat says, is this AGI or AG1 from Athletic Greens? Needs AG1. Needs AG1, fine without AG1, believes in AG1. Now, we are the only podcast that is not partnered with AG1. But we do like green. Green is our hero color. Let's go to the next person. Okay. But also before that, I think Sundar is also definitely below this line because, you know, Google has been doing very well. At first, I mean, people thought of AI as like, oh, this is going to destroy Google. This is bad for Google. So if AI doesn't work, then Google is just in the spot they were before, which is doing very well. If AI does work, then, I mean, Gemini is one of the best models. They'll do very well too. Speaking of Gemini: Google AI Studio. Create an AI-powered app faster than ever. Gemini understands the capabilities you need and automatically wires them up for you. Get started at AI Studio Build. Yeah, continue. Okay, so I think, yeah, next is Zuck. Yes. So Zuck is also kind of in an interesting spot. Yes. I think Zuck is actually someone who has shifted rightward. It's fascinating. Yeah, you've seen this. I mean, for a while he's been traveling. Yes. Yeah. So for a while he was doing open source, which in some sense is very AGI pilled, because you're building it.
You're training a model. Right. It's like you're moving the frontier forward, but it's also open source, which you can kind of think of as: you're trying to commoditize everything. You don't think that there's going to be some superintelligence that will take all the value, that you need to hold on to it. You can kind of give it out. And now he's kind of moved towards closed source. We're going to get the best people. We're going to train the best model. It felt like Zuck was sort of like, oh, yeah, AI. Like, it's this cool thing. I'm going to check the box. I got my team. We did this fun little side project. It's this open source model. We kind of found our own little lane. But we're not, like, competing in the big cosmic battle between OpenAI, Anthropic, DeepMind. Like, well. Don't you think that was just a counter-positioning way to try to win the AI war? To say, hey, we're just going to try to commoditize this market. Like, commoditize. It's the Chinese approach. Yeah, commoditizing your complements, I think, is a good strategy. Yeah, that makes sense. And then maybe you could say that, oh, once the Chinese models have been getting better, he can kind of step out of that position. Sure. Move towards closed source. Yeah. But now it feels like he's going way, way harder on the AGI vision, paying up for it, investing so much money in it. You know, it's like. And depending on how much he invests, you could see him pushing up. It's true. He's also. Right now, the business is just insane. It's so phenomenal that even if he winds up spending all this money and, you know, they don't even get a frontier model, and, like, all the researchers are like, yeah, we didn't discover anything, and we're just gone. Like, the business is fine because it's such a behemoth. So that's why he's there. After earnings, they took a fairly big hit. So maybe he should be a little bit higher. Yeah, maybe a little bit. Meta broadly is still a very safe play if AI doesn't work out.
And it was the same thing during the metaverse. It was like, he believed in the metaverse, he invested in the metaverse, but he never needed the metaverse. And so after the stock sold off like crazy, it went right back up, because everyone realized, wait a minute. But he's also raising debt off the balance sheet, which kind of could push him up into this zone. Right. If he's like, you know. If you're worried about it, why don't you just carry it yourself? Why don't you carry it yourself? You believe in AGI. Just let it ride. Yeah. Okay. Who else are we missing? Okay, so we have most of the Mag 7. Now is Jassy. Oh, Elon. Elon. Elon. Yeah. Okay. Elon. So Elon, I mean, he's been AGI pilled, I think, for a very long time. Super AGI pilled. I agree with that. OpenAI co-founder. Even before that, he. I think he was fairly big in the. In the safety space. Totally. You see him even on. On Joe Rogan, he was talking about AI safety. He still believes in it. And AI safety is important, because it's going to become superintelligent. It's going to take over the world. Yeah. Even this was part of the dialogue around the new comp package. Wanting the voting power to be able to be part of securing his robot army. Oh, yeah. Interesting. Yeah. I think robots are very underrepresented on this board. Totally. And Elon is kind of the main player in that space right now. Yeah. So he talked yesterday about humanoids being sort of an infinite money glitch. And I feel like you kind of need AGI in order to kind of unlock the infinite money glitch. Yes. But at the same time, very strong core business. The cars don't need AGI, the rockets don't need AGI, Starlink doesn't need AGI. So he's not entirely indexed on it in the way the foundation labs are. Right. Yeah. I mean, there's definitely a significant part of the Tesla market cap that is basically betting on robots. Totally. But there's also a big part of it that's just. But that's why he's in the middle of.
The "needs," the "fine without." It's like. Yeah, everything on it is "needs." Yeah. But that's just one piece of it. The AGI stuff, more than SpaceX. SpaceX? Yes. SpaceX seems very uncorrelated with the AI. Totally. Totally, totally. Unless you have the data, he's happy to be a telecom baron. He's happy to. Yeah, he's happy to be the new Verizon. Who else is. Okay, so I believe now is Andy Jassy. Jassy, yeah. So he is, I think, broadly does not believe in AGI, although he, you know, has a fairly major stake in Anthropic. Yeah, yeah. Which maybe is very AGI pilled, but. But they have not. He's hedging with Anthropic. Yeah. Yeah. I think that's basically what you can say. He doesn't seem at all worried about kind of the core business. AWS seems to be. And is Google building their position in Anthropic? I saw some headline about that. That was wild. Imagine. Sundar, such a beast. Sundar and Satya, really, both of them. They just have huge stakes, broadly. Andy Jassy seems very quantitative. He's focused on the numbers. Realistic. He's not making these, you know, grandiose statements about the future of intelligence or what humanity is going to be like. Yeah. When you look at the AWS earnings, it feels like they invest when the demand shows up, and they build data centers when there is demand. And they are not in the business of doing, you know, a 10-year hypothetical sci-fi forecast based on breakthroughs. So Ben Thompson on the recent Sharp Tech, we were listening to it on the way in, was talking about how Amazon has repeatedly said, hey, we're supply constrained. We have a lot more demand than we can fulfill. With this new deal with OpenAI, the $38 billion, which only got announced Monday, which again feels like forever ago at this point. Ben was kind of hypothesizing that they kind of let OpenAI jump the line, because that $38 billion deal was starting effectively immediately, versus some of the other. Like, Larry's deal with OpenAI was announced.
This, like, massive backlog is revenue that cannot be generated in a meaningful way today, because they have to build the data centers that can actually deliver the compute. Yeah, I mean, and you know, Amazon has been building data centers. They're building data centers. They basically built all the data centers for Anthropic. But it still seems very kind of restrained. It's not overly ambitious. Anthropic has had issues with capacity, and it's probably because, you know, Andy Jassy doesn't want to get over his skis. He doesn't want to build too much. So I think that's why he's kind of on the left side. And also, I mean, if you think AI doesn't work and we're going to kind of be in this, you know, the same spot as web 2.0, you need AWS, you need your EC2 server. I think he's very well positioned there. And then I think the last one is Tim Cook. Tim Cook. Let's hear it for Tim Cook. Criminally underpaid, but has done a fantastic job not getting over his skis. Yeah, this one I think is GOAT contention. Fairly self explanatory. I mean, he seems. He doesn't believe in AGI, he doesn't believe in LLMs, he doesn't believe in chatbots, apparently. I don't know, the new Gemini deal. Signaling. As of two years ago, Apple was like, yeah, we're not doing that stuff. Yeah, there was that famous one three years ago. But I think what's under-discussed, my takeaway from our conversation with Mark Gurman about Apple's ambitions around their new LLM experience with Gemini, is that I think that there's a very real scenario where Apple, like, wants to compete for the same user base as OpenAI. They want that, like, transaction-based revenue, that commerce revenue. Yeah. I just. I disagree on this point. But I'm not saying that I'm putting their odds of, like, winning that market at over, like, 10%. But I think that they would be right to realize that it's potentially an opportunity. Yeah. Is it a billion dollars a year that they're paying Google for? I believe that's the number. Yeah.
That feels really low, doesn't it? I don't know, it just feels super low to me, among all the different deals that are going on. When I just think about, like, the value of AI on the iPhone, and you're like, $1 billion. Like, when I think about, like, what is the value that the market broadly is putting on, like, AI for the enterprise. But at the same time, whatever. When Sundar on the Google earnings call was talking about their top, I think, 10 customers and how many, like, trillions of tokens they were using, it really was netting out to, like, $150 million of, like, actual revenue. Yeah, yeah, yeah. And so this will be their biggest customer for Gemini on day one. Until one of our listeners gets on, of course. Yeah. And I think, I mean, even with the Gemini news, Apple still seems very kind of reactive to AI. Yeah. They're not kind of seeing, oh, this future where it's going to do everything, and moving there right now. Yeah, they're kind of seeing where the demand is, seeing where the users are, and then moving, which I think is very kind of, you know, classically, you know, that's how businesses work. It's not very AGI pilled. Also, on that point of $1 billion. Like, I have no idea. How can they possibly know the amount of inference that's going to happen in Siri plus Gemini over a year period? Like, there's just no way to predict that. Or can they? I feel like if the integration's good, there will be a ton of queries. It will. I didn't read the $1 billion headline. Chat can correct me if I'm wrong, but I didn't read the $1 billion headline as inference. I felt like that was a technology license, not necessarily inference. And then there might be consumption on top of that. Yeah, okay. Or maybe. Or maybe. Yeah, yeah, maybe they're licensing Gemini and then they pay per query for the energy and the capex. Couldn't a lot of the stuff be done on device? For sure, yeah.
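The tokens-versus-revenue point above is just division, and it's easy to sanity-check. A back-of-envelope sketch, with made-up round numbers since the show doesn't give exact figures: if a customer cohort runs on the order of a quadrillion tokens and that nets about $150 million, the implied blended price is around 15 cents per million tokens, which is why huge token counts can still net out to modest revenue.

```python
def implied_price_per_million(revenue_usd, tokens):
    """Back-of-envelope: implied dollars per million tokens served."""
    return revenue_usd / (tokens / 1e6)

# Assumed, illustrative figures: $150M of revenue on 1 quadrillion tokens.
price = implied_price_per_million(150e6, 1e15)
print(price)  # → 0.15 (dollars per million tokens)
```

The same function run the other way shows the scale problem: at $0.15 per million tokens, you need roughly a quadrillion tokens, not "trillions," to reach $150 million.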
Because Gemini has Gemma, which has, like, baked-down models that could be smaller, so that's possible. Chat says, where's Tyler on the chart? Feels like Tyler isn't AGI pilled anymore. Am I actually on the chart here? Let's go to Tyler. I am over here. So, yeah, I think I'm very AGI pilled. Right. I, you know, I'm ready for the Dyson sphere. Yes. I think it's, you know, only a matter of years. Handful. Only a matter of years. It's a couple thousand days away. You need to publish. What about AGI 2026? AGI 2025. We still have a month left. Still a month left. Yeah, we could get AGI on Christmas. This Christmas. It's coming this Christmas. This holiday season. But why do you need AGI? I think I also need AGI. Why do you need AGI? Well, I mean, look, if you look at the current jobs data for, like, college-age students, it's looking pretty bad. It's bad. So I think you kind of need AGI to really boost the economy. If AI does not work, the macroeconomy is looking not good. Oh, sure, sure, sure. So I feel pretty bad about my job outlook without AGI. Sure, sure, sure. Even though you're already employed. That's hilarious. Well, thank you for taking us through it. Chat wants to know, where would you put Lisa Su, AMD? Yeah, so Lisa Su, I think she's probably been quite reactive, actually. You've really only seen AMD start. You don't get credit for being reactive, is what you're saying. Yeah. I think it's really only over the past year, maybe, that she's been making any kind of deals. Right. You saw George Hotz maybe a year or two ago basically trashing AMD chips for how bad they were. But maybe she's given so much of the company away. Maybe she thinks that shares won't have very much value in the future. She's, like, happy to just give away 10% of the company. Yeah. So if I had to pick somewhere, I would say that she's honestly getting a little close to Larry, in that AMD is very much, like, an AI play still. Right. They're a chip company, obviously.
But you don't get the feeling that she's a true believer. Yeah, yeah, that tracks. Okay. On the new Siri: I'm reading through some of Mark Gurman's reporting. Speaking of Apple and AI, the new Siri is planned for March, give or take. As has been the case for months. Apple has been saying for nine months it's coming in 2026. Apple simply reiterated on that. And then on the actual deal: it's a $1 billion deal for a 1.2-trillion-parameter artificial intelligence model developed by Google.
You're watching TVPN. Today is Friday, November 7, 2025. We are live from the TVPN Ultradome. The temple of technology, the fortress of finance, the capital of capital. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. Ramp.com, baby. The timeline is in turmoil over, again, Sam Altman. Again. What were we calling it? Backstop-gate. Backstop-gate continues unabated, John. Timelines in turmoil over Sam Altman. Every single day. For the first time, every single day this week and last week, it is nonstop OpenAI. What will happen? Is it over? Is it the end of the global economy? Or will we, you know, live to fight another day? The main point of debate is over, you know, Sarah Friar's comments, that she used the word backstop. She backtracked on her backstop comment and said, I wasn't asking for a backstop of OpenAI equity. I was advocating for this American manufacturing plan. Then Semper Satoshi is going pretty hard. He says, here's an OpenAI document submitted one week ago where they advocate for including data center spend within the American manufacturing umbrella. They specifically advocate for federal loan guarantees, and Semper Satoshi says Sam lied to everyone. Let's read the specifics. Yes, the specifics are AI server production and AI data centers. Broadening coverage of the AMIC, which is the Advanced Manufacturing Investment Credit, will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the United States. Counter the PRC by de-risking US manufacturing expansion to provide manufacturers with the certainty and capital they need to scale production quickly. The federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience. Okay, so loan guarantees.
So what's happening here is that OpenAI, a week ago, and everyone can go and read this letter, which is publicly available on their website, was making the recommendation that the government should effectively treat data centers, or AI, or token factories, by putting them in the manufacturing bucket, which would qualify them for incentives similar to traditional manufacturing, defense tech, et cetera. And I don't have a problem with asking the government for a handout. I think that's actually best practice. It's actually your responsibility to shareholders; you have a fiduciary duty to ask the government for as much help as possible. I think everyone should go to the government right now: hey, if I'm paying 50% tax, how about we take that down to 20%? How about we take it down to 0%? You have every incentive to ask your person in Congress and your senator, the people in Washington, to do everything they can to support your mission. This has worked out in the past with Elon and Tesla. It didn't work out in the case of Solyndra. But the game on the field is: if there are taxpayer dollars moving around the board, you want to get those into the industries that are aligned with you. And so the thing that people are taking issue with is that in the opening of his message yesterday, he said, we do not have or want government guarantees for OpenAI data centers. Yes. And that seems to conflict with the letter they wrote a week ago, which is still up on their website. Yes. So what is it? Loan guarantees to expand industrial base capacity. Noah in the chat is taking the initiative, just sent the admin a SAFE to sign, an uncapped note. You literally can do this now. There's a sovereign wealth fund. It sounds crazy, but they're ripping checks. This is the new economy, and you can be upset about it, but you also have to understand what the game on the field is. You can always advocate that we should change the game.
We shouldn't be doing this. I would prefer more of a free-market economy. But in a world where we're not in a free-market economy, you want your company to win. Right. That's just rational. That's just actually playing the game on the field. Now, it is weird optics to talk about the game on the field. That's something most people don't like doing. And that's because when you say, oh yeah, this is a one-hand-washes-the-other situation, or oh yeah, this is a situation where a backstop will allow us to be more aggressive, that feels like the banker saying, oh yeah, I knew the government was going to bail us out in '08, so I was intentionally underwriting loans where it was somebody's fifth house and I knew they couldn't pay it. I wasn't asking about their job, I wasn't asking about their income, I wasn't asking about their assets. And so I pushed it way further, and I made a lot of money, and I got out at the top. That's what's really upsetting to Americans, because the bailout comes in, the backstop comes in. Yes, it makes sense to rationally do the backstop in that moment, but if some people get out early and then other people get cooked, those are really bad optics. Really, really bad optics. Which is why I kind of expect a lot more of the narrative to shift toward subsidizing and incentivizing bringing on new energy. We were talking about that yesterday. That directly benefits the labs and anybody building a data center, but it also feels very much in America's interest, broadly. Right. And it theoretically would benefit the average American too. We were listening to Ben Thompson this morning. It wasn't on Restream. One livestream, 30-plus destinations, multistream, reach your audience wherever they are. We were listening to the Stratechery RSS feed. Hopefully Ben goes live at some point on Restream. That'd be awesome. By the way, I think the chat has discovered that, yes, this is a turbopip.
This is one of two. I sent the other one to Simon. Oh, that's amazing. But you can see it looks great, especially with the green background and the color grade. The production team is on point today. Let's hear it from the production team. So yes, I agree with you on the energy front. I think Ben Thompson, his point today. And fabs. Yeah, and fabs too. Yeah. His point today was that OpenAI needs to be crystal clear about the position they're in, which is that they are the hottest company in the world. There is unlimited demand for their shares. They could be a public company. They could go raise more private capital. They need to be on the opposite end of the risk curve from the stuff no one really wants to invest in, like an American fab that might lose money for a decade. There is truly much less appetite for, yeah, let's go build a nuclear power plant that might take a decade. And who knows. When you think about the things that make money in the short term, it's SaaS. Right. AI for SaaS. Just go transform the legacy business with some AI sprinkled on top and start printing money. This is what works. People's concern with government-backed data center lending is that you're lending against chips, which have a really fast depreciation schedule. We don't know. Right. There was some pushback. I made the comment, hey, maybe these things don't depreciate as fast. There was some pushback in the comments. I think everyone kind of agrees that these things depreciate quickly. And so energy infrastructure is the place. Yeah. And if you look at right now, CoreWeave's credit default swaps are now sitting around 500 basis points. They jumped up dramatically, and this is one of the leading neoclouds. 500 basis points, 5%. Yeah, they jumped to 5%. They jumped from, I think, like 2 to 5. Try to find it somewhere in the stack. Yeah, yeah, yeah.
But sorry, I don't mean to ask for particular stats, but clearly people are worried about this. Yeah. So, again, this is for a leading neocloud, right? SemiAnalysis ClusterMAX, the updated version, came out yesterday. They're one of the only neoclouds in the Platinum tier. And people are worried about them, right? Yep. Potentially, you know, having some bankruptcy risk. And so if you start doing basically government-guaranteed data center lending, you could get into a situation where a bunch of new data centers come online that really don't have a clear pathway to ROI. Yeah. And it just incentivizes the entire stack to get really exuberant. Right. And again, going back to Sarah Friar's interview on Wednesday: she felt the market was not exuberant enough, and I think a lot of people disagree with that. Right. There's been a lot of insanity this year. Silliness. Maybe we don't need more. But we will see. The other news that was interesting out of today: the director of the Federal Housing Finance Agency, William Pulte, says Fannie and Freddie are eyeing stakes in tech firms. Bill Pulte, the director of the Federal Housing Finance Agency, said that Fannie Mae and Freddie Mac are looking at ways to take equity stakes in technology companies. We have some of the biggest technology and public companies offering equity to Fannie and Freddie in exchange for Fannie and Freddie partnering with them in our business, Pulte said Friday during an interview at a housing conference. We're looking at taking stakes in companies that are willing to give it to us because of how much power Fannie and Freddie have over the whole ecosystem. So yeah, this Wall Street Journal event, so many articles came out of it. The Wall Street Journal did a fantastic job bringing a ton of people together. This is where that Sarah Friar quote came from.
It's also where the CoreWeave CEO was on stage. You were mentioning CoreWeave earlier. The headline in the Wall Street Journal is CoreWeave CEO Plays Down Concerns About AI Spending Bubble. And the quote is from Michael Intrator: if you're building something that accelerates the economy and has fundamental value to the world, the world will find ways to finance an enormous amount of business. And he went on and said, if the economy doubles in size, it's not a lot of money to build all those data centers. And so there are a lot of folks addressing bubble concerns right now. He says, it's very hard for me to worry about a bubble as one of the narratives when you have buyers of infrastructure that are changing the economics of their company. They are building the future. And so if the products are effective in growing the economy, then all of the investment is worth it. There's a fascinating Mansion story we should get to that's actually related to this. Good. Before we do, let me tell you about Privy, wallet infrastructure for every bank. Privy makes it easy to build on crypto rails: securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure, all through one simple API. Did you get a whole new soundboard? What's going on with the soundboard? I did. Thank you. Wait, really? They swapped them all out? Michael, do you still have the horse? I still got all the classics. Working with some new material. Working with some new material. I like the sheesh. Can we get the Vine boom on there? Is that on there? I don't know. Boom. The thud. That's a big one. Anyway. Casa Encantada is a 1930s estate in Los Angeles. It's one of the most important homes of the 20th century. In 2023, it was also briefly the most expensive home for sale in the United States, with an ambitious asking price of $250 million. This is in Bel Air.
So it's a Bel Air property. It was sold in a foreclosure auction after the death of its longtime owner, financier Gary Winnick. It's an 8.5-acre property. He bought it in 2000 for $94 million. He bought it in the year 2000? No way. Do you know who this guy is? Gary Winnick. He had something to do with the last bub. He did have something to do with the last bub. It's one of the craziest stories. So it was sold to satisfy the debt, which has now grown, blah, blah, blah. Basically, it's a 40,000-square-foot house built in the 1930s that counts hotelier Conrad Hilton and Dole Food billionaire David Murdock among its former owners. It's located right next to the Bel-Air Country Club golf course. It has seven bedrooms, a swimming pool, a tennis court. It's awesome. Anyway, Gary is perhaps best known as the founder of Global Crossing, which built a fiber-optic cable network across the world. The company made him a billionaire, but it imploded. Did they have a little bit of dark fiber? A little bit of dark fiber. It imploded in the early 2000s under the weight of massive debt. Casa Encantada, Gary's primary home, went on the market in June of 2023, five months before his death. Now it's asking $190 million, and they kind of move on from there. But Global Crossing, he was somewhat of a global businessman. He was. The headquarters were in Bermuda. You know this. I didn't know that. I wonder why. I wonder why. I mean, it might literally be because it insulates the assets in America. That is one thing you can do if you don't want to have to give up your lovely home post-bankruptcy. You want to be able to get liquidity before the bubble pops. That's the lesson here. Sell the shares before the top. Buy low, sell high. That's the phrase we live by here. Was there anything else, Tyler, that came out of your deep research report on Gary Winnick? Did you get a chance?
I mean, so let's see. He also started Pacific Capital Group in 1985. That was kind of the precursor. So he was set up to actually marshal all the debt to do the big one. He had been a global businessman for a while before this. Okay, nice. Let's give it up for global businessmen. Let's give it up. Yeah. What a wild, wild story. We're hyper-local businessmen. We like to do our business right here in the Ultradome. But we do got to give it up for them. Also in the Mansion section, which we should continue on. But first, let me tell you about Cognition, the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. You have a new neighbor. Jordi, you have a new neighbor. So Tom Petty apparently lived in your neighborhood in Malibu, California. The late Free Fallin' rocker had a personal music studio. Some deep lore, please. I saw Tom Petty. I believe he headlined the first or second ever Outside Lands. He was living in Malibu in an $11.2 million house: 8,744 square feet, seven bedrooms, plus a music studio. The buyer is Steven Slade Tien, a psychoanalyst and author, according to people with knowledge of the deal. Tien didn't respond to a request for comment. I wonder if he'd want to, you know, get a beer sometime, hang out, maybe go surfing, go for a walk on the beach. We should reach out to him about coming on. Okay. So I was at the first ever Outside Lands. Really? How old are you? I thought you were. It was basically my first ever major concert experience. I didn't realize Tom Petty headlined on Saturday. This was in August of 2008, and that was my first time smelling cannabis. I was there with my friend and his parents, and I kept asking his parents, what is that stinky, stinky smell? We can't get away from it. Very, very memorable. That's amazing.
Do you know where this is? 2.6 acres above Escondido Beach. Is that close to you? The Escondido Beach market area. It's a few minutes away, but Escondido Beach is the most underrated beach. It was Petty's home for decades before his death in 2017. The Heartbreakers frontman purchased the property for about $3.75 million in 1998. Petty turned his guest house into his personal music studio, with soundproof rooms for recording music, said Levi Freeman. There's also a one-bedroom guest suite. Seems like a very nice star. He was from Florida. He shunned the spotlight offstage. He's a member of the Rock and Roll Hall of Fame, best known for songs like American Girl and I Won't Back Down. What a lovely little house. Well, that's always fun. Anyway, we should move on to our top story. Do you want to go through the Elon pay package thing a little bit? I had two questions for you on that, and then we can go into our Mag 7 review. Does that sound good? Let's do it. Okay. First, let me tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together. I really enjoy this graphic package we got. This is great. So Elon's trillion-dollar pay package is done. It's signed, it's approved. I'm sure it will be contested in the courts. It's always contested in the courts. But the Wall Street Journal has a very nice little breakdown of how it works. They have a nice little infographic here I can share, and it kind of shows this. This is what technology podcasting is. Pull that up. Holding. Pull that up. Yeah. Pull that newspaper up. Yeah, yeah. So basically, Elon could get $1 trillion in Tesla stock if he hits all these different tranches. And it's actually not that many shares. He's worth half a trillion now, and he also owns 414 million Tesla shares outright, got another award in 2018 of 300 million shares, and this next award is 424 million shares across 12 tranches.
So it's not like they're giving him twice as much as he already has; they're basically giving him the same amount again. And there's a bunch of things he has to do. He has to get the market cap really, really high. And then there are also these operational goals. I guess they're quantitative: $50 billion in EBITDA, 20 million cars delivered, 1 million robots sold, 1 million robotaxis in operation, 10 million Full Self-Driving subscriptions. Now, some of those are obviously more gameable than others. What's the definition of a robot? If he comes out with a really cheap robot and sells a bunch of those because it's more of a toy, does that really fulfill the goal? Or, for a million robotaxis in operation, what's the definition of a robotaxi that qualifies? If it's a Tesla that is enabled, and I turn on FSD and my friend rides in it once? Is there anything for actual rides? There's the 10 million Full Self-Driving subscriptions. Yeah. And so some of these are more gameable than others. But how many Full Self-Driving subscriptions are there today? I looked that up. It's somewhere between, like, 1 and 3 million right now. So he definitely has to at least triple the size. Robotaxis obviously go from, like, 0 to 1 million, because there are barely any on the road. He hasn't sold any robots, so a million would be entirely new robots. He's obviously delivering a lot of cars. And on the EBITDA front, $50 billion in EBITDA, the company did, like, $13 billion last year. So that's a huge increase in EBITDA. I mean, $50 billion in EBITDA is a lot of money. But it's not 20x where he is right now, and neither is the market cap. He only has to take the market cap to $8.5 trillion, and Tesla's already worth a trillion. So it's within, you know, striking distance.
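To put "striking distance" in perspective, here is a quick sketch of the growth multiples the tranche targets imply. These are the approximate round numbers quoted on the show (a ~$1.5 trillion market cap, ~$13 billion of trailing EBITDA, and the upper end of the 1-3 million FSD-subscription guess), not official filings:

```python
# Rough multiples implied by the pay-package targets discussed above.
# All current figures are the show's approximations, not audited numbers.

current = {
    "market_cap_usd_trillions": 1.5,
    "ebitda_usd_billions": 13,
    "fsd_subscriptions_millions": 3,
}
targets = {
    "market_cap_usd_trillions": 8.5,
    "ebitda_usd_billions": 50,
    "fsd_subscriptions_millions": 10,
}

for key in current:
    multiple = targets[key] / current[key]
    print(f"{key}: {multiple:.1f}x required")
```

Run on these assumptions, the market cap target is a ~5.7x, EBITDA a ~3.8x, and FSD subscriptions at least a ~3.3x, which matches the on-air point that none of the tranches demands a 20x from here.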
The market cap is now around $1.5 trillion, actually. So my two questions were: one, it's going to be weird to live in the world of the trillionaire. But we are getting close. That's going to happen not just within our lifetime, but probably within the next decade. This sets him up to be the first one, but it's going to happen. And I wonder how that's going to reshape our culture in America. Because I had this realization that when billionaires became so prevalent and prominent, there was a lot of heat taken off the millionaire. If you're just, like, a guy. Yeah, I have a house. Billionaires are the heat shield. Yeah, exactly, exactly. Like, yeah, I'm a millionaire. I have a boat. I go to Bass Pro Shops. But I'm not getting protested because I have a million dollars in my house and boat. And isn't it, in practice, about one out of ten Americans is a millionaire? Yeah, yeah. So the millionaire became more accessible, and the billionaire became the thing society scapegoats for all the problems. Approximately 9.4 to 9.5% of American adults are millionaires. Yeah. But my question was, what happens to the billionaire when trillionaires come in? Because, you know, Bernie Sanders and a whole crew say billionaires shouldn't exist, every billionaire is a policy failure. What happens when you have to say, well, trillionaires are the real policy failure, but billionaires are also a policy failure, and millionaires we're kind of okay with, but it's not great? It becomes much more complicated. But at the same time, if Elon is the only trillionaire, it's going to be really, really easy to target him and be like, he's bad, he's a trillionaire. Can he get any more targeted? I don't know. Maybe he maxed it out already. But I thought that was interesting.
And then the flip side was, what does this mean for other companies? What does it mean for Sam Altman at OpenAI? Can he run a similar playbook? It's clear that he had a ton of soft power during the OpenAI coup. Could he go to the OpenAI for-profit board and say, hey, if OpenAI IPOs at $2 trillion, I want 20%; if OpenAI moons to $10 trillion, I want 50%? How extreme can Sam get? We know that Sam runs a bit of an Elon playbook. They were in business together; they co-founded OpenAI together. So clearly they learn from each other. I wonder what Sam Altman can do similarly. And then I also wonder what will happen at the garden-variety unicorn. If you're just the CEO of a $5 billion company, and you're just kind of hanging out there, and you say, yeah, I had 30% of the company when I started, I've been diluted down to 5 or 10, but I like this company and I want to get it up. What if I go to the board and say, okay, we're at $5 billion now; if I get us to $50 billion, will you double my equity position? And how would shareholders treat that? How would Sequoia treat that? Or Founders Fund, or Kleiner, or a16z? How would the growth-stage venture capital firms feel about that? So, I don't know, any reaction to that stuff? Yeah, I mean, I think there's a sentiment that any venture-backed founder is going to be hyper-conscious of dilution. Right. There's a sense that it's one-way. Yes. Right. It just goes down and down and down and down. And I think the right way for founders to think about that is: okay, no one's actually taking your shares unless you decide to sell them. Yes. Your job is just to make the share price go up, and there are going to be more shares issued over time. But if you just make the share price go up forever, it doesn't really matter. And then there's also buybacks. You can also, you know, do a buyback.
Like Drew Houston. Yeah, Drew Houston is probably the best example. And then you can create new shares if you want to get your percentage ownership way down. Yeah, like if you want to go to zero, more like if you want to bail. I mean, we need to do an analysis of the most diluted CEOs in the public markets. Because if you look at some of the IPOs from the last 10 years, some of the guys still hanging around running these companies have sold down so much of their stake. And this is what makes Larry so admirable. Yeah. Because it's just buyback, buyback, buyback, and he's the second richest person. $300 billion. It's remarkable. By the way, I think Oracle has, like, fully round-tripped now. It happens. Well, what about the. So, yeah, I mean, the question is, on what timescale do you think this happens? Jordi, do you think we'd actually hear the story of somebody, a CEO founder, maybe past their vesting cliff? Maybe one of their co-founders has left? Because I think about that a lot, where it's like, okay, there were two or three people, they were basically equal, they did their full four-year vest, but then there's clearly one that's still there grinding for the next decade. They kind of do deserve more. It's not that crazy. And yes, there is the just-do-a-stock-buyback, don't-sell route. But is there a world where someone like Drew Houston goes to the board and just says, I think I can 5x this, I want the pay package to do it, and I'm going to be in the office nonstop for that time? I think that happens all the time. Not to this degree. No way. Okay. Not to that degree. Yeah. Not even the headline number. Yeah, even the headline number. Just in the sense that we're going to dilute everyone, like, 5 or 10% if this happens. A trillion on 8.5 is serious dilution for the rest of the shareholders.
But if I'm holding at $1.5 trillion and you're going to take me to $8.5 trillion, I'm totally in for 10% dilution. You're going to 5x my shares. I'm in. But what's interesting is that we just haven't seen other CEOs pull that from the Elon playbook. And this is why, was it Kimbal or someone else saying, almost no other CEO would take a deal like this, because it's so ambitious. Yes. And so I think it's healthy. Yeah, no, I think people are offended by the headline number. Totally.
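The "10% dilution for a 5x" claim checks out on the back of the envelope. A quick sketch using only the round numbers from the conversation (a ~$1 trillion award, a $1.5 trillion starting cap, and an $8.5 trillion target that includes the newly issued award shares):

```python
# Hypothetical round numbers from the conversation, not Tesla's actual cap table.
start_mcap = 1.5   # $T, market cap today
target_mcap = 8.5  # $T, tranche target (includes the new award shares)
award_value = 1.0  # $T, value of the full award at the target

# Slice of the company existing holders give up once the award is issued
dilution = award_value / target_mcap     # ~11.8%
retained = 1 - dilution                  # ~88.2%

# Value multiple for a shareholder who holds through the whole ride
multiple = (target_mcap / start_mcap) * retained  # ~5.0x

print(f"dilution: {dilution:.1%}, holder value multiple: {multiple:.1f}x")
```

So under these assumptions, existing holders give up roughly 12% of the company, but the remaining stake is worth about five times more, which is exactly the trade being described on air.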