LIVE CLIPS
Episode 7-7-2025
Was calling o3 AGI. "AGI is here." He wasn't able to get his video on at the time. So we... And it was this funny contrast that reminded me of you talking about how you're trying to build with a lot of these tools, and in the process of building with them, you realize, like, okay, this is amazing, but it's actually just going to take a little bit longer than maybe we would all like.

That's right. Yeah. But by the way, I think there's something really interesting. Tyler and I disagree on two things, and they're both related in a way. So, Tyler. You know, when o3 came out, Tyler wrote this blog post on Marginal Revolution where he said, AGI is here, guys. It's really AGI. But then he also believes that, look, the impact of AI is not going to be that big; once you do get your AGI, it's going to result in 0.5% more economic growth a year. The kind of impact we saw from the Internet. Right? And so I think these two are actually quite related beliefs, where I'm like, these LLMs, they're not that useful. This is not AGI. The AGI will come later. And I'm like, when the AGI hits, we're going to see, like, 20% economic growth as a minimum. But because he's like, this is AGI... if I thought this was AGI, I'd also be like, this is not it. You know, this is not going to lead to big growth outcomes.

Yeah, yeah. How are you thinking about, like, just definitions of AGI? And I'd love to actually get a little bit of the history before this piece, your journey. Because for me, you know, I grew up watching sci-fi. It was like, yes, C-3PO will be around eventually, but it's very abstract, and I don't have timelines for that. And then eventually you start reading... you know, what's your P(3PO)? Yeah, yeah. You eventually start seeing GPT-3, GPT-3.5, DaVinci, ChatGPT, and it starts feeling like, okay, we passed the Turing test. We need to really have this conversation about AI. And then P(doom) and AGI become, like, the main discourse for, like, a few years. Right. But it felt like this piece, even though you and Dylan were going back and forth being like, no, this is still incredibly bullish for the general population, it felt like this was you pushing out timelines a little bit. So walk me through: where did you start? When was the nadir of your timelines? Like, when was your timeline like, it's happening next week, next year? And then walk me through how we got here.

Yeah, so I've got this podcast where I interview people about AI, and I've had on people who have quite aggressive timelines over the last few months. There have been many people who have written pieces about how we're a couple of years out, right? Leopold Aschenbrenner; AI 2027. Recently, Scott Alexander and Daniel Kokotajlo had the AI 2027 scenario forecast, where we've got the bots that can just take over within the next few years. So that's where my head was at as of a couple months ago. And then I recently interviewed these two researchers, Sholto Douglas and Trenton Bricken from Anthropic (I think you actually had one of them on your podcast) about the path forward for RL. Pre-training seems to have been giving us plateauing returns: we make these models bigger, and GPT-4.5 didn't seem to be all that impressive; they had to deprecate it. But o3 actually is very impressive, and that was more the result of this RL process.
So maybe now, actually, even though pre-training doesn't seem to be as powerful as we might have anticipated, this RL is even more powerful, and so we should accelerate our timelines. And so that's where my head was at as of a couple of months ago. But then, in having that conversation and thinking through, okay, what specific capabilities, in terms of actual applications that I have as a small business owner or as a podcast producer, will AI be able to do? And why is it not able to do these things right now, and what is the key bottleneck? I realized there's actually no obvious way you can get LLMs to solve these problems for you; there's no easy prompt-injection kind of thing that would help solve these problems. And the key problem I see is that the models can't do on-the-job training. So think about a human employee. The good thing about them is that you train them for six months or a year, and over time they're getting better and better. They're learning all the context and intricacies of your workflow, what you like. They'll fail, but they'll learn from their failures. They'll interrogate them in this very organic, deliberative way. They'll pick up small efficiencies and improvements as they practice a task. This just doesn't happen with an LLM. Every session, you're getting this amnesiac mind that's very smart but has lost all awareness of how you like things done, how your business works.
2032. And you think that's a bearish prediction. He just thinks the AGI 2027 stuff is wrong. The market isn't pricing in either of these scenarios. And I completely agree. Dwarkesh chimes in and agrees and says the transformative impact I expect from AI over the next decade or two is very far from priced in. And he shares a screenshot. He says, while this makes me bearish on transformative AI in the next few years, it makes me especially bullish on AI over the next decades. When we do solve continual learning, we'll see a huge discontinuity in the value of the models. And we will get a lot more into this when he joins the show in 20 minutes.

Yeah. So his basic thesis is, when you work with someone, you know they have a set IQ, but they're also capable of continual learning. You teach them, and they learn and they adapt, and they can remember skills, some hard-won lesson from years ago. Yeah. I remember talking to somebody about this. I don't even know if he's, like, a philosopher; he's a user experience designer. And he said that there's multiple ways to learn. You can develop habits through just doing something the same way, really forcing yourself, for a long time. You wake up at 5:30 every morning for years; eventually you just wake up at 5:30. But he was like, you can also form a habit by one really, really intense experience.

A true lesson. True, true, true. Something that you've fully integrated and operate against. Yeah.

But he gave a great example, which was that he has a river out back of his house, and he goes into the river, and one day he put his foot in his river shoes, like these slip-ons, and there was a lizard in the slipper. And it freaked him out. It gave him this intense response. He was fine, but ever since then he's been in the habit of always checking the shoe. There's a snake in my boot. There's a snake in my boot. Exactly. And so it's not like that was something that had to be trained for years; it's this one really sharp, intense learning that then carries forward forever in his life. And so people and employees... He could have learned that in school. You gotta check your boots. For scorpions, snakes. But anyway, it begs the question: that is an important thing that employees and white-collar workers have, this ability to learn hard-won lessons and then carry them forward forever. And we don't even really know how to design against that necessarily. So Dwarkesh is saying that there's a lot of work to be done at the research level to figure out continual learning, and that could take a while. He says seven years from now is 2032, and he kind of goes back seven years in time: that was GPT-1, which was a slop factory. It was not a good model, but it was an important breakthrough. And so maybe in the next seven years something will happen. So, very exciting.

In other news, Rainmaker stands accused of having a role in the Texas floods. This is a very, very sad story. It's on the cover of the Wall Street Journal (not the Rainmaker part; that has been contained on X). But I'll give you a little update on what's going on in Texas.
So: "Texas rescue grows urgent as toll mounts. At least 70 were killed in the weekend as more bad weather complicates the search." The search for those swept away by punishing flash floods in central Texas over the holiday took on new urgency Sunday as the death toll climbed to 70 and nearly a dozen girls from a private summer camp remained missing. Rescuers combing the swollen banks of the Guadalupe River were holding out hope that survivors might still be found. The potential for more bad weather Sunday also loomed over ground and air operations. The National Weather Service warned of more rainfall and slow-moving thunderstorms that could create flash floods in the already saturated areas of the Texas Hill Country.

So this blew up. Tragic. And people were asking Augustus: was Rainmaker operating in the area around that time? Cloud-seeding startup Rainmaker is under fire after the deadly July 4th floods in Texas. CEO Augustus Doricko, who's been on the show multiple times, will join us today at noon to break it down. He's already explained his side of the story on X several times, but we will ask him a lot more questions. He says the natural disaster in the Texas Hill Country is a tragedy. My prayers are with Texas. Rainmaker did not operate in the affected areas on the 3rd or 4th or contribute to the floods that occurred over the region. Rainmaker will always be fully transparent. And he gives a timeline of the events. He says: overnight from the 3rd to the 4th, moisture surged into Hill Country from the Pacific as remnants of Tropical Storm Barry moved across the region. At 1 a.m. on July 4, the National Weather Service, which we work closely with to maintain awareness of severe weather systems, issued a flash flood warning for San Angelo, Texas. Note.