LIVE CLIPS
Episode 7-15-2025
A $5 million OpenAI bill? People would push that button for sure, and it would be venture backed and everything. But what if it's $5 billion? Or what if it's $5 trillion? There is a point where no one will push that button. Ideas guys would be so thrilled if it cost $5 billion to come up with a good idea. I think it actually does right now; I think it might cost more. I think the current paradigm is that even if you ran this algorithm and ran this strategy, the current models are, what's the word, not great at compressing, and they're not efficient. So you might be able to brute force it. You're brute forcing research. But when you math it out, the level of brute forcing might be in the billions of dollars to get an insight. I don't know. It maybe doesn't make sense if you're a random person paying $5 million for an idea, but it maybe makes sense if you're a big lab and you're hitting a data wall and need new data. Then it kind of makes sense. Yeah, it makes sense. And he also brings up this thing: if you distill the model, even if you open source it, you're not really giving away the new ideas. Say you find some weird concept and it's baked into the model, but the only way to access it is to specifically query for it. So you're not actually giving away these new ideas even if you give out the model. That's another interesting point. You actually have to run enough inference to pull the insight out of the model. It's like there's $100 million sitting in your laptop and your job is to pull it out. There's a billion dollars in the weights of GPT-5 or GPT-4.5, but getting the insight out of it might cost another billion dollars.
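The "billions of dollars to get an insight" math can be sketched with made-up numbers. Nothing below comes from the episode except the $5 million per-run figure; the hit rate is a pure assumption. Treat each "push the button" run as an independent trial that yields a real insight with probability p, so the number of runs until the first hit is geometric and the expected run count is 1/p:

```python
# Back-of-envelope sketch with assumed numbers: each automated research
# run is a Bernoulli trial that surfaces a genuinely new idea with
# probability p_insight. Runs-until-first-hit is geometrically
# distributed, so E[runs] = 1 / p_insight.
cost_per_run_usd = 5_000_000   # one "push the button" run (from the episode)
p_insight = 1 / 1000           # assumed hit rate: 1 in 1,000 runs pays off

expected_runs = 1 / p_insight
expected_cost = expected_runs * cost_per_run_usd

print(f"Expected runs per insight: {expected_runs:,.0f}")   # 1,000
print(f"Expected cost per insight: ${expected_cost:,.0f}")  # $5,000,000,000
```

At a 1-in-1,000 hit rate, the $5 million button really does become a $5 billion insight, which is roughly the range the conversation lands on.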
Yeah, it's basically like the monkeys typing on the keyboard until they get Shakespeare. Yep, it's the same thing but for research. Do we have a monkey sound effect yet? No, we've got to get one of those. Get on it. What's your reaction to Chris Best, founder of Substack, friend of the show? He said, quote: contemporary chatbot-style LLMs have now been used seriously by tens of millions of people since ChatGPT's launch in November 2022, and it does seem like there ought to be at least some examples of novelty at this point. And Chris says, potential answer: there have been, but the human users hogged the credit. So maybe these chatbots are spitting out novel ideas, but people are just saying, I'm not telling anyone I got my idea from ChatGPT, because that's a bad signal. Yeah, I think that could be plausible. I think it's also just the way LLMs work: you're querying for a specific thing. It's more like you have a hunch that something could be an idea, and then the LLM really gives it to you. But you never really see the LLM just say, here's a cool idea. Right, which is kind of what this is. And the other thing is, a lot of business ideas, good business ideas, end up being simple, and they don't always feel like, oh, look at this novel insight. It's just obvious. Other people will see the idea, and if they're the founder type, or maybe they're looking for their next thing, they'll legitimately be annoyed because they think, oh, I could have thought of that. Yeah. And so: Fitbit for cows. I could have thought of that. And for dogs. I love that business. No, so that's the other thing: at least in our world, in the private markets, a lot of ideas end up being worth a lot.
It's not necessarily the idea that's worth a lot; it's having the idea at the right time and then executing against that idea for a very long time. And I think that kind of gets to my silly Coogan eval, which is: tell me a joke that requires novel insight. Because if you just tell me a joke that's already out there, that I've heard, it's not funny. So it requires novelty, it requires insight. It's actually really hard to come up with a new joke that's never been told before, because it requires putting multiple things together and realizing something. So even though I'm not going to ChatGPT regularly and asking it to make a scientific discovery or give me a genius business idea, I think "tell me a joke" is in the same vein: a hard problem for LLMs. And I think that if they get to the point where you can ask for a genuinely new joke with insight, it should be able to do the other things. I think that points to some sort of fundamental constraint, and it's what we're talking about when we talk about spiky intelligence. It's very compute intensive, right? You're talking about running all the data centers all the time to kind of think about this. And it feels like it puts me back on Kurzweil timelines; I think he says singularity in 2045.
So this is an interesting idea: what if it takes a thousand times as much compute as we have now? Yeah, I mean, you're basically brute forcing research, which kind of doesn't make sense in a way. But if you have models that are small enough, you can kind of distill them down. I don't know. Think about a scenario where a founder exits their company, does their earn-out, and wants to start something new. Instead of spending two years lost in the woods trying to come up with a decent idea, they just spend $5 million in five minutes and generate a bunch of different ideas. You still then have to pick an idea, and it might seem good on the surface, and then you talk to people in the industry and they say, well, here's kind of the issue with how you're thinking. And then maybe you've got to do another run. But I think the interesting question is: $5 million feels like the point at which we would say, if someone built this system and actually said, hey, we've designed the algorithms such that you can tell it to go and it will come back with a good idea, but every time you hit go, you get a $5 million OpenAI bill.
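The monkeys-on-keyboards analogy from earlier in the clip can be made concrete with a toy random search. This function is purely illustrative, not anything from the episode: it types random strings until one matches a target, and the expected number of tries grows exponentially with target length, which is a cartoon version of why brute-forcing research gets so expensive.

```python
import random
import string

def monkey_search(target: str, alphabet: str = string.ascii_lowercase,
                  seed: int = 0) -> int:
    """Type random strings until one equals `target`; return the try count."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    tries = 0
    while True:
        tries += 1
        guess = "".join(rng.choice(alphabet) for _ in target)
        if guess == target:
            return tries

# Expected tries scale as len(alphabet) ** len(target): 26**2 = 676 for a
# two-letter target, 26**5 = 11,881,376 for five letters, and Shakespeare
# is effectively unreachable. Swap "tries" for "dollars per run" and you
# get the brute-forced-research bill discussed above.
print(monkey_search("ai"))
```

With a two-letter target this finishes in well under a second; each added character multiplies the expected work by 26, which is the whole point of the analogy.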