LIVE CLIPS
Episode 11-25-2025
500 luxury watches fully authenticated in house by Bezel's team of experts. Okay, we got to talk about the Cremieux and Kian exchange. Yes. Yes. I thought that Cremieux was pretty fair. Sure. I think he could have gone harder. Yeah. Kian said he didn't watch Cremieux's interview, that he was with a patient. I wish that he had. Yep. Because it seems like one of the most important things on his plate right now is getting beyond this. And ultimately we tried to bring up as many of the key concerns as possible. To be honest, it's technical enough that we don't have the same power level in terms of pushing Kian on some of these issues. Yeah, totally. Cremieux and Sichuan Mala do, which is why I think a lot of the debates will be done around the data, in blog posts by scientists. Yeah. And Kian's main point that I took away was. Sorry, laughing at the chat. But it was like, people are angry that we have the best marketing and the best science. And I believe that they have the most effective marketing in terms of capturing attention right now, but I don't have the confidence after that conversation that the marketing should be happening at the scale that it is. I didn't come away from that super confident that Kian felt like they've done anything at all that's wrong. Even if you just narrow it to what there is full agreement amongst all participants on, which is that there were reviews that were anonymized and then not disclosed to be anonymized. Even if it's just that, that is enough to, if you're a customer, be like, oh, I don't trust this anymore. And it's a very, very high trust environment. It's very trust critical. It's not, okay, yeah, I'm buying a phone case and they AI generated the example of a person holding up the phone case. That's not what this is.
This is life. It's bio. It's really important. People are selecting, you know, impacting a future child's life. Yeah. We've talked about it. And I think that whatever happens, it's really important to get it right. It's really important to get it right. And ultimately, if people that use the service have an adverse experience, if they feel like the service didn't deliver or something went wrong, they're going to come back to this. Are there any posts that we need to. Sichuan Mala followed up and said the fundamental problem with Kian's deflection is that even if.
Exquisite scientific physical infrastructure with the models themselves. So if you want to drive scientific discovery, you have to be able to pair what's coming off of telescopes, what's coming off of lasers and all this stuff, to be able to match them with large language models, to accelerate that loop. And we're still in the very early innings of it. And in order for us to outpace and continue to keep our lead, like we do have in some of these other places, we have to do something like the Genesis Mission. We have to wake up a country and say, look, where do we have the most valuable scientific data sets? And how do we make those data sets available to our model builders, to be able to create the necessary tools to pair the data coming off these exquisite scientific instruments back into these AI models? And think about it. For us, we want to win on fusion, for example. Yeah. So for fusion, Google's already doing this, but there's a ton of companies around the United States that are very heavily funded that all have lots and lots of experimentation they want to do with these fusion reactors. The ability for us to accelerate the modeling of that through the Genesis Mission, rather than each individual company doing this on its own, can be really, really dramatic. So back from a pacing standpoint, I think the US has all of these amazing instruments. It has.
Our next guest is director Michael Kratsios. He is the 13th director of the White House Office of Science and Technology Policy. And it is great to have him here with us on the show today. One of the best call in setups we've seen. You look fantastic. Thank you so much for taking the time to talk to us. How are you doing? I am great. Thank you guys so much for having me. I have followed your meteoric rise over the last year. I feel like I should have been on the show much earlier. Yeah, we would have loved to have you, but we're very happy to have you today because there's massive news. But I'd love for you to introduce it and actually set up the conversation. So please take us through the announcement, and then I'm sure we'll have a ton of conversation and questions to go through. Yeah. So I think maybe we can start with the AI Action Plan that the President signed in July of last year. And one of the main themes of the AI Action Plan, essentially to win the AI race, is all about how we can win in scientific discovery. And the question was, how do we do that? What's the next chapter of using AI to drive scientific innovation in our country? And yesterday, the President signed an executive order along.
I see multiple journalists on the horizon. Standby. You're watching TBPN. Today is Tuesday, November 25, 2025. We are live from the TBPN Ultradome. The temple of technology, the fortress of finance, the capital of capital. Yesterday was Anthropic Claude 4.5 day. We had a lot of fun talking to Sholto about that. You should go check it out. We wrote a little write up. I collaborated with Brandon and Tyler to kind of give our thoughts on the state of the AI race with regard to OpenAI and Anthropic and what makes Anthropic special. The thing that went viral was just the fact that apparently Dario goes around Slack and writes essays every single day, and everyone is like, give me the essays. Turn it into a book. Paging Stripe Press. We got to get Stripe Press to turn it into a book. Yeah, but I was also thinking of a potential risk factor for them: the Dario files. A disgruntled employee that leaks everything and leaks them all. Because even when he's on mic he's known to say some things. I feel like people take him out of context a lot. Like he will say, he's the final boss, if this doesn't go well, we could lose 50% of white collar work, or entry level white collar work, and people will be like, Anthropic's stated mission is to destroy jobs. Take your father's job. Yeah, yeah, it's rough. But Ramp. Time is money, save both. Easy to use corporate cards, bill payments, accounting and a whole lot more, all in one place. Timeline was in turmoil over the weekend, and yesterday we covered a little bit about the Nucleus dust up on the timeline. Cremieux will be coming on the show at 11:45, fast followed by Kian, the CEO of Nucleus, at noon. So we will have both sides. Then we have Joe Weisenthal joining from Bloomberg, and then we have.
Who else do we have today? Kratsios is coming on to break down Project Genesis, which we're very excited about. Anyway, let's run through what other news stories were at the top of the timeline while we pull those up. Let me tell you about Restream. One livestream, 30 plus destinations. If you want to multi stream, go to restream.com. Oh yes, the biggest news in tech and AI is that the Ilya Sutskever Dwarkesh Patel podcast has dropped. Do we have the opening clip? Because the opening clip is iconic. It's very funny. It's a bit of a hot mic moment for Ilya, and I think we should pull it up and play it because it has a fascinating insight. It feels very like, oh, this is the real Ilya. He's not even thinking that he's on camera and he gives his real feeling. So let's play this from the very start. That all of this is real. Yeah. Don't you think so? Meaning what? Like all this AI stuff and all this Bay Area. Yeah. That it's happened. Like, isn't it straight out of science fiction? Yeah. Another thing that's crazy is, like, how normal the slow takeoff feels. The idea that we'd be investing 1% of GDP in AI, it hasn't even started being a bigger deal, you know, where right now it just feels like we get used to things pretty fast. Turns out. Yeah. But also it's kind of abstract. Like, what does it mean? It means that you see it in the news, that such and such company announced such and such dollar amount. Right. That's all you see. It's not really felt in any other way so far. Yeah. Should we actually begin here? I think this is an interesting discussion. Sure. It's one of the greatest podcast intros from the advertiser's point of view. So good. So good. Anyway, we're not going to watch the whole thing. I think that's going to be a new meta. Yes, yes. I mean, you can't fake that. It's amazing. Also, it's just funny because, you know, it's effectively getting caught on a hot mic. But we.
Always joking. I was like, of all the things that you could say on the hot mic before you sit down. Oh, okay, we're actually recording. This is just completely reaffirming everything we know about Ilya Sutskever. It's just completely the same. Okay, he is a true believer. It's not like he was sitting down and being like, Dwarkesh, we gotta go on my private plane. I just sold so much secondary. It's crazy what's going on with this stuff. If people really think this AI thing's gonna pan out. I'm making billions of dollars. I'm cashing out. I don't believe any of this stuff is real. No, he wasn't caught on a hot mic like that. His hot mic moment is like, wow, it's exactly like science fiction. It's all real. It's all real, yeah. Which is just iconic. Well, you can go and listen to that in the Dwarkesh Patel RSS feed and on the YouTube channel. And on X, he put the full thing up. It's 95 minutes long. Tyler, did you have any other takeaways from your speedrun? You're listening to it at 5x, right? Well, on X you can only do up to 2x. I was on that. So I still have like 10 minutes left. But yeah, a bunch of good stuff here. Does he pop the scaling bubble? Does he give a bearish take? Is it over at any point? While you're thinking about that, let me tell you about Gemini 3 Pro, Google's most intelligent model yet. State of the art reasoning, next level vibe coding and deep multimodal understanding. Continue. So I wouldn't say he's anti scaling, but he does give this interesting take, which is basically that there are now too few ideas for the amount of AI companies and for the scale that we're at. You can think of AI progress as being in these kind of distinct ages, right? So he says 2012 to 2020 was the age of research, where you're trying all these different ideas and the scale of things is very small. Right?
Like, to train the original AlexNet was like two GPUs. To do the original Transformer was like eight, maybe 64, but, you know, a very small number of GPUs. And then once we kind of figured out that Transformers work, we entered this age of scaling, and that's basically from 2020 to 2025. And now we're basically at this point where, yes, you can keep scaling and models will get better, but even if you scale 100x, are we really going to get superintelligence? It'll get better on the benchmarks and the models will become more useful, but he doesn't think that just raw scaling alone is what's going to bring us there. I mean, this has been echoed by a lot of people, right? I think Karpathy said this, that we still need a couple different paradigms for this to work. And this is even kind of what Sholto said yesterday, which is that pre-training is not dead, but the reason that Opus 4.5 was better is not just because they scaled pre-training, it's scaling generally. Yeah, but even then, the scaling has gone from pre-training and now it's RL. Yeah. And so we basically need to find another paradigm, and the way you do that is just doing research. And so he talks about SSI as basically being this return to research. Yeah. It's small kind of training runs. Even though, you know, they only raised 3 billion, which is small compared to other. Sure. To other research institutions. The fact that they're basically putting it all on these kind of. I mean, I don't know if they're moonshots, but they're these small training runs where they're doing experiments, and then they're going to scale it up eventually. But they're not just trying to win the AI race by scaling up and doing the same thing as everyone else. Yeah, yeah.
They're trying to find a way to actually bend the scaling curve, find a new scaling law or find a new technology that they can scale against. I was thinking about Ilya's talk at NeurIPS last year. He pulls up this chart of the relationship between a mammal's body mass and its brain mass, and it's a pretty linear graph on a log scale. And so the elephant is a lot bigger than the mouse, and so it has a proportionally larger brain for its body. And it's this perfect linear curve. I should just try and figure it out if I can, maybe text it in. I took a picture of it because it's a very cool chart. Here it is. Where do I send this? The timeline. Let me see. Ship it. Share. Let me see. Timeline. Sorry. Boom. So basically the mammals have this very clear linear trend, but then the non human primates are a little bit higher up on the chart, and they're just doing a little bit better. But then hominids, the actual humans, are on a very distinctly different curve. And so there's this interesting thought: maybe that's what we're supposed to see when we think about this. It's like when we say straight lines on log graphs, when we say we are seeing scaling happen with the current architectures, which line are we scaling against? Are we actually scaling on the human curve, or are we waiting for divergence from the current scaling law? Yeah, he has this good quote that scaling has taken all the air out of the room. Right. Where basically we have more than enough compute to try these different ideas, but they're just all going straight into training the next big model using the current paradigm. And maybe it's slightly different, right, you have a different way of doing RL or whatever, but it is still fundamentally the same thing. Right. And he talks about maybe continual learning is really the better approach. Right.
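The straight-lines-on-log-graphs point can be sketched numerically: a power law shows up as a line in log-log space, and a point from a different regime sits visibly off that line. This is a toy illustration with invented numbers, not the actual data from Ilya's NeurIPS chart:

```python
import numpy as np

# Hypothetical mammal body masses (kg) and an on-trend "mammal line"
# brain_mass = a * body_mass^b, with made-up a = 0.01 and b = 0.75.
body = np.array([0.02, 0.3, 5.0, 60.0, 400.0, 4000.0])
brain = 0.01 * body ** 0.75

# A power law is a straight line in log-log space, so a degree-1 fit
# on the logs recovers the exponent as the slope.
b, log_a = np.polyfit(np.log(body), np.log(brain), 1)
print(f"fitted exponent: {b:.2f}")  # ~0.75

# A "hominid-like" point: same body mass, 3x the brain mass.
# Its residual against the fitted line is log(3), i.e. a different curve.
residual = np.log(3 * 0.01 * 60.0 ** 0.75) - (log_a + b * np.log(60.0))
print(f"log-residual of off-trend point: {residual:.2f}")  # ~1.10
```

The "which line are we scaling against" question is exactly this residual: points that track the fitted slope are the same scaling law, and a persistent offset is a new one.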
We've been in this era of pre-training for so long that we think of AI as: you train this thing, then you release it, and it's done. And RL is a little bit different now, because there's this idea of post-training and you can kind of integrate different things. I also thought the interesting thing was, with pre-training you use the whole Internet, so you don't have to decide anything. You're just applying this algorithm to all the data, all the compute, and there are no decisions. But then with RL, you have to decide, okay, we're putting in these math equations and we're maybe not putting in something else, because we're actually creating the data. This is maybe why we see these kind of models that do super well in evals, but not so much elsewhere. Yeah, some of the overfitting. And the reason is because the data that we choose is not the correct data, because researchers are basically being reward hacked, maybe, into just solving for benchmarks. Yeah, it's interesting to hear this. The conclusion is we need another breakthrough, and then simultaneously the consensus is, but we're definitely going to get that breakthrough in the next decade. It's hard to predict. I feel like it echoes a lot of what Mike Knoop has been saying. We need new ideas. Yeah, he's been saying this for months. But it's way harder to predict the rate at which breakthroughs will arrive. Whereas you can actually chart out the formation of capital, the time it takes to build a data center, how long it takes to manufacture a bunch of GPUs, rack them, run the training run. That's much more predictable than a human coming up with a new algorithm; that's sort of random. And he brings this up as the reason why you see companies doing this. Because if you're raising money, it's so much easier to justify the raise by saying, we're going to buy this data center.
Totally. Do this training run. It's going to cost exactly this much. Oh yeah, then the model will be this good, and then we can use it to monetize this way. Totally. Whereas if you're just saying, oh yeah, we're just going to pay a bunch of really smart researchers to do a bunch of research and then they'll figure something out, you can't really. Yeah. In some ways it feels like SSI is set up for somewhat of a mini AI winter, or at least riding the hype cycle down. Yeah. Because it doesn't sound like he's sitting there being like, we raised 3 billion and we're spending it in the next 12 months. It's like 2.9 was debt. No, no, no, no. That's the point. It's not. It's equity. It's just sitting there. He can clearly pull back: I'm going to give each researcher, all these different teams, shots on goal. No, I love it. We're going to keep taking those shots. And obviously he'd be able to raise another $10 billion whenever he wants, especially if he has a key breakthrough insight and they can be first to scale it. Yeah. Well, let me tell you about Cognition. They make Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. Nvidia has posted. They hit the timeline. Break this down for me. They said: we're delighted by Google's success. They've made great advances in AI, and we continue to supply to Google. Nvidia is a generation ahead of the industry. It's the only platform that runs every AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility and fungibility than ASICs, which are designed for specific AI frameworks or functions. That is a crazy thing to post. Crazy, crazy, crazy thing to post. Indeed. I don't know, boys, but having the largest company in the world sending tweets to defend their main product is not very reassuring.
Yeah, it's just odd. I feel like this would be so much better delivered. I actually don't have that much of a problem with the actual text here. I just think this should be delivered by Jensen, with some nuance, in a conversational setting. It just hits a lot different when it goes out at exactly 9am, clearly scheduled, clearly typed out in a document. It feels like a press release, which is just an odd thing, when it should be an answer to a question. Bobby Cosmic in the chat was saying the mainstream media is just now picking up on the Gemini 3 story, and there's the Wall Street Journal and other places saying, maybe Google's back, buy Google, it's very exciting. And so Nvidia feels the need to respond to that. But it's a lot different when it's actually a response instead of just, we're putting out a press release, who knows why. As opposed to Jensen saying, well, since you asked, to a talk show host or news anchor, or a top podcast host, whoever he's talking to. Dwarkesh, whoever he's talking to. Maybe us, we'd love to have him. I can ask him that question. He can defend this here. Yeah, well, the timing seems important, because they are coming under a huge amount of pressure right now. There was an article in Barron's this morning by Tae Kim. Yeah. The headline is not what Nvidia's comms team would have liked it to be. The headline is: Nvidia Says It's Not Enron in Private Memo Refuting Accounting Questions. That's a crazy thing to have to say. Of course it's not Enron. But let me get into the coverage. So Tae says: a series of prominent stock sales and allegations of accounting irregularities have put Nvidia in the middle of a debate about the value of artificial intelligence and its related stocks. Now Nvidia is pushing back.
In a private seven page memo sent by Nvidia's investor relations team to Wall Street analysts over the weekend, the chip maker directly addressed a dozen claims made by skeptical investors. Nvidia's memo, which includes fonts in the company's trademark green color, begins by addressing a social media post from Michael Burry last week, which criticized the company for stock based comp dilution and stock buybacks. Burry's bet against subprime mortgages before the 2008 financial crisis was depicted in the movie The Big Short. Of course. Nvidia repurchased $91 billion of shares since 2018, not $112 billion. Mr. Burry appears to have incorrectly included RSU taxes. Employee equity grants should not be conflated with the performance of the repurchase program, Nvidia said in the memo. Employees benefiting from a rising share price does not indicate the original equity grants were excessive at the time of issuance. That makes sense. Barron's reviewed the memo, which initially appeared in social media posts over the weekend, and confirmed its authenticity. Burry told Barron's he disagrees with Nvidia's response and stands by his analysis. He said he would discuss the topic of the company's stock based comp in more detail. Burry is of course now over on Substack. He's charging $380 a year, and if you are a perma bear, this is like Christmas coming early. Nvidia didn't respond to Barron's request for comment, but they also responded to claims that the current situation is analogous to historical accounting frauds Enron, WorldCom and Lucent that featured vendor financing and SPVs. Nvidia does not resemble historical accounting frauds because Nvidia's underlying business is economically sound. Our reporting is complete and transparent, and we care about our reputation for integrity. Unlike Enron, Nvidia does not use special purpose entities to hide debt or inflate revenue. It's like there's 25 examples of how this is not the same.
Nvidia also addressed allegations that its customers, large technology companies, aren't properly accounting for the economic value of Nvidia hardware. Some of the companies, we've talked about this, use a six year depreciation schedule for GPUs. Burry said he believes the useful lives of the chips are shorter than six years, meaning Nvidia's customers are inflating profits by spreading out depreciation costs over too long a period. Nvidia's customers depreciate GPUs over four to six years based on real world longevity and utilization patterns. Older GPUs such as A100s continue to run at high utilization and generate strong contribution margins, retaining meaningful economic value well beyond the two to three years claimed by some commentators. So again, under fire on the TPU front and from the Michael Burry camp. But again, I think their answers are totally valid. Matt over on X had a post here. He said: the TPUs equal bad for Nvidia take is up there with the dumbest, maybe worse than DeepSeek, as it completely misses what actually happened in the last six weeks, and I will remember who is who in the zoo. My view: one, demand for AI is bananas. No one can meet demand. Everyone is spending more. Google said just yesterday they have to double capacity every six months to keep up. Two, scaling laws are intact. He's referencing Gemini 3. The flywheel is about to speed up. Somehow the mid curve crew thinks this is zero sum competition. None of this suggests that. If you think the race is hot now, wait until you see what comes out of large coherent Blackwell clusters. All the magic from the quote God machines is pretty much still Hopper based. Lastly, a quick GPU versus TPU lesson: the cost and performance specs on the box aren't what you get in real life. And Google is going to get a fat margin too. What matters is system level effective tokens per watt per dollar and TCO. Nvidia GPUs have higher MFU because they're already embedded in workflows.
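The depreciation dispute is just straight-line arithmetic: annual expense equals cost divided by assumed useful life, so a shorter life means a bigger annual expense and lower reported profit. A minimal sketch with hypothetical figures, not any company's actual numbers:

```python
# Straight-line depreciation: why the assumed GPU useful life moves
# reported profit. All figures here are hypothetical.
def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Annual straight-line depreciation expense."""
    return cost / useful_life_years

CLUSTER_COST = 1_000_000_000  # hypothetical $1B GPU purchase

for years in (2, 3, 4, 6):
    expense = annual_depreciation(CLUSTER_COST, years)
    print(f"{years}-year life: ${expense:,.0f}/yr expense")

# Moving from a 6-year to a 3-year schedule doubles the annual expense,
# which is the core of the "inflating profits" claim and the rebuttal.
assert annual_depreciation(CLUSTER_COST, 3) == 2 * annual_depreciation(CLUSTER_COST, 6)
```

So the whole argument reduces to which useful-life assumption matches real-world GPU longevity, which is exactly what Nvidia's memo contests.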
The ecosystem is massive. By the way, this is a good test: if you have an opinion on this topic but you have to look up MFU, then perhaps curate better sources. What? MFU? MFU. I said FMU. Sorry, sorry, sorry. The above effective tokens per watt gap also likely widens with Rubin. Add in that Jensen can actually deliver volume in a tight market, plus future flexibility, multi cloud capable, programmable for paradigm shifts, and he'll sell every GPU he makes for years. Google will too, since everyone wants a second supplier and TPU is a fantastic chip. But this is as far from either-or as it gets. The one benefit of this confusion is that it is likely to give Google a brief stint as the world heavyweight champion, the most valuable company. I would guess the midwits put the strap on them in less than two weeks. So, put the strap on them. What does that mean? Is he saying just, like, pile in? So it seems like he's predicting that people will overplay the Nvidia bear take and overplay the Google opportunity, and that will result in Google becoming the most valuable company in the world. And he uses the phrase, put the strap on them, in less than two weeks. Interesting post. In other news, David Sacks has hit the timeline. He says: according to today's Wall Street Journal, AI related investment accounts for half of GDP growth. A reversal would risk recession. We can't afford to go backwards. The article is How the US Economy Became Hooked on AI Spending, and we will be chatting with Kratsios in about an hour on this very topic, so we can get into it a little bit. Well, before we move on, let me tell you about Attio, the AI native CRM. Attio builds, scales and grows your company to the next level. Fact sheet from the White House: President Donald J. Trump unveils the Genesis Mission to accelerate AI for scientific discovery. And this is from yesterday.
Today, Trump signed an executive order launching the Genesis Mission, a new national effort to use artificial intelligence to transform how scientific research is conducted and accelerate the speed of scientific discovery. The Genesis Mission charges the Secretary of Energy with leveraging our national laboratories to unite America's brightest minds, most powerful computers, and vast scientific data into one cooperative system for research. The order directs the Department of Energy to create a closed loop AI experimentation platform that integrates our nation's world class supercomputers and unique data sets to generate scientific foundation models and power robotic laboratories. The order instructs the Assistant to the President for Science and Technology to coordinate the national initiative and the integration of data and infrastructure from across the federal government. The Secretary of Energy, the APST, and the Special Advisor for AI and Crypto will collaborate with academia and private sector innovators to support and enhance the Genesis Mission. Priority areas of focus include the greatest scientific challenges of our time that can dramatically improve our nation's national, economic and health security, including biotechnology, critical minerals, nuclear fission and fusion energy, space exploration, quantum information science, and semiconductors and microelectronics. Next, harnessing AI for our national security and economic development. With the Genesis Mission, the Trump administration intends to dramatically expand the productivity and impact of federal research and development within a decade. And there's one more note here on strengthening America's AI dominance: Trump continues to prioritize America's global dominance in AI to usher in a new golden age of human flourishing, economic competitiveness and national security. And so we will get into more of this with Kratsios.
Yeah, I'm very interested to hear how the public private partnership actually works here. There was a time when basically every cool technology was coming out of DARPA, coming out of the US Government. The US Government landed on the moon. And since then, I think a lot of people in technology have lost faith in the US Government overseeing the development of technology, even academia. I mean, people think AGI will emerge from a private C corp. That's where people believe the best work will be done: give Ilya Sutskever, give the best scientist, $3 billion and let him go cook. That's the thesis currently in tech. This feels like somewhat of a rejection of that in some ways. There are obviously lots of different places where having AI resources, having science and technology resources, within the government makes a ton of sense. But it'll be interesting to see where the interfacing points are between the two categories, the public and private sector. Because by default, I think most people in our audience in technology would say, hey, let's leave the space travel and the AI research to the private sector. And this is potentially a different direction. Potentially just very synergistic. So it'll be interesting to see where it breaks. Well, should we run through the Astral Codex Ten piece on trait based embryo selection to tee up our discussion with Cremieux and Kian from Nucleus? Let's do it and go through that. So this is from Scott Alexander in Astral Codex Ten. He says: Suddenly, Trait-Based Embryo Selection. In 2021, Genomic Prediction announced the first polygenically selected baby. When a couple uses IVF, they may get as many as 10 embryos. If they want one child, which one do they implant? In the early days, doctors would just eyeball them and choose whichever looked the healthiest.
Later, they started testing for some of the most severe and easiest to detect genetic disorders, like Down syndrome and cystic fibrosis. The final step was polygenic selection: genotyping each embryo and implanting the one with the best genes overall. Best in what sense? Genomic Prediction claimed the ability to forecast health outcomes from diabetes to schizophrenia. For example, although the average person has a 30% chance of getting type 2 diabetes, if you genetically test five embryos and select the one with the lowest predicted risk, they'll only have a 20% chance. So you get a 10 point improvement there. That's nice. Since you're taking the healthiest of many embryos, you should expect a child conceived via this method to be significantly healthier than one born naturally. Polygenic selection straddles the line between disease prevention and human enhancement. In 2023, Orchid Health, founded by Noor, who we've had on the show, entered the field. Unlike Genomic Prediction, which tested only the most important genetic variants, Orchid offers whole genome sequencing, which can detect the de novo mutations involved in autism, developmental disorders and certain other genetic diseases. Critics accused GP and Orchid of offering designer babies, but this is only true in the weakest sense. Customers couldn't design a baby for anything other than slightly lower risk of genetic disease. You're basically just selecting out of what you already got. They're not editing the genes; they're merely sequencing them and then allowing you to select. These companies refused to offer selection on traits, the industry term for the really controversial stuff like height, IQ or eye color. Still, these were trivial extensions of their technology, and everyone knew it was just a matter of time before someone took the plunge. Last month, a startup called Nucleus took the plunge. They had previously offered 23andMe style genetic tests for adults.
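The pick-the-lowest-risk-of-five arithmetic can be sanity-checked with a toy liability-threshold simulation: disease occurs when a latent liability crosses a threshold set to the 30% baseline, and a polygenic score captures some of that liability. Everything here is assumed for illustration (the 10% of variance explained, embryos treated as independent rather than as correlated siblings); it is not Genomic Prediction's actual model:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

PREVALENCE = 0.30   # baseline type 2 diabetes risk from the article
R2 = 0.10           # assumed: share of liability variance the score explains
N_FAMILIES = 200_000
N_EMBRYOS = 5

# Liability cutoff so that 30% of unselected embryos exceed it.
threshold = NormalDist().inv_cdf(1 - PREVALENCE)

# Each embryo: liability = polygenic score + unexplained residual.
# (Simplification: embryos drawn independently; real siblings share roughly
# half their segregating variation, which shrinks the benefit.)
scores = rng.normal(0.0, np.sqrt(R2), size=(N_FAMILIES, N_EMBRYOS))
residuals = rng.normal(0.0, np.sqrt(1 - R2), size=(N_FAMILIES, N_EMBRYOS))
liability = scores + residuals

# Implanting an arbitrary embryo vs the lowest-scoring one.
random_risk = (liability[:, 0] > threshold).mean()
best = np.argmin(scores, axis=1)
selected_risk = (liability[np.arange(N_FAMILIES), best] > threshold).mean()

print(f"random embryo risk:   {random_risk:.3f}")  # ~0.30
print(f"selected embryo risk: {selected_risk:.3f}")
```

With these assumed parameters the selected-embryo risk lands meaningfully below 30%, in the same ballpark as the claimed drop to 20%; the exact figure depends on how much variance the predictor really explains and on sibling correlation.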
Now they announced a partnership with Genomic Prediction, focusing on embryos. Although GP would continue to only test for health outcomes, you could forward the raw data from GP to Nucleus, and Nucleus would predict extra traits, including height, BMI, eye color, hair color, ADHD, IQ, and even handedness. And it's worth noting that Nucleus is now being sued by Genomic Prediction, even though they have this partnership. I'm assuming the partnership is no longer. Well, we can ask. Yeah. But I'm assuming it's no longer, because one of GP's co-founders left the company to join Nucleus. Interesting. And allegedly turned off all the security cameras. Is that a metaphor, or did that actually happen? The lawsuit alleges that he turned off all the security cameras on his last day. That's not a metaphor for, like, you know, sharing a Google Drive of PDFs. You literally mean on his last day at work. Okay. So he turns off the cameras, allegedly, and the implication is that maybe he was rummaging around, literally taking documents or something like that. That's at least what the lawsuit alleged. Okay. Wow, that's wild. I did not know that that was a literal accusation. And then another part of it: apparently people at Nucleus were emailing the former co-founder at his old email address, evidence of them violating the agreement that they had. So, anyways, it's very, very, very, very messy. We can ask. Yeah, there's like four or five companies involved in this, and all of them are controversial. Because this is, I think, the most controversial category that you can be in. Yeah, it's certainly up there. Health is already, like, one of the most controversial topics. Yeah, everyone has an opinion on it. Every health influencer has gotten into various debacles. Totally. Yeah. And also, it's just so easy to throw.
I mean, in the same way that people are throwing Enron at Nvidia, it's so easy to throw Theranos at any biotech company that's accused of anything. And also, with biotech, it's pretty hard to understand the underlying science. It's not as simple as: okay, does the website work? Does the business make money? What's the cash flow? It's way more complicated, and so it attracts even more attention. So one of the other companies in the space is Herasight, and Astral Codex Ten continues here. They entered the space with the most impressive disease risk scores yet, an IQ predictor worth six to nine extra points, and a series of challenges to competitors, whom they call out for insufficient scientific rigor. Their most scathing attack is on Nucleus itself, accusing its predictions of being misleading and unreliable. Let's start with the science and then move on to the companies to see if we can litigate their dispute. In theory, all of this should work. Polygenic embryo screening is a natural extension of two well-validated technologies: genetic testing of embryos and polygenic prediction of traits in adults. So genetic screening of embryos has been done for decades, usually to detect chromosomal abnormalities like Down syndrome or single-gene disorders like cystic fibrosis. It's challenging. We've talked about this before. You need to take a very small number of cells, often only five to 10, from a tiny proto-placenta that may not have many cells to spare, and extract a readable amount of genetic material from this limited sample. But there are known solutions that mostly work. And so the companies that we're talking about today aren't necessarily doing the fundamental lab work: building the machines, figuring out how to sequence the data in the first place. It's the analysis that happens on top of the results. And the recommendations.
And the recommendations, which I would say is the most controversial part of this. I don't know that any of them are recommending, hey, we think you should pick this baby. They're more just saying, we think that according to the data, this baby might. But if you're giving somebody risk factors. Yeah, but that's not a recommendation. If I tell you this car has 700 horsepower and does 0 to 60 in two seconds, and this one has 800 horsepower and does 0 to 60 in 2.4 seconds, this one's faster in a straight line, this one's faster on the curves, and then you pick, I didn't make a recommendation. I just told you the stats. Right. Yeah, but when you look at these companies, from what they're marketing to consumers about why you should care about the service, and then the way that they deliver the information. If they're effectively advertising, we can help you have a smarter, healthier baby, and then they're saying, hey, we think this direction is going to get you a higher IQ. I don't think it's a recommendation. It's not an explicit recommendation, but I think people are trusting the service to try to get them what was marketed to them. Yeah. People want the data, and they want the data to be accurate, because they're going to make a decision based on it. But here Scott Alexander actually gets into some of the complexity of the actual trade-offs. So most traits are polygenic, requiring information about thousands or tens of thousands of genes to predict. These are too complicated to understand fully at current levels of technology. But some studies have chipped away at the problem and gotten to a partial understanding.
Often this looks like being able to predict a few percent of the variance in the trait, to determine whether someone's genetic risk is slightly higher or lower than average. And so some people might genuinely want to select on a single condition. For example, people with a strong family history of schizophrenia might want to minimize the chance of their children getting the disease. For these people, reducing schizophrenia risk by 58% while keeping everything else constant sounds pretty good. Everyone else probably wants a generically healthy embryo with low risk of all conditions. Exactly how this works depends on the customer's own values. Would they prefer an embryo with lower cancer risk to one that will have fewer heart attacks? That's a trade-off you have to pick, and the exact benefits will depend on how parents make that decision. Genomic Prediction and Herasight try to help by providing semi-objective measures of which embryo is overall healthiest, according to different conditions' effects on longevity and patient-rated quality of life. For Genomic Prediction, that's the embryo health score. This is, you know, that's close to a recommendation. I think you're getting close. Yeah. And Nucleus's subway campaign is "have a healthier baby." Yeah, yeah. The marketing claims are a big, big piece of this. I think the scientific claims are potentially just as important. But it's both: understanding where the science actually is, both broadly and within the companies, and then how it's marketed. All of that is important to get a complete picture of what's going on here. So for Herasight, it's a polygenic longevity index. They don't give exact risk reduction numbers for each disease, saying that it depends too much on a couple's specific family history, but say that most people gain one to four years of healthy life. When tested on a set of 20 embryos, the healthiest gets an extra 1.66 years.
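One way to picture an overall health score like the ones just described: weight each embryo's predicted absolute risks by how costly each disease is, then pick the embryo with the lowest weighted total. This is a toy sketch; the risks and severity weights below are invented for illustration, not any company's actual model.

```python
# Hypothetical predicted absolute risks per embryo (not real data).
embryos = {
    "embryo_A": {"type_2_diabetes": 0.30, "heart_attack": 0.10},
    "embryo_B": {"type_2_diabetes": 0.20, "heart_attack": 0.15},
}

# Hypothetical severity weights, e.g. expected years of healthy life lost.
severity = {"type_2_diabetes": 3.0, "heart_attack": 5.0}

def expected_burden(risks: dict) -> float:
    # Expected health cost = sum over diseases of P(disease) * severity.
    return sum(p * severity[disease] for disease, p in risks.items())

scores = {name: expected_burden(risks) for name, risks in embryos.items()}
best = min(scores, key=scores.get)  # lowest expected burden wins
print(scores, best)
```

Note how the ranking depends on the weights: with these numbers embryo_B wins, but raise the heart attack weight from 5 to 10 and embryo_A wins instead. That's the "depends on the customer's own values" point in action.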
And so how much would you pay to give your children an extra one to four years of healthy life? This is no longer a hypothetical question. Here are the costs. Genomic Prediction is around $3,250. Orchid is around $12,500. Nucleus is around $9,249, and Herasight is $53,250. That is expensive compared to the rest, five times the price. Is it worth it? Well, if you're already doing IVF, the claimed risk reductions are accurate, you value your kid's health as much as your own, you have a low time discount rate, you're well off enough that these aren't extraordinary sums of money to you, and you're okay using expected utility calculations where a 50% chance of preventing X is half as good as fully preventing X, then I'll go out on a limb and say yeah, obviously it's worth it. Consider Genomic Prediction, which costs $3,250 for five embryos and claims to lower absolute risk of type 2 diabetes by 12%. That implies that not getting type 2 diabetes only needs to be worth about $27,000 for the test to pay for itself. Ask anyone dealing with regular insulin injections, let alone limb amputations, whether it would be worth $27,000 to wave a magic wand and not have type 2 diabetes. It's not a hard question, and that's just one of a dozen conditions you can lower the risk for. Other ones, like not getting breast cancer, might be so valuable that it's hard to even attach numbers. So what about IQ? Six extra IQ points, which is Herasight's estimate with five embryos, is about a quarter of the gap between the average person and the average Ivy League student. The benefits of intelligence are hard to quantify, but it's been shown to have probably causal positive effects on income, mortality, and achievement. Probably the income effects alone make up for the cost of the intervention, again assuming total parent-child altruism and a low discount rate. So if we accept all of these claims and assumptions, the choice seems obvious.
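The break-even arithmetic behind that $27,000 figure is just cost divided by absolute risk reduction: expected benefit equals (probability the disease was avoided) times (value of avoiding it), so the test pays for itself once avoiding the disease is worth at least cost / reduction. A quick sketch using the figures quoted above:

```python
def break_even_value(cost: float, absolute_risk_reduction: float) -> float:
    """Minimum dollar value of avoiding a disease for screening to break even.

    Expected benefit = absolute_risk_reduction * value_of_avoiding, so the
    purchase breaks even when value_of_avoiding >= cost / absolute_risk_reduction.
    """
    return cost / absolute_risk_reduction

# Genomic Prediction's type 2 diabetes claim: $3,250, 12-point reduction.
print(round(break_even_value(3250, 0.12)))  # -> 27083
```

The same arithmetic applied to Herasight's $53,250 price against its claimed one to four extra healthy life years implies valuing a healthy year somewhere between roughly $13,000 and $53,000.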
It's probably even obvious for governments to pay for all citizens to get these, given how much they'd save on health care costs, says Scott Alexander. But in practice, it's complicated. Critics have raised both scientific and ethical objections to polygenic embryo screening. Most significantly, it's been condemned by various bodies, including the Society for Psychiatric Genetics, the European Society of Human Genetics, and the Behavioral Genetics Society. Their statements are not good. They tend towards vague language about how people are more than just their genes, or how no genetic test can be perfect, or how embryo screening is not exactly the same thing as some other form of screening, which has a longer history and more proponents. "Although in general higher scores mean you are more likely to have a condition, many healthy people will have higher scores, and others might develop the condition even with a low score," says the Society for Psychiatric Genetics, as if they have just blown the lid off some dastardly conspiracy. Screening embryos for psychiatric conditions may increase stigma surrounding those diseases, they continue. An objection which, taken seriously, could be used to ban every form of medical treatment, because treating any condition removes it from the population, which might increase the stigma, and we should still treat it. So, he says, we will mostly ignore these people and try to imagine the implications of the objections that mildly competent critics might raise, some of which will coincidentally overlap with the content of the non-hypothetical statements. So the big question he wants to answer is the scientific objection around efficacy: are we sure this works at all?
Are we sure this works? So a typical polygenic score is created by collecting thousands or millions of adult genomes, then matching genetic information with surveys about who has the trait or condition of interest. Reputable studies then test these scores on holdout samples, adults who were not used to make the score, to see if they still accurately predict who has the trait or condition. Polygenic embryo selection depends on an assumption that the scores which work in these kinds of retrospective tests will also work prospectively on embryos. This assumption hasn't been formally proven in studies, which would require years or decades to conduct, but seems commonsensical. The strongest challenge to the application of polygenic scores for embryo selection comes from a recent body of research showing that most scores combine causal genetic effects with population stratification, and therefore can be expected to lose much of their predictive power when comparing two members of the same family. There is increasing agreement in the field that unless scores are validated within families, headline results like "decreases risk of X by Y percent" will be large overestimates. When I talked to company representatives, they all said that they took accuracy extremely seriously and had various white papers and journal articles where anyone could double-check their methodology. But I attended an industry conference a few months ago, and the gossip level was comparable to a high school cafeteria, minus the sex rumors. Most of the attendees were having their kids via IVF. Everyone had some story about someone being careless or fudging their numbers. Some of the conflicts broke out into the open on Wednesday, when Herasight left stealth and published a white paper and associated blog post. They criticized Genomic Prediction for reporting between-family rather than within-family results, and Orchid for smuggling a term for age into their Alzheimer's predictor.
Unsurprisingly, this makes it work better. We'll get to their accusations against Nucleus below. Note that this was recent enough that competitors haven't had time to air their own criticisms of Herasight. If this happens, I'll try and keep you updated. And to be clear, this article is from around five months ago. Yes. And since that time, Nucleus has been accused of plagiarizing the paper we just discussed from Herasight. Yeah. And then also accused of stealing IP from Genomic Prediction. So there's, again, a bunch of different accusations. We'll let Kian respond. So yeah, I mean, the goal here is just to give an opportunity for Cremieux and Kian to answer some questions, try and contextualize it, try and make their case to a broader audience. I've read through as much as I can, but without actually getting in the lab and rolling up my sleeves, I don't think I could come to a firm conclusion here. But I can certainly talk to them on this show and hopefully get some more information that the community can do with what they will. So Scott Alexander concludes this section with his strongest opinion on the scientific criticism. He says authorities on all sides have cited Alex Young as an authority on how polygenic scores can be confounded or misleading. Last week, Alex Young revealed that he had been working with Herasight while it was in stealth mode and endorses their research. LOL. Probably that means Herasight's products are okay. That serves as proof of concept that this technology can work and means other companies' claims are at least plausible. So, lots of back and forth, and we will be joined by Cremieux in just a few minutes. I actually need to message him and make sure that he has the information. Is there anything else that you think would be worthwhile to discuss before we hop on? Yeah, I can just go through it. I mean, the original accusations came from an account called Sichuan Mala. Sichuan Mala, yes.
Who wrote an extremely lengthy blog post on a bunch of the issues that they felt they had found with Nucleus. Nucleus ended up firing back, saying, or sort of implying, that Sichuan Mala was funded by a competitor or competitive service, as well as making those allegations about Cremieux. Yes. They go into issues around potentially fictitious customer reviews, which we'll ask Kian about, AI-generated blog posts, and accusations of intellectual property theft, saying that the Nucleus Origin white paper is plagiarized and has a bunch of errors. Nucleus has responded already to a lot of this stuff. Well, our first guest of the show is here. Let me tell you about Linear: meet the system for modern software development, a purpose-built tool for planning and building products. We will bring in Cremieux from the Restream waiting room into the TBPN Ultradome and have him set the table for us. Cremieux, how are you doing? Welcome to the stream. How are we doing? Glad to be here, guys. Thanks so much. Can you hear me? Good as always. Looking good, by the way. I can go face dox if you want. Let's do it. Let's do it. We can show his actual video this time, which is great. We've had him on before. Welcome to the show. Hey. Hey. Good to see you. How are you doing? So what actually kicked this off for you? Do you know Sichuan Mala? Separately? Independently? Did you know that this was coming? Set the table for us. Why did Nucleus come to the top of your mind? So, can I actually go back to.