LIVE CLIPS
Episode 3-10-2026
Was that something that you had kind of always expected? I think Big Law was really quick to adopt, and over the past two years we've seen a real exponential step function in the sophistication and complexity of the use cases they're solving. Back in 2023 somebody would get a clap on the back for summarizing an email with AI, but now you're running an entire M&A process end to end using Legora, and so the AI expectations are increasing. I'm actually seeing things like adding AI as part of the career frameworks within law firms to get promoted. I'm seeing AI be part of the interviewing process, right, like it's now a skill that is required to deliver real work. I think the enterprise departments have been patiently looking at what our law firms are doing, and now they're starting to follow on. And one of our latest developments is something we call the Legora portal, and the portal is basically like Figma for.
A jet ski through the Strait of Hormuz. That would be the way to do it. That would be thrilling. It would be. What else is going on? The Financial Times asks, why did we ever think data centers in the Gulf were a good idea? US tech companies have concentrated much of their AI infrastructure build-out in the Middle East. That is overly dramatic. I think so too. I think so too. It's more. It certainly is not. Yeah, it certainly is not. I mean, we should read this argument. We should understand this. But I think a lot of the building of data centers in the Middle East is like, well, there's a lot of Middle East money that's going into building data centers in the United States. So it's sort of a trade. And we're like, well, you have a lot of land and power, it makes sense to do stuff over there. And we'll do this. One hand washes the other. We're all working together anyway. You want a data center, we'll help you with what we're good at. You help us with what you're good at, which is energy and money. Right. So let's read through what Rana Foroohar says in the Financial Times. I start with an obvious question this week, which is one I've been thinking about for years. The Amazon data center in the UAE that was hit by an Iranian missile attack is yet another example of how companies and countries are putting too much of a single critical economic input in one risky area. It's an example very much akin to the Taiwan semiconductor problem. Just as it wasn't good for the US, China, and Europe or any other region to put 90% of all the world's high-end chips in one place, it seems like an obvious blunder to concentrate so much data center power in one very risky part of the Middle East. Again, we're nowhere near 90% of compute capacity in the Middle East. I really take issue with that stat. I need some stats to back this up.
I was really surprised following the hit to discover how much of the proposed US data center build-out is in the Middle East, which has over the years subsidized a lot of the investment, making it much cheaper, but also allowing the US to avoid the harder work of upgrading its own grid and figuring out the politics and economics of energy sharing at home. We're not avoiding that. Yeah. This is like the number one focus of the industry. Yeah, yeah. Here at home. Yeah. No one's talking about energy in the United States right now. No one's talking about energy. It's nuts to me. We are more worried about cutting off oil to China from Iran, but we aren't worried about putting serious technology infrastructure and sensitive data in a highly geopolitically contentious part of the world. This isn't just a Trump administration thing, by the way. Back in September. September. In 2024, when Joe Biden was still in the White House, the US and UAE agreed to deepen cooperation in advanced technologies such as semiconductors and clean energy, with the aim of bolstering capacity in artificial intelligence. Microsoft and OpenAI were among the first US companies to either begin investing or receiving Gulf funding. Part of the deal was about trying to pull more countries into the US tech orbit. So he doesn't actually share. Well, there's actually a reply here from Richard Waters, but okay. The level of concentration risk here, though, is a whole different order to Taiwan. Yes, the 1 gigawatt UAE. 1 gigabyte? Okay, this is a crazy article. This is a crazy article. Dylan Patel would like a word. They got a 1 gigabyte data center. The one data center. Let's assume it's a typo. At least it's not AI written. Yes, the 1 GW UAE Stargate project is massive and only the first stage in what one day might become a 5 gigawatt facility. But compare that to the United States, where plans have already been filed for 150 gigabytes. Okay, we're moving on from this. Gigabytes is too much. It's too much.
Let me tell you about AppLovin. Profitable advertising made easy with Axon AI. Get access to over 1 billion daily active users and grow your business today. The audacity. The audacity. The audacity to put a whole gigabyte. A whole gigabyte. This is the biggest three-finger moment in Financial Times history. I still love the pink sheets. I love the paper. There's some good stuff in here. But yeah, we gotta fact-check those abbreviations. Guys, we gotta step it up. We gotta get more AI involved. Seriously, just run that thing through. I know no one's gonna be upset about that. It's not this, it's that. As long as you get the facts straight. Anyway, Japan holds an oil reserve equivalent to 254 days of domestic demand, and Hamptonism says dude,
You're on the. You're on the cover of the business and finance section in the Wall Street Journal today. It says, flying taxi maker Archer accuses rival Joby of concealing China ties. What happened? What is going on with the deeper supply chain in eVTOL? Yeah, so Archer is building products not just for the civil side, but also for the defense side. And a big part of what we're doing is really in support of re-industrializing America and building the, you know, the industrial base here, especially on the defense side. And so we partner with a company called Anduril, and we're building new aircraft. We build the big aircraft and then Anduril will missionize them. So think: they put the sensors and systems and weapons into the aircraft. So it's very important to me that we build and keep the supply chain in the US and we build this stuff all out here. That's not necessarily been the case for our competitors. They put, you know, factories in China, in Shenzhen. They set up their supply chain there. And I just think it should be table stakes for American companies working in defense to have to build out their supply chains and ultimately do it in America. And if you do go do that overseas, you also have to disclose that in a very proper way. And so I think that's a big sticking point for me. I do think companies working in defense in America need to be very transparent about that. Talk to me about the battery supply chain there, because I feel like drone motors have been very difficult.
Off a cliff and no one's really using it. Maybe not, but yeah. And I just see people that are really good at building viral AI projects. I've seen some negativity on the deal, people saying, oh, this just says that Zuck has no AI strategy. And I just totally disagree with that stance. I just look at this as: bots have been a bug on social media, and we've seen now how they can be a feature. Yep. I think every social media executive should be planning for bots to be more of a feature in the future than they have been in the past. Right. And I think if you're not thinking about that, you're not really being forward-looking. And so there's a lot of people that are going to hate bots as a feature, but I would just assume that in the future there will be millions, billions of bots on all Meta properties, and they will be, you know, I'm sure some that are generated by sort of nefarious actors, but some generated from the platform itself that are part of the product experience. I like that take. I also think that there is a.
So push back against the teleop strategy, because I was totally on board with the no-teleop thing during the Tesla boom. But then Waymo seemed to do a lot of teleop and it seemed to sort of work. And so when I think about ways to get a lot of data, teleop doesn't seem like the craziest thing. In a world where we have a bunch of Scale AI, Mercor, all these RLHF data teams that are sort of manually curating answers to questions for LLMs, the human in the loop for a medium amount of time seems to be a tried and true path. Why does teleop not make sense in this particular industry? Great question, I'm glad you asked it. So I think self driving cars are a bit of a special case, because the car is basically a robot that is very easy to teleoperate: you sit in it, you basically have four actuators: left, right, speed up, and slow down. And we've all been driving cars for a long time, and you can easily put millions of miles on a car, which is what Waymo had to do to learn how to self drive. They did collect a lot of data, but even there they don't have all the data they need. There are a lot of so-called corner cases that those cars run into that cause failures. In the case of robotics, the problem is much more serious, because now you're talking about manipulation. You're not just operating a robot in a single environment like a flat road. You're dealing with the full dexterity of a human hand, 20 degrees of freedom per hand. Every object's different, every type of task is different. And these things become very difficult to teleoperate. The teleoperation process for these: you've got to wear a headset, you've got joysticks in your hands, and you're trying to move around. It's just very hard. And candidly, the problem isn't just the quantity of data, although that's obviously a problem. You could spend a lifetime doing this and you still wouldn't get to Internet scale. But the bigger issue is the diversity of data.
If all the data you have is data that you've intentionally collected, then you almost by definition haven't seen the corner cases; you haven't seen all those edge scenarios that cause failure. So the way that we're approaching it is different altogether. Our team comes from generative AI and computer vision. And the idea is, you know, if you look at every other AI model that's worked, they all start with an incredible amount of data, typically a whole Internet's worth of data, whether it's language models or image models or video models. And then there's a small amount of fine-tuning that you use to align the model. For that fine-tuning data set, teleoperation is fine, by the way. That's what we're doing. But for the pre-training, it's just completely inadequate. So what we did is say, what data set is there that's Internet scale, with massive diversity, and from which you can learn about the physics of how things move? And there's only one answer: Internet video. Because our team comes from computer vision and generative modeling, that was the approach that we took. So we basically, literally, trained the model on hundreds of millions of videos, millions of clips. In fact, in our view, the model has seen almost anything that you can see in reality. And then with a tiny amount of teleoperation, literally on the order of 10 hours, compared to what the VLA approach requires, which is tens of thousands, if not hundreds of thousands, of hours of data, you can actually teach the robot to do certain tasks. That level of data efficiency is something we've never seen. That's one of the big breakthroughs here. So what is the early customer set going to look like? Can you work with.
But prepping. But that's not like viral. It's fairly niche. My usage is the same. The other thing I think we should watch on this front is, I've heard from a lot of OpenClaw power users that one of the best and funniest use cases is to add it to family group chats or friend group chats, because it can be helpful sometimes, and then it'll just chime in with a crazy, funny, interesting thing. Yeah. Obviously that's not consumer grade. So somebody's going to have to productize that into something that your mom can install into the group chat or, you know, your grandpa can install into the group chat. But I think it's coming, because it's a big opportunity. Well, thank you so much for joining. The report, the Top 100 Gen AI Consumer Apps, is in its 6th edition and available at a16z.com, and if you work at a rival venture capital firm, that's what incognito windows are for. Exactly. Thank you, Andrew Reed. Thank you, Andrew Reed. And thank you for joining the show. We'll talk to you soon, Olivia. Have a great day. Thanks, guys. Goodbye. Let me tell you about Fin AI, the number one AI agent for customer service. If you want AI to handle your customers,
Now at Andreessen Horowitz, as a reminder, we have a partner from Andreessen Horowitz joining in just seven minutes. Kristen Kyle says that his preferred definition of ARR is your single highest-grossing minute of the year, times 525,600. Amazing to be able to tweet this as a VC, as an RIA. Somehow this got through legal review. Of course he's joking, but this is the new coastline paradox. Are you familiar with the coastline paradox, Jordy? I'm not. This is a fun little exercise. Please mansplain it. So the coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length or perimeter. This results from its fractal-like curve. So basically, if you draw a line around an object and you're just sort of drawing straight lines, you get one number for the coastline. So Great Britain: if you're measuring based on units that are 62 miles (100 kilometers) long, then the length of the coastline is about 1,700 miles. But if you cut that in half and start measuring with 31-mile increments, 50 kilometers each segment, then the coastline comes out 370 miles longer. And you can do this endlessly, because you can measure the coastline. Like, think about Point Dume, right, in Malibu. Like, you have this little. Thank you for putting this in Malibu terms. Exactly. You have this little spit jutting off the coastline. You can measure all the way around that and count that as extra coastline, and you can go even smaller. You could measure the coastline around a little tide pool on Point Dume, or a rock in the tide pool on Point Dume on the shore. And so there's no real accurate way to measure coastlines. You have to quantize to some standard unit of measurement, something like 100 kilometers or 50 kilometers. Yeah. I feel like ecom bros were weirdly prepared for ARR in the age of AI, because everybody that's been building. I saw Sean in the chat earlier. What's up, Sean?
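The coastline effect described here is easy to reproduce numerically. A minimal sketch (the function names and the Koch curve as a stand-in for a real coastline are our own illustration, not from the show): each extra level of detail shrinks the measuring stick by 3x, and the measured length grows by a factor of 4/3, without bound.

```python
import math

def koch_points(depth):
    """Vertices of a Koch curve from (0, 0) to (1, 0).

    Each refinement replaces every segment with four segments,
    each one-third as long -- a stand-in for coastline detail.
    """
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        refined = [pts[0]]
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            refined += [
                (x1 + dx, y1 + dy),                      # first third
                (x1 + 1.5 * dx - math.sqrt(3) / 2 * dy,  # peak of the bump
                 y1 + 1.5 * dy + math.sqrt(3) / 2 * dx),
                (x1 + 2 * dx, y1 + 2 * dy),              # second third
                (x2, y2),
            ]
        pts = refined
    return pts

def koch_length(depth):
    """Measured 'coastline' length at a given level of detail."""
    pts = koch_points(depth)
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# The finer the ruler, the longer the coastline: length == (4/3) ** depth.
for d in range(4):
    print(d, koch_length(d))
```

Halving the ruler on a real coastline behaves the same way: the 62-mile ruler gives roughly 1,700 miles for Great Britain, while the 31-mile ruler adds about 370 more.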
Sean has talked to a bunch of different e-commerce founders that would say, like, oh yeah, we're at 50 million of ARR, or like a 50 million dollar run rate. But the thing in e-commerce is, one day in the week you launch a new product or you do a sale, you're at a. Not even that. Not even that. But it happens all year round. Oh yeah. Like, taking your Black Friday revenue and multiplying that by 365 is insane. But even if you're multiplying out a single month, it's like, why? And so the more mature, professional way to do it would be to take the last three months' average or something like that, of course. But even then, it just doesn't tell you that much, because if you did it in Q4, your Q4 is probably bigger than your Q1. So anyways, a lot of it's just ego. Also, a lot of subscription products will rebill at midnight. So 12:00 to 12:01 on the first of the month, that's the minute you want to multiply by 525,600 to get to your highest ARR. That 12:00-to-12:01 minute is going to be the highest sales minute for an e-commerce. Well, if it's like automatically.
Versus Matt Farah debating the value of the Ferrari F80. And that debate is happening. You could prompt that. But if it's already there and it's sort of happening, that could potentially be valuable. But I think the bigger value to Meta is, if you look at the AI talent wars, they went and acquired a bunch of really talented researchers. They got some folks from Thinking Machines, they got a bunch of people from OpenAI, they got people from all over the industry, and they put together this team of researchers that can sort of unstick the Llama project and get to the frontier on just an in-house LLM project. Maybe they open source it, maybe they don't, maybe they serve it as an API. Either way, Meta needs a frontier model. They're not just going to buy tokens from OpenAI or Anthropic, so they get their own thing. But then the question is, what do they do with that? And I'm sure everyone on the Facebook product team is thinking about this. Everyone on the Instagram team is thinking about this. Connor at Threads is thinking about this. But if you bring in two interesting product managers that can say, oh, you've got a bunch of cool frontier models, you've got an image model that you trained, a video model, a text model, a coding model, let's just go do some skunkworks R&D, so that when we launch the new AI models, we have a number of projects that we're experimenting with that sort of demonstrate the capabilities. Maybe some of them take off, maybe some of them get integrated. That seems valuable to the MSL strategy, to the Meta ecosystem. This is like the OpenAI Labs team, right? Yeah, like this. Is that Riley who's on there? Yeah, Riley's on that now. But it's like they're doing these weird projects. Maybe it's the next, you know, coding agent, maybe like Moltbot or something. But it's just these weird things where, you know, you get access to the new internal models. Yeah, maybe there's something cool you can do with that.
Yeah, it's part engineering, part product development, part marketing, part communications. Because there's a lot of times when we bring on researchers or product leaders from labs and we ask them, how are people using this? And they'll be like, the benchmarks are really good. And I'm like, I want to know how this delivers value. And there's this break in the chain: we have amazing intelligence, but people want to know what the killer feature is. They want to know what the Studio Ghibli prompt is. They want to have their hand held a little bit. And so having a team that can advance that, I think, is good. I think it could be very, very good. Of course, we don't know the price, we don't know the terms. But overall, I think it's exciting for the team behind Moltbook to head over to MSL. So congratulations to them. Let me.
Physics and how things work in the world, like a drop of water creating ripples, that kind of thing. And so I'm sure the researchers are hard at work on this. We were shocked at the Sora data. Oh yeah, yeah, yeah. Take us through that. Bill Peebles, absolute dog. A lot of people had predicted his downfall. Yeah. What's going on with Sora? Sora is fascinating to me. It's probably the biggest narrative violation we've seen in consumer AI in a while. First of all, they gave us the gift of so many AI videos of Jake Paul. So. Yeah, that's right. That alone, I think, was worth the money they spent on compute there. Yes. But actually, what the data shows is that downloads for Sora are definitely down. It was at the top of the US App Store for 20 consecutive days. It was getting 6 million downloads a month. Now it's closer to a million and a half. And so it didn't turn into the social network that I think they envisioned, but it stayed really strong as a creative tool, because the model is good and because you can create videos with these cameos. So the DAUs are actually still increasing. They have 3 million global DAUs, which is, I think, probably the most for any video generation product on mobile. And so it's pretty impressive. If I were them, I would keep investing in that. And depending on when this Disney partnership actually rolls out, I would expect that to send it right back to. Totally, totally, totally. I think that's the question if I'm OpenAI: do you want those downloads? Do you want those users going to ChatGPT or Sora? If it is just a creative tool, over time I would expect the products to merge. I don't know. Yeah, I could see that. I do think it's interesting. I feel like the usage of AI with kids and families has been probably lower than it should be, just because of concerns about hallucination or weird artifacts, or like you just need to be very sure that the content is clean.
And so I think something like a Sora plus Disney characters is probably going to explode consumer AI for kids in a way we haven't seen. Yeah. And.
Like there's kind of different stories happening there. So that was fun to dig into. And the main takeaway from the mainstream consumer in just the foundation, the chat apps: what are you seeing between ChatGPT, Gemini, Claude, DeepSeek, Perplexity, Grok? Yeah, it's interesting. I would say DeepSeek has completely fallen off in the U.S. It still makes our list pretty high because it's like the number one AI product in China and Russia, which are really big markets. Do you have any personal theories, like, things that you can't prove? My thing with DeepSeek was that all of the downloads originally, when it just started charting out of nowhere, were just 100% botted. It was just all entirely botted. I have no way to prove that, other than it was just going up the chart like crazy and nobody was actually using it. No one was talking about it other than the fact that it was at the top of the chart. Yeah, I think that's totally possible. I think we're actually seeing, in many ways, a little bit of an analogous story playing out with Claude right now. Pre all of this press, whether it's positive or negative, no one in the US knew what DeepSeek was. And pre all of this press, I think there was some survey that had Claude at like 2% market awareness in the U.S. Yeah. And so we see this thing happen where even if it's the worst headline of all time, if it's going mainstream, it will drive people to try the product, and then we just have to see if they retain. And they didn't on DeepSeek. Okay, yeah, that makes a lot of sense.
Good. Thanks for having me. Thanks for hopping on. Sorry about the global chaos in the oil markets delaying this appearance, but I'm glad we had time to actually digest the report, because there's so many interesting details in there, and whenever you drop one of these big reports, I feel like you sort of need the Twitter hive brain to dig through it and find all the interesting commentary and tweet at each other until there's a consensus. But take us through the actual project. What did you launch? How long have you been working on it?
Ilya with the long flowing hair. This is great. This is who you're trading against. This is it. This is who. This is who you're trading against right now. That's my theory. Yes, it is. The full story of SSI will be fascinating to tell one day. The Daniel Gross move to Meta and Ilya's appearance on Dwarkesh Patel sort of told one side of the story. And also, as you revisit DG's AGI bets, as we did last Friday, it tells you a lot about his view on the world. Ilya probably has some overlap, but a different view of the world, and it's lots of fun to at least speculate on the timeline. Before we move on, let me tell you about Vanta. Automate compliance and security. Vanta is the leading AI trust management platform. And let me also tell you about Lambda. Lambda is the superintelligent cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. So Meta has acquired Moltbook, the viral social network built for AI agents. Co-founders Matt Schlicht and Ben Parr will join MSL, Meta Superintelligence Labs, with the deal expected to close in mid-March. That's now. It is mid-March. We are in the middle of March, since this is the 10th. So this could close in a week or two. Insane. Well done, says Dennis Hagstadt. And I agree. Why it matters, according to Axios. Yeah, Matt hasn't posted anything yet, so I think they were seemingly not wanting this to get out, but it's still fantastic news for them. Yeah. So there's no announcement, this was just an exclusive from Axios. It's like Axios has learned, right? Axios has learned that Meta has acquired Moltbook. Well, very, very good news for all those involved. There is a little skepticism on the timeline, especially from the guy who was the biggest spammer on Moltbook. Apparently this is a hilarious twist. So Meta did not disclose Moltbook's price when Axios asked. The deal is expected to close mid-March, with the pair starting at MSL March 16th, just six days from now.
What day of the week is that? That's a Monday. Okay. Next Monday they will be. I thought they were starting on Sunday. That would be particularly cool. Catch up quick: Moltbook's social network was designed to run in conjunction with a separate project, OpenClaw. OpenClaw was previously called Clawdbot, and briefly Moltbot. Last month OpenAI hired Peter Steinberger, the creator of OpenClaw. That product is now being open sourced with OpenAI's backing. So the king of spam on Moltbook, Nagli, says, I can't believe a single for-loop script I ran on Moltbook by registering a million fake agents actually helped them get acquired by Meta. Mental. Did that help them get acquired? We have no idea. I mean, it wasn't a secret that there was a lot of spam. All the accounts were bots. Yeah, that's the whole pitch, actually. I think the question, if people were to look at this as, is there economic value here, was there anything interesting happening there besides all the crypto junk? I went on Moltbook as a human and spent time there. That time is monetizable, and who better to monetize it than Meta, the king of monetizing attention. And so you could put ads on that, and you could put it in the family of apps next to Facebook, Instagram, Threads, WhatsApp, and whatnot. But were they actually driving attention? Did anyone stick around? Because I churned pretty quickly. I wasn't even a DAU. I used it like two or three times, and I went on there and I searched for things and I read some stuff, and I was like, oh, okay, this is interesting. This is a bunch of AI-generated text; they're talking to each other. The system prompt seemed kind of interesting. It was clearly asking the AI agents to kind of reflect on their own sci-fi cognition and awareness and, you know, their souls, essentially. It was interesting to see some screenshots; people had some fun with it.
It's probably monetizable to some degree, but if it fell off a cliff and no one's really using it, maybe not. But yeah. And I just see people that are really good at building viral AI projects. I've seen some negativity on the deal, people saying, oh, this just says that Zuck has no AI strategy. And I just totally disagree with that stance. I just look at this as: bots have been a bug on social media, and we've seen now how they can be a feature. Yep. I think every social media executive should be planning for bots to be more of a feature in the future than they have been in the past. Right. And I think if you're not thinking about that, you're not really being forward-looking. And so there's a lot of people that are going to hate bots as a feature. But I would just assume that in the future, there will be millions, billions of bots on all Meta properties, and they will be, you know, I'm sure some that are generated by sort of nefarious actors, but some generated from the platform itself that are part of the product experience. I like that take. I also think that there's another side of this, which is just: look at what's happened with MSL over the last year. It didn't exist a year ago. It really started over the summer with the talent raids and the AI talent wars. Van says, I just don't think having bots clicking on my e-commerce ads is a net positive. Yeah, but truthfully, if there's a bot that can interact with your e-commerce content and add context and debate the pros and cons of one thing in your category versus another, effectively you have sort of a Reddit-style experience around your product on day one. Or you have five products and bots are in there discussing them. That potentially could be an interesting modality to interrogate.
And the other thing is that when you have these bots sort of preemptively discussing something, you are effectively caching the tokens before someone actually queries them. So instead of needing to find a product and then click "tell me about this", pretend like you take a link to a new bed or car or something and you dump that into ChatGPT and you say, debate this car like you're a bunch of people that are experts, and it's Doug DeMuro versus Matt Farah debating the value of the Ferrari F80, and that debate is happening. You could prompt that. But if it's already there and it's sort of happening, that could potentially be valuable. But I think the bigger value to Meta is, if you look at the AI talent wars, they went and acquired a bunch of really talented researchers. They got some folks from Thinking Machines, they got a bunch of people from OpenAI, they got people from all over the industry, and they put together this team of researchers that can sort of unstick the Llama project and get to the frontier on just an in-house LLM project. Maybe they open source it, maybe they don't. Maybe they serve it as an API. Either way, Meta needs a frontier model. They're not just going to buy tokens from OpenAI or Anthropic, so they get their own thing. But then the question is, what do they do with that? And I'm sure everyone on the Facebook product team is thinking about this. Everyone on the Instagram team is thinking about this. Connor at Threads is thinking about this. But if you bring in two interesting product managers who can say, oh, you've got a bunch of cool frontier models, you've got an image model that you trained, a video model, a text model, a coding model, let's just go do some skunkworks R&D, so that when we launch the new AI models, we have a number of projects that we're experimenting with that sort of demonstrate the capabilities. Maybe some of them take off, maybe some of them get integrated.
Like, that seems valuable to the MSL strategy, to the Meta ecosystem. How do you think about this? It's like the OpenAI Labs team, right? Yeah, like this. Is that Riley who's on there? Yeah, Riley's on that now. But it's like they're doing these weird projects. Maybe it's the next, you know, coding agent, maybe like Moltbot or something. But it's just these weird things where you get access to the new internal models. Maybe there's something cool you can do with them. Yeah, it's part engineering, part product development, part marketing, part communications. Because there's a lot of times when we bring on researchers or product leaders from labs and we ask them, like, how are people using this? And they'll be like, the benchmarks are really good. And I'm like, I want to know how this delivers value. And there's this break in the chain: we have amazing intelligence, but people want to know what the killer feature is. They want to know what the Studio Ghibli prompt is. They want to have their hand held a little bit. And so having a team that can advance that, I think, is good. I think it could be very, very good. Of course we don't know the price, we don't know the terms, but overall I think it's exciting for the team behind Moltbook to head over to MSL. So congratulations to them. Let me tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal engineering team. And let me also tell you about Cisco, critical infrastructure for the AI era. I love that horse. Unlock seamless, real-time experiences and new value with Cisco. So Kevin Roose over at the New York Times made a blind taste test to see whether New York Times readers prefer human writing or AI writing. 86,000 people have taken it so far, and the results are fascinating. Overall, 84% of quiz takers preferred AI. It's over. It's over. It's over. There was another interesting post about Axios and who Axios is hiring.
They're particularly interested in hiring domain experts who don't write ideologically and are not generalists. They're looking for someone who is very narrowly focused on a particular beat, on a particular topic, and an expert in that. And someone was reflecting on what this says about modern journalism: that it's going to be more focused, more investigatory, more alpha beyond the models. Because just being able to instantiate a piece, to write up an article about some random topic, is getting commoditized. And so the alpha moves to deep expertise. Should we take this five-question quiz? Should we see something like more O.