June 4, 2025

AI Severance: One Memory or Many?


In this episode of Hallway Chat, Nabeel and Fraser explore the future of AI through the lenses of memory, context, and product design. They debate whether users will maintain separate AIs for work and home, why visually verifiable output is critical to product success, and how startups should behave in a world where the best models become commodities. From the case for uneconomical AI to the trade-off between 500 average agents and one brilliant one, they unpack what it takes to build great AI products—and when, if ever, VCs should preempt funding rounds.

  • (00:00) - Chapter 1
  • (01:54) - Will AI Democratize Like the iPhone?
  • (03:03) - Should You Burn Tokens to Win?
  • (06:25) - The Case for Going Big with Compute
  • (09:09) - 500 Agents or One Genius?
  • (18:19) - The Importance of Proof of Work
  • (19:49) - Building the Right UI for Agents
  • (22:23) - Visualizing AI Output: A Design Challenge
  • (33:37) - Memory Wars: Work AI vs. Personal AI
  • (35:01) - Consumer Benefit > Privacy?
  • (37:36) - What's at stake: your memory
  • (45:28) - Model Switching as a Feature, Not a Bug
  • (53:53) - VC Talk: When to Preempt a Round


AI Severance: One Memory or Many?


[00:00:00]


Nabeel Hyatt: So then what? What are the long-term implications of the idea that these are going to have very, very long contextual memories for what we put into them...


Fraser Kelton: Everything that you just said is the most important set of problems from a product perspective to be working on over the next little bit.


Nabeel Hyatt: Human behavior, first of all, doesn't care about privacy, and second of all, finds ways around it if there's a consumer benefit.


Fraser Kelton: It reminded me that I'm now in an environment where somebody is watching absolutely everything that I do.


Nabeel Hyatt: We do not as an economy reward that level of intellect.


Fraser Kelton: That's the most optimistic story I've heard yet for why a horizontal agent that's not from any of the labs could still become quite important.


Fraser Kelton: Hey everybody. Welcome to Hallway Chat. I'm Nabeel. I'm Fraser. Welcome back. One of the great [00:01:00] things about technology over the past, certainly the past 15 years, is that whether you were the world's richest person or somebody making a median wage in middle America, you had access to exactly the same technology.


Nabeel Hyatt: You're both using an iPhone. That's right. Doesn't matter. Bill Gates uses the same. Yeah, he uses the best phone. Maybe not Bill, 'cause he's at Microsoft. Yeah, that's right. He does use the best phone. By the way, I resent this, 'cause I lived through an era where I was trying to import random, weird Japanese phones. Yes.


That were weird and quirky and were trying to be a camera or a note-taking thing or whatever. I lived through the era of plentiful phones, and now I just have the same glass sheet as everybody else. But yes, it is democratizing: equal access to the very best product. Yeah, so that's a good framing.


Will AI Democratize Like the iPhone?


Nabeel Hyatt: The question I was gonna ask, 'cause we were covering this a little bit last night, is: do you think the same's going to [00:02:00] be true? No, no, more assertive: if that was going to be true in five years. I see. If you felt like the same thing was actually gonna happen to AI models in five to six years' time, which is to say, if you were a founder today, that is inside the window of you starting a company to getting an exit.


So it's like, in the lifetime of your company, the best models will be accessible to everybody at a reasonable, accessible price. How would you behave? What advice would you give them? How would you behave differently?


Fraser Kelton: Oof. I'm going to answer that, but stick with me, because I think you need to tighten up:


what has to be true? What do you have to believe for that future to be the case?


Nabeel Hyatt: Oh, no, I wanna do that at the end. You don't wanna do that at the end? I wanna do that at the end. Okay. I wanna jump to a world where you're sitting down with the founder. I see. We just had this hallway chat together.


Okay. And we realized we've convinced each other 100%. I see. That in five to six years' time, everybody has access to the best model at a reasonable price. Yep. And you're like, oh, I need to go [00:03:00] take action; I should go talk to founders about this. Yep.


Nabeel Hyatt: What would you say?


Should You Burn Tokens to Win?


Fraser Kelton: Provide the best possible experience at a low cost and absorb the losses in the short term.


So we see people say: oh, I'm losing money on these queries, and so I either have to put in a credit system or some other type of friction on use. Yeah. So that they can be unit profitable. And I think you would tell them, F that. You say: no, you are going to provide the best experience and you are going to absorb the cost, because in five years...


Nabeel Hyatt: Presupposing that you have enough cash in the bank to do that.


Yeah, for sure. Which is why you need VC dollars, sounds like. I think a VC would say no. Your point, if I just sum it up, is: oh, if I knew that it was about to become commodity, that the cost was really gonna fall that fast, then you would be okay taking a loss leader if you believed it provided a better experience.


[00:04:00] Yeah. Look at Google's new video model as just the latest example. You should just see what happens when you throw tokens at the problem.


Fraser Kelton: Yeah. I had a dinner with my in-laws the other night.


Nabeel Hyatt: Yeah.


Fraser Kelton: And I was telling them about a remarkable experience I had with o3. Yeah. And my father-in-law said, is that available on the free version? I have no idea what their pricing is anymore.


I don't know if they have a very limited amount of o3 tokens or not, but like I said, I'm not sure. Mm-hmm. And in that case, I think if you believe that we're heading towards that world, you would provide the best model to all of your users.


Nabeel Hyatt: I think, probably more than anybody, OpenAI is trying to exhibit that behavior, right?


Yeah. They are trying to move as much stuff to the free tier as possible, eating an incredible amount of money, because they do believe that this is true. And so the reason I asked that...


Fraser Kelton: I feel further than that. I think that ChatGPT, the phenomenon that hit the zeitgeist the way it did. Yeah. Can be chalked up to a handful of wonderfully serendipitous good luck and good fortune, as well as the ability [00:05:00] to redeploy a cluster of the size that the research team had at the time.


Yeah. And continue to make it for free. Yeah. No startup in that moment in time could have done that.


Nabeel Hyatt: Yeah. The reason I asked it as an assertive yes, uh-huh, is because I don't know whether it's realistically actually the iPhone and everybody has access to it, but I also think that nobody can predict how quickly the price is going to drop.


Yeah. Especially over that kind of timeframe, and so effectively you should behave that way anyway. I got in a debate with a CEO last week where I was literally having this conversation, actually, and I was reminded of it last night. Hey, you're doing something agentic. It's cool. The thing is running away and it's doing a thing.


I saw a demo, and this is for a portfolio company, and it looks amazing. But what if you throw four times the tokens at the problem? Profitability is a problem, not sure how we'd price it, all of that. Like, have you done it yet? No, it wouldn't make economic sense. It's like, well, it's possible that you throw five times the tokens at the [00:06:00] problem and nothing happens.


Yeah. But at the very least, you better know before your competitors do what that does. Go figure out whether throwing $400 at a single query from a user, yeah, instead of 4 cents, generates materially different and more interesting output. Because if it does, you better make sure you're there before other people are. Of course the cost curve is gonna go down, and predicting how quickly that's gonna happen is a fool's errand.


The Case for Going Big with Compute


Nabeel Hyatt: You just have to be ready for it to drop. And maybe you hold it if you only have a million dollars in the bank. Okay. So maybe you don't release it to everybody; you release it to two people, or you just show it on Twitter, or frankly you just show it to investors to raise the money, whatever it is. But you best be testing what it feels like to throw an uneconomical amount of money at a set of agent behaviors and seeing what comes out of it, to figure out where you are on your curve of expertise.


Fraser Kelton: I wonder if there are two separate things here, and I think what you just said is undoubtedly the case. Okay. You should [00:07:00] not worry about the budget of your tokens as you iterate to deliver the most delightful product experience that you can. Yep. And find product market fit. Yep. Because in delivering that product, you have high certainty you're gonna be able to reduce the price, and the much harder thing that we've seen is being able to deliver something that people actually care about today.


Sure. There's another question as to whether, if you always have to scale compute and throw tons of tokens at the problem and be on the frontier model, you should just peg and stay at that place, because your scenario was a finite moment in time. So we had a company who delivered that.


They launched their product. Yeah. Their spend exploded. Yep. They then spent the next couple of weeks driving down the cost 50%, and then 50% again. And that's even before riding the deflation.


Nabeel Hyatt: Yeah.


Fraser Kelton: Great. If they hadn't been able to reduce the cost and they had to actually sit on the [00:08:00] frontier, as every new model comes out year after year, should they stay delivering that product and absorbing that loss over the next handful of years?


Nabeel Hyatt: Yeah. What's your time horizon, is really the question. Yeah. Yeah. I think in a world where most of the feedback cycles are tuned short: it's three months in an incubator, it's one year until your next fundraising round. Yeah. Being long-term greedy while being short-term high-velocity is the winning formula for most of the outperforming startups that I interact with.


Those CEOs might word it differently. Yeah. But that's basically what happens. The people that are incredibly short-term oriented, what do I have to do just to raise my next round and that kind of stuff, tend to make a lot of very short-term decisions that don't play out. The people that are only long-term oriented,


a lot of times what that actually means is: I don't wanna face failure early, I'm hiding. And the balance of the two of those things is the key. Now, there are times where I have three months of cash flow left. I mean, I've been there as a founder, blah, blah, where you're like, I can only think short term.


Yeah. Life gets in the way; sometimes you [00:09:00] can't. And similarly, if you have a million dollars in the bank, you're not subsidizing for the next five years anyway. But in terms of orientation, what are you trying to get done? Yeah. I think you wanna be long-term oriented, right? No? Yeah, I think so.


500 Agents or One Genius?


Nabeel Hyatt: Yeah. I think so. Another question. Okay. This is related ish. Would you rather five I. Thousand iffy agents that are pretty good at what they do, but reliable but not crazy brilliant Uhhuh or one crazy brilliant, smartest person in Google versus 500 mid-level Google people. Do you want one Alia or 500


Fraser Kelton: mes?


I think it totally depends on the task. Yeah. Yeah. Undoubtedly it depends on the task. And what does that mean though? Well, so first of all, do you disagree with that? I think that there's certain science questions that I don't want to, I don't want to throw out the wisdom of crowds. The, like you want to, you want somebody who's brilliant to solve the hard science problem that hasn't been solved.


And [00:10:00] then you don't want the one brilliant person to crash through the expense report automation.


Nabeel Hyatt: So the dinner-party, academic, like EA version of this conversation is: you of course want the one brilliant person, 'cause the one brilliant person will figure out how to manage all the other people, and they'll run the economy, and they'll also make themselves more brilliant, and they'll be, in effect, ASI and all of the other things.


Right? So the kind of party-conversation version of this is fairly obvious. I think an economist would answer that it's 500 mid-level people. Yeah. Because if you just look at supply and demand in the market, it's very clear that the most brilliant person is not paid 5 billion times more than 500 average people.


Right? We do not as an economy reward that level of intellect. Now, you could also make an argument that intellect is not directly related to economic output anyway. So that's why you don't see that loop. Is that true?


Fraser Kelton: Does the average [00:11:00] worker not make however much less than Greg Brockman?


Nabeel Hyatt: I think compensation is tied to a wide variety of things. Sure. Yeah. I mean, the risk that we took on, all that other stuff, and yeah. I think we give an incredible amount of economic benefit to extroversion.


Fraser Kelton: Yeah.


Nabeel Hyatt: We give an incredible amount of, yeah, economic benefit to males for being male.


Yeah. To people who are friendly. Yeah. There's a bunch of social science studies. Yeah. About this kind of stuff. Like agreeableness. Yes. Yeah. All things being equal, meritocracy. Yes. Yeah. All things are not equal. Humans have preferences. This isn't Freakonomics. Sorry for derailing us. We're getting to your point.


Fine. No, but the loop I would come back to is: how many things do you do every week? Yeah. And how much corporate output, how much personal output, are you trying to be in the top 1% in, and how much of it is just work that needs to get done? I do [00:12:00] not need the top 1% best house cleaner on the planet, and I am not willing to compensate the best 1% house cleaner.


I just need somebody to help with the dishes sometimes. And my Uber driver, I do not need the best driver on the planet. I just need somebody who's gonna stop slamming on the accelerator and making me carsick. What about your Thursday at work?


I think we ask this question all the time. I would argue that, I don't know what it's like... we could talk about what a Spark startup is like. But Fraser, look, I guess you're gonna throw it back in my face, because what I'm trying to lean into is you just want 500 people doing really good work.


But my actions would betray that given that I strongly advocate at Spark to keep the partnership to six, seven people and to not scale.


Fraser Kelton: No, but I push us there not to throw it back in your face, 'cause I actually think it's interesting. My guess is that the [00:13:00] lion's share of your job, yeah, would benefit more from having 500 people running around.


And then I think there are very acute moments where you are probably in the P99, and it's decisions in those moments, and your ability to perform in those very small moments, that make the difference, though.


Nabeel Hyatt: Is there a rubric for that? 'Cause this would lean into how you think about model development if you're a founder. This would lean into how you would think about expenses and revenues.


It would lean into what products you take on now versus three years from now. I mean, this is a parlor conversation, but actually I think it has a lot of implications for where you'd lean. So is there a rubric for the things that you would want 50 or 500 or 5,000 or 500,000 mid-level people to do versus the one genius? Which would be another proxy way of saying: what are the tasks that we as a startup community [00:14:00] should try and take on right now?


And which ones should we wait on? That'd be another way of wording it, but that's not the phrasing I want. Yeah. I want it to be: is there a way of framing or thinking about the job to be done, the work, that would help you figure out one or the other?


Fraser Kelton: Are there marginal returns for being exceptional at the task?


And I think that, as much as I don't like to admit it, for most of our tasks there aren't.


Nabeel Hyatt: Right.


Fraser Kelton: And then there's a small, finite set of tasks where it's true. And what does this mean? I think that there's...


Nabeel Hyatt: That's the thing we've found with coding.


Fraser Kelton: Yeah.


Nabeel Hyatt: The thing we've found with coding over the last few years is that when I sit down with really amazing programmers and they look at vibe-coded code, they're like, this is horrible. Uh-huh.


And the whole point is: yeah, it gets the job done, it's fine. I understand it's not as structured or as efficient as you would've done it, and it can be done better, but average is fine. It executes, the webpage comes up. Yeah. The search function works. [00:15:00] Yeah. And that's knowledge work. So I'm not even talking about brute-force, copy-these-things-out-of-an-Excel-spreadsheet-and-put-them-in-my-database stuff.


Fraser Kelton: I'm still of the mind that you are going to want to have software engineers orchestrate 500 agents to do those tasks, and then on the stuff where they can actually have an edge and add value, they're still doing that. What's the UI layer for orchestrating 500 agents? Well, isn't that... that's a great question.


What is the UI layer? And let's start with the simpler question, and that is, do we need a new UI layer for orchestrating agents?


Nabeel Hyatt: This is the conversation that, yeah, we were having with one CEO recently who was just saying, we already have a way to talk to 500 agents and it's called Slack and email and all of the other office affordances.


You're just building Microsoft Office for agents, basically. Or, you know, you don't even have to refer to 'em as agents. [00:16:00] And I'm sympathetic to, and love, the simplicity of that. You're not teaching a person anything new. They're using project management software the same way they've always used project management software.


They're just assigning it to an agent to do, or not. Yeah, there's a simplicity to that. That maybe is an in-between step. For me? I don't buy any of that. You don't buy any of it? I don't buy any of that. I Slack an engineer and I say, listen, we gotta make something, it's due by Tuesday, and they run away and they do stuff and they try and get it to me by Tuesday.


I don't know whether that's a human or an agent. It doesn't matter. Why not use the things that we have evolved software to be good at, which is communicating with other entities? And this is just an alien in our midst.


Fraser Kelton: I can believe that. Okay. Let me push back. Yeah. My response initially was: because I think we're a ways away from having agents be able to take a task like you just said and come back with the output.


And in essence, that's what the product is today for the most part, or the [00:17:00] products that are getting built. And whether it's in Slack or otherwise, the more interesting thing is mimicking what happens with that teammate today. It would be a very special work relationship for you to give three directions to somebody who runs off and works for two weeks and comes back with exactly what you were looking for. Or else it was so mundane a project that there was no nuance that had to be teased out and understood around what the objectives are, what the goals are.


Nabeel Hyatt: Okay. Okay. I think you're touching on something that I have been thinking about all morning, and I think I just got to a little bit of... alright, let's do it... a little bit of clarity, so now you can muck it up once I say it. I don't think it's complexity of the task. Uh-huh. I think it's visualization of the output.


Okay. And when a lot of people have a conversation about an AI model and how it gets better, especially if we're talking about something like, say, vibe coding, one of the ways that [00:18:00] we talk about why it works for coding is because you can evaluate: you have evals. Yep. You have an evaluation of output that's deterministic.


You can figure out whether the code runs or not. Does it compile, blah, blah, blah. And that's supposed to be what helps. I think that's a little bit of it, but it's actually not the broader point, because that doesn't abstract out to everything. As a good example: then why does something like Midjourney work?


The Importance of Proof of Work


Nabeel Hyatt: Uh-huh? How about random webpages where you don't even know if they run or not, but they just come up really fast? Yeah. Those companies seem to be doing well early too. I think it's proof of work. I think the correlation is: the parts of the economy where the AI can very quickly generate proof of work are the stuff that's working, and the stuff where you can't generate proof of work easily


is the stuff that's struggling a little more. And even inside of coding this is true, and I'll explain what I mean. Okay. If I am coding a Lovable webpage. Yep. Really fast. Yep. Comes up in 90 seconds. Okay. I have a visual display of all of the things that were done, and I can look at it [00:19:00] in half a second.


I can be like: this is broken, I didn't like that, that thing was over there, stop doing that, blah, blah, blah. I don't know how many hundreds of lines of code were written, or thousands, or millions. Yeah. But in 15 seconds I have proof of work, an evaluation of all the things that went into something. Right.


Okay. You do the same thing on server-side coding. Yeah. You don't trust it at all. Yeah. It runs away for 10 minutes. It makes a bespoke database from scratch. It does blah. You know what you're doing? You're inspecting code for the next 20 minutes, and there's no proof of work. There's no way of having a visual proof of the thing it just did, in a way that is evaluable by the person on the outside.


This is also why producing text works really well. Hey, make this deep research report. The research report comes out, I glance at it really quickly, and I'm like, no, this whole thing's too pedantic. Can you be harder on me, and blah, blah. And also find evidence.


Building the Right UI for Agents


Nabeel Hyatt: But really quickly I look at the thing, and you can have a visual evaluation.


Yep. And so I think there are these two camps, and we just get in these loops of conversations where it's: do we need new UI, or is chat gonna be enough? Yeah, it's not enough. There are gonna be new ways to talk to AI in the future. And then the other side is: no, that's how you talk to humans,


so that's how you talk to the aliens. I think it's more: if you are in a knowledge-work economy in which the thing that's being output or produced is from the AI, not from you, so think output, not input. Yep. Is quick and visual in nature. Yep. And you can instruct it quickly through chat, then great. Okay.


And probably the lowest common denominator is gonna work: chat. We already know how to chat with people. I can text a friend, you can send me back an image, I can be like, I don't like that image. Yeah. It's not Ghibli enough. And then you can go do it again and bring it back to me. Okay. Like that works.


And so that's the loop that all of AI is in as well. And yeah, all the text stuff, and some of the coding stuff, when the coding stuff is making an artifact, when it's making a quick and easy artifact. Okay. But for work that's more abstract in nature, I think it will struggle for AI adoption [00:21:00] no matter how good the models are, until we invent new visualization layers


that help to show me the work that was done. I have a friend who's working on a way to do AI prompting of games. Okay. And so I can say: make a first-person shooter, or make a chess game, or whatever. And the output, to a certain extent, is visual. I can fire it up and I can see a thing. But the problem is, let's say it's made five levels of a game without me playing through the whole thing.


Okay. There's no visualization layer, like a site map. We don't have a ludology or an encyclopedia or a visual way of explaining how Assassin's Creed or Fortnite. Yep. Actually plays. What is the feel of that game? We haven't yet invented some layer, and so the problem is then the feedback loop is incredibly slow.


I can't figure out that you made a decision on level two to make it almost near impossible. And by the way, the gun should have come in a little bit earlier, not later, and I don't like the recoil amount, and that's not really fun, 'cause I suddenly lose track of who I was targeting. And a million things go into [00:22:00] something like building a game.


Yep. That we just don't have a visual language to describe all that dense data. And so that means I have to play through it for 20 minutes every time, and if I play for 20 minutes every time, my feedback loop is slower. So I just think there are many pieces of work where, if we fast forward five, ten years and we look back and say, hey, when did the AI take off in that environment, it will not just be about model capability.


Visualizing AI Output: A Design Challenge


Nabeel Hyatt: It will be that we possibly invented a new way to visualize the work that was being done, so that the AI could say: oh, here are the 35 decisions I made. And this, I'm looping back to: when do I have 500 agents going and doing things? Okay. When 500 agents have some way of giving me a report of the work that was done, okay, that is coherent and makes sense, and that I can evaluate and say: I think you did two of these 25 things wrong.


Go fix those.


Fraser Kelton: I think I started to get it at the end there, and then I had a sip of my Diet Coke, and so I'm getting focused. Let me see if I can say that back to [00:23:00] you. First of all, the output has to be human-interpretable. Yeah. But it has to go further than that: the adjudication of the quality of the work has to be readily available.


Nabeel Hyatt: So I could see your judgment.


Fraser Kelton: Yeah. Yeah. Profluent outputs an amino acid sequence. Yes. Not human-interpretable. Yes. Very hard for us to automate without going to the wet lab and everything else like that. The interesting thing is, you do it in the wet lab, and I'm sure that you then get charts at the end of those tests that are human-interpretable.


That's right. So maybe that's the path toward automation then. Yeah. I see. And so then, back to your 500 agents, and when you want a mass number of median workers versus the exceptional worker, and when you want a new UI: how to think about all that stuff is, if you can look at the work product, you can either have it engender trust or you can give feedback and more direction.


That's the path. Yeah. Okay, [00:24:00] sure. I get it. And in software engineering you have it with front-end tasks, it's perfect for front-end tasks. Probably for a bunch of different reasons as well, but the reason that you can put the model in and get good product out of it for automating code is this reason. But then on backend, there's a bunch of different complexities as well.


Nabeel Hyatt: I'm just calling out that if I have this right now: I'm working on a website, something's messed up on the front end. Yeah. I can see the problem I made, I can fix it, blah, blah. Yes. And then yesterday I had a login flow that got messed up. Yeah. And I've been trying very hard to not look at the code.


Yeah. 'Cause I'm only allowing myself to talk to it in prompts, just for discipline purposes. Yeah. Because I'm trying to get used to what this feels like to use if I've never been a coder before and I can't analyze it myself to figure out what's wrong with the login. Why is that?


It's because we don't have some way of actually showing the logic of what's going on yet. When we use one of these code editors, yeah, we can show the visual layer, but we've not shown the logic [00:25:00] layer. Okay. Very interesting. So the AI-researcher version of this conversation, which we have had before with some of these places, is: oh, that's just model capabilities.


All I gotta do is wait for the model to get smarter: more evals, more smart people solving bugs, and then the model gets better and then it doesn't happen anymore. And I was like, maybe. Sure. Yeah. But more importantly, you can't show me... really interesting... a chart of this logic, which would let me point out where you made the mistake.


Fraser Kelton: On front end, you run it locally, you deploy, you look at it, you tweak your code, you fix it, you go.


Nabeel Hyatt: Yep.


Fraser Kelton: On back end, there's a whole process. It exists on the front end as well, but you have your tests, your unit tests, you have the concept of a pull request with code review. Oh yeah. You pull in the person who's an expert on these other systems. Yep. So that they can do the code review. And the absence of that today...


Nabeel Hyatt: Yeah, I get it. That makes sense. Or, put differently, we've gotten to this point where the AI models [00:26:00] in codegen, some of the more advanced ones, are leaning towards mini-PRD land. Yeah. You talk to Replit and it's like, I think, do these five tasks. It's not even a PRD, right? Yeah.


It's really just a task list: I'll do these five things, I say that back to you, and you feel better. Yeah. And maybe the right visual feedback is to think of it like it should be generating a real PRD. Yep. It should be generating a PRD at the logic layer of what it's gonna go code. Yeah. So that I can glance through that, even not as a coder.


Yeah. If I'm a PM, I can read a good PRD and be like, dude, yeah, stop on number three. Yeah. Yeah. The logic doesn't test out after that. Yeah. That's dumb. We're not ready for that yet. That kind of thing. Yeah. Maybe the whole point is: how do I manifest this in human-readable language? And until we're there, yeah,


if the human-readable-language proof of work doesn't exist, then those are the areas where you just have to invent it or add it to your product. Oh, and that will compensate for a lesser model, 'cause then your human's in the loop.


Fraser Kelton: Yes, I agree. I agree with all of that. And I feel vindicated by you, because I think that everything that you just said [00:27:00] is the most important set of problems from a product perspective to be working on over the next little bit, because I think the models are quite capable of doing an awful lot.


Nabeel Hyatt: Oh, this feeds back into your... listen, the models as they are today: we've got 10 years of innovation. Even if the models don't get better, we've got 10 years of innovation.


Fraser Kelton: But the thing that needs to be innovated on is getting feedback from the model and giving direction to the model.


Nabeel Hyatt: Yeah.


Fraser Kelton: And think about what you just said.


There's a finite number of tasks in our lives where you get a human-verifiable work product whose quality you can adjudicate, and give feedback on, immediately.


Nabeel Hyatt: Yeah. And those are the ones, more importantly... it's not random. Yeah. That those are the ones that seem to be working.


Well, first, it's not just that the model is quote-unquote good at that. That's right. It's that we as humans are good at evaluating that.


Fraser Kelton: That's right. And so then everything that's outside of that zone. Yep. Should get lit up increasingly


Nabeel Hyatt: by the current state of models. This is where the role of designers comes in, too. That's where the designer comes in and says: look, here's [00:28:00] a task where we, the world, don't know how the model speaks to you about the task it just did. Yeah. And so, go figure out what that proof of work is.


Fraser Kelton: That's it. And it will come from both ways. It will come from the model getting better at understanding that it should come back for feedback or input. Yep. And this goes full circle to our friend Noah's observation that there are generally three ways for a model to be able to do that.


I can't remember what they are. One is like to present the options. One is... we should ask him. Do we have notes on this somewhere? No. Well, let's see if we can suffer through it. Do you know what I'm referring to? No. Oh, it was at the dinner that he was at most recently. Yeah. Where he was saying that there are really only three ways to elicit feedback from the users for the model.


Yeah. And I very much agreed with him at the time. Yeah, it sounded good. It sounded right. That guy... if you wanna know what's happening two years from now, we should just ask him. Yeah. The model can just ask for [00:29:00] direction, freeform.


Nabeel Hyatt: Yes.


Fraser Kelton: The model can present the set of options that are in front of it.


Hmm. And there was a third one that was profound, but I can't remember it.


Nabeel Hyatt: This is it. I found it. Where did you... what? AI search works. Okay. Noah said that the model can interact, and this is what Granola uses, in three ways. Okay. One is making good assumptions on behalf of the user and hiding complexity. Two is coming back and asking the user for input. And three is presenting what the model is about to do and allowing the user to make changes, with smart defaults and a rich UI that is ready to modify those defaults at a moment's notice.


Fraser Kelton: I think that's right: three broad categories, and you can imagine a very vast product space within each of those.


Yep. I think that's the most important thing. Yeah. Innovation on that axis. And it's not surprising, 'cause back to your comment about being in Slack and giving direction to humans: that is what a great collaboration has. Yep. You dictate something for me to go and do, I run off, and I make sensible assumptions. And if it turns out that I show you work product [00:30:00] where the assumptions were brutal,


Nabeel Hyatt: Yeah.


Fraser Kelton: And I spent two weeks on it. Yeah. That's horrible.


Nabeel Hyatt: Yeah.


Fraser Kelton: But if I make sensible assumptions and I show you something that's like directionally what you wanted, yeah, that's great. If I come back and say I'm stuck, I don't wait two weeks. I come back in an hour and I say, Ooh, I'm stuck.


Here is the situation summarized, here are the three options in front of us, and here are the perceived pros and cons of each. What do we do?


Nabeel Hyatt: Those are the right feels. And yet I feel like most of the founders I've been interacting with, and frankly most of the [00:31:00] way I evaluate companies, has not even been asking the question about whether that product is doing a good job of when to do bucket one, bucket two, and bucket three.


Fraser Kelton: I think there are very few products that even do any of that.


Nabeel Hyatt: Deep Research in OpenAI does it a little, but it's an if-then statement. Yeah. It always asks for a little feedback. It always does. Yeah.


Fraser Kelton: That's not smart. It's not a great product experience, but it's the first step towards something. I've used one of those long-horizon agents that we don't need to name, where I gave it a job.


Mm-hmm. And it spent six hours... no, I don't know, like 45 minutes, grinding on something. Yep. And it came back and it showed me the output, and at like the second step there was a flaw in its assumption, and it was all wasted. Which also goes back to visualizing the work.


Nabeel Hyatt: Yeah, absolutely. Yeah. When you were running away there for a second, on the prior topic you said: [00:32:00] does it have proof of work?


It has implications for our day job. What did you mean by that?


Fraser Kelton: Kinda like what you were just saying around: I haven't been asking these questions. I assume that this is a solvable design space, from a product design perspective rather than just a model perspective. And so, more to the point, what did I mean: we just mentioned that the products that we've seen get traction. Yep.


Have been ones where the verifiable proof-of-work feedback loop was present, without having to innovate on the feedback mechanism. There are going to be things that are one step outside of that. Yes. Where, with very simple innovation on the feedback between the user and the model, or the user and the agent,


yep, there will be domains that become solvable. That's right.


Nabeel Hyatt: With today's models. That's right. And similarly, if you are trying to build a team, it's also about casting properly for the problem. Yeah. It's understanding: oh, do [00:33:00] we need another prompt engineer, or do we need another AI model builder?


And it's like, oh, actually we need somebody who's gonna be thinking about the proof-of-work problem: how do you visualize the decisions that the model made back to the person? Yeah. The technical version would be some kind of mechanistic interpretability, but I think we're also talking about the application layer, a hundred percent, which is just: make a PRD.


Fraser Kelton: It is both. Hopefully we'll see innovation on this at the model layer. Yep. But it's totally at the application layer as well. Yep. At this point: what is the clarity that you require to be able to go and do the next meaty piece of work? Got it. Next question. Okay.


Memory Wars: Work AI vs. Personal AI


Nabeel Hyatt: What are the secondary and tertiary implications of the idea that these models now have long-term memory? AKA, ChatGPT releases memory. The first implication is: oh, lock-in. Right? Like, that was my first... oh, maybe I should chat with ChatGPT more than bounce around to 15 other different chat products,


because now I know that I can ask [00:34:00] it questions about how I thought about things over long periods of time, and blah, blah, blah, and I want that memory in one place. Sure. I somehow don't think that's how it's all gonna shake out. Uh-huh. So then what? What are the long-term implications of the idea that these are going to have very long contextual memories for what we put into them? If anything, did you have the same emotional response with ChatGPT when it said, we have memory? You're like, tell me more about myself.


And it grabbed everything from the last... whatever it is for you, probably longer than most people. Uh, I didn't. No? No, you didn't? Did you give it all the little prompts they put on Twitter that were like, tell me about myself? Oh, sure. Did you do that? Sure. Yeah. But it had no implication for you?


Fraser Kelton: Uh, no. I mean that, that, it's interesting.


Nabeel Hyatt: It's revealing, it's frightening. Didn't change your behavior at all? You went right back, sent your next text into Claude or Perplexity or whatever, bounced around.


Fraser Kelton: No, I've been wrestling with, I think, probably a second-order effect of all of this, and that is: am I going to make the decision myself, or have it thrust upon me by my employer, to enter a world where I [00:35:00] have a work


Consumer Benefit > Privacy?


Fraser Kelton: ChatGPT. Yep. And a home ChatGPT. Yeah. Like a consumer and an enterprise version.


Nabeel Hyatt: I don't even think you have to get into all the normal, yeah, work/home profile context, to just simply say: do you want two different sets of memories? Yeah, that's right. That's right. You know, like, do you effectively want AI severance?


Fraser Kelton: Yeah.


Nabeel Hyatt: Yeah,


Fraser Kelton: That's right. Yeah. Do you want your primary handle on Reddit to be aware of your non-Reddit handle?


Nabeel Hyatt: Yeah. To also be aware of your home email account, and also be aware of your texts with your wife. Yeah, I do. You do? I do. Work as well, everything. I think if I worked in a place... we work in a weird place.


We have lots of agency about the IT that gets involved here. But look, if we worked in a place where... here's what I think is gonna happen. I think a whole bunch of founders are gonna [00:36:00] make the horrible mistake of assuming there's a split between work and home for very similar products. Uh-huh. So there's a world where a product I need at work is different than the one I want at home, even though they feel somewhat similar.


Right. Okay. So the spreadsheet I use at home and at work is different, so maybe there's two spreadsheet products, loosely as an analogy. But if we play out the work/home analogy, in reality, for most corporate software today, most of the ways I interact with software are the same. Email is basically the same in both. Uh-huh. Spreadsheets,


same in both. PowerPoint, people use that for both. Yep. Most of the things that you use to make things. Yep. Blur the line. The only times they don't is when a corporate mandate, security, we-won't-let-you. Yep. Happens. And I'm not saying that won't happen; it's happening right now. I have a buddy at Amazon, a senior executive at Amazon, and we were just talking the other day about all of the rigmarole he goes through in order to try to stay on the cutting edge of AI.


And that's not easy. But I look at human behavior. And human [00:37:00] behavior, first of all, doesn't care about privacy, and second of all, finds ways around it if there's a consumer benefit. Yep. And in order for an AI to be the most helpful thing it can be to me, it needs context, and it needs context about my wife and work and my hobbies and blah, blah, blah.


In order for me to, for instance, at the end of the year, be like, Hey Jimmy, my AI friend, can we talk about next year's goals? I think as we evolve these creatures around us, we are ostensibly in a context war.


What's at stake: your memory


Nabeel Hyatt: That's what all of this boils down to. The same way that there were previous social wars, where it's like: who's gonna get your friend graph?


And then just after that was the kind of data war: who's gonna get all the data? Yeah. I think this is different than a data war, in that it's not all the data on the internet, although we're certainly playing a little bit of that war too. This is the value to you of an AI truly understanding you. The user benefit of all of that is too high for people to not break whatever corporate [00:38:00] rules are put in place, is my contention.


Fraser Kelton: In three years' time? Yes. Is IT and security and privacy going to be able to monitor all of the information that gets sent to these?


No, I don't believe that.


Nabeel Hyatt: Look, there are gonna be total lockdown procedures. Yeah: where I work a government job and I wasn't appointed by Trump, so I actually have to obey all security procedures and I have to use whatever software happens to be there. But by the way, there was a time period where the government decided nobody was allowed to use cloud software.


What happens? Oh, there was a time period where you were definitely not allowed to use Gmail or Google Cloud. That's insane. Those people make search engines and stuff. And guess what happened? The benefit to the user was high enough that over time all that stuff crumbled. And so if you fast forward... I don't know if we're saying different things here.


I think there's gonna be a user benefit to home and work context being blurred, because we are one human, and I want this [00:39:00] thing to understand all the context of everything that's happened in my life, 'cause it'll be more useful to me. And that will override any one job. Because, more importantly, I want it to know about my last job.


Like, I've had four jobs in 10 years or something like that, and I'm trying to get advice from this thing about what my next job should be. The best way it can give me advice is if it knew everything I did at all those jobs.


Fraser Kelton: I can't imagine. I'm trying my best, I'm trying my best. I can't imagine a world where that happens within a corporate environment.


Nabeel Hyatt: Don't get me wrong. I think there will be a couple of wonderful, enterprise, locked-down, security-focused, boring software companies that are just there to give you a gated version. They will rise to a billion dollars, and then they will die. Because over time, I think, except for maybe government or really law, there are a couple of very, very small areas where you really have to. But otherwise, outside of that, in general corporate environments, the benefits win. Because again, [00:40:00] you said: can an IT person lock it down?


At this point, five years from now, I'm wearing AI smart glasses that see everything in my life.


Fraser Kelton: Yeah. And streams it to IT? No, streams it to me. I bought the glasses, man. Okay. Okay. Because it's beneficial to me. Okay. You are not logging in with your Procter & Gamble account into this environment.


Then I am on my computer, but I'm still wearing my glasses. But who's getting the... I'm so confused right now. I somehow avoided our IT onboarding. Yes, you did. So did I. Yeah. And I got a note, didn't I, to my computer. Oh, well, I got a note that said you have to get software that allows us to monitor it.


So I added that last Friday. Oh yeah. They shouldn't have done that. I've learned my lesson, 'cause I got an email today that said: you have a virus, an application with a virus on your computer, and it's sitting in your trash can.


Nabeel Hyatt: Mm-hmm.


Fraser Kelton: And first of all, I was like, I don't think I do. And my second response was, [00:41:00] how on earth do you know that it's sitting in my trash can?


And I felt so uncomfortable. You felt violated. Yeah. Yeah. Felt violated. Yeah. And it turned out it's a startup that agentically takes over your computer, and I can understand why they thought it was a virus. It's an agent. It's an agent that you cede all control to. Yeah. Yeah. You do that every two weeks.


Yeah. And yeah, it reminded me that I'm now in an environment where somebody is watching absolutely everything that I do. Yep. We certainly don't have that on our Claude accounts. And I ask Claude for things, uh, with my family, like: help me work through this complicated situation. And all of a sudden I'm going to have to start thinking about: is this the one that I want to put into this profile?


I very much agree that people give up privacy all the time for consumer benefit.


Nabeel Hyatt: I'm coming off more determined than I really am, for the purposes of discussion. Yeah, of course. I understand that. Even today, I log into Google and I have to switch [00:42:00] between my Gmail work account and my Gmail personal account, and I keep my Google Photos in my personal account, separated from my work account.


I'm not saying there might not be two memories. Okay. Or frankly, there might be, like, 50, because I'm logged into AI agents that are doing all kinds of different things in my life. Yeah. I suspect that AI does so much more with more context that I can't imagine personally living in a world, once these things are smarter, where I don't have an AI that has all of the context. And I suspect, much like a lot of the privacy arguments, that as long as that benefit accrues to a user, and a user can see that this thing knows me and really knows me, uh-huh, they will do whatever they have to do in order to make sure that thing can really know you.


And maybe that becomes a tit-for-tat war with IT, right? Yeah, but there's supply and demand. Some founder makes a startup that gets around IT's weird thing and installs anyway, so it can get context, 'cause the consumer can then ask it [00:43:00] personal questions and it knows what its work and personal life is like.


Or maybe it's again an external device that gets attached to your body, and then they're trying to pat you down every time you walk in to go work at Fidelity or whatever it is.


Fraser Kelton: There are a lot of people who carry around two phones. Yes, sure. Sure, sure. When you back down... no, not back down.


You're being bombastic to drive home the point and stimulate conversation. I think we are seeing that these are exceptionally helpful in the work environment. Yes. And I think we are seeing that they are exceptionally helpful in a personal environment. And personal environments are ones where... a lot of people are lonely.


A lot of people are isolated. Yep. A lot of people are dealing with all sorts of personal struggles that they are turning to these things for. Yep. And I can't imagine most people doing that on a work account. [00:44:00]


Nabeel Hyatt: Last question from me, and it will not sound related, but in my brain it's related.


Would you stop using Windsurf now that it was acquired by OpenAI and use Cursor? Presuming that next week Windsurf only uses OpenAI models?


Fraser Kelton: I downloaded Windsurf. So this is not the question that you're asking, but I downloaded Windsurf, and it is so clearly still a hardcore engineer product. Mm-hmm. That I deleted Windsurf and I went to Replit and Lovable and Websim.


There's just no need in my life for that type of product today. Yep. But to your question: I don't care about that.


Nabeel Hyatt: You could answer the same question with: what if Replit was bought by OpenAI? Yeah. No, I don't care. You don't care?


Fraser Kelton: I don't care. No. What I've been wrestling with is: undoubtedly, for these pro users, like very early-adopter pro users, there's been value in being able to switch between models.


Nabeel Hyatt: Yeah.


Fraser Kelton: And you even see that from [00:45:00] Cursor. They're like: this model's now available in the product, but we don't recommend it as the default. Things are marginally better, tit for tat, week over week. It's an amazing time if you love that type of stuff. Yeah. I think there's going to be a really interesting question as to whether the vertical integration of adapting the model to the UI and the job to be done is better for the end user, versus the benefit of being able to tit-for-tat take in the best model of the week.


Model Switching as a Feature, Not a Bug


Nabeel Hyatt: Yeah, I think we're talking past each other a little bit. We do that a lot. But this helps, though, because I don't disagree with you. I still use Replit regularly. I probably spent $40 on Replit literally this week. Yeah. Talk about a great pricing model. They're like, what if we just increased the token price by 10x?


People will still pay. And there's no model exposure there in the primary agent interface. We have some sense of what they're using, but they're not telling us, and so they could switch models back and forth and I wouldn't know. Yeah. It just gets the job done or not. So I agree with your, hey, the [00:46:00] users... I just love your orientation to this, which is to think about how a consumer is trying to get a job done as simply as possible.


And if you serve that need, then we don't have to be tweaky about the whole thing. And that is generally your North Star and I, and it's usually right, whereas I just am a control freak. But in the case of windsurf. I specifically have information that would make me change behavior, which is that like I do not like opening eyes models for coding.


Hmm. In other words, if Anthropic had bought them, I would be like, all right, is the difference 10%? Is it 10% better or worse? It is certainly 10% better or worse; it's not a thousand percent better or worse. Uh-huh. But also, the difference between Windsurf and Cursor is less than 10%. Uh-huh, I get that. They are in a tit-for-tat war.


Yep. They're very close. Yep. Every time somebody launches something that's interesting, the other guys run at it and get it out soon afterwards. Yeah. And so I used to bop between them fairly regularly. And so I would say something a little more expansive, which is: in a red ocean situation, which forks of VS Code certainly are. Yeah.


In a red ocean situation, you're not just communicating to your consumer that you are the best in any given week. What you're trying to communicate to a consumer is that you will be with them for the next 20 best models. Yep. Yep. And that's the break that just happened. Yep. I'm not even sure I care if they show me the model; if Cursor hid the model, fine.


I just wanna make sure I have access to the best model, because I know life's gonna change a lot in the next 18 months. This is another good reason why, in a very red ocean market, I take the opposite of some of my usual views on polish. I normally think you want to release something that's super polished and super great, and just do the right thing by the user and make sure there's no errors.


But in a very red ocean market, you wanna be on the opposite side of the coin. Not because the thing being slightly buggy doesn't matter, blah, blah, blah, but because you're trying to communicate to a user that they should be committed and invest. Nobody wants to change software every week. Yeah. And so what they're trying [00:48:00] to do in a red ocean market is figure out who's gonna be there for the next 35 features.


Yep. And so being a little bit on the edge, and launching the thing first or very fast-following right afterwards... look, there are some empty calories in there that feel like waste. It feels frenetic. But guess what: you're a founder, you bought into a red ocean market. Yep. This is the basket that comes with that fight. And the worry I have for Windsurf now... kudos to the founders, and I actually thought the product was better than Cursor,


so I have been a Windsurf customer. But my problem is that my trajectory of what I expect, what wars they're willing to go into or not go into over the next year, just changed. They are not willing to change from OpenAI models, would be my guess. Yeah. And I want a company that is willing to change anything. Which is an interesting way of thinking about it... of course, I don't bring this up because of Windsurf or Cursor.


Yeah. I bring it up because I think this is true for lots of categories of things that are competing at the AI layer. Yeah. Application layer.


Fraser Kelton: I think what you're saying is there's basic parity between the UI layers of those products. Yes. [00:49:00] And you as an end user can absolutely absorb the value of the model improvements.


The model that is best. Yes. Is changing so fast, regularly. Yeah. And you still benefit from all of that improvement. Yep.


Nabeel Hyatt: Yeah. Sure. Another way of saying, uh, them being part of OpenAI means they hamstrung a whole particular area of possible product improvement. It's not about me controlling which model I can switch between.


Yeah. It's that model switches are a feature, not a bug. We shall


Fraser Kelton: see, because there's a world, yep, where having the ability to adjust your model for the UI that you're serving it in might actually lead to a better product. The promise of vertical integration. Yeah. Yeah.


Nabeel Hyatt: We'll see. We'll see. That is the counter.


Yeah. That is the Claude Code playbook. Yeah. That will hopefully also manifest itself over time. Yeah. As Claude Code becomes what it's gonna [00:50:00] become. Yeah. I'm very interested in that war.


Fraser Kelton: Yep. That's the most optimistic story I've heard yet for why a horizontal agent that's not from any of the labs could still become quite important.


Yeah. Because you might have your 500 P50s. Yeah. Your 500 median workers. Yep. And your one superlative agent.


Nabeel Hyatt: Yes.


Fraser Kelton: All working in the same platform.


Nabeel Hyatt: Yep.


Fraser Kelton: And those 500 could be from Gemini. Yep. And that one could be o6 from OpenAI. And you would want both of those.


Nabeel Hyatt: You know what we don't advise our startups to do, but should? Exactly what you just said.


Which is: if you have an application product out in the world, yeah, that can talk to multiple models, yeah, and you're trying to make the case to users as to why they should not just use ChatGPT for that task, it doesn't mean you need to have a dropdown that says, would you like to use Gemini for this task?


Of course. Or Claude for that task. That's a choice that no one actually really cares about. That's [00:51:00] right. But saying: hey, we have found that all the math should go to this model and all the deep research should go to this model, and by the way, we're dynamically changing those every day, depending on how these models evolve.


Every three weeks, it changes. And so make that apparent to a user in the interface; that is a new product feature that you can now believe in this product for. Oh, you make good model choices for me. Yep. Yeah.


Fraser Kelton: That's like a product decision, for sure.


Nabeel Hyatt: For sure. Yeah. You make good model choices for me, and the...


Fraser Kelton: The freedom to use the best model.


Nabeel Hyatt: Yes. And you have the independence to keep doing that over time in exchange.
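(For concreteness, here is a minimal sketch of what "model routing as a surfaced product feature" could look like. Everything in it, the task categories, the route table, and the model names, is hypothetical; this is not any particular product's implementation.)

```python
# Hypothetical sketch of "model routing as a surfaced product feature":
# classify the query, pick the model currently believed best for that task,
# and show the user why. Task names, models, and routes are all invented.

from dataclasses import dataclass

# The route table is the part that changes "every three weeks": it can be
# repointed at a new model without touching the rest of the product.
ROUTES = {
    "math": "model-a-reasoning",
    "deep_research": "model-b-long-context",
    "default": "model-c-general",
}

@dataclass
class RoutedQuery:
    task: str
    model: str
    explanation: str  # surfaced in the UI so the user sees the decision

def classify(query: str) -> str:
    """Toy classifier; a real product might use a small model or heuristics."""
    q = query.lower()
    if any(word in q for word in ("integral", "prove", "solve")):
        return "math"
    if any(word in q for word in ("survey", "sources", "literature")):
        return "deep_research"
    return "default"

def route(query: str) -> RoutedQuery:
    task = classify(query)
    model = ROUTES.get(task, ROUTES["default"])
    return RoutedQuery(
        task=task,
        model=model,
        explanation=f"Looks like a {task} query; routing to {model}.",
    )

print(route("Solve this integral for me").explanation)
# Looks like a math query; routing to model-a-reasoning.
```

(Returning the explanation string is the trust move described above: the user sees the product making a good model choice on their behalf, and the routes can be repointed as models evolve without any UI change.)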


Fraser Kelton: Yep. We've asked that of a lot of different founders, and nobody's given me that answer. Yeah. Why is that?


Nabeel Hyatt: I think the only answers you've gotten are from people who make model switching a user capability. So, things like Poe.


Yeah, yeah. Where you can ask five models the same question to see what you get.


Fraser Kelton: The culmination of this rambling conversation coming to this point makes me very much believe that's true.


Nabeel Hyatt: [00:52:00] What's an example of a company that would benefit from this? First of all, from making good model choices for you.


Yeah. So, one, a query or sets of queries go to sets of different models. Yeah. And then, two, it would benefit from surfacing that to you. Surfacing it to you, because you want trust. Oh yeah. The trust comes from saying: oh, that's an interesting query, that's math, we're gonna go here for that, and that kind of thing.


Fraser Kelton: I think in the fullness of time, users won't even care, because they'll just be able to feel it. If there really is differentiated value, they'll feel it in the different models. You'll just feel it. I think Elicit should do this. So, yes.


Nabeel Hyatt: Yeah. They are? No, I think Elicit should surface to a customer that they did great model routing.


That model routing was a feature, see, and they're doing a great job of it, so that then I trust them, yeah, to make better model routing choices over time. I don't know why Perplexity doesn't do this. Perplexity has handcrafted switching: I can switch. Yep. And it has an auto button. Yep. But it doesn't have a router button.


[00:53:00] No. Ooh, this seems like a deep research question. Yep.


Fraser Kelton: Yep. "Do you want me to run this as deep research?" And going back to your question of who should do it: the broad agents that are doing long-horizon tasks absolutely should. Oh, Manus. Manus, Cognition with Devin. Yes. Everybody who's routing multiple tasks across what I have to assume are multiple models.


Yeah. Models that change consistently. They should do it. Yeah.


Nabeel Hyatt: Yeah. And I know the internal debate is "do users care?", but I think they do. 'Cause I think you wanna work with a good model, and the models change. Yeah. The idea that you don't have to read one more tweet about how Gemini 3.1-B-397 is better at blah, blah, blah, and don't have to remember to route to it: that's the value to you.


Yep.


Fraser Kelton: I think that's great. I very much agree. That's the bull case for a broad horizontal agent that is not from a large lab.


VC Talk: When to Preempt a Round


Fraser Kelton: Yeah, that's right. I have a question for you. Yep. A venture question. Uh-huh. I've been doing this now for two years. Jesus. Which [00:54:00] is amazing. Yep. I now have some investments that are raising their next round.


Yep. When would we, as the existing investors, want to extend an offer ahead of them going to market? When do VCs preempt, when do VCs preempt?


Nabeel Hyatt: I'm not a huge fan of preemption. It's becoming a more common thing in the market because people wanna get ahead of things. I think preemption is generally to the benefit of the firm more than it is to the founder. The pitch is obviously the time saved, getting to do less work, and so forth. But obviously you are giving some investor a better price in exchange for your lack of desire to go out to the market.


And I generally just try and think about what's best for the company long term; if you do that, things generally work out okay. There are two times you do preemption. The first one, which happens often, is you do a preemption because you're worried about a round coming together, and you are not really preempting the [00:55:00] round, you're pricing the round so a round can happen.


So say there's a $40 million round that's gonna have two to three players involved in it. You don't know if the market's gonna price it well. And so often the conversation a founder will bring to a VC, if the founder's savvy or a pro or being coached smartly by another VC, is: hey, your pro rata was gonna be 10 million anyway.


What about if we take that, plus my two buddies and one strategic, and we're already at 20 of the 40, right? Why don't we call that a round? And then I can go out and raise the rest, and that'll maybe help catalyze things. That's the first way the quote-unquote preemption happens. More often than not, that's the case.
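(For concreteness, the pro-rata arithmetic behind that "your pro rata was gonna be 10 million anyway" line, as a small sketch. The 25% stake is an assumed, illustrative number; the conversation only gives the $40 million round size and the $10 million pro rata.)

```python
# Sketch of the pro-rata math. A pro-rata right lets an investor put in
# their current ownership percentage of the new round, which holds their
# stake flat through the round's dilution. The 25% stake is an assumption.

def pro_rata_check(current_stake: float, round_size: float) -> float:
    """Dollars needed to keep current_stake flat through the round."""
    return current_stake * round_size

stake = 0.25        # assumed prior ownership; not stated in the conversation
round_size = 40e6   # the $40M round discussed above

check = pro_rata_check(stake, round_size)
print(f"Pro-rata check: ${check / 1e6:.0f}M of the $40M round")
# Pro-rata check: $10M of the $40M round
```

(Add "two buddies and one strategic" on top of that $10M and you're at roughly $20M of the $40M, which is why the insiders can credibly "call it a round.")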


Every VC knows that and sees through that move, by the way. So there's no "ooh, my insiders are super excited." You've done this long enough: you look at it for two seconds and you understand exactly what's happening. So it doesn't...


Fraser Kelton: I think especially with the second one, if I understand where this is going. 'Cause what's the second? What's the second time when you would preempt?


Nabeel Hyatt: Well, the second time when you would [00:56:00] preempt, you are literally just trying to get more of this company than you have, and you're trying to get it at a better price than you think the market is willing to give. And so you're saying: well, you could price this at a hundred if the founder goes out, but internally, I would rather this be at 80, so why don't I try and write a check?


Obviously it doesn't benefit the company, right? They're giving up more dilution. It quote-unquote saves the company time, but I tend to think of actually going out to market and fundraising at least every 18 months as a good forcing mechanism to test yourself against the market.


A big reason startups are more effective than corporate R&D is that you don't get to go hang out in a lab for five years, yeah, and never test your ideas with the world. And sometimes we don't like the results of those tests. Sometimes we think, these stupid VCs; I often think these stupid VCs don't have a long enough time horizon. I can complain about it as much as anyone, but it is


better than every other market mechanism that's been invented. It's like the Winston Churchill line, that democracy is, yeah, [00:57:00] the worst form of government except for all the others. Look, we know that, for instance, innovation happening inside of R&D labs has a way worse track record.


Way worse. Government, way worse. Each model for innovating does contribute something, but for startups, you have to get market signal. Getting signal back is incredibly good; it's iron sharpens iron. It's hard and it's stressful, but I don't think it's a distraction.


Right. I think it's good. Yep. So I think you're cheating the founder a little bit if you do it. And sometimes founders don't do the math, Fraser. It often doesn't benefit us either. If we were the largest investor in the Series A, then what's the problem? The problem is that if we're writing a $20 million check where we are already the largest investor in the Series A, then


who are we diluting? We're mostly diluting ourselves. The amount of the round you'd have to invest in order to actually increase your ownership is incredibly large. So when does it happen that you wanna do that? Basically, when you have more money than you know what to do with.
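(Again for concreteness, a small sketch of that self-dilution math. The conversation only gives the $20 million check; the 25% prior stake and the $100 million post-money valuation are invented for illustration.)

```python
# Sketch of the self-dilution math for an inside round. An existing holder
# is diluted by the new round like everyone else, so part of any check they
# write merely buys back their own dilution. All numbers are illustrative.

def stake_after_round(stake_before: float, check: float,
                      round_size: float, post_money: float) -> float:
    """Ownership after the round: diluted old stake plus newly bought shares."""
    new_shares_pct = round_size / post_money        # fraction of company sold
    diluted = stake_before * (1 - new_shares_pct)   # old stake after dilution
    bought = (check / round_size) * new_shares_pct  # share of the new round
    return diluted + bought

stake, round_size, post = 0.25, 20e6, 100e6  # assumed 25% stake; $20M at $100M post

sit_out = stake_after_round(stake, 0, round_size, post)         # 20.0%
take_it_all = stake_after_round(stake, 20e6, round_size, post)  # 40.0%

print(f"Sit out: {sit_out:.1%}  Write the whole check: {take_it_all:.1%}")
# Sit out: 20.0%  Write the whole check: 40.0%
```

(So even writing the entire $20M check only moves the stake from 25% to 40%: a quarter of the new shares bought just replace the investor's own dilution, which is why meaningfully increasing ownership as the largest insider takes an enormous check.)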


One of the factors of this new [00:58:00] VC-meets-private-equity class, where assets under management is your product and so you want 10, 20, $40 billion funds, is that you're really just trying to find a way to deploy cash.


Not for great returns, but just for average returns. It's okay, just dollar-cost averaging into private-equity-style investing versus venture investing. But we have to admit that lots of large funds are basically doing that now. Again, I don't think that benefits the founders, even though it's sold as something that benefits the founders.


The times when a VC actually wants to put the hundred million to work are always the times when you would go out and want to raise anyway, because if you can raise a hundred million dollars, you will go do it.


Yeah. Because you'll get a better price than the insiders are gonna give you anyway. Yeah. So that's why the really big funds shouldn't be your investor; at least I believe that. And that's why we raised the fund sizes that we raised.


So that's why we don't preempt that much. It's the same reason we have relatively simple term sheets and we don't really negotiate a bunch of tiny little things on term sheets. Yeah, this is a job that is [00:59:00] very easy to describe and very hard to execute. Yeah. Yeah. And usually you're trying to whittle around the edges when the core of it is just: invest in good early-stage companies.


Help them. Yeah. And then help them raise more capital. Be aligned. Being aligned is the


Fraser Kelton: right way to do it. Yeah. I spoke to our friend that we've discussed recently. Yeah. I spoke to him yesterday and came back today, and he was like, yeah, it's not just about me learning how to do that and getting good at that.


I think it's the right thing for the business for these like fairly nuanced reasons.


Nabeel Hyatt: Yeah. What were his nuanced reasons?


Fraser Kelton: So, it's gonna be a capital-intensive business. Yep. Having a mature set of investors around the table who can continue to help with that. Yep. Signal to the market. Yep. I think he looked at it through the lens of, long term, what is best for the business.


Nabeel Hyatt: Yep.


Fraser Kelton: And there are just a bunch of different things that line up for that.


Nabeel Hyatt: There's a third reason why people do inside rounds. Okay. The first one is the soft "I'm [01:00:00] worried about my founder being able to raise."


The second is "I wanna buy up more ownership," mostly because I have too much money to deploy and I can take advantage as a VC. The third reason, which is sometimes common, is that seed investors do it because it's not from their fund; they're forming an SPV. And so for them, a quote-unquote preempt isn't really a preempt. It's just more free money, and so it's a different set of math.


That honestly can sometimes be beneficial for a founder to take. I'm not speaking on Spark's behalf; we don't do that kind of thing, we don't do SPVs. But from a founder standpoint, especially if you're worried about going out to market and you just want the capital, yeah, those can be okay.


There are a bunch of reasons founders have found why SPVs suck and are not great, but we don't have to worry about that right now; turning this podcast into an SPV podcast is maybe for some other time. Cool. All right. Should we be done?


Fraser Kelton: We should be done. Let's do it. Thank you. Cool. Take care. Bye-bye.


See you next [01:01:00] time.