What Happens When A $30B Founder Uses ChatGPT

- July 16, 2025 • 54:04

Transcript

Dharmesh Shah
"Every day—every day, you should be in **ChatGPT**. I don't care what your job is. You could be a sommelier at a restaurant, and you should be using **ChatGPT** every day to make yourself better at whatever it is you do."
Sam Parr
"Can I ask you about the story really quick? You have a list of stuff here that's all amazing. A lot of it's very actionable. But the reason I want to ask you about the story is for the listener: **Dharmesh** founded **HubSpot**, a $30 billion company. You're the **CTO**, so you're an OG of Web 1? Or Web 2? Or—one of the early people. One of your first rounds was funded by **Sequoia**. Your partner **Brian** is an investor at Sequoia, so you are an insider. I believe you may not acknowledge it—I don't know if you do or not—but you are an insider. The cool part is that you're accessible to us. When did you first see what **Sam** was working on, and how long have you felt that this is going to change everything?"
Dharmesh Shah
So, I actually had known **Sam** before he started **OpenAI**, and I got access to the **GPT API**. It was a toolkit for developers to be able to, kinda, build AI applications, right—effectively. I built this little chat application that used the API, so I could have a conversation with it. I actually built that thing that night. It was a Sunday. I had the full transcript two years before **ChatGPT** came out.
Sam Parr
So that's four years ago.
Dharmesh Shah
It was 2020, so five years ago.
Sam Parr
Wow. Okay.
Dharmesh Shah
This summer... and as soon as you have that moment, it's the same moment that all of us had with **ChatGPT**. I had it two years earlier, and I was showing everyone: "Brian, you are not going to believe this — look, I have this thing through a company called **OpenAI**." I would type stuff into it and watch what happened. We would ask it strategic questions about **HubSpot**, for example: "How should it... who are the top competitors?" Even then — two years before ChatGPT became widespread — it was shockingly good. The thing you have to understand about the constraints of how a large language model (an **LLM**) actually works is this: you type, and you have a limited space. Imagine a sheet of paper that can only fit a certain number of words. That limit includes both what you write on it (for example, "I want you to do this") and the response; both must fit on that sheet. That sheet of paper is, in technical terms, called a **context window**. You’ll hear this phrase tossed around: "ChatGPT has a context window of X," or "this model has a context window of Y." Why does anyone care about the context window? Because sometimes you want to provide a large piece of text and say, "Summarize this for me." In order to do that, the text has to fit inside the context window. So if you want to take two books' worth of information and ask for a 50‑word summary, those two books' worth of information have to fit inside the context window for the LLM to process it. Most of the frontier models have context windows of roughly **100,000 to 200,000** — they measure this in **tokens** [i.e., units of text].
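The sheet-of-paper budgeting described above can be sketched in a few lines. The 0.75-words-per-token ratio is a commonly cited rough rule of thumb (real tokenizers count exactly), and the 128,000-token window and 1,000-token reply reserve below are illustrative assumptions, not figures from the episode:

```python
import math

# Rough rule of thumb: 1 token ~ 0.75 English words, so tokens ~ words / 0.75.
# A real tokenizer (e.g. the model's own) gives exact counts; this is only
# an estimate for illustration.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return math.ceil(words / 0.75)

def fits_in_context(text: str, context_window: int = 128_000,
                    reserved_for_reply: int = 1_000) -> bool:
    # The prompt and the model's reply must share the same window,
    # so reserve some of the budget for the response.
    return estimate_tokens(text) + reserved_for_reply <= context_window

book = "word " * 240_000            # a ~240,000-word book, per the episode
print(fits_in_context(book))        # False: a whole book overflows the window
```

This is why "summarize these two books" fails outright on most models: the input alone blows past the token budget before the model writes a single word of summary.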
Dharmesh Shah
A token is about point seven five—0.75—of a word. But that's, like, "a book."
Sam Parr
So, *yeah*. Is that, like, a book?
Dharmesh Shah
It's— I think the average book is like 240,000 words, but I'm not sure.
Sam Parr
"That's not a lot. So the way that I use **ChatGPT** is—I'll, for example, put in a historical book that I loved reading and say, 'Summarize this so I remember the details.' So you're telling me that if it's a 1,000-page book, it's not even going to accurately summarize that book?"
Dharmesh Shah
You're like, "It won't fit." If you paste something large enough into ChatGPT or whatever AI application you're using, it will come back and say, "Sorry, that doesn't fit." Effectively, what they're saying is that it does not fit in the **context window**, so you're going to have to do something different.
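The "something different" usually means chunking: split the text into pieces that each fit the window, summarize each piece, then summarize the summaries (often called map-reduce summarization). A minimal sketch, where `summarize` stands in for whatever LLM call you use and `max_words` stands in for a real token budget:

```python
def chunk_text(text: str, max_words: int = 75_000) -> list[str]:
    """Split text into pieces small enough to fit a context window.

    max_words is a stand-in for a real token budget; a production
    version would count tokens with the model's own tokenizer.
    """
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_book(text: str, summarize) -> str:
    """Map-reduce summarization: summarize each chunk, then summarize
    the concatenated partial summaries. `summarize` is a hypothetical
    callable wrapping an LLM API."""
    partials = [summarize(chunk) for chunk in chunk_text(text)]
    return summarize(" ".join(partials))
```

The trade-off is that each chunk is summarized without seeing the others, so cross-chapter details can get lost — which is part of why large context windows matter.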
Sam Parr
Alright — a few episodes ago I talked about something and I got thousands of messages asking me to go deeper and to explain. That's what I'm about to do. I told you guys how I use **ChatGPT** as a life coach or a thought partner. What I did was upload all types of information: my personal finances, my net worth, my goals, books I like, issues going on in my personal life and businesses. I uploaded so much information that the output is a GPT I can ask about issues I'm having in my life — for example, “How should I respond to this email?” or “What's the right decision knowing my goals for the future?” I worked with **HubSpot** to put together a step-by-step process showing the audience the software I used and the information I fed it. **ChatGPT** asked me all sorts of questions, so it's super easy for you to use. Like I said, I use this 10 to 20 times a day — it has literally changed my life. If you want that, it's **free**. There's a link below: just click it, enter your email, and we will send you everything you need to set this up in about twenty minutes. I’ll also show you how I use it again, 10 to 20 times a day. Alright, check it out — the link is below in the description. Back to the episode. I usually use **"Projects"**, and I have, let's say, a health project. I'll upload tons of books or blood work, and I'm hoping it's going to pull from all those items in my project. Is that true?
Dharmesh Shah
That is true. This is a perfect segue because this is the next big unlock. The number one thing to understand in our heads is there's this thing called the **context window**. Here's why it matters. We're going to pop that on the stack, push down the stack, and come back to it. There are two key things to remember: 1. **It doesn't know what it's never been trained on.** If you ask it something that only you—Sam—have in your files or your email, and the LLM was never trained on those sources, it won't know those things. No matter how smart it seems, it's just information it has never seen. 2. **Its knowledge is frozen at the time training completed.** For example, if your website for **Hampton** was included in the public internet during training, the model might know it. But the training happened at a particular point in time. Once training finished, the model's knowledge was fixed. If the website changes after that, the model won't know about those updates. So, the two takeaways are: it doesn't know what it doesn't know, and the things it did know were frozen at a particular point in time.
Dharmesh Shah
In time, right—so it hasn't seen new information. Those are roughly large limitations. Especially if you're going to use it for business or personal use, it's like: I have a bunch of stuff I want it to be able to answer questions about, whether inside my company or inside my own personal life. How do I get it to do that? Here's the hack. This was a brilliant discovery. What they figured out is: let's say you have 100,000 documents that were never on the internet. They're in your company—employee hiring practices, your model, how we do compensation, all of it. Obviously you can't ask questions about those 100,000 documents straight to **ChatGPT** because it doesn't know anything about them—it's never seen those documents. We talked about this two episodes ago: **vector embeddings** and **RAG (retrieval-augmented generation)**. I recommend you folks go listen to that episode—it's a fun one. To summarize: what you can do is put those 100,000 documents in a special database called a **vector store** (a vector database). Then, when someone asks a question, you query the **vector store**—not the **LM**—and say, "Give me the five documents out of the 100,000 that are most likely to answer this question based on the meaning of the question, not keywords." That's what the **semantic search** in the vector store does. It comes back with, say, five documents. As it turns out, those five documents fit inside the **context window**. So instead of training the model on all 100,000 documents (which would not be practical and might expose sensitive data), you give the model the five documents it actually needs in the context window. As you can imagine, it does an exceptionally good job answering the question when it has those five relevant documents in context. To use a metaphor: it's like hiring a really, really good intern with a PhD in everything. 
They read all the publicly accessible internet knowledge—they know everything that was publicly available—but on their first day they don't know anything about your business. Now you can hand them five internal documents and say, "Read these and answer my question." They can do that.
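The retrieve-then-stuff flow described above can be sketched with a toy in-memory vector store. Note the hedge: the `embed` function below is a bag-of-words stand-in (i.e., keyword overlap), whereas a real system would call an embeddings model precisely so that *meaning*, not keywords, drives the match — but the retrieval flow is the same:

```python
import math
from collections import Counter

def embed(text: str):
    # Stand-in "embedding": a bag-of-words vector. A real RAG system
    # would call an embeddings API here so semantically similar text
    # lands near the query even with no shared keywords.
    return Counter(text.lower().split())

def cosine(a, b) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(docs: list[str], query: str, k: int = 5) -> list[str]:
    # The "semantic search" step: rank all docs by similarity to the
    # query and return the k most relevant ones.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)[:k]

docs = [
    "Our compensation model uses salary bands reviewed yearly.",
    "The cafeteria menu rotates weekly.",
    "Hiring practices require two technical interviews.",
]
context = top_k(docs, "how do we decide compensation", k=2)
# `context` now holds the most relevant documents, ready to be pasted
# into the LLM's context window alongside the question.
```

The key point from the episode is visible in the last two lines: the model is never trained on the 100,000 documents; only the handful that survive retrieval ever enter the context window.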
Shaan Puri
I like that analogy: the **intern with the PhD and everything**, because that's so much how it is. It's as helpful and available as an intern, yes, but it's as knowledgeable as somebody with a PhD in everything. And then, like you said, another analogy for that is a store. You have shelf space, which is kind of limited, but they do have a back. You can always send the employee to the back and see if they can find it for you. That's kind of what you're saying: **"put it in the database"** — they can go fetch the specific thing you're asking for because you gave it access to the back. You gave it a badge that lets it go in there.
Sam Parr
Have you uploaded everything? First of all, I want to know what your *ChatGPT* looks like. I want to know how you use it — I just want you to screen share and show me exactly what you do. Also, have you uploaded your entire life? Have you uploaded all of *HubSpot* to ChatGPT so you could just ask it any question?
Dharmesh Shah
Yeah, multiple times, right? So...
Sam Parr
And what *format*? Tell me how you did that.
Dharmesh Shah
So I did — OpenAI calls this an **embeddings algorithm**. It takes any piece of text — a document, an email, whatever — and creates a vector embedding in a *high-dimensional space*. In three-dimensional physical space we think of points using the x, y, and z axes (this is where a point is in space). In high-dimensional space you might have 100 dimensions or 1,000 dimensions. You can describe each document as a point in that space [a vector embedding]. What I've done: in the early GPT world, the number of dimensions you had access to was roughly 100–200 dimensions, so you would lose a lot of the meaning of a document. The models would sort of capture the meaning, but imperfectly. Then we moved to around 1,000 dimensions, which allowed much more accurate representation and capture of documents of arbitrary length and made it easier to find them given a prompt or search query. Recently, within the last year, OpenAI's latest embeddings algorithm uses **3,072 dimensions**, I think.
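For reference, the embeddings call Dharmesh describes looks roughly like the sketch below using OpenAI's Python SDK. The model name `text-embedding-3-large` and its 3,072-dimension default reflect the public API at the time of writing; check current documentation before relying on them:

```python
# Sketch of fetching a vector embedding, assuming the OpenAI Python SDK
# (`pip install openai`). `client` is an `openai.OpenAI()` instance,
# which needs an OPENAI_API_KEY in the environment.

def get_embedding(client, text: str,
                  model: str = "text-embedding-3-large") -> list[float]:
    """Return the embedding vector for `text`.

    `text-embedding-3-large` produces 3,072-dimensional vectors by
    default; the API also accepts a `dimensions` parameter to request
    shorter vectors.
    """
    resp = client.embeddings.create(model=model, input=text)
    return resp.data[0].embedding

# Usage (requires network access and an API key):
#   from openai import OpenAI
#   vec = get_embedding(OpenAI(), "How do we set compensation?")
#   len(vec)  # 3072 for text-embedding-3-large
```

Each returned vector is the "point in high-dimensional space" he's describing: store it in a vector database, and later compare it against a query's vector to find the nearest documents.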
Shaan Puri
But where do you do this? Do you just literally upload it as a project? Did you have to do an **API** connection? What did you—how do you actually [do it]?
Dharmesh Shah
To do this, I made an **API** connection. Right. In fact, I'm running it... let me see where it is now.
Shaan Puri
And anyone could do this, or do you have *special access* because you're friends?
Dharmesh Shah
No — anyone can do this. The API for the **embeddings model** has two versions: a 3,000‑dimension version and a 1,000‑dimension version. These are the results of that.
Sam Parr
"Are you driving a NASCAR and I'm driving a scooter? Is that the difference? For example, I would download my company's financials, upload them, and then explain what my company does. But the way you do it is a lot different. **Are we talking a massive gap in results that you get versus what I get?**"
Dharmesh Shah
Yes. The short answer is **yes**, and the reason is... I do that as well. In terms of how I prompt the computer, I try to provide it context — that's why it's called the *context window*. You try to provide the LLM context for what you're asking it to do. The difference is that, and by the way this is kinda rich: I'm working on a nice weekend project right now that takes email. You would be amazed. If you did nothing else right now except say, "I'm going to take all of my emails I've ever written that are still stored, give them to a vector store, use an embeddings algorithm, and then use ChatGPT to help answer questions," the responses are shocking. For example: if I want it to "give me a timeline for when we first started using Hub to name products, how it came about, or what the winning arguments were against doing that," it's astonishing how good the answers are when you give it access to that kind of rich data.
Shaan Puri
Somebody needs to create a $10-a-month, single website that's like, "Hey—make your **ChatGPT** smarter." It's a site where you can connect your **Gmail**, connect your **Slack**, connect everything. I would happily pay them $20 a month to just set this up for me so that my ChatGPT gets the extra "pill" that says, "You now have access to my data." Is this because—because you're talking about, like, "I have the **API** to the **vector embeddings**," and, like, "well, I have the **flux capacitor** too but I don't know what to do with it," right? I need a button on a website with a **Stripe** payment button that I could just use to connect this stuff. Is it not?
Sam Parr
Is there a caveman version of this?
Dharmesh Shah
There are tools out there, and there are startups working on it. **There are two pieces of good news.** One is that startups are working on it. The challenge here is not that they're doing a bad job. The challenge actually comes down to this: if a startup came to you and said, "We just started last week, but we've got this thing — it really works. In fact, Dharmesh may be an investor," how willing would you be to hand over literally your entire life and everything that's in your email to that startup? Part of the challenge we have is access control. Let's say you're using **Gmail**, which a lot of us use. When you provide the keys to your Gmail account to a third party, there is no real granularity. You can say, "I want to read the metadata" — that's like level one. Level two is, "I want to read my full email." Level three is, "I want to be able to write and delete emails on my behalf." But if you wanted to read the actual body of the email, you can't say, "I only want to read messages that are from HubSpot.com," or, "I want to ignore all messages from my wife and my family." There's no way to control that, so you sort of have to trust.
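The access "levels" described here map roughly onto Gmail's published OAuth scopes. The scope strings below are taken from Google's public documentation (verify against the current docs); the point is what's missing — there is no scope like "only mail from hubspot.com," because filtering by sender is exactly the granularity OAuth does not offer:

```python
# Gmail OAuth scopes, coarsest-grained control a third party can request.
# Scope URLs per Google's public documentation at the time of writing.
GMAIL_SCOPES = {
    "metadata_only":   "https://www.googleapis.com/auth/gmail.metadata",
    "read_full_mail":  "https://www.googleapis.com/auth/gmail.readonly",
    "send_and_modify": "https://www.googleapis.com/auth/gmail.modify",
    "full_control":    "https://mail.google.com/",
}
```

Once an app holds `gmail.readonly`, it can read every message body in the account, which is why "just trust the startup" is the real ask.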
Sam Parr
"Is there any product that you would trust right now, or that you can recommend that guys like Shaan and me should use as **ChatGPT** add-ons or accelerators?"
Dharmesh Shah
No — it's not that I don't trust them; I wouldn't really trust anyone right now with that. One of the reasons I run it locally, even though I know these things are out there, is because I predict what's going to happen: the major players will offer this. You can already see it happening. For example, you have the ability to create custom GPTs in **OpenAI**, and do projects in **Claude**. You have **Google Gemini**, which is essentially a small, baby version of this — it lets you upload 10 or 100 documents and then ask questions against them. What it's really doing behind the scenes is creating a vector store; that's effectively what's happening. My expectation is that all the major companies will have a variation of this, starting with Google. They should be first because they already have the data. There is absolutely zero reason why Google Gemini does not let you have a Q&A with your own email account. That's insanely stupid — I'll just go ahead and say it. Something's not right with the world when they already have the data, they have the algorithm, and they have **Gemini 2.5 Pro**, which is an exceptionally good model. So you have all the pieces, but they have not yet delivered — but I hope it's [unclear/inaudible].
Sam Parr
Then tell me — Shaan and I are *early adopters*, but neither of us is technical. What can we do? I wanna get in on this, baby.
Dharmesh Shah
Alright, so give me **two weeks**. Here’s what we—so that’s one thing I do: **trust**, and I trust myself. I’m an honest guy. I’ll give you this internal app that I’m building and let you connect your Gmail to it. It’ll run for a day or two or something like that, and then you will be amazed. You will be able to ask questions. By the way, the thing I’m working on now is once you have this capability—step one is just being able to do **Q&A**. Step two... imagine fast-forwarding: it has access to all of your history. Imagine you’re able to say (I’m not doing this, by the way, but if I were): > “I want to write a book about HubSpot and all the lessons learned and everything. It’s all in my email. Do the best possible job you can writing a book. If you have questions along the way, ask me, but other than that, write the book.” I think you would be able to write the book.
Shaan Puri
Wow. What else are you doing with AI? Give me your day-to-day. For example, the CEO of Microsoft had this great line: "I think with AI, then I work with my coworkers." That really shifted the way I work. I used to brainstorm or have meetings to talk about stuff with my coworkers, which is honestly a little disappointing. I felt like I was the one bringing the energy, the ideas, and the questions, and I was hoping that they'd... But just sparring with AI first, then taking the distilled thoughts to my team — like, "here's how we're going to execute" — has been way better. That one sentence shifted the way I was doing it. How are you kind of using this stuff?
Dharmesh Shah
Yeah, so a couple of things. I'll start at a high level and then drill in a little bit. What we're used to with ChatGPT is the early evolution. Most people use it because it's called **"generative AI."** You use it to *generate* things — generate a blog post, generate an image, generate a video, generate audio. That's the generation aspect, and it's part of what it's good at. Then you get into the summarization and synthesis side. It can take a large body of text — a blog post or an academic paper — and summarize it in a specific way, for example, "so a seven-year-old would understand." That's the second step. **Step three** — and we'll get into how this is now possible — is that you can effectively take action: have the language model (LM) actually do things for you. I broadly put this in the automation bucket. I can automate tasks I previously did manually. **Step four** is orchestration. Can I have it manage a set of AI agents and just do everything for me? I give it a high-level goal, it has access to an army of agents good at different things, and I don't want to know the details — I just want it to go do this thing for me. That's where we are on the slope of the curve. The first three things are possible today and work well. It can generate — blog posts, high-quality writing, great images (including images with text), and better video with higher fidelity and character cohesion. Shaan, the vision you had three years ago about creating the next Disney or the next media company — you have the tools now, my friend, to finally start to approach that. Moving into synthesis and analysis enables deep research features. For example: "Take the entirety of what Shaan has written about copywriting and write a book just for me that summarizes all of that in ways I enjoy, because I like analogies and jokes." Write a custom version of Shaan Puri's book on copywriting — that kind of synthesis would be super interesting. Finally, automation is now possible.
Agent.ai is one of those tools; there are others that let you say, "I want to take this workflow or thing that I do, and I want you to just do it for me."
Shaan Puri
"Give us a *specific*—what's a *specific* automation that you view [as] useful, helpful, and that saves you time?"
Dharmesh Shah
I'll tell you a couple. One is around domain names. I have an idea for a domain name, and I want to find out whether it's available. I'll tell you the manual flow that I used to go through. First, I brainstorm myself and come up with possible words and variations. Then I'll say, "Okay, which domains are available?" Almost always, absolutely zero of the good ones that pop into my mind are freely available to just register—someone's registered them before. Okay, fine. Next, I'll ask, "Which ones are available for sale? What's the price tag? Is that a fair approximation of the value? Is it below market or above market?" We don't know, because there's no *Zillow* for domain names yet. So I created something that automates all of that. It says: "Oh, you have this particular idea for this concept or this business—whatever it is—here are names, here are the actual price points, here are the ones I think are below market value or above market value. Tell me which ones you want to register."
Sam Parr
That's in ChatGPT.
Dharmesh Shah
No — it's in **agent.ai** where it lives right now. But there's a connector between agent.ai and **ChatGPT** through this thing called **MCP**, which you'll hear about a bunch if you haven't already. One thing I want to get out there so we keep connecting the dots: what I want everyone to have is this framework in their head. We talked about large language models that can generate things. We talked about the context window. We talked about faking out the context window by saying, "Oh, we can do this vector database and bring in the right five documents and stuff them into the context window." Here's the other big breakthrough that's happened recently — within the last year to year and a half — called **tool calling**. Tool calling is a really brilliant idea. The concept is: the LLM was trained on a certain number of things, but imagine if we had an intern who could go look things up the intern wasn't trained on. You wouldn't give the intern unlimited internet access in a naive way; you'd give the intern organized access to tools. If I ask you something you weren't trained on, "go look it up" — that would be the first-day instruction. As it turned out, the LLM "intern" didn't have access to the internet. All it had was whatever notes it happened to take during training. What tool calling allows — and this is a bit odd because of how LMs are architected — is to extend capability without changing the model architecture. The LM is built to take a context window and produce output; you can't reprogram that architecture. Now, instead, we give it access to tools via the context window. Here's the hack they came up with: in the instructions we give it in the context window, we say, "You have access to these four tools." It doesn't actually have those tools; we ask it to *pretend* it has access to them. For example: one tool is the internet — type in a query and get results. Another tool is a calculator — give it a mathematical expression and it returns an answer. 
You can define any number of tools. What happens behind the scenes is this: ChatGPT (the interface interacting with the LM; you're not talking to the LM directly) gets a prompt and says, "By the way, LM, pretend you have access to these tools, and anytime you need them, tell me when you want to use one of them and include that in your output." We give it a query like, "I want to look up the historical stock valuation for HubSpot and when it changed — is there any correlation to the weather? Is it seasonal?" That data isn't in the LM's training set. Here's the cool part: the LM, following the instruction to pretend it has access to these tools (say, a historical stock price lookup), will produce output that effectively requests the application to invoke that tool. The LM's output says, "Please invoke that tool you told me I had access to and look up this result. Search the internet for X; what was the weather? Do this for the stock price." The ChatGPT application then fills the context window with the requested external data and passes it back to the LM. The LM thus *effectively* has access to those tools even though it never directly accessed the internet or stock market. This all happens behind the scenes. The massive unlock is: everything can be a tool. You don't need to build a vector store of all possible stock prices from the dawn of time — you could, but it'd be outdated immediately. Instead, give the LM a set of powerful tools, including browser access to the internet. That's like a 10,000- or 100,000-times increase in the intern's capability. That's where our thinking — and the world — should be headed: what tools can we give the LM access to that amplify its ability while causing zero change to the model architecture? Literally, it doesn't have to know anything about these tools. "I just want you to pretend that you have access to these tools." 
It doesn't need to know how to talk to those tools, it doesn't need to know APIs, it doesn't need any of that stuff.
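The pretend-you-have-tools loop described above can be sketched as an application-side dispatcher. Everything below is invented for illustration — the `fake_model` stub stands in for a real LLM, which in practice returns structured tool requests (e.g. via OpenAI's function-calling API) — but the control flow is the real one: the application, never the model, actually executes the tool:

```python
def calculator(expression: str) -> str:
    # Toy tool: evaluate an arithmetic expression (demo only; eval on
    # untrusted input is unsafe even with builtins stripped).
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(prompt: str):
    # Stand-in LLM: first asks for the calculator, then answers once
    # the tool result appears in its context.
    if "TOOL_RESULT" not in prompt:
        return {"tool": "calculator", "args": "2 + 2"}
    return {"answer": "The result is " + prompt.split("TOOL_RESULT:")[-1].strip()}

def run_with_tools(question: str) -> str:
    prompt = question
    while True:
        out = fake_model(prompt)
        if "tool" in out:
            # The model asked the app to invoke a tool; run it and feed
            # the result back into the context window.
            result = TOOLS[out["tool"]](out["args"])
            prompt += f"\nTOOL_RESULT: {result}"
        else:
            return out["answer"]

print(run_with_tools("What is 2 + 2?"))  # The result is 4
```

Note the model never touches the calculator or the internet: it only emits a request in its output, and the app stuffs the tool's result back into the context window, exactly as described in the episode.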
Sam Parr
Do you think that— I mean, is it all mind-blowing? You have an interesting perspective. Because, you know, I think three episodes ago that you're on, you created this thing called *Wordle*. Was it *Wordle*?
Dharmesh Shah
Wordplay.
Sam Parr
"Wordplay, that costs about $80 a month. It was just like a puzzle that you do with your son — it was amazing. But now you have new projects: you have **agent.ai**, you have a few other things, and you still run a $30 billion company. Do you think that the majority of value creation — like, am I going to — is my stock portfolio going to go up because I own a basket of tech stocks? Or is the best way to capitalize, as an outsider, to start a company? Or is it investing in new startups that are using **AI**, or in 'AI for startups'?"
Dharmesh Shah
It's a good question. I'm neither an economist nor a stock analyst, but I will say this: the thing I'm most excited about with AI — and I actually said exactly this in a talk I gave well before GPT on the Inbound stage — is that it's not "you versus AI." That's not the mental model you should have. The right mental frame is **"you to the power of AI."** AI is an amplifier of your capability. It will unlock things and let you do things you were never able to do before. As a result, it's going to increase your value, not decrease it. But for that to be true, you actually have to use it. You have to learn it and experiment with it. The only real way to get a feel for what it can and can't do is to do it. Here's a very simple thing everyone should do. I do this personally: anytime you're going to sit down at a computer and do research or work on something, give ChatGPT (or your AI tool of choice) a shot. Try to describe the problem to it. Pretend you have access to an intern with a PhD in everything. Maybe it doesn't know anything about you at first — fine, tell it a few things about yourself — but imagine you have access to this all-knowing intern and give it a crack at solving the problem you're about to spend time on. What you'll invariably find is two things. First, you'll be surprised by the number of times it actually comes up with a helpful response you would never have expected. How can it do that? It's because it has a PhD in everything. Second, you'll learn its limitations. (We'll talk about reasoning and whether models are actually doing that or not if we have time.) So my advice is: use ChatGPT every day if you're a knowledge worker. Actually, it doesn't matter what your job is — you could be a sommelier at a restaurant, and you should be using ChatGPT every day to make yourself better at whatever you do. That might be the introduction of an **orthogonal skill**. By "orthogonal" I mean a line that intersects another at 90 degrees. 
The common usage is that orthogonal concepts are unrelated or completely different — like the x-axis and the y-axis being independent of each other. That's what I mean when I say orthogonal concepts, skills, or ideas.
Shaan Puri
"Is there anything you disagree with that's kind of the consensus? A lot of these [people] are talking about, like, 'AI is gonna change everything; it's super smart; agents are coming; they can do some stuff now, more stuff later.' These are all probably right, but they're also consensus. I'm just curious: is there anything you disagree with that you hear out there that drives you nuts? Are there things people keep saying where you just think, 'That's either wrong, it's overrated, it's the wrong timeline, it's the wrong frame,' or whatever? Is there anything you've heard out there that you disagree with?"
Dharmesh Shah
I've heard two variations I disagree with. One is the idea that **"it's just autocorrect"** — it's not really thinking. I've spent a lot of time trying to talk people out of this. It's a matter of what you think "thinking" is. If a system produces the right output that we would judge requires thought, then saying it's not thinking is flawed reasoning. This view often comes from very smart experts who say, **"stochastic parrot."** By that they mean it's a probability-driven, pattern-matching system trained on internet text — not human intelligence. I agree with the phrasing that it's not like human intelligence, but that does not mean all it's doing is stochastically mimicking everything it's read before. To do what it does, it exhibits a form of creativity different from what we normally experience. That's kind of thing number one I disagree with. Thing number two is the belief that scaling laws will continue forever — that if we keep throwing more compute and adding more knobs, the system will get smarter and smarter indefinitely. I think there will be a limit to that at some point.
Dharmesh Shah
It's like nothing goes on forever. It's going to asymptotically move towards— we're going to have to come up with new algorithms. GPT can't be the *be-all and end-all* of everything; there will be a new way discovered. And I did say this—other people have said it too—the **best way** to think about AI right now, as you use it, is to truly find the frontier of what it's incapable of. It's like, "Okay, it can sort of do this thing, but not very well." If that's how you would describe its response, you are exactly where you need to be. If it can sort of do it now—if you have to squint a little and say, "Well, it's kind of something"—then wait six months or a year. That's the beauty of an **exponential curve**: it gets so much better, so fast, that if it can sort of do it now, it will be able to do it, and then it'll be able to do it really well. That's the inevitable sequence of events that's going to happen.
Shaan Puri
Have you heard this about startups? There's kind of this belief among the *smart money* in startups: the right startup to build is basically the thing that AI kind of can't do right now. That's the company to start today. You just have to stay alive long enough—give it the 12 to 18-month runway it needs for the thing to go from "didn't really work very well" to "oh my god, this is amazing." Meanwhile, you've been building your brand, your company, your mission, and your customer base all along the way. You're basically just betting you're going to be able to surf the improvement of...
Dharmesh Shah
The model—right, that's how.
Sam Parr
That's how I feel about my company: my company is not related to AI at all, but in terms of our operations, things are very manual. I'm like, "Oh my god — once I'm able to finally implement **AI**, when it can work for this purpose, my **profit margins** are going to go through the roof." I mean, that's how I feel about it. Which isn't entirely related to that, Shaan, but a little bit.
Dharmesh Shah
One thing I'll plant out there: since we like talking about ideas at a macro level, here's an entirely new pool of ideas that I think are now available on a trend that I think is inevitable — as agents get better and better. Right now, most of us when we use AI, use ChatGPT as a tool, which is great. Over time, you need to shift your thinking and think of them as teammates. Think of them as that intern that just got hired. As a result of that... let’s stipulate that I'm right. The only question is how long it will take. We're going to have effectively **digital teammates** that are part of all of our teams. Someday every company will have a **hybrid team** consisting of carbon-based life forms and these digital AI agents. The way that's going to happen is not that one day we wake up and every organization is suddenly mixing them. It will be a gradual introduction: "Oh, I have this one task that agent is better at, it's reliable enough, and the risk is low enough, so I'm going to have that agent do it." We already see elements of that. As a result of that gradual infusion and adoption, the opportunities that get created are: how do I help the world accomplish the end state I know is going to come? Here are concrete examples. If Sam were to hire a new employee tomorrow, here's what you would do: you'd onboard that employee, spend a couple of days telling them about the business. Whoever's managing that employee — maybe a direct report of yours — will have weekly or biweekly one-on-ones. That one-on-one consists of looking at the work they did: "You did this over here," whether it's copy editing or whatever the role happens to be. You're going to give them feedback. That's what you do for a human worker. All of those things have a literal analog in the agent world. 
Right now, we're hiring these agents and expecting them to do magic — like hiring an exceptionally smart, PhD-level employee and expecting them to perform with no training, no onboarding, no feedback, no one-on-one. Your results will be crap because you did not make the investment in getting that agent up to speed. The big unlock is: whether you're in HR or another function, figure out what employee training looks like for digital workers. What do performance reviews look like for digital workers? How do we recruit for digital workers? What are all the mechanisms that need to exist? What is a manager of the future? What new roles will be created as a result of having these hybrid teams? For example, maybe we'll need someone who's an "agentic manager" — a human who knows all the agents on their team, has built the skill set to recruit for their team, run performance reviews, and do all of that for agents or hybrid teams, not just purely human ones. We're going to need the software, the onboarding, the training, the playbooks, and all of it to adopt. It's going to take years; it's not happening overnight.
Sam Parr
Two years ago I asked you, "Is it going to be horrible, or is this going to be amazing?" And you said this: "I saw this with the internet — nothing is as extreme as the most extreme predictions." I listened to you, and I trusted you then. Now, knowing what I know, I'm actually more fearful than I was a couple years ago. I'm thinking, "Oh — this is actually going to put a lot of people out of work." It's maybe not *good or bad*, but things are going to change drastically, more than I thought. I don't remember how I phrased the question before, but: is this going to change the future more than you thought two years ago, or less than you thought two years ago? Has your opinion on that changed?
Dharmesh Shah
I still think things are going to be unrecognizable. My kind of macro-level sense — and this is maybe just my inherent optimism about things — is that it's going to be kind of a *net positive for humanity*. This is the other thing that lots of people would disagree with me on: is this an existential crisis for the species? I haven't said this before, but I'm going to see how it sounds as the words leave my mouth—I'm probably going to regret it—but in a way we are actually, and Sam, Shaan, you said this earlier, we're sort of producing a new species, right? So that's like saying, okay, well, Homo sapiens as they exist absent AI are likely not going to exist. The species as we know it today—where we have a single brain and our natural form, four appendages or whatever—maybe that's going to be different. But I think of that as an *extension of humanity*, not the obliteration of humanity. That's *Human 2.0*, versus the way we think of the species right now. I think things are still moving very, very fast. This is why I think humans have issues with exponential curves—we're just not used to them. When something is doubling every n months, it's hard to wrap our brains around how fast this stuff can move. The things we have today—Sam, if we had just described them to someone a year and a half ago—they'd be like, "ah, well, ChatGPT is cool or whatever, but it's never going to be able to do that." And now those things are par for the course. We can do things with these models that we couldn't imagine. We were literally like, "oh, there's no way, no way. It's good at text and stuff like that because it's been trained on text." Then it could do images—well, it can do images—but video is 30 frames a second. That's generating 30 images per second of video.
All of that—it's like, "but diffusion models, the way they work, you get a different image every time, so how are you going to create a video? It requires the same character and the same setting in subsequent frames, and that's not how image models work." And we solved all of those things. Now we have character cohesion, setting cohesion, video generation. So my answer is: it's not exact, but it's close. This is what exponential advancement looks like. I'm still of the belief that it's going to be a net positive. That is not to say that in the interim there isn't going to be pain. There are two words of caution I will put out there. **First:** in the interim, anyone who tells you there is not going to be job dislocation—that there are not going to be roles that get completely obliterated—is lying to you. That is going to happen. It's already happening. There is no world in which that does not occur. **Second:** because of the architecture of how LLMs currently work—maybe they'll figure out a way to fix this—they produce hallucinations. That's just a fancy way of saying they make things up. That's sort of okay, but not okay, because the model doesn't know it's making things up. Because of the way the architecture works, it's like the intern who thinks it's been exposed to all there is to know in the world: "I know all the things. You ask me a question, I know all the things, so I'm going to tell you the thing that I know." But it doesn't actually know, and what it said is factually, provably, demonstrably wrong—and it has absolutely no lack of confidence in its output. That is fine for some things—if you're writing a short fiction story or something like that. It's not great at all for other things, like health care-related topics, where you need predictable, accurate responses.
We need to be aware of the limitations when we're doing research and things like that. The problem is when relatively—I'll say naive, and I don't mean that in a disparaging way—people who are naive to a subject area ask ChatGPT for things where they can't judge the response. We're sort of taking it on faith that it's ChatGPT—Dharmesh said, "it's got a PhD in everything," so of course it's going to be right. Well, no. It's often not right. It's up to us to figure out what our risk tolerance is: what is it okay for it to be wrong about, and how would I test it for my domain, for my particular use cases?
Shaan Puri
What do you think about this situation where **Zuck [Mark Zuckerberg]** is throwing the bag at every researcher — $100 million signing bonuses, even more than that in compensation — and he's poaching basically his own dream team? He's like, "Okay, I can't acquire the company, so why don't I go get all the players? You can keep the team; I'll take the players." And he's going after them with these crazy nine-figure offers.
Sam Parr
A **$100,000,000** signing bonus and **$300,000,000** over four years — I think that's what I saw. Is that true?
Shaan Puri
I think that was the higher end. Some people have said there are even billion-dollar offers to certain people out there—these are like job offers. **Dharmesh**, were you shocked by this? My reaction the first time I heard it was, "That's bullshit." Then I thought, *wait—the source is Sam Altman; why would he say that?* Then I thought, "Okay, that's insane." An hour later I was like, "That's actually genius," because for a total of $3 billion or so he can acquire the equivalent of one of these labs that's valued at $30 billion, $40 billion, $50 billion, or $200 billion. What a power play. I know, obviously you're an investor in **OpenAI**—maybe you don't like this or maybe you have a different bias—but I'm just speaking from one leader of a tech company to another: what's your view of this move? I think it's one of the crazier moves.
Dharmesh Shah
If I had to use one word, I would say *diabolical* — not "stupid," not "silly," but *diabolical*. Here's why. This is about the grand scheme of things. It's not just, "Can we use this technology and build a better product that will then drive $X billion of revenue through whatever business model we happen to have?" There's a meta dynamic at play: whoever gets to this first will be able to produce companies with billions of dollars of revenue. It's like finding the secret to the universe — the mystery of life. Whoever wins and gets there first will be able to use the technology internally for a while and essentially run the table for as long as they want. It has incalculable value; the upside is so high that even a marginal increase in probability matters. If you can increase your probability even by a small amount, and you have the cash, why wouldn't you do it?
Shaan Puri
Do you think—do you think it'll work? Do you think this tactic will work for him? Do you think he will be able to build a *super team*? Is he just going to get a bunch of engineers who now have yachts and don't work? What's going to happen when you give somebody $100 million offers? You put together this—smash together—this team. I think he's got a hit list of 50 targets, and I think, you know, something like 19 or 20 of them have come on board already. Yep. What's your prediction of how this plays out?
Dharmesh Shah
It feels a little bit like a *Hail Mary pass*, right? And that's okay — they're going to take the shot. It's like, "Okay, well, there's not a whole lot of things we can do. The chips are down." I'm going to mix metaphors now, too.
Sam Parr
But... but... but that... that works sometimes.
Dharmesh Shah
It works sometimes. See, that's exactly why people do it: "Okay, what other option do we have, right? Everything else hasn't worked yet, so let's try this thing." I think it's a diabolically smart move—I'm not going to use the word *ethics* or anything like that—but here's the challenge. If we were having this conversation, call it two years ago, **OpenAI** was so far ahead in terms of the underlying algorithm. This was even before ChatGPT hit the kind of revenue curve it has now. Just raw—the underlying **GPT** algorithm was so good that they were so far ahead it was actually inconceivable to folks, including me, that others would catch up. The thinking was: sure, others will make progress and get closer, but **OpenAI** would keep working and stay far ahead for a long, long time. That has proven not to be true. We've seen open-source models come out, and we've seen other commercial models appear. There's **Anthropic** with **Claude**, and by most measures these are comparable large language models—within one standard deviation. They're pretty good, and sometimes they're better at some things and worse at others. It's not a single-horse race anymore. So I'm a little dubious that even if you pulled all these people together, it would work the way it worked for **OpenAI** in the early days. OpenAI wasn't able to create some magical thing that locked everything down, where others couldn't reproduce it. Maybe this group ends up doing it somewhere else, but I think there are more smart people out there now and the technology has caught up. **DeepSeek** proved that you could actually catch up — and they did have a real innovation in terms of reasoning models and things like that versus the early-generation large language models. So the jury's still out.
Sam Parr
How much better is a **$300,000,000**-a-year engineer compared to, say, a **$100,000,000**-a-year engineer or a **$20,000,000**-a-year engineer? I followed some of these guys on **Twitter** — they're fantastic follows. Do you think their **IQ** is just so much better, or is it because they've had more experience? Is it really because they saw how **OpenAI** works and want that experience? Is this, like, espionage? How good could a **$100,000,000**- or a **$300,000,000**-a-year engineer be?
Dharmesh Shah
Well, that's the thing though: this is software, right? So this is a world of about **95% margins**. I think part of the value is, yes, they're super smart—but even high human IQ asymptotically approaches a certain ceiling. You take the smartest people in the world, however you want to measure IQ, and that doesn't fully explain the value. It's not that they've seen the inside of OpenAI and have trade secrets in their heads they can just carry over—like, "here's how we did it over there," or "here's how we ran evals," or "here's how we ran the engineering process." They'll have some of that, because we always carry some amount of experience in our heads. But I think the primary vector of value is that they have demonstrated the ability to *see around corners* and see into the future. They believed in this thing when almost no one else did. They saw where it was headed and were chipping away at it. That's much rarer than you would think: for really smart people to do this seemingly foolish thing—"you're going to do what now?"—and persist. We're still asking ourselves a variation of the same question we would have asked three years ago. Except now we have ChatGPT and the things that come with it. We're still like, "you say we're going to have these 'digital teammates' that can do all these things," and yet they can't even do this simple thing, right? We sort of...
Shaan Puri
Right.
Dharmesh Shah
Keep elevating our expectations and what we believe is or is not possible. They sort of know what's possible, and they almost think of what many of us would consider *impossible* as actually being **inevitable**.
Sam Parr
Have you guys—has *HubSpot*—have you made any of these offers?
Dharmesh Shah
I don't think so. But that's not the game we're in, right? We're not in that league. We're not trying to build a frontier model. We're not trying to invent **AGI**. We're at the *application layer* of the stack, so we want to benefit from it, right? You know, in my entrepreneurial career I have not been the guy at the "center of the universe" — not the my-company-is-the-center-of-the-universe sort of person.
Sam Parr
You're not like, "Oh man, I met this person—like, we need to offer an **NBA contract** in order to secure this guy."
Dharmesh Shah
No, and there's a reason for this, right? It's like, for the kinds of problems we're solving... what's the—there's a sports term about the best alternative to the player or something like that—*replacement cost*?
Shaan Puri
**Wins Above Replacement** is the metric they use in sports.
Dharmesh Shah
So yeah, it's just not worth it given our business model and given what we do. I have one last thing on the kind of **AI** front. This is one of the things—answering your question, Shaan—where I disagree with some folks. There's a group of very smart people who say, "AI is going to lead to a reduction in creativity broadly speaking," because you'll just have AI do the thing—why learn to do it yourself? I have a 14-year-old, so it's like: if he just uses AI to write his essays and do his homework, it could reduce his creativity. I understand that line of reasoning: if you just have it do the thing, you won't develop the skill. But I think those folks are missing something. Creativity, in the literal sense, is: I have an idea in my head and I'm going to express it in some creative form—music, art, whatever. The problem right now is that our ability to manifest those ideas is limited by our skill set. Shaan can have a song in his head—he may be composing things mentally—but until he learns the mechanics of how to actually play an instrument, there's no real way to manifest it. We can't tap into his brain and do that. To me, AI actually increases creativity because it raises the percentage of ideas people have in their heads that they can then manifest, regardless of their current skills. I love that. My son is a big Japanese culture fan—big into manga and anime—and he wants to be an author someday. What he can do now, and has been able to do for years, is take his ideas and realize them without waiting a dozen years for formal training. He likes fantasy fiction and has had ideas for stories, but he lacked the writing skills—he doesn't know about character development or those techniques. What he uses ChatGPT for is a 2,000-word prompt that describes his fictional world: here are the characters, here's the power structure, here are the powers people have, here's what you can and can't do.
Then he tests the world by turning it into a role-playing game: "Okay, I'm going to jump into the world now; you, ChatGPT, tell me what happens." He runs through scenarios—"Oh, this happened; now I'm going to do this; now you've got this power"—and it pressure-tests his world. That's an expression of his creativity, because the world was sitting in his head but now he can actually share it with friends and maybe turn it into a book someday. He'll take those ideas and, in the meantime, hopefully develop some foundational skills. He doesn't have to wait years of writing education before he can take an idea and make something of it. As a child, he has lots of creativity, but as a practitioner, most of the things he would love to manifest—drawing, writing, etc.—require skills he doesn't yet have. I think that's what AI can help us elevate. Again, we have to use it responsibly, but it should be able to elevate our skills.
Shaan Puri
I want to show you guys an example of this real quick. A couple weeks ago I had this idea of creating a game using only **AI**. I don't know if you guys ever played the *Monkey Island* games. When I was a kid I played *Monkey Island* — it was an incredible game. It's basically about a guy who wants to be a pirate. It's very funny and has that 8‑bit art style. So I created a version of that called *Escape from Silicon Valley*. I didn't create the whole game; I created the art. Check this out. I went into AI and basically started creating the game art. The story is basically: deep in San Francisco, the year is 2048. The Rock is starting his third term in office. Nancy Pelosi passes away — the richest woman on earth. And then Elon is promising that self‑driving cars are coming really, really soon… for real this time. Here you are: you're this character in the OpenAI office, and basically the idea...
Dharmesh Shah
Is look at.
Sam Parr
"What's that? Look at the Charlie bar."
Shaan Puri
Yeah, exactly. I was putting in some references to, like, you know, stuff that I thought would be cool. That is...
Sam Parr
"So cool. What did you use to make *those images*?"
Shaan Puri
So that right there was just **ChatGPT** and **Midjourney** mixed together. I tried using, you know, **Scenario** and a couple other game-specific tools. "Check this out": I created all these tech-like characters. So it's like I created Zuck, Palmer Luckey, Chamath, and Elizabeth Holmes.
Sam Parr
And jail. Awesome.
Shaan Puri
And I had it basically write the scenes for the levels with me — write the dialogue with me, create the character art. *Dude, that looks sick.* "Why didn't you do that?" Well, because I did the fun part in the first two weeks, where I was like, "Oh — the concept, the levels, the character art, the music, seeing what **AI** could do." But then, to actually make the game, the **AI** can't do that. So I was like, "Oh, now I need to..." I mean, people who build games spend years building them. It's like this is a minimum six to twelve months thing — doing this very, very arbitrary project. But I still love the idea, and I'm going to package up the whole idea.
Sam Parr
"**Dharmesh**, last question — just really quick: where do you hang out on the internet that we, and the listeners, can hang out to stay on top of some of this stuff? Are there a reputable handful of people on Twitter to follow, or reputable websites or places to hang out?"
Dharmesh Shah
That's interesting. I spend most of my time on **YouTube**, as it turns out, and I tend to give into the vibe, so to speak, and let the algorithm figure out what things I might enjoy. It gets it right sometimes and wrong sometimes, so it's a mix of things. But the person I think you should look at if you want to get deeper into understanding **AI** is a guy named **Andrej Karpathy**. I don't know if you've come across him—just search for "Karpathy."
Sam Parr
Dude, you don't want to know how I know. I get so many ads that say things like "**Andrej Karpathy** said this is the best product" or "**Andrej Karpathy** showed me how to do this." I don't even know who Andrej is, other than that ads use his name to promote things.
Dharmesh Shah
Yeah. I mean, he's one of the two true **OGs in AI**. And he has that orthogonal skill—I think he's got, like, nine of them; he's probably a nine-tool player of some sort. He's able to really simplify complicated things without making you feel stupid. He's not talking down to you. He's like, "Okay, here's how we're going to do this: we're going to build it brick by brick, and you're going to understand, at the end of this hour and a half, how X works." He's amazing.
Sam Parr
That would be one. So, him — any other YouTubers, Twitter people, or blogs?
Dharmesh Shah
On the business side, actually, **Aaron Levie** from **Box** is very, very thoughtful about the future of software and business and the AI implications there. I think he's really good. **Hiten Shah**, who you both know — now at **Dropbox** through the acquisition — has been on fire lately on LinkedIn. He's one I would go back to, especially over the last three or four months, and read all the stuff he's written. I think he's on... yeah.
Shaan Puri
Those are awesome, Dharmesh — thanks for coming on and thanks for teaching us. You're *one of my favorite teachers and entertainers*, so thank you, man.
Dharmesh Shah
My pleasure. It was good to see you guys — it was fun.
Sam Parr
Likewise. Thank you. That's it — that's the pod.