Lenny's Podcast: Inside OpenAI | Logan Kilpatrick [Summary + Transcript]

Fireside by Fireflies
In a recent episode of Lenny's Podcast with Lenny Rachitsky, Logan Kilpatrick, Head of Developer Relations at OpenAI, unveiled the company's ambitious roadmap beyond ChatGPT, DALL-E, and Sora.

Here are the key takeaways:

Inside OpenAI, Lenny's podcast summary powered by Fireflies.ai

Outline
  • Chapter 1: Introduction (00:00)
  • Chapter 2: ChatGPT and OpenAI (00:36)
  • Chapter 3: The Impact of AI (01:10)
  • Chapter 4: OpenAI's Internal Operations (01:25)
  • Chapter 5: A word from sponsors (01:56)
  • Chapter 6: The Importance of High Agency and Urgency (08:20)
  • Chapter 7: The Role of tldraw in AI (08:41)
  • Chapter 8: Chatbot Applications and Limitations (09:56)
  • Chapter 9: The Future of OpenAI and AI Applications (10:30)
  • Chapter 10: Efficiency Gains with AI Tools (13:42)
  • Chapter 11: The Power of GPTs (15:07)
  • Chapter 12: GPTs for Planning and Venture Capital (16:08)
  • Chapter 13: Prompt Engineering and Its Importance (18:19)
  • Chapter 14: The Future of GPTs (22:18)
  • Chapter 15: The Efficiency of AI Tools (26:43)
  • Chapter 16: The Role of Context in Questioning AI (33:00)
  • Chapter 17: OpenAI's Roadmapping and Prioritization (35:07)
  • Chapter 18: Measuring Success for OpenAI Product Launches (39:04)
  • Chapter 19: OpenAI's Principles and Decision Making Process (41:35)
  • Chapter 20: Upcoming Launches and Future Plans (52:32)
  • Chapter 21: Lightning Round Q&A (59:33)
  • Chapter 22: Conclusion (1:06:57)

Notes
  • Importance of high agency and urgency: Logan Kilpatrick emphasized these characteristics as crucial for potential hires at OpenAI.
  • Prompt Engineering: Detailed discussion on how carefully crafting prompts can guide AI models to generate more accurate and contextually relevant responses.
  • OpenAI's Dramatic Weekend: Brief mention of a pivotal weekend at OpenAI, details of which were not discussed in depth.
  • Harvard Business School Study on Efficiency: Reference to a study that provides insights on enhancing efficiency within a company.
  • Optimizing Prompting: Advice on how to improve prompt writing for GPT and other OpenAI APIs; providing more specific instructions and refining prompts iteratively.
  • Importance of Contextual Information: Leveraging contextual information when asking questions to AI models for more personalized and informed interaction.
  • Sharing Conversations and Experiences with ChatGPT: Discussion on the possibilities of sharing conversations and experiences using ChatGPT.
  • Future of OpenAI: Vision for the future includes more interface options, agents that perform tasks, and a shift towards more template-oriented, use-case specific modalities.
  • Building for GPT-5: Advice to build products considering the potential capabilities of GPT-5, not just the current limitations of GPT-4.
  • OpenAI's B2B offerings: Discussion on the benefits of OpenAI's B2B offerings and the importance of sharing prompt templates and GPTs internally.
  • Actionable Advice: Encouragement to try out AI tools for their problem-solving capabilities, with an invitation to reach out to OpenAI's team for assistance.
Summary by Fireflies

Want to know the full picture? Find the accurate transcript of this podcast below:

Inside OpenAI: Lenny's Podcast transcript

00:00
Logan Kilpatrick

Finding people who are high agency and work with urgency. If I was hiring five people today, those are the top two characteristics that I would look for in people, because you can take on the world if you have people who have high agency and don't need to get 50 different people's consensus. They hear something from our customers about a challenge that they're having, and they're already pushing on what the solution for them is, not waiting for all the other things to happen. People just go and do it and solve the problem. And I love that. It's so fun to be able to be a part of those situations.

00:36
Lenny Rachitsky

Today my guest is Logan Kilpatrick. Logan is head of developer relations at OpenAI, where he supports developers building on OpenAI's APIs and ChatGPT. Before OpenAI, Logan was a machine learning engineer at Apple and advised NASA on their open source policy. If you can believe it, ChatGPT launched just over a year ago and transformed the way that we think about AI and what it means for our products and our lives. Logan has been at the front lines of this change, and every day is helping developers and companies figure out how to leverage these new AI superpowers. In our conversation, we dig into examples of how people are using ChatGPT, the new GPTs, and other OpenAI APIs in their work and their life. Logan shares some really interesting advice on how to get better at prompt engineering.

01:25
Lenny Rachitsky

We also get into how OpenAI operates internally, how they ship so quickly, and the two key attributes they look for in the people that they hire. Plus, where Logan sees the biggest opportunities for new products and new startups building on their APIs. We also get a little bit into the very dramatic weekend that OpenAI had with the board and Sam Altman and all of that, and so much more. A huge thank you to Dan Shipper and Dennis Yang for some great question suggestions. With that, I bring you Logan Kilpatrick, after a short word from our sponsors. This episode is brought to you by Hex. If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of screenshots and CSV files. Hex brings everything together.


02:11
Lenny Rachitsky
Its powerful notebook UI lets you analyze data in SQL, Python, or no code, in any combination, and work together with live multiplayer and version control. And now Hex's AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you, all from natural language prompts. It's like having an analytics copilot built right into where you're already doing your work. Then, when you're ready to share, you can use Hex's drag and drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel, and Algolia using Hex every day to make their work more impactful. Sign up today at hex.tech/lenny to get a 60-day free trial of the Hex team plan. That's hex.tech/lenny. This episode is brought to you by Whimsical, the iterative product workspace.

03:04
Lenny Rachitsky
Whimsical helps product managers build clarity and shared understanding faster with tools designed for solving product challenges. With Whimsical, you can easily explore new concepts using drag and drop wireframe and diagram components, create rich product briefs that show and sell your thinking, and keep your team aligned with one source of truth for all of your build requirements. Whimsical also has a library of easy-to-use templates from product leaders like myself, including a project proposal, one-pager, and a go-to-market worksheet. Give them a try and see how fast and easy it is to build clarity with Whimsical. Sign up at whimsical.com/lenny for 20% off a Whimsical Pro plan. That's whimsical.com/lenny. Logan, thank you so much for being here, and welcome to the podcast.

03:55
Logan Kilpatrick
Thanks for having me, Lenny. I'm super excited.

03:57
Lenny Rachitsky
I want to start with the elephant in the room, which I think is actually leaving the room, because this was months ago at this point. But I'm still just really curious: what was it like on the inside of OpenAI during the very dramatic weekend with the board and Sam and all those things? What was it like? And is there a story you could share that maybe people haven't heard about what it was like on the inside of what was going on?

04:20
Logan Kilpatrick
Yeah, it was definitely a very stressful Thanksgiving week. I think, in broad context, OpenAI had been pushing for a really long time since ChatGPT came out, and that was supposed to be one of the first weeks that the whole company had taken time away to actually reset and have a break. So very selfishly, I was super excited to spend time with my family, all that stuff. And then Friday afternoon we got the message that all of the changes were happening. And I think it was super shocking because, and this is a perspective a lot of folks share, everybody has had and continues to have such deep trust in Sam and Greg and our leadership team that it was just very surprising. And we're also, as far as company cultures go, very transparent and very open.

05:08
Logan Kilpatrick
So when there's problems or there's things going on, we tend to hear about them. And again, it was the first time that a lot of us had heard some of the things that were happening between the board and the leadership team. So very surprising. Being someone who's not based in San Francisco, I was, again, very selfishly, kind of happy that it happened over the Thanksgiving break, because a lot of folks actually had gone home to different places. So it felt like I had a little bit of comfort knowing I wasn't the only one not in San Francisco, because everybody was meeting up in person to do a bunch of stuff and be together during that time. So it was nice to know that there were a few other folks who were sort of out of the loop with me.

05:51
Logan Kilpatrick
I think the thing that surprised me the most was just how quickly everybody got back to business. I flew to San Francisco the next week after Thanksgiving, which I wasn't planning to do, to be with the team in person. And literally Monday morning, I was walking into the office expecting, I don't know, something weird to be going on or happening. And really, it was people laser focused and back to work. And I think that speaks to the caliber of our team and everybody who's just so excited about building towards the mission that we're building towards. So I think that was the most surprising thing of the whole incident.

06:30
Logan Kilpatrick
I think a lot of companies would have had the potential to truly be derailed for some nontrivial amount of time by this, and everybody was just right back to it, which I love.

06:39
Lenny Rachitsky
I feel like it also maybe brought the team closer together. It feels like it was a kind of traumatic experience that may bring folks together because it was something they all shared. Is there anything along those lines, like, wow, things are a little different now?

06:52
Logan Kilpatrick
One of my takeaways was, I'm actually very grateful that this happened when it happened. I think today the stakes are still relatively high. People have built their businesses on top of OpenAI. We have tons of customers who love ChatGPT. So if something bad happens to us, we definitely impact our customers, but on the world scale, if OpenAI disappeared, somebody else would build a model and continue this progress towards general intelligence. I think fast forward, like, five or ten years, if something like this would have happened and we hadn't gone through the hopeful upcoming board transformation and all those changes that are going to happen, I think it would have been a little bit, or potentially much, worse of an outcome. So I'm glad that things happened when the stakes were a little bit lower.

07:39
Logan Kilpatrick
And I totally agree with you. The team has been growing so rapidly over the last year since I joined that it's been crazy to think about how many new folks there are. And I really think that this brought people together, because historically, for many of the folks around when I joined, what kind of banded us all together was the launch of ChatGPT, the launch of GPT-4. And for folks who weren't around for some of those launches, it was perhaps Dev Day. For folks who weren't around for Dev Day, it was probably this event. So I think we've had these events that have really brought the company together cross-functionally. So hopefully all the future ones will be really exciting things, like GPT-5 whenever that comes, and stuff like that.

08:20
Lenny Rachitsky
Awesome. We're going to talk about GPT-5. Going in a totally different direction: what is the most mind-blowing or surprising thing that you've seen AI do recently?

08:30
Logan Kilpatrick
The things that are getting me most excited are these new interfaces around AI, like the Rabbit R1. I don't know if you've seen that, but it's a consumer hardware device. And this company called tldraw. I don't know if you've seen tldraw.

08:44
Lenny Rachitsky
I think you sketch something and then it makes it into a website.

08:47
Logan Kilpatrick
Yeah. And that's only a small piece of what tldraw is actually working on. But there are all of these new interfaces to interact with AI, and I was having a conversation with the tldraw folks a couple of days ago, and it really blows my mind to think about how chat is the predominant way that folks are using AI today. And I actually think, and this is my bull case for the folks at tldraw, I'm super excited for them to build what they're building, but they're sort of building this infinite canvas experience.

09:15
Logan Kilpatrick
And you can imagine how, as you're interacting with an AI on a daily basis, you might want to jump over to your infinite canvas, which the AI has sort of filled in with all the details, and you might see a reference to a file and to a video and all of these different things. And it's such a cool way. It actually makes a lot more sense for us as humans to see stuff in that type of format than, I think, just listing out a bunch of stuff in chat. So I'm really excited to see more people. I think 2024 is the year of multimodal AI, but it's also the year that people really push the boundaries of some of these new UX paradigms around AI.

09:52
Lenny Rachitsky
It's funny. Having been a PM for many years, it feels like in every brainstorming session we had about new features, someone would say, hey, we should build a chatbot to solve this problem. It's the perennial suggestion: of course someone's going to suggest we do a chatbot. And now they're actually useful and working, and everyone's building chatbots, a lot of them based on OpenAI APIs. There's not really a question there. But maybe the question I was going to get to later is: when people are thinking about building a product like, say, tldraw, what should they think about where OpenAI is not going to go, versus, here's what OpenAI is going to do for us? We shouldn't worry about them building a version of tldraw in the future.

10:30
Lenny Rachitsky
What's the way to think about where you won't be disrupted, essentially, by OpenAI, knowing also they may change their mind?

10:36
Logan Kilpatrick
That's a great question. I think we're deeply focused on these very general use cases: the general reasoning capabilities, the general coding, the general writing abilities. I think where you start to get into some of these very verticalized applications, and a great example of this is actually Harvey. I don't know if you've seen Harvey, but it's this legal AI use case where they're building custom models and tools to help lawyers and people at legal firms and stuff like that. And that's a great example where our models are probably never going to be as capable as some of the things that Harvey's doing, because our goal and our mission is really to solve this very general use case. And then people can do things like fine-tuning and build all their own custom UI and product features on top of that.

11:17
Logan Kilpatrick
I have a lot of empathy and a lot of excitement for people who are building these very general products today. I talk to a lot of developers who are building just general purpose assistants and general purpose agents and stuff like that, and I think it's cool and it's a good idea. I think the challenge for them is they are going to end up directly competing against us in those spaces. And I think there's enough room for a lot of people to be successful. But to me, you shouldn't be surprised when we end up launching some general purpose agent product, because, again, we're sort of building that with GPTs today. Versus we're not going to launch some of these very verticalized products. We're not going to launch an AI sales agent. That's just not what we're building towards.

12:03
Logan Kilpatrick
And companies who are, and who have some domain-specific knowledge and are really excited about that problem space, can go into that and leverage our models and end up continuing to be on the cutting edge without having to do all that R&D effort themselves.

12:16
Lenny Rachitsky
Got it. So the advice I'm hearing is: get specific about use cases. And that could be either models that are tuned to be especially useful for a use case like sales, or an interface or experience solving a more specific problem.

12:30
Logan Kilpatrick
And I think if you're going to try to build the next general assistant to compete with something like ChatGPT, it has to be so radically different. People have to really be like, wow, this is solving these ten problems that I have with ChatGPT, and therefore I'm going to go and try your new thing. Otherwise, we're putting a ton of engineering effort and research effort into making that an incredible product, and beyond the normal challenges of building companies, it's just hard to compete against something like that.

12:59
Lenny Rachitsky
Awesome. Okay, that's great. I was going to get to that later, but I'm glad we touched on it. I imagine that's on the minds of many developers and founders. Kind of along the same lines: there's a lot of talk about how ChatGPT and GPTs and many of the tools you guys offer are going to make a company much more efficient, so they don't need as many engineers, data scientists, PMs, things like that. But I think it's also hard for companies to think about what they can actually do to make themselves more efficient.

13:25
Lenny Rachitsky
I'm curious if there are any examples you can share of how companies have, say, built a GPT internally to do something so that they don't have to spend engineering hours on it, or generally just used OpenAI tooling to make their business internally more efficient.

13:42
Logan Kilpatrick
Yeah, that's a great question. I wonder if you can put this in the show notes or something like that, but there's a really great Harvard Business School study, and I forget which consulting firm they did it with. Maybe it was Boston Consulting, or it might have been one of the other ones. And they talk about the order of magnitude of efficiency gain for folks who were using AI tools, and I think it was ChatGPT specifically in those use cases, comparatively against folks who weren't using AI. I'm really excited, as more time passes since the release of this technology, for us to get more empirical studies, because I feel this for myself as somebody who's an engineer today.

14:22
Logan Kilpatrick
I use ChatGPT, and I can ship things way faster than I otherwise would be able to. I don't have any good metrics for myself to put a specific number on it, but I'm guessing people are working on those studies right now. I think engineering is actually one of the highest-leverage things that you could be using AI to do today, really unlocking probably on the order of at least a 50% improvement, especially for some of the lower-hanging-fruit software engineering tasks. The models are just so capable at doing that work. And I'm guessing GitHub probably has a bunch of really great studies they've published around Copilot, and you could use those as an analogy for what people are getting from ChatGPT as well. But those are probably the highest-leverage things.

15:07
Logan Kilpatrick
And I think now, with GPTs, people are able to go in and solve some of these more tactical problems. I think one of the general challenges with ChatGPT is it gives a decent answer for a lot of different use cases, but oftentimes it's not particular enough to the voice of your company or the nuance of the work that you're doing. And I think now, with GPTs, people who are using ChatGPT Team and ChatGPT Enterprise can actually build those things, incorporate the nuance of their own company, and make solving those tasks much more domain-specific. So we literally just launched GPTs a couple of months ago.

15:44
Logan Kilpatrick
So I don't think there have been any good public success stories yet, but I'm guessing that success is happening right now at companies, and hopefully we'll hear more about that in the months to come as folks get super excited about sharing those case studies.

15:58
Lenny Rachitsky
I'll share an example. I have this good friend, his name is Dennis Yang. He works at Chime, and he told me about two things they're doing at Chime that seem to be providing value. One is he built a GPT that helps write ads for Facebook and Google; it just gives you ideas for ads to run, and so that takes a little load off the marketing team or the growth team. And then he built another GPT that delivers experiment results, kind of like a data scientist: here's the result of this experiment. And then you could talk to it and ask, hey, how much longer do you think we should run this for? Or what might this imply about our product? And things like that. And I think it's really cool, like you said. Is there anything else that comes to mind?

16:37
Lenny Rachitsky
Just things you've heard people do that made you think, wow, that was a really smart way of using this. So I get there's, like, engineering copilot-type tooling. Anything else that comes to mind, just to give people a little inspiration, like, that's an interesting way, I should be thinking about using some of these tools?

16:50
Logan Kilpatrick
I've seen some interesting GPTs around planning use cases, like you want to do OKR planning for your team or something like that. I actually just saw somebody tweet one literally yesterday. I've seen some cool venture capital ones for doing diligence on deal flow, which is kind of interesting, and getting some different perspectives. I think all of those horizontal use cases where you can bring in a different personality and get perspective on different things are really cool. I've personally used a private GPT that helps with some of the planning stuff for different quarters: just making sure that I'm being consistent in how I'm framing things, driving back to individual metrics, stuff that, when people do planning, they often miss and are bad at.

17:38
Logan Kilpatrick
And it's been super helpful for me to have a GPT to force me to think about some of those things.

17:43
Lenny Rachitsky
Wait, can you talk more about this? What does this GPT do for you and what do you feed it?

17:48
Logan Kilpatrick
Yeah, I forget what article I found online, but it was some article talking about the best ways to set yourself up for success in planning. And I took a bunch of the examples from that, and I'll see if I can make it public after this and send you a link, but I went in and put some of those suggestions into the GPT. And now, when I do any of my planning, like, I want to build this thing, I put it through and have it generate a timeline, generate all the specifics of what metrics and success criteria I'm looking for, who might be some important cross-functional stakeholders to include in the planning process, all that stuff. And it's been helpful.

18:25
Lenny Rachitsky
Wow, that is very cool. That would be awesome if you made it public. And if you do, we'll link to it and we'll make it the number one most popular GPT in the store. I love it. Going in a slightly different direction: there's this whole genre of prompt engineering. It feels like it's one of these really emerging skills. I actually saw a startup hiring a prompt engineer, one of the startups I've invested in, and I think that's going to blow a lot of people's minds, that there's this new job emerging. And I know the idea is this won't last forever, that in theory, AI will be so smart you won't really need to think about how to be smart about asking it for the things you need it to do.

18:59
Lenny Rachitsky
But can you just describe this idea of prompt engineering, this term people might be hearing? And then, even more interestingly, what advice do you have for people to get better at writing prompts for, say, ChatGPT, or through the API in general?

19:12
Logan Kilpatrick
Yeah, this is such an interesting space, and I think it's another space where I'm excited for people to do more scientific, empirical studies, because there are so many gut-feeling best practices that maybe aren't actually true in certain ways. I think the reason that prompt engineering exists and comes up at all is because the models are so inclined, because of the way that they're trained, to just give you an answer to the question that you asked. Crap in, crap out. If you ask a pretty basic question, you're going to get a pretty basic response. I actually think the same thing is true for humans. You can think of a great example of this: when I go to another human and I ask, how's your day going? They say, it's going pretty good.

19:53
Logan Kilpatrick
Literally absolutely zero detail, no nuance, not very interesting at all. Versus, again, if you have some context with the person, if you have a personal relationship with them, and I ask you, hey, Lenny, how's your day going? How did the last podcast go? Et cetera, et cetera. You just have a little bit more context and agency to go and answer my question. And this is prompt engineering. My whole position on this is that prompt engineering is a very human thing. When we want to get some value out of a human, we do this prompt engineering. We try to effectively communicate with that human in order to get the best output. And the same thing is true of models.

20:29
Logan Kilpatrick
And again, because we're using a system that appears to be really smart, we assume that it has all this context. But really, imagine a human-level intelligence with literally no context. It has no idea what you're going to ask it. It's never met you before. It has no idea who you are, what you do, what your goals are. The reason you get super generic responses sometimes is because people forget they need to put that context into the model. So I think the thing that is going to help solve this problem, and we already kind of do this in the context of DALL-E.

21:03
Logan Kilpatrick
So when you go to the image generation model that we have, DALL-E, and you say, I want a picture of a turtle, what it actually does is take that description, I want a picture of a turtle, and change it into this high-fidelity version: generate a picture of a turtle with a shell, with a green background and lily pads in the water, and all this other detail. It adds all this fidelity because that's the way the model is trained; it's trained on examples with super high fidelity. This will happen with text models. You can imagine a world where you go into ChatGPT and you say, write me a blog post about AI.

21:39
Logan Kilpatrick
It will automatically go and say, let me generate a much higher-fidelity description of what this person really wants, which is: generate me a blog post about AI that talks about the trade-offs between these different techniques, with some example use cases and references to some of the latest papers. And it does all that for you. And then you as the user will hopefully be able to say, yes, this is kind of what I wanted; let me edit this here and here. And again, the inherent problem is we're lazy as humans. We don't really want to type out what we mean. And I think AI systems are actually going to help solve some of that problem.
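
For developers who want to experiment with the prompt-expansion pattern Logan describes, here is a minimal sketch using the OpenAI Python SDK. The model name and the rewriting instructions are illustrative assumptions, not OpenAI's actual implementation:

```python
# A minimal sketch of the two-step prompt expansion Logan describes:
# first ask a model to rewrite a terse request into a high-fidelity
# prompt (the way DALL-E enriches image prompts), then answer it.
# Model name and instruction wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def expand_and_answer(terse_request: str) -> str:
    # Step 1: rewrite the terse request into a detailed prompt.
    expansion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's request as a detailed, specific "
                    "prompt: add audience, structure, examples, and "
                    "success criteria. Return only the rewritten prompt."
                ),
            },
            {"role": "user", "content": terse_request},
        ],
    )
    detailed_prompt = expansion.choices[0].message.content

    # Step 2: answer the enriched prompt instead of the original one.
    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": detailed_prompt}],
    )
    return answer.choices[0].message.content


print(expand_and_answer("Write me a blog post about AI."))
```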

22:12
Lenny Rachitsky
So until that day, what can people do better when they're prompting, say, ChatGPT? And I'll give you an example. Tim Ferriss suggested this really good idea that I've been stealing, which is, when you're preparing for an interview, go to ChatGPT. And so I did this for you. I was like, hey, I'm interviewing Logan Kilpatrick, he's head of developer relations at OpenAI, on my podcast. Give me ten questions to ask him, in the style of Tyler Cowen, who I think is the best interviewer. He's so good at very pointed, original questions. So what advice would you have for me to improve on that prompt to have better results? Because the questions were fine. They're interesting enough, but they weren't, like, holy shit, these are incredible. So what advice would you give me in that example?

22:57
Logan Kilpatrick
Yeah, that's a great example where you have to think in the context of who it is you're asking questions about. I'm probably not somebody who has enough information about me on the Internet for the model to have been trained on the nuances of my background. There are probably much more famous guests where there's enough context on the Internet to answer the questions. You actually have to do some of that work. If you're using Browse with Bing, for example, you could say, here's a link to Logan's blog and some of the things that he's talked about. Here's a link to his Twitter. Go through some of his tweets, go through some of his blogs, and see what his interesting perspectives are that we might want to surface, or something like that.

23:36
Logan Kilpatrick
And again, it's about giving the model enough context to answer the question. That prompt actually might work really well for somebody who has a lot of information about them on the Internet. If you were interviewing, like, Tom Cruise or something like that, it probably works a little bit better.
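
As an illustration of that advice, here is a hedged sketch of the same interview-prep request with the context pasted in by hand. The guest_context string is a stand-in; in practice you would paste real excerpts from the guest's blog and tweets:

```python
# A sketch of Logan's advice: supply the context yourself instead of
# assuming the model already knows the guest. The bio below is a
# placeholder for real excerpts you would paste in.
from openai import OpenAI

client = OpenAI()

guest_context = """
Logan Kilpatrick leads developer relations at OpenAI.
(Paste excerpts from his blog posts and tweets here.)
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You write pointed, original interview questions in the "
                "style of Tyler Cowen. Ground every question in the "
                "supplied context about the guest."
            ),
        },
        {
            "role": "user",
            "content": f"Guest context:\n{guest_context}\n\n"
                       "Give me ten questions to ask him on my podcast.",
        },
    ],
)
print(response.choices[0].message.content)
```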

23:51
Lenny Rachitsky
So the advice there is: just give more context. It doesn't tell you, hey, I don't actually know that much about Logan, so give me some more information. It's just like, here we go.

23:58
Logan Kilpatrick
Here's a bunch of questions. It so deeply wants to answer your question. It doesn't care that it doesn't have enough context. It's the most eager person in the world you could imagine to answer the question. And without that context, it's just hard to give you anything of value. If we got t-shirts printed, they should say, context is all you need. Context is the only thing that matters. It's such an important piece of getting a language model to do anything for you.

24:26
Lenny Rachitsky
Any other tips? Just as people are sitting there, maybe they have ChatGPT open right now as they're crafting a prompt. Is there anything else that you'd say would help them get better results?

24:37
Logan Kilpatrick
We actually have a prompt engineering guide, which folks should go and check out, and it has some of these examples. It depends on the order of magnitude of performance increase you can get. There are a lot of really small, silly things, like adding a smiley face increases the performance of the model. I'm sure folks have seen a lot of these silly examples, like telling the model to take a break and then answer the question, all these kinds of things. And again, if you think about it, it's because the corpus of information these models are trained on is the same stuff that humans have sent back and forth to each other. It's like you telling a human: when I go take a break and then come back to work, I'm fresher, and I'm able to answer questions better and do work better.

25:20
Logan Kilpatrick
So very similar things are true for these models. And again, when I see a smiley face at the end of someone's message, I feel empowered that this is going to be a positive interaction, and I should be more inclined to give them a great answer and spend more effort on the thing that they asked me for.

25:34
Lenny Rachitsky
Wow, wait, so that's a real thing? If you add a smiley face, it might give you better results.

25:39
Logan Kilpatrick
Again, the challenge with all this stuff is it's very nuanced, and it's a small jump in performance. You could imagine on the order of 1%, which for a few-sentence answer might not even be a discernible difference. But if you're generating an entire saga of text, the smiley face could actually make a material difference for you. For something small and tactical, it might not.

26:03
Lenny Rachitsky
Okay, good tip. Amazing. Okay, we've talked about GPTs. I think it might be helpful to describe what this new thing is that you guys launched, GPTs. And I'm curious how it's going, because this is a really big change and element of OpenAI now, with this idea that you could build your own kind of mini ChatGPT. And I'm almost explaining it myself. And then people can, I think you can pay for it, right? Like, you can charge for your own GPT, or is it all free right now?

26:30
Logan Kilpatrick
It's all free right now.

26:30
Lenny Rachitsky
Okay. It's all free. Okay. In the future, I imagine people will be able to charge. So there's this whole store now, basically a whole App Store that you guys have launched. How's it going? What's happening? What surprised you there? What should people know?

26:42
Logan Kilpatrick
Yeah, it's going great. Historically, if you had a really cool ChatGPT use case, what you would have to do to share it with somebody else is actually go in and start the conversation with the model, prompt it to do the things that you wanted it to, and then share that link with somebody else before the action had actually happened, and be like, here, now you can essentially finish this conversation with ChatGPT that I started. GPTs kind of change this: you take all that important context, you put it into the model to begin with, and then people can go and chat with essentially a custom version of ChatGPT.

27:19
Logan Kilpatrick
And the thing that's really interesting is you can upload files, you can give it custom instructions, and you can add all these different tools: a code interpreter is built in, which allows you to do math, essentially; browsing is built in; image generation is built in. For more advanced use cases, if you're a developer, you can also connect it to external APIs. So you can connect it to the Notion API or Gmail or all these different things and have it actually take actions on your behalf. So there are so many cool things that people are unlocking. And what's been most exciting to me, actually, is that the non-developer persona is now empowered to go and solve these more challenging problems by giving the model enough context on what that problem is to be able to solve it.

28:02
Logan Kilpatrick
Going back to context is all you need: this is very true in the context of GPTs. If you give it enough context, you can solve much more interesting problems. There are so many things that I'm excited about with this. I think monetization, when it comes to the store later this quarter, is going to be extremely exciting. When people can get paid based on who's using their GPTs, that's going to be a huge unlock and open a lot of people's eyes to the opportunity here. I also think continuing to push on making more capabilities accessible to GPTs for people who can't code is really exciting. Even for me, as someone who is a software engineer, it's not super easy to connect the Notion API or the Gmail API to my GPT.

28:45
Logan Kilpatrick
And really, I'd love to just be able to one-click sign in with Gmail, and then all of a sudden my Gmail is accessible, or someone else can sign in with their Gmail and make it accessible. So I think over time all those types of things will come, but today, custom prompts are essentially one of the biggest value-adds with GPTs.
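
GPT actions themselves are configured in the GPT builder, but for developers, the closest analogue on the API side is tool calling. Here is a hedged sketch with a made-up send_email tool; the tool schema format follows the Chat Completions function-calling API, while the tool itself is hypothetical:

```python
# The API-side analogue of a GPT "action": declare a tool, let the
# model decide to call it, then execute it yourself. send_email is a
# hypothetical function you would implement against your own backend.
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",  # hypothetical action
        "description": "Send an email on the user's behalf.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": "Email Sam: the demo is ready."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the tool
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```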

29:03
Lenny Rachitsky
Awesome. I have it pulled up here on a different monitor, and Canva has the top GPT currently. I was trying to play with it as you were chatting, just to see. I was going to make a big banner that said, it's the context, stupid, and I'm not doing it right, but I'm not paying that much attention to it because we're talking. But this is very cool. Maybe a final question there: is there a GPT that you saw someone build that was like, wow, that's amazing, that's so cool, something that surprised you? And I'll share one that was really cool. But is there anything that comes to mind?

29:32
Logan Kilpatrick
I think my instinct is that all of the stuff that Zapier has done with GPTs is the most useful stuff you could imagine. You can go so far with it. I don't know how it's packaged for Zapier's GPT right now, but as a third-party developer, you can actually integrate Zapier into your GPT without knowing how to code. So they're pushing a lot of this stuff. And basically all 5,000 connections that are possible with Zapier today, you can bring into your GPT and essentially enable it to do anything. So I'm incredibly excited for Zapier and for people who are building with them, because there are so many things that you can unlock using that platform. I think that's probably the most exciting thing to me for people who aren't developers.

30:20
Lenny Rachitsky
Awesome. Zapier is always in there, connecting things.

30:23
Logan Kilpatrick
Yeah, they're great.

30:24
Lenny Rachitsky
So the one that I had in mind: I had a buddy of mine, Siqi, who's the CEO of a company called Runway, build this thing called Universal Primer, which helps you learn. It's described as: learn everything about anything. And it basically, I think, uses this kind of Socratic method of helping you learn stuff. So it's like, explain how transformers work in LLMs, and then it goes through stuff and asks you questions and kind of helps you learn new concepts. And I think it's the number two education GPT.

30:51
Logan Kilpatrick
I love that. Siqi is incredible.

30:53
Lenny Rachitsky
Yes, that's true. Let me tell you about a product called Arcade. Arcade is an interactive demo platform that enables teams to create polished, on-brand demos in minutes. Telling the story of your product is hard, and customers want you to show them your product, not just talk about it or gate it. That's why product-forward teams such as Atlassian, Carta, and Retool use Arcade to tell better stories within their homepages, product changelogs, emails, and documentation. But don't just take my word for it. Quantum Metric, the leading digital analytics platform, created an interactive product tour library to drive more prospects. With Arcade, they achieved a 2x higher conversion rate for demos and saw five times more engagement than videos. On top of that, they built a demo ten times faster than before.

31:41
Lenny Rachitsky
Creating a product demo has never been easier. With browser-based recording, Arcade is the no-code solution for building personalized demos at scale. Arcade offers product customization options, designer-approved editing tools, and rich insights about how your viewers engage every step of the way. Ready to tell more engaging product stories that drive results? Head to arcade.software/lenny and get 50% off your first three months. That's arcade.software/lenny. I want to talk about what it's like to work at OpenAI, how the product team operates, and how the company operates. Your two previous companies were Apple and NASA, which are not known for moving fast. And now you're at OpenAI, which is known for moving very fast, maybe too fast for some people's taste, as we saw with the whole board thing.

32:30
Lenny Rachitsky
So what I'm curious about is: what is it that OpenAI does so well that allows them to build and ship so quickly, and at such a high bar? Is there a process or a way of working that you've seen that you think other companies should try, to move more quickly and ship better?

32:48
Logan Kilpatrick
You know, there are so many interesting trade-offs and all of this tension around how quickly companies can move. If you think about Apple as an example, or NASA as an example, just older institutions, over time the tendency is that things slow down. There are additional checks and balances put in place, which drag things down a little bit. We're young, a new company, so we don't have a lot of those institutional legacy barriers. I think the biggest thing, and there's a good Sam tweet somewhere in the ether about this, from 2022 or something like that, is finding people who are high agency and work with urgency.

33:35
Logan Kilpatrick
If I was hiring five people today, those are the top two characteristics that I would look for in people, because you can take on the world if you have people who have high agency and don't need to get 50 different people's consensus, because you have people who you trust with high agency and they can just go and do the thing. It is the most important thing, I'm pretty sure, if you were to distill it down. And I see this in folks that I work with. Folks are so high agency, they see a problem and they go and tackle it.

34:10
Logan Kilpatrick
They hear something from our customers about a challenge that they're having, and they're already pushing on what the solution for them is, not waiting for all the other things to happen that I think traditional companies are stuck behind, because they're like, oh, let's check with these seven different departments to try to get feedback on this. People just go and do it and solve the problem. And I love that. It's so fun to be able to be a part of those situations.

34:35
Lenny Rachitsky
That is so cool. I really like these two characteristics, because I haven't heard these before as maybe the two most important things you guys look for: high agency, high urgency. To give people a clear sense of what these actually look like when you're hiring, you shared this example of customer service, someone hearing a bug and then going to fix it. Is there anything else that can illustrate what high agency looks like? And then a similar question on urgency, other than just, like, move, ship, ship.

35:01
Logan Kilpatrick
I think the Assistants API that we released for Dev Day. We continued to get this feedback from developers that people wanted these higher levels of abstraction on top of our existing APIs. And a bunch of folks on the team just came together and were like, hey, let's put together what the plan would look like to build something like this. And then they very quickly came together and actually built the API that now powers so many people's assistant applications out there. And I think that's a great example of it not being top-down. It wasn't someone sitting there saying, let's do these five things; okay, team, go and do that. It's people really seeing these problems that are coming up and knowing that they can come together as a team and solve them really quickly.

35:46
Logan Kilpatrick
And I think the Assistants API, and there are a thousand and one other examples of teams taking agency and doing this, but I think that's a great one off the top of my head.
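
For readers curious what the Assistants API looks like in practice, here is a minimal sketch using the OpenAI Python SDK as it existed in beta around the time of this episode; the assistant name, instructions, and model choice are illustrative:

```python
# A minimal sketch of the Assistants API Logan mentions (beta at the
# time of this episode). Assistant name, instructions, and model are
# illustrative; the API manages conversation history in threads.
from openai import OpenAI

client = OpenAI()

# The higher-level abstraction developers asked for: an assistant
# with built-in tools instead of hand-rolled chat loops.
assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="Answer questions about data the user uploads.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-turbo-preview",
)

# Conversations live in threads.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What can you help me with?",
)

# A run executes the assistant against the thread; poll until done,
# then read the assistant's reply off the thread.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.status)
```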

35:56
Lenny Rachitsky
That makes me want to ask: how does planning work at OpenAI? In this example, it's just, hey, we think we need to build this, let's just go and build it. I imagine there's still a roadmap and priorities and goals and things that team had. How do roadmapping and prioritization and all of that generally work to allow for something like that?

36:13
Logan Kilpatrick
I think this is one of the more challenging pieces at OpenAI. Everyone wants everything from us, and today, especially in the world of ChatGPT and how large and well-used our API is, people will just come to us and say, hey, we want all of these things. I think there are a bunch of core guiding principles that we look at. One, going back to the mission: is this actually going to help us get to AGI? So there's a huge focus on that. There's this potentially shiny reward right in front of us, which is optimizing user engagement or whatever it is. And is that really the thing? Maybe the answer is yes. Maybe that is what is going to help us get to AGI sooner.

36:56
Logan Kilpatrick
But looking at it through that lens is always the first step of deciding any of these problems. On the developer side, there are also these core tenets of reliability. Hey, it would be awesome if we had additional APIs that did all these cool things: new endpoints, new modalities, new abstractions. But are we giving customers a robust and reliable experience on our API? And that's often the first question. And I think there have been times where we've fallen short on that, and there were a bunch of other things we'd been thinking about doing, and we really brought the focus and priority back to that reliability piece. Because at the end of the day, nobody cares if you have something great if they can't use it robustly and reliably.

37:37
Logan Kilpatrick
So there are these core tenets. And other than all the principles about how we make the decision, I think the actual planning process is pretty standard. We come together, there are H1 and Q1 goals, and we all sprint on those. I think the real interesting thing is how stuff changes over time. You think you're going to do these very high-level things, new models, new modalities, whatever it is. And then as time goes on, there's all of this turmoil and change. And it's interesting to have mechanisms to say, hey, how do we update our understanding of the world and our goals as the ground changes underneath us, as is happening in the craziness of the AI space today?

38:21
Lenny Rachitsky
It's interesting that it sounds a lot like most other companies. There's H1 planning, there's Q1 planning. Are there metrics and goals like that? Do you guys have OKRs or anything like that? Or is it just, here, we're going to launch these products?

38:33
Logan Kilpatrick
I think it's much higher level. I actually don't think OpenAI is a big OKR company. I don't think teams do OKRs today, and I don't have a good understanding of why that's the case. I don't even know if OKRs are still the industry standard. You're probably talking to a lot more folks who are making those decisions. So I'm curious, is that something that you're seeing from folks? Is it still common for people to do OKRs?

38:55
Lenny Rachitsky
Yeah, absolutely. Many companies use OKRs, love OKRs. Many companies hate OKRs. I am not surprised that OpenAI is not an OKR-driven company. Along those lines, I don't know how much you can share about all this stuff, but how do you measure success for things that you launch? I know there's this ultimate goal, AGI. Is there some way to track that you're getting closer? What else do you guys look at when you launch, say, the GPT Store or Assistants, or anything, that tells you, cool, that was exactly what we were hoping for? Is it just adoption?

39:20
Logan Kilpatrick
Yeah, adoption is a great one. I think there are a bunch of metrics around revenue, the number of developers building on our platform, all those things. And I don't want to dive in too deep; I'll let Sam or someone else on our leadership team go more into the details, but I think a lot of these are actually abstractions towards something else. Even if revenue is a goal, revenue is not actually the goal. Revenue is a proxy for getting more compute, which is then actually what helps us get more GPUs so that we can train better models and actually get to the goal. So there are all these intermediate layers, where even if we say something is the goal and you hear that in a vacuum, you're like, oh, well, OpenAI just wants to make money.

40:02
Logan Kilpatrick
And it's like, well, really, money is the mechanism to get better models so that we can achieve our mission. And I think there's a bunch of interesting angles like that as well.

40:12
Lenny Rachitsky
I don't know if I've heard a more ambitious vision for a company: to build artificial general intelligence. I love that. I imagine many companies are like, what's our version of that? Before we leave this topic, is there anything else that you've seen OpenAI do really well that allows it to move this fast and be this successful? You talked about hiring people with high agency and high urgency. Is there anything else that's like, oh, that's a really good way of operating? I imagine part of it is just hiring incredibly smart people. I think that's probably an unsaid thing. But yeah, anything else?

40:45
Logan Kilpatrick
I think there's a non-trivial benefit to using Slack, and maybe that's controversial and maybe some people don't like Slack, but OpenAI has such a Slack-heavy culture, and really the instantaneous, real-time communication on Slack is so crucial. And I just love being able to tag in different people from different teams and get everybody coalesced. Everybody is always on Slack. So even if you're remote or you're on a different team or in a different office, so much of the company culture is ingrained in Slack, and it allows us to really quickly coordinate. It's actually faster to send someone a Slack message sometimes than it would be to walk over to their desk, because they're on Slack and they're going to be using it.

41:27
Logan Kilpatrick
And I don't know if you saw the recent Sam and Bill Gates interview, but Sam was talking about how Slack is his number one most used app on his phone. I don't even look at the screen time thing on my phone; I don't want to know how long I'm using Slack. But I'm sure the Salesforce people are looking at the numbers and they're like, this is exactly what we want.

41:45
Lenny Rachitsky
So I also love Slack. I'm a big promoter of Slack. I think there's a lot of Slack hate, but it's such a good product. I've tried so many alternatives and nothing compares. I think what's interesting about Slack for you guys is, like, you don't know if someone in there is just an AGI, not actually a person, that's just there working at the company.

42:01
Logan Kilpatrick
I know they're real people. There are no AGIs yet. But even Slack is building a bunch of really cool AI tools, which I'm excited about. And that's why there's so much cool AI progress. At the end of the day, it's so exciting to be a consumer of all these new AI products. Google's a great example. I'm so happy that Google is doing really cool AI stuff, because I'm a Google Docs customer and I love using Google Docs and a bunch of their other products. And it's awesome that people are building such useful things around these models.

42:33
Lenny Rachitsky
How big is the OpenAI team at this point? Whatever you can share just to give people a sense of the scale.

42:37
Logan Kilpatrick
Yeah, I think the last public number was something around 750 or 780 near the end of last year. And we're still growing so quickly, so I won't be the messenger to share the specific updated numbers, but the team is growing like crazy. And we're also hiring across all of our engineering teams and PM teams. So if folks are interested, we'd love to hear from folks who are curious about joining.

43:03
Lenny Rachitsky
Maybe one last question here. So you're growing, maybe getting to a thousand people, and you're clearly still very innovative and moving incredibly fast. Is there anything you've seen about what OpenAI does well to enable innovation and not slow down new big ideas?

43:19
Logan Kilpatrick
Yeah, there are a couple of things, one of which is that the actual research team, which seeds most of the innovation that happens at OpenAI, is intentionally small. Most of the growth that OpenAI has seen is around our customer-facing roles and our engineering roles, to provide the infrastructure to support ChatGPT and things like that. The research team is, again, intentionally kept small. And there's all of this talk, and it's really interesting.

43:44
Logan Kilpatrick
I just saw this thread from one of our research folks who was talking about how, in a world where you're constrained by the amount of GPU capacity that you have as a researcher, which is the case for OpenAI researchers but also researchers everywhere else, each new researcher that you add is actually a net productivity loss for the research group, unless that person is up-leveling everyone else in such a profound way that it increases the efficiency. If you just add somebody who's going to go and tackle some completely different research direction, you now have to share your GPUs with that person, and everyone else is now slower on their experiments.

44:22
Logan Kilpatrick
So it's a really interesting trade-off that research folks have that I don't think product folks do. If I add another engineer to our API team or to some of the ChatGPT teams, they can actually write more code and do more, and that's a net beneficial improvement for everybody. And that's not always the case for researchers, which is interesting in a GPU-constrained world, which hopefully we won't always be in.

44:47
Lenny Rachitsky
I want to zoom out a bit, and there are going to be a couple of follow-up questions here. Where are things heading with OpenAI? What's in the near future of what people should expect from the tools that you guys are going to have and launch?

44:58
Logan Kilpatrick
Yeah, new modalities. I think with ChatGPT, continuing to push all of the different experiences that are going to be possible. Today, ChatGPT is really just text in, text out. Or, I guess, three months ago it was just text in, text out. We started to change that: now you can use voice mode, and now you can generate images, and now you can take pictures. So I think continuing to expand the ways in which you interface with AI through ChatGPT is coming. I think GPTs are our first step towards the agent future. Today, when you use a GPT, you send a message, you get an answer back almost right away, and that's kind of the end of your interaction.

45:36
Logan Kilpatrick
I think as GPTs continue to get more robust, you'll actually be able to say, hey, go and do this thing and just let me know when you're done. I don't need the answer right now. I want you to really spend time and be thoughtful about this. And again, if you think back to all these human analogies, that's what we do with humans. I don't expect somebody, when I ask them to do something meaningful for me, to do it right away and give me the answer back right away. So I think pushing more towards those experiences is what is going to unlock so much more value for people. And I think the last thing is GPTs as a mechanism to get the next few hundred million people into ChatGPT and into AI.

46:16
Logan Kilpatrick
Because if you have conversations with people who aren't close to the AI space, oftentimes, even if they've heard of ChatGPT, and a lot of people haven't heard of ChatGPT, but if they have, they show up in ChatGPT and they're like, I don't really know what I'm supposed to do with this blank slate. I can kind of do anything. It's not super clear how this solves my specific problems. And I think the cool thing about GPTs is you can package down: here's this one very specific problem that AI can solve for you and do it really well. And I can share that experience with you, and now you can go and try that GPT, have it actually solve the problem, and be like, wow, it did this thing for me.

46:51
Logan Kilpatrick
I should probably spend the time to investigate these five other problems that I have to see if AI can also be a solution to those. So I think so many more people are going to come online and start using these tools, because very narrow, vertical tools are what's going to be a huge unlock for them.

47:07
Lenny Rachitsky
So the last case is a classic horizontal product problem, where the product does so many things that people don't know what exactly it should do for them. That makes a ton of sense. Being a lot more template-oriented and use-case specific, helping people onboard, makes tons of sense; it's a common problem for so many SaaS products out there. The other ones you mentioned are really interesting: basically more interfaces to more easily interact with OpenAI, voice, audio, and things like that, makes tons of sense. And then this agents piece, where the idea is, instead of just a chat, it's like, hey, go do this thing for me. Kind of along those lines: GPT-5. We touched on this a bit. There's a lot of speculation about the much better version.

47:49
Lenny Rachitsky
People just have these wild expectations, I think, for where GPT is going; GPT-5 is going to solve all the world's problems. I know you're not going to tell me when it's launching and what it's going to do, but I heard from a friend this tip that when you're building products today, you should build towards a GPT-5 future, not based on the limitations of GPT-4 today. So to help people do that, what should people think about that might be better in a world of GPT-5? Is it just faster? Is it just smarter? Is there anything else that might be like, oh, wow, I should really rethink how I'm approaching my product?

48:22
Logan Kilpatrick
If folks have looked through the GPT-4 technical report that we released back in March when GPT-4 came out, GPT-4 was the first model we trained where we could reliably predict the capabilities of that model beforehand, based on the amount of compute we were going to put into it. And we did a scientific study to show: this is what we predicted, and here is what the actual outcome was. So it'll be interesting, just as somebody who's interested in technology, to see whether that continues to hold for GPT-5. And hopefully we'll share some of that information whenever that model comes out. I also think you can probably draw a few observations. One of them: GPT-4 came out, and the consensus from the world was that everything is different.

49:08
Logan Kilpatrick
Like, all of a sudden everything is different. This changes the world, this changes everything. And then slowly but surely, we come back to the reality of: this is a really effective tool, and it's going to help solve my problems more effectively. And I think that is undoubtedly the lens through which people should look at all of these model advancements. GPT-5 is surely going to be extremely useful and solve some whole new echelon of problems. Hopefully it'll be faster, hopefully it'll be better in all these ways. But fundamentally, the same problems that exist in the world are still going to be the same problems. You now just have a better tool to solve those problems.
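For context on the "predictable scaling" point above: the idea is that loss follows a power law in training compute, so you can fit the law on smaller runs and extrapolate to a large one. Here is a minimal sketch with entirely made-up numbers; the GPT-4 technical report describes the real methodology.

```python
# Fit loss = a * compute**(-b) on small runs, then extrapolate to a big run.
# All data points here are invented, purely for illustration.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])  # training FLOPs (hypothetical)
loss = np.array([3.2, 2.6, 2.1, 1.7])         # final loss (hypothetical)

# A power law is a straight line in log-log space: log L = log a - b * log C.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

print(f"predicted loss at 1e23 FLOPs: {a * 1e23 ** (-b):.2f}")
```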

49:44
Logan Kilpatrick
And going back to vertical use cases, I think people who are solving very specific use cases are just now going to be able to do that much more effectively. People have these unrealistic expectations that GPT-5 is going to be doing backflips in the background in my bedroom, while it also writes all my code for me and talks on the phone with my mom or something like that. That's not the case. It is just going to be a very effective tool, very similar to GPT-4, and it's also going to become very normal very quickly. And I think that is actually a really interesting piece: if you can plan for the world where people become very used to these tools very quickly, I actually think that's an edge.

50:26
Logan Kilpatrick
Assuming that this thing is going to absolutely change everything is, in many ways, actually a downside; it's the wrong mental framing to have of these tools as they come out.

50:38
Lenny Rachitsky
Kind of along these lines: you guys are investing a lot into B2B offerings. I think half the revenue, last I heard, was B2B and half is B2C. I don't know if that's true, but that's something I heard. What is it that you get if you work with OpenAI as a company, as a business? What does it unlock? Is it just called OpenAI Enterprise? What's it called, and what do you get as a part of that?

51:00
Logan Kilpatrick
Yeah, so I think a lot of our B2B customers are using the API to build stuff, so that's one angle of it. If you're a ChatGPT B2B customer, we sell Team, which is the ability to get multiple ChatGPT subscriptions packaged together. We also have an enterprise version of ChatGPT; there's a bunch of things that enterprise companies want around SSO and stuff like that related to ChatGPT Enterprise. I think the coolest thing is actually being able to share some of these prompt templates and GPTs internally. So again, you can make custom things that work really well for your company, with all of the information that's relevant to solving problems at your company, and share those internally.

51:40
Logan Kilpatrick
And to me, that's the point: you want to be able to collaborate with your teammates on the cool things you create using AI. So that's a huge unlock for companies. I think those are the two biggest value adds. There are also higher limits and stuff like that on some of those models. But I think being able to share your very domain-specific applications is the most useful thing.

51:59
Lenny Rachitsky
And I think if you're a company listening and a lot of your employees are using ChatGPT, basically the simplest thing you could do is just roll it up into a business account with single sign-on. That probably saves you money and makes it easier to coordinate and administer.

52:14
Logan Kilpatrick
Yeah, there's also a bunch of security stuff too. If you want control, say you don't want people to use certain GPTs from the GPT Store because you're worried about security or privacy and you don't want your private data going places, it makes a lot of sense to sign up for that so you have a little bit more control over what's happening.

52:29
Lenny Rachitsky
Okay, got it. There's a launch happening tomorrow, I think, after we're recording this. Can you talk about what's new, what's coming out? I think this episode is going to come out a couple of weeks after recording, but what should people know that's new and coming out from OpenAI tomorrow, in our time, in our world?

52:45
Logan Kilpatrick
Yeah, so there's a few different things. A couple of quick ones: an updated GPT-4 Turbo model, an updated version of the preview model that we released at Dev Day. If folks have seen people online talking about this sort of laziness phenomenon in the model, we improve on that, and it fixes a lot of the cases where that was happening. So hopefully the model will be a little bit less lazy. The big thing is the third-generation embeddings model. We were talking off camera before recording about all of the cool use cases for embeddings. If folks have used embeddings before, it's essentially the technology that powers many of these question-answering experiences over your own documentation or your own corpus of knowledge.

53:26
Logan Kilpatrick
And Lenny, you were saying you actually have a website where people can ask questions about recordings of the podcast: lennybot.com.

53:35
Lenny Rachitsky
Check it out.

53:35
Logan Kilpatrick
Yeah, lennybot.com. And my assumption was that lennybot.com is actually powered by embeddings. You take all of the corpus of knowledge, all the recordings, your blog posts, and you embed them. Then when people ask questions, you can actually go in and see the similarity between the question and the corpus of knowledge, provide an answer to somebody's question, and reference an empirical fact, something that's true from your knowledge base. And this is super useful. A ton of what people are doing is trying to ground these models in reality, in what they know to be true. We know all the things from your podcast to be at least something you've said before, and true in that sense, and we can bring them into the answer that the model is actually generating in response to a question.

54:18
Logan Kilpatrick
So that'll be super cool. And these new v3 embeddings models have, again, state-of-the-art performance. The cool thing is the non-English performance has increased super significantly. Historically, embeddings really only worked well for English, and now you can use them across so many new languages because they're just so much more performant across those languages. And it's like five times cheaper as well, which is wonderful. There's no better feeling than making things cheaper for people. I love it. I'm pretty sure you can now embed something like 62,000 pages of text for one dollar, which is very cheap. So there are lots of really cool things you can do with embeddings, and I'm excited to see people embed more stuff.
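For readers who want to try this, here is a minimal sketch of the embed-retrieve-answer pattern Logan describes, using the OpenAI Python SDK. The model names and the two-document corpus are illustrative; a real lennybot-style app would chunk and store thousands of documents in a vector database.

```python
# A minimal sketch of embeddings-based Q&A over your own corpus.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [
    "Transcript snippet: Logan on why vertical AI tools onboard new users...",
    "Blog excerpt: how to iterate on prompts for more reliable answers...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(corpus)  # embed the whole knowledge base once, up front

def answer(question, k=2):
    q_vec = embed([question])[0]
    # OpenAI embeddings come unit-normalized, so a dot product is cosine similarity
    sims = doc_vecs @ q_vec
    context = "\n".join(corpus[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system", "content": f"Answer only from this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What did Logan say about vertical tools?"))
```

Grounding the model in retrieved text is what lets the answer "reference an empirical fact" from the knowledge base rather than free-associate.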

55:07
Lenny Rachitsky
What a deal. Final question before we get to a very exciting lightning round. Say you're a product manager at a big company, or even a founder. What do you think are the biggest opportunities for them to leverage the tech that you guys are building, GPT-4 and all the other APIs? How should people be thinking about leveraging this power in an existing product or a new product, whichever direction you want to go?

55:34
Logan Kilpatrick
Yeah, I think going back to this theme of new experiences is really exciting to me. You're going to have an edge on other people if you're providing AI that's not just accessible in a chatbot. People are using a ton of chat, and it's a really valuable surface area; it's clearly valuable because people are using it. But I think products that move beyond this chat interface are really going to have such an advantage. Also think about how to take your use case to the next level. I've tried a ton of chat examples that are very basic and provide a little bit of value to me, but I'm like, really, this should go much further: actually build your core experience from the ground up.

56:20
Logan Kilpatrick
I've used this product that lets you essentially manage or view the conversations happening online around certain topics, so I can go and look: what are people saying about GPT-4? And what I just said out loud, "what are people saying about GPT-4?", is the actual question that I have. In a normal product experience, I have to go into a bunch of dashboards and change a bunch of filters and stuff like that. What I really want is to just ask my question, what are people saying about GPT-4, and get an answer to that question in a very data-grounded way.

56:55
Logan Kilpatrick
And I've seen people solve part of this problem, where they'll be like, oh, here are a few examples of what people are saying. Well, that's not really what I want; I want the summary of what's happening. It just takes a little bit more engineering effort to make that happen. But that is the magical unlock of, wow, this is an incredible product that I'm going to continue to use, instead of, yeah, this is kind of useful, but I really want more.
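A hedged sketch of that "just ask the question" experience: pull the raw mentions from wherever they live, then have the model summarize, grounded only in that data. fetch_mentions() is a hypothetical placeholder for your own data source, not a real API.

```python
# Sketch of a data-grounded "ask your dashboards a question" feature.
# fetch_mentions() is a hypothetical stand-in for your database or search API.
from openai import OpenAI

client = OpenAI()

def fetch_mentions(topic: str) -> list[str]:
    # Placeholder: a real product would query its own mention store here.
    return ["GPT-4 handled my refactor surprisingly well.",
            "Hit rate limits on GPT-4 again today."]

def summarize(topic: str) -> str:
    mentions = "\n".join(f"- {m}" for m in fetch_mentions(topic))
    resp = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system",
             "content": "Summarize what people are saying, grounded only in the mentions given."},
            {"role": "user", "content": f"Topic: {topic}\nMentions:\n{mentions}"},
        ],
    )
    return resp.choices[0].message.content

print(summarize("GPT-4"))
```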

57:20
Lenny Rachitsky
Awesome. I'll give a shout-out to a product; I'm not an investor, but I know the founder. It's called visualelectric.com, and I think it's doing exactly this. It's basically a tool specifically built for creatives, I think specifically graphic design, to help them create imagery. So there's DALL-E, obviously, but this takes it to a whole new level: it's this infinite canvas where you can generate images, edit them, tweak them, and keep iterating until you have the thing that you need. Visual Electric.

57:48
Logan Kilpatrick
I'm going to try this out. Is it similar to Canva?

57:51
Lenny Rachitsky
It's even more niche; I think more sophisticated graphic design is the use case. But I'm not a designer, so I'm not the target customer. I will say my wife is a graphic designer. She'd never used AI tools. I showed her this and she got hooked on it. She paid for it without even telling me that she was going to become a paying customer. And she created imagery of our dog and all this art, and now the art she created is on our TV; we have a Frame TV and that's the image on it. Anyway, I love that.

58:21
Logan Kilpatrick
What was it called again?

58:22
Lenny Rachitsky
Visualelectric.com. Anyway, anything else you wanted to touch on or share before we get to our very exciting lightning round?

58:32
Logan Kilpatrick
I've made this statement a few times online and other places, but for people who have cool ideas that they should build with AI, this is the moment. There are so many cool things that need to be built for the world using AI. And again, if I or other folks on the team at OpenAI can be helpful in getting you over the hump of starting that journey of building something really cool, please reach out. The world needs more cool solutions using these tools, and I'd love to hear about the awesome stuff that people are building.

59:01
Lenny Rachitsky
I would have asked you this at the end, but how would people reach out? What's the best way to actually do that?

59:06
Logan Kilpatrick
Twitter or LinkedIn. My email should be findable somewhere; I don't want to say it and then get spammed with a bunch of emails, but you should be able to find it online if you need it. Twitter and LinkedIn are usually the easiest places.

59:18
Lenny Rachitsky
And how do they find you on Twitter?

59:21
Logan Kilpatrick
It's just Logan Kilpatrick. I think my name shows up as Logan.GPT, at OfficialLoganK. Yeah.

59:28
Lenny Rachitsky
Awesome. Okay, and we'll link to it in the show notes. Amazing. Well, Logan, with that we've reached our very exciting lightning round. Are you ready?

59:34
Logan Kilpatrick
I'm ready.

59:35
Lenny Rachitsky
First question, what are two or three books that you've recommended most to other people?

59:39
Logan Kilpatrick
I think the first one, and it's one that I read a long time ago and came back to recently, is The One World Schoolhouse by Sal Khan. It's a lightning round, so I won't say too much, but it's an incredible story, and AI is what is going to enable Sal Khan's vision of a teacher per student to actually happen. So I'm really excited about that. And the other one that I always come back to is Why We Sleep. Sleep and sleep science are so cool. If you don't already care about your sleep, it's one of the biggest up-levels that you can do for yourself.

01:00:14
Lenny Rachitsky
What is a favorite recent movie or tv show that you really enjoyed?

01:00:18
Logan Kilpatrick
I'm a sucker for a good, inspirational human story. So I watched the Gran Turismo movie with my family recently over the holidays. It's a story about a kid from London who grew up doing sim racing, which is virtual race car driving, and through a competition ended up becoming a real professional race car driver. It's just really cool to see someone go from driving a virtual car to driving a real car and competing in the 24 Hours of Le Mans and all that stuff.

01:00:50
Lenny Rachitsky
I used to play that game and it was a lot of fun, but I don't think I have any clue how to drive a real race car, so that's inspiring. Do you have a favorite interview question that you like to ask candidates?

01:01:02
Logan Kilpatrick
Yeah. I'm always curious to hear the thing that people so strongly believe that others disagree with them on.

01:01:11
Lenny Rachitsky
What do you look for in an answer that makes you go, wow, that's a really good signal?

01:01:16
Logan Kilpatrick
Oftentimes it's just an entertaining question to ask in some sense, but it's interesting to see what somebody's deeply held strong belief is. It's not to judge whether or not I believe in it; I'm just curious to see why people feel that way.

01:01:35
Lenny Rachitsky
What is a favorite product that you've recently discovered that you really like?

01:01:39
Logan Kilpatrick
On the narrative of sleep: I have this really nice sleep mask from a company called, and I'm not being paid to say this, Manta Sleep or something like that. It's a weighted sleep mask, and it feels incredible. I don't know, maybe I just have a heavy head or something, but it feels good to wear a weighted sleep mask at night. I really appreciate it.

01:02:01
Lenny Rachitsky
I have a competing sleep mask that I highly recommend. I'm trying to find it; I've emailed people about it a couple of times in my newsletter for gift guides. Okay, my favorite is called the Waoaw sleep mask.

01:02:14
Logan Kilpatrick
What do you like about it?

01:02:16
Lenny Rachitsky
W-A-O-A-W. I'll link to it in the show notes. It has a lot of room; it's very large, and there's space for your eyes, so your eyelashes and eyes aren't pressed on, and it just fits really nicely around the head. My wife and I both wear eye masks at night; speaking of sleep, it really helps us sleep. I love it. It doesn't have the weighted piece, so yours might be worth trying, but everyone I've recommended this to is like, that changed my life, thank you for helping me sleep better. So we'll link to both of them. Look at that sleep mask.

01:02:48
Logan Kilpatrick
Look at us.

01:02:49
Lenny Rachitsky
Adulting. Two more questions. Do you have a favorite life motto that you often come back to and share with friends or family, either in work or in life?

01:02:59
Logan Kilpatrick
Yeah, I've got it. It's on a Post-it note right behind my camera, and it's "measure in hundreds." I love this idea of measuring things in hundreds, and it's for folks who are at the beginning of some journey. I talk to people all the time who are like, yeah, I've tried this thing and it hasn't worked. And if your mental model is to measure in hundreds, then the five times you failed at something mean you've basically tried zero times. I love that. It's such a great reminder that everything in life is built on compounding and multiple attempts at stuff. And if you don't try enough times, you're never going to be successful at it.

01:03:38
Lenny Rachitsky
I love that. I can see why you are successful at OpenAI and why you're a good fit there. Final question. I asked ChatGPT for a very silly question: give me a bunch of silly questions to ask Logan Kilpatrick, head of developer relations at OpenAI. I went through a bunch; I have three here, but I'm going to pick one. If an AI started doing stand-up comedy, what do you think would be its go-to joke or funny observation about humans?

01:04:06
Logan Kilpatrick
I think today, if you were to do this, the go-to joke would be something like, "so an AI walks into a bar," likely because, again, it's trained on some distribution of training data, and that's the most common joke that comes up. I'm wondering, if it came up with a joke right now, whether or not that would show up in one of the examples.

01:04:30
Lenny Rachitsky
I love it. What would be the joke, though? We need the joke. We need the punchline. I'm just joking; I know you can't come up with one on the spot.

01:04:37
Logan Kilpatrick
That's what we have GPT for.

01:04:40
Lenny Rachitsky
Amazing. Logan, thank you so much for being here. Two final questions, even though you've already shared this information, just to remind folks: where can people find you if they want to reach out and ask you more questions? And how can listeners be useful to you?

01:04:55
Logan Kilpatrick
Yeah, Twitter and LinkedIn, Logan Kilpatrick or Logan.GPT. Please shoot me messages; I get a ton of DMs from people, and it's always really interesting stuff. The thing that I would love help on is people finding bugs and things that don't work well in ChatGPT. I oftentimes see people say, this thing didn't work really well. And the key, and I think we as OpenAI need to do a better job of messaging this to people, is that shared chats or actual, tangible, reproducible examples are the two things we need in order to actually fix the problems that people have.

01:05:33
Logan Kilpatrick
The model laziness was a good example where it was kind of hard to figure out what was going on, because people would say the model is lazier, but it was hard to figure out what prompts they were using, what the examples were, all that stuff. So send those examples as you come across things that don't work well, and we'll make stuff better for you.

01:05:49
Lenny Rachitsky
Amazing. And I'll also just remind people: if you're listening to this and you're like, oh, okay, cool, a lot of cool ideas for OpenAI and ChatGPT, what you need to do is actually just go to chat.openai.com and try this stuff out. There's a lot of theorizing, but I think once you actually start doing it, you start to see things a little differently. At this point, every day I'm in there doing something: asking for ideas for questions, doing research on a newsletter post. It's just a tab I'm always coming back to.

01:06:19
Lenny Rachitsky
And I know there are a lot of people just talking about this sort of thing, and I just want to remind people: go sign in, play with it, ask it questions about something you're working on, see how it goes, and keep coming back to it. Is there anything else you want to share along those lines to inspire people to give this a shot?

01:06:34
Logan Kilpatrick
I love it. On people being worried about humans being replaced by AI: I've seen this narrative online that it's not AI that's going to replace humans, it's other humans who are augmented by and using AI tools who are going to be more competitive in the job market and stuff like that. So go and try these AI tools. This is the best time to learn. You're going to be more productive and empowered in your job and the things that you're excited about. So yeah, I'm excited to see what people use ChatGPT for.

01:07:00
Lenny Rachitsky
And then you can expense your account; I think it's ten or twenty dollars a month. A lot of companies are paying for this for you, so ask your boss if you can just have it expensed, and make sure you use the latest version. Anyway, Logan, thank you again so much for being here.

01:07:15
Logan Kilpatrick
This was awesome. Buddy, thanks for having me, and thoughtful questions. Hopefully those weren't all from ChatGPT.

01:07:19
Lenny Rachitsky
Nope, only the last one. I did have a bunch of others in the belt, or in the pocket, I don't know what the metaphor is. In the back pocket, that's the metaphor. But I did not get to them because we had enough great stuff. So no, that was all me. Human.

01:07:35
Logan Kilpatrick
Thank you.

01:07:36
Lenny Rachitsky
Thanks, Logan. Lennybot.com, check it out. Okay, thanks Logan. Bye everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at Lennyspodcast.com. See you in the next episode.


Source: Inside OpenAI | Logan Kilpatrick (head of developer relations)

