Everyone’s talking about what went down at OpenAI - but what does it mean for ethical and responsible AI, and what are the lessons for entrepreneurs on startup board governance? With the abrupt dismissal of CEO Sam Altman, questions loom over the stability and direction of one of AI's most influential entities. Let’s uncover the twists and turns in this high-stakes boardroom drama.
Why was Sam Altman abruptly dismissed - only to be brought back again? What role did board member Helen Toner and the responsible AI community play in the firings? Who is on deck next as the newest members of the board - and who else should OpenAI consider bringing? What choices face Sam Altman - and key partner at Microsoft Satya Nadella - and what should these leaders do next? Finally, what can the AI and startup communities learn from how it all went down?
Tom Chavez and Vivek Vaidya give us their timely takes on what happened at OpenAI. With incisive commentary and expert insights, they explore the events leading to Altman's sacking and the broader implications for AI governance. Note that this episode was recorded on November 27th, 2023, and published on December 1st, 2023, so it may not reflect the most current developments at OpenAI.
Read Tom Chavez’s op-ed in Tech Crunch on the need for an interdisciplinary approach to AI alignment: https://www.superset.com/feed/tom-chavez-in-tech-crunch-answering-ais-biggest-questions-requires-an-interdisciplinary-approach
Listen to more episodes at www.theclosedsession.com
Tom: Welcome to The Closed Session: How to Get Paid in Silicon Valley, with your hosts Tom Chavez and Vivek Vaidya. Welcome back to The Closed Session. My name is Tom Chavez, and I am Vivek Vaidya. Here we are again. Oh my goodness. We got a hot, hot topic to kick around here. We're just gonna let loose, right?
Vivek: I think so.
Tom: Because you're known for holding back.
Vivek: I am very, I'm very very careful.
Tom: Very reserved.
Vivek: Mm hmm.
Tom: I mean we're in meetings all day long and we're like god damn it. Why can't, why won't Vivek tell us what he really thinks? When are you gonna come out of your shell?
Vivek: I don't know. I'm still working on it, Tom. I need to, I need to muster up the courage and the confidence to, uh, yeah...
Tom: Is your therapist helping though? Is my question.
Vivek: Which one?
Tom: Well, time for a new one or a new lineup.
Vivek: I've tried three already.
Tom: Yeah.
Vivek: And so far.
Tom: And they all said hopeless. Nevermind.
Vivek: Yep.
Tom: I get it. I get it.
Vivek: Do you have some suggestions for me?
Tom: Yes, let's make that a subject for the next podcast. But on this podcast,
Vivek: We're going to talk about
Tom: OpenAI. Now look...
Vivek: What part of OpenAI are you going to talk about?
Tom: The debacle of the last two weeks and, what in the hell is going on with the board? What just went down these last couple of weeks? It is the strangest pile of mishegoss in board governance in recent memory. Have you seen anything kookier than this?
Vivek: Nope. I think we should stop calling it the OpenAI board. We should call it the former OpenAI apology of a board, right?
Tom: Right. Just, you know, these guys, it's like a clown car. Yeah, it's a clown car, but it's only like four clowns. Oh my goodness.
Vivek: So, so what happened?
Tom: Okay, look, we're gonna try to piece this. Let's, let's, let's give breadcrumbs.
Vivek: Mm hmm.
Tom: Just back it up a tiny, tiny, tiny bit, and then we're gonna know what, what happened. And we're just gonna theorize idly, because that's what we do around here, because we're not really...
Vivek: We don't know anything.
Tom: But you... there are some interesting clues that you can look at from the outside to help you build a theory as to what the heck was going on inside. So there's this guy named Sam Altman, CEO of OpenAI, you know, and he's been in and around Silicon Valley, started a company a long, long time ago, worked his way into the YC job, keeps on showing up, apparently, and so started OpenAI in 2015.
Vivek: As a non profit.
Tom: As a non profit.
Vivek: With Elon Musk.
Tom: Okay. So there's one of the, so now we're going back, getting to the root conditions as to what the heck happened. So it's a non profit.
Vivek: Mm hmm.
Tom: Which is, and by the way, it's a nonprofit that took, I don't know, was it 13 billion worth of Microsoft's money as an investment
Vivek: Later on
Tom: Later on.
Vivek: Yep.
Tom: Right. So there's this dissonance, there's this strange kind of initial conditions, and then there's what's happened in the last year. But we all know, we've all read the news, right? You can run, but you can't hide. It's on the cover of every newspaper on the planet: OpenAI's board decided to fire Sam. And the statements that we were reading were pretty bizarre, right? What were you picking up there?
Vivek: Lack of candor and transparency.
Tom: Lack of candor and transparency.
Vivek: In what, though? Nobody talked about that. Like, what was he not transparent about?
Tom: So, right, I mean, if you're gonna go for the king, you better not miss, and you better have something more specific to say than lack of candor. That's almost just a hair's breadth away from saying, that guy looked at me funny. I'm not sure I like it very much. So...
Vivek: So who was on the board?
Tom: So, um, let's see. There's, uh, a guy named Ilya, who's the co-founder and chief scientist at OpenAI. There's somebody named Adam D'Angelo, who's the CEO of Quora, formerly with Facebook. Tasha McCauley, tech entrepreneur, but better known, I guess, as the actor Joseph Gordon-Levitt's wife. In many circles you're known as Pallavi's husband, and I know that makes you feel...
Vivek: I'm so proud of that.
Tom: Okay. Okay. Okay. But I mean, for Tasha...
Vivek: It's so sad, no?
Tom: Hi, I'm Joseph Gordon-Levitt's wife. I don't know. It's unfortunate that...
Vivek: No, but I, you know, this is where I take, I hold the media accountable.
Tom: Yeah, no, no, they take potshots.
Vivek: You know? Yeah. She's, she is...
Tom: She's her own thing.
Vivek: an accomplished woman. She's her own thing.
Tom: Yep.
Vivek: Why do you want to refer to her as Joseph Gordon-Levitt's wife?
Tom: No, it's, it's lame. Helen Toner, director at Georgetown's Center for Security and Emerging Technology. Let's put a thumbtack in there, because we're going to come back to Helen Toner. Board members that had been on the board and left include Reid Hoffman, who apparently left because there was a conflict with his own AI startup; Shivon Zilis, who left in 2023 to join Neuralink; Elon Musk, as you mentioned, was involved; and Will Hurd, who left to run for president in the Republican primary. So it's an interesting cast of characters. Wow. Formerly on the board.
Vivek: And Sam Altman and Greg Brockman were also on the board, and Greg was chairman of the board.
Tom: That's right. That's right. So when they, when they kicked him out, and remember, they kicked him out and then literally tweeted it or put it on the wire five minutes later.
Vivek: And the kicking out also happened very interestingly, right? Ilya sends a text to him saying, can you join this Google Meet tomorrow? And Sam Altman gets on the Meet and that's just...
Tom: That's a bushwhacking.
Vivek: Yeah. That's amateur league shit, no?
Tom: What the hell? You know, when we finally decide to fire you, we're gonna do it proper.
Vivek: Yeah, I expect at least one week's notice of the Google Meet.
Tom: Right. It'll be there. And we'll title the meeting invite... the Google Meet won't be called "Vivek's Firing." It'll be called "End of the Road." Something that won't give it away. So then they write this letter, and it says: Mr. Altman's departure follows a deliberative review process, note the word deliberative, by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
Vivek: Is deliberative an SAT, GRE, or legalese word?
Tom: I don't know, you know, there's so many platitudes wrapped in an enigma, wrapped in a conundrum, wrapped in a, what the fuck? I don't even know what this is about. And then the employees, the employees went shithouse. And by the way, Paul Graham, the founder of YC, had said a while back, apparently, this is 10-plus years ago: if you plopped Sam Altman on an island of cannibals and came back a little bit later, he would be the king.
Vivek: Oh, wow.
Tom: Yeah. And, uh, I think Graham was kind of throwing shade a little bit.
Vivek: But he fired him, no? From YC.
Tom: Apparently fired him from YC. But my goodness, I guess Graham was right, because the uprising from the employees, all of them back in Sam's corner, was pretty remarkable, right? Another thing that the board didn't contemplate.
Vivek: Completely miscalculated.
Tom: Now, there's a guy named Ilya, who's the chief scientist. Because it's easy, now, we've been involved in many companies where boards can sort of sit up in the craposphere and not know what's going on down on the ground.
Vivek: But I think there was also some idealism going on over there, perhaps, where Ilya was more like, you know, this is all becoming more commercial than we had imagined, and we want to talk about the impact of what we're doing on society, etc., etc. Who knows, right? But I've seen a video of him where he's talking about how the AGI that they're trying to build, or are in the process of building, is able to, and not just the AGI, but actually the LLMs, are able to feel human beings.
Tom: That's right.
Vivek: Connect with them and you think, you know, think your thoughts and all of that and...
Tom: He's on a mission from God. Yeah. Apparently he's told the employees: if your first waking thought in the morning isn't about endowing the AGI...
Vivek: Yeah.
Tom: With human-level cognition, and your last waking thought before you go to bed isn't about the same, maybe you need to get off the bus.
Vivek: Yeah, feel the AGI. I've heard that's what he keeps saying.
Tom: Feel it. Okay, can we just take a moment of silence here on this podcast? Let's feel the AGI. Yeah, I felt nothing there. Uh, so, to your point, and we said we're gonna put a thumbtack in it and come back to Helen Toner, because apparently, a few weeks before Altman's ouster, he'd met with Toner to discuss a paper she'd recently co-written for Georgetown's Center for Security and Emerging Technology, where she took OpenAI to task and praised Anthropic for their approach to safety, right? And so Altman was up in arms and very grumpy about this. He reprimanded Toner for the paper, said it was dangerous for OpenAI, particularly at a time when the FTC was investigating them and all the rest. So, okay, now look. Freeze-frame that. A broader theme that we have to kick around here is board governance. In a prior podcast, we had talked about what a good board does. One of the worries I'm gonna have about how we proceed from here, like, this board doesn't know what it's doing, a lot of bush-league stuff. But your board isn't there just to affirm everything that you think and everything that you want. Good governance entails a diversity of opinion, right? So I'm concerned that Altman...
Vivek: Thinks that that is not the case. Yes, and on that basis, you know, also reprimanding her... I don't know what happened between the two of them in their private conversation. I mean, I've been on the receiving end of lots of feedback from board members along the way, taking me to task. And it seems pretty obvious, like, my job is to say, okay, that's input, and I'm going to listen to it, and maybe I'll disagree, but I don't know. It's fraught, right?
Tom: With Altman, it appears, as we try to assemble a theory, it's not that Toner looked at him funny, but that she posited a point of view or shared a perspective with which he disagreed, and maybe that was the seed of her eventual demise.
Vivek: Yeah, because, I mean, the board is your boss. Right? They are there to hold you accountable. It's not like you can just discount everything they're saying, like, yeah, yeah, yeah, that's cute, and reprimand them.
Tom: That's right. Right. So, so the board is your boss. Now, look, back to Ilya for a minute. If he's involved, okay, so he's on a mission from God to do the AGI. But he's also presumably involved in the flow of work with other employees. Is he anticipating this insurrection? Does he have any sensibility that it's coming? Because the other theory that I started to build on this guy is like, okay, brilliant tech technologist, but are you aware of all these pesky human beings running around your company who are about to really, you know...
Vivek: Because they completely miscalculated that among many other things. But that was one of the big things that the board just did not think through at all, if they thought through anything actually, right?
Tom: Yeah, yeah. So, okay, you know, you botched it, you missed it. There's no credibility there. And I want to conduct a little thought experiment here with you. So there's somebody named Dave Solomon, who's the CEO of Goldman Sachs. I'm not in those board meetings. But it's been several quarters of missed performance, a lot of high-level defections, market missteps of a pretty profound nature. Goldman Sachs's board, now, these guys are pros, right? They're sitting in rooms, and I have no idea, but it wouldn't be ludicrous for me or other people to presume that they're wondering, hmm, what would his successor look like? Should we be considering this? How do we navigate these straits? Well, if that is a topic of conversation, guess what, it's going to be slow, it's going to be a deliberate process, deliberate, not deliberative. Goldman Sachs is going to proceed stepwise. We're going to consider the possibility, the pros, the cons, the risks. If we did pursue this, what would the transition look like? Who would the successor be? How would we conduct this?
Vivek: How would we communicate about it? Who would be informed first? You mentioned Microsoft. Satya Nadella had no idea.
Tom: All right. There's another one. And yeah, there's another, right, another one. How do you not give your largest investor a courtesy call, right? Satya Nadella was on with Kara Swisher last week, and you know, he was very gracious about the whole thing. But WTF, how do you not get a call letting me know that you're going to decapitate...
Vivek: And if you wanna do this the right way, you have to enroll your biggest investor otherwise it's not gonna work. To your point, if you're coming for the king, you better take your best shot.
Tom: You better not miss.
Vivek: Yeah.
Tom: And so, look, if Goldman Sachs were to conduct a transition, to your point, there would be a communication strategy to the ninth decimal point, there would be a well-orchestrated process wherein the right stakeholders are notified at the right time. There's a thoughtful method to getting the right leadership in on an interim basis, or... this poor guy, Emmett Shear: I'm the new CEO. Okay. That lasts like four hours.
Vivek: Yep. Yep.
Tom: It's just, you just can't, I mean, I know we're, we're probably being...
Vivek: Uncharitable
Tom: Yeah.
Vivek: But that's okay.
Tom: But my goodness, right? I'm just agog at how this all went down. So juxtapose what a Goldman Sachs CEO transition would look like relative to here. By any measure, this board of bush-league operators botched it, and so there needs to be a new board, right? But now let's return to that topic. A board is your boss. Your board is not... they're not your pansies. They're not, you know, your dupes who just do as they're told, if it's a good board. So, who should be on the board here?
Vivek: So, right now, there's Bret Taylor, whom we crossed paths with briefly at Salesforce. And he was at Facebook before that. And then Larry Summers, whom you know a little bit.
Tom: I know him a little bit. Larry Summers, former Treasury Secretary, and then former president of Harvard University. Larry Summers is not known for holding opinions loosely, but he's a pro, and a thoughtful guy. So you have the start, and I guess Adam D'Angelo is staying on the board.
Vivek: Yes. Adam D'Angelo is still on the board. Yes.
Tom: Can you explain that one to me?
Vivek: I have no idea. I have no idea. And again, this is where these narratives get a little confusing for me because in some of the articles I was reading, Adam D'Angelo was the main instigator. Like he was the representative from the board who was having all these conversations and there was some tension also because he had created, he was working on a startup or a product called Poe that was competing with the ChatGPT app marketplace or something of that nature.
Tom: And then he's starting a device company with Jony Ive.
Vivek: Yeah, no, that's Sam Altman. Sam Altman wants to start a hardware startup with funding from Middle Eastern countries. With Jony Ive, uh...
Tom: The Saudis up in there.
Vivek: The Saudis are of course up in there, um, So lots of interesting stuff. I'm sure Netflix, and Amazon, and HBO...
Tom: Are vying,
Vivek: Are going to make... there are going to be three different versions. One's going to be a documentary, one's going to be a fictional retelling of the whole saga, and then something in between.
Tom: And they will all be very entertaining.
Vivek: Yes.
Tom: And loosely correspond to the truth. Yes. So look, back to Sam Altman: he has an opportunity to restart the board relationship. Appoint a broad-minded board. Let's put a thumbtack in that, because we want to come back to what we think this board ought to look like. Larry Summers and Bret Taylor are a start. I'm confused about Adam D'Angelo, because he was there before. I don't know the guy, but it's a little confusing. Can I say that?
Vivek: Yes.
Tom: So you've got to restart the board and flesh it out appropriately. You've got to actually subscribe to board governance. So whatever went down with Helen Toner, okay. But there we are again: these are not your yes-men, they're not your bucket boys. There's a governance imperative.
Vivek: You need to have a coherent answer, if and when the board asks you, what are you doing about AI safety?
Tom: Right, right.
Vivek: You cannot just say, no, no, no, that's not important. Which I don't think... I would like to believe that Sam Altman didn't just say that, like, no, that's not important. But the way these incidents have been narrated, it definitely seems like he didn't like what the board was trying to get him to do. So this was all orchestrated to just get rid of the board.
Tom: Right. Right. Hey, let's, let's pause for a minute because we're going to come back to what we think the board needs to look like. Can we do a totally unpaid for promotion?
Vivek: Sure.
Tom: Is it time?
Vivek: Sure.
Tom: Because we're going into the holidays.
Vivek: Yeah.
Tom: And, I don't know about you, but I'm gonna be eating myself into a coma.
Vivek: But did you eat a lot?
Tom: I did.
Vivek: Uh oh.
Tom: I did.
Vivek: Because of your LinkedIn post, that curmudgeonly LinkedIn post you wrote.
Tom: It was a very curmudgeonly thing I did, cause I don't, I don't like turkey. But that's not to say that I'm not going to eat like a pro on Thanksgiving.
Vivek: Of course.
Tom: So here's a quick sidebar. Here's what we do in my family. Because I'm from New Mexico, my mom and dad and family, we're all together, it's wonderful. We eat a lot of New Mexican food. There's this incredibly delicious dish. It's not queso. It's called chile con queso. And it's made with lots of green chile and tomatoes, and it is delicious. Every year for the last many, many decades, I say, you know what, I'm just going to have a tiny bit of chile con queso before the meal.
Vivek: Just one small cup.
Tom: Just like a small cup.
Vivek: Yeah.
Tom: Not going to overdo it, because I'm going to save my appetite for Thanksgiving. And every single year, I put down about a gallon of chile con queso before the meal has begun. And then I sit down and eat another meal, and then I pass out. So why do I do these things?
Vivek: I don't know. But maybe it's just habit, because you've been doing it all this time, so...
Tom: Or I'm just bereft of self-regulation. Or maybe it's just that delicious. Or maybe mama's chile con queso is just that good.
Vivek: Yeah. So what are you doing about like, are you doing anything different to work all that off?
Tom: So here's the totally unpaid-for promotion. We're going to share our little exercise tips. Okay. So how are you going to... not when I see you in January, by the way, we're still working here, but when I see you in January after the break, how are you not going to show up with 45 extra pounds on you? What's the secret?
Vivek: Exercise. You know how they say, eat, drink, and be merry? Yep. I've modified that to exercise, eat, drink, and be merry. Okay. That's four things. Yeah. It's harder to remember than the three, but okay. Come on. It's two E's. E squared DM.
Tom: Oh.
Vivek: There you go. See?
Tom: Little math.
Vivek: Yeah.
Tom: Mm hmm.
Vivek: No, but, uh, I think we talked about this in one of the earlier episodes, but during COVID, when the pandemic hit, I couldn't go to the gym anymore to play squash regularly. So I started running, and now it's become a thing. I run almost... I think five days a week. I try to get five days a week in, three or four miles, maybe six or seven miles over the weekend.
Tom: I remember after the pandemic, man, I hadn't seen you, there were too many Zooms. And then you showed up like Forrest Gump, you were running, and suddenly, I mean, you were never chubby, but my goodness.
Vivek: I did lose some weight.
Tom: You lost, but you were scaring me there for a little bit.
Vivek: Yeah.
Tom: Right.
Vivek: But I've put back, I've put some back on now.
Tom: Okay. Okay. And that's, that's, that's just fine.
Vivek: No, but it's become, like, meditational also now, you know. I don't run with any devices, and I run all over the place in San Francisco. I was actually thinking the other day, I've run in San Francisco, I've run in New York, I've run in Pittsburgh, I've run in Boston, I've run in two cities in Connecticut, Denver, New Orleans, Chicago, like all these places I've been. Everywhere I go, I try to run on the streets.
Tom: Well, see, my Forrest Gump analogy is, is apt.
Vivek: Yeah.
Tom: I'm visualizing it with a super long beard now, just in the open plains of Iowa.
Vivek: No, no.
Tom: On your way to San Francisco. All right, well,
Vivek: So what's your routine?
Tom: So this is a totally unpaid-for promotion for a guy named Alan, who is this trainer I've been very fortunate to have met, actually during the pandemic as well. And the guy is just very clever, because he comes up with all of these weight-motion exercises. Because I like weight workouts, I find I need the weights, but it's also metabolic, so you really get the heart thumping. And it's never boring, like he has all of these kinds of interesting moves that I have to try to figure out, and it requires, you know, some coordination. It keeps the workouts fresh, and I'm gassed at the end of every single one. So...
Vivek: Alan from San Francisco.
Tom: Alan from San Francisco. He's the man. All right, Alan the shoutouts for you I'm going to get him some business.
Vivek: All right. Let's go back. Let's go back. So what does the board need to look like? So as we said, it's a good start with Bret Taylor, Larry Summers, and, yeah, okay, Adam D'Angelo. But, well, actually, I was reading somewhere, somebody was saying, yeah, three white males, right? Now, whether you pay attention to that type of shit or not, I don't know.
Tom: You should.
Vivek: But having diversity in the board is definitely very important.
Tom: California's made it a state law now. I'm pretty sure unless I'm misspeaking. I think there's a regulation of some sort that compels at least public companies to have women on the board. You can't just have all white dudes.
Vivek: Yeah.
Tom: Yeah.
Vivek: Maybe OpenAI is exempt from that because they're not public.
Tom: And anyway, it's coming. Something like that's coming. Anyway. And forget the regulation. It's just the right thing.
Vivek: Right thing to do, yeah, exactly. So there needs to be diversity. Not just male-female diversity, but diversity of opinions, diversity of thought. People who will challenge the CEO and ask him or her the hard questions. And I don't know who I would put on the board, because I'm not that evolved yet. But I think there needs to be diversity on the board. Somebody who can represent the ethical side of the equation. Somebody who can represent the geopolitical challenges that come with a company like OpenAI.
Tom: That's right.
Vivek: Uh, international, like what's, what does this mean for the world? So yeah, lots of different perspectives are needed on that.
Tom: Yeah, I want to pick that up. I think it requires at least three kinds of non-standard roles, given the stakes and the velocity with which everything is unfolding at OpenAI. Let's recognize this isn't just an engineering or technical problem. This is a human problem, right? And Helen Toner, maybe she didn't send the message the right way, but broken clocks are still right twice a day. There's something important about what she was trying to convey that it's not clear OpenAI is taking seriously. So I want to pick up on what you said, an ethicist, right? I think a chief AI and data ethicist, somebody who addresses the short- and long-term issues, right? But with attention to ethical data principles: the way data is used to train these models, the development of reference architectures for ethical data use. You know, you and I are concerned about privacy, and you can't decouple people's privacy considerations from LLMs. It should be a senior role, I think, on the CEO's staff, that sort of bridges the communication gap between internal decision makers and regulators, right? If the FTC and others are showing up, this person needs to be able to intermediate and hold sway in those conversations. Another one that's a little strange, but you remember we had Brian Christian here on our podcast, right? And he talked to us about the alignment problem.
Vivek: Yep.
Tom: Right? How do we get these machines and algorithms to do what their human masters want them to do? That's a hard, hard, unsolved problem. I think a chief philosopher, and we can figure out what to call it, but the person who really wrestles with the alignment problem. How do you define the safeguards, policies, back doors, kill switches, right? If the machines start to do the things that...
Vivek: We don't want them to do.
Tom: You have to have a framework, right? A very well thought out framework. And this is where philosophers of mind have spent decades and decades, like trying to think through these issues. Yeah. Plug in somebody like that to see if they can help.
Vivek: I think the responsibility that a company like OpenAI has goes far beyond just, hey, we built the technology, anybody can come and use it. I think it goes far beyond that. Because to your point, people can misuse the technology, people will misuse the technology, people are already misusing the technology. So what are you going to do? What can you do to put in these guardrails, these checks and balances and whatnot, in the technology itself?
Tom: That's right.
Vivek: That will prevent it from being, prevent it from being misused.
Tom: That's it. The final one, and this is maybe a stranger addition to the lineup, but I really mean it, is a chief neuroscientist, right? We're now right at that cusp where the machines, as Brian Christian and others have illuminated, are teaching us things about the nature of cognition, right? Reinforcement learning forces us to rethink what we thought was going on inside our brains. And so there's this incredibly rich terrain at the intersection between human cognition and machine cognition. How do we make sense of that? How does intelligence unfold within AI models? There are many AI researchers who are now claiming, no, no, no, this machine has a theory of mind, it is introspecting. Well, what does that mean exactly, right? Let's get serious about that. What models of human cognition are most relevant and useful for AI? And again, it's backwards and forwards: what can AI teach us about human cognition? That one maybe is less urgent from a governance perspective, but I find it interesting. And I think that OpenAI, if they fulfill their mission, can run but can't hide from all three of these kinds of roles, at high levels and potentially up to and including the board.
Vivek: So let me ask you a question about that. Should OpenAI continue to be a non profit? Or should they figure out a way to change the whole structure?
Tom: Yeah, look, I mean, you could contemplate a structure where you have sort of a supervisory board and a management board, right? And that was the double-headed monster that they had already created. And I think it's pretty plain that that didn't work. Now, just because it didn't work doesn't mean that you couldn't tune it up and try to make it work here. My sense is that the horse is out of the barn on that one, right? I think there's just too much energy now and too much investor pressure, right?
Vivek: See, that's the angle I was actually trying to look at as well. The cynical part of me thinks that all this drama happened because on one branch of the tree, the 770 employees were looking at no money, right? And on the other branch of the tree, there was lots of money. So they picked the second branch.
Tom: What a shock.
Vivek: Yeah,
Tom: ...so weird.
Vivek: Right?
Tom: But look, yeah, I mean, it's, it's very concerning and I think it you know, sure, there's a parallel universe wherein we could create a non profit board governance structure. I just don't see it, given the forces of capitalism that have already been uncorked here, I don't see that it's feasible.
Vivek: Which is not necessarily a bad thing, in my opinion, right? But let's just be honest about what everybody's motivations are.
Tom: That's right.
Vivek: So as they said in All the President's Men, follow the money.
Tom: Right. Or as Notorious B.I.G. and Puffy Combs said decades ago, it's all about the Benjamins, baby, right? So, no, I mean, that's where we're living. That's where we're living. I'm going to be optimistic, because you and I are always optimistic, even though we get curmudgeonly here and there. And I did that Thanksgiving thing that was a little curmudgeonly, I get it. But mostly we're optimistic, and it seems to me that there is a governance structure that could work. But thing one: Altman needs to choose it, embrace it, and not in a grudging, roundabout way, but authentically.
Vivek: Yeah.
Tom: By the way, this can be his greatest legacy.
Vivek: Right.
Tom: Speaking of money, yeah, you can make a lot of money for yourself and your shareholders if you get this right. But think about your legacy in the longer term.
Vivek: Yeah.
Tom: Right. If you can create a group of renaissance leaders who can truly grapple with and address these issues credibly at the board level, you know...
Vivek: You will go down in history.
Tom: Walter Isaacson will write a book about you. Right. So it is within reach. I'm going to be very concerned if they kick the can down the road and avoid these issues. I think, you know, if anything good comes from the debacle of the last two or three weeks, I hope it's this: a sound board governance structure with the right people.
Vivek: Because if it doesn't, then everybody's worst fears will have basically been proven to be true.
Tom: That's right. Well, I guess we solved that one, right?
Vivek: Of course.
Tom: Put a bow on it.
Vivek: I think we can take the rest of the year off.
Tom: Let's just, I'm going to go pour four martinis because I'm exhausted. Just nailed it. No, these questions are really hard, but, but you know, hopefully some useful ideas to work from there.
Vivek: No, but in all seriousness, these aren't just topics we like talking about. I hope there are lessons here for founders who are looking to start their companies, in how they construct boards and what responsibility they have towards their boards, and for board members too, you know: you're there to govern, and don't be like the former OpenAI apology of a board. Actually be more like the Goldman Sachs board, which is deliberative, thinks through these things, games things out, considers things.
Tom: Asks the right questions, shows up with the right issues, dwells where they need to be, steers clear of the places where they don't need to be. There's a whole rich set of practices and habits that great boards employ, and perhaps we'll pick that up. Well, we've talked about it in the past, maybe it's worth picking that up again.
Vivek: Yeah, maybe, maybe. We need to do a startup board stewardship one.
Tom: There you go. There you go. All right. Well, listen, uh, that was fun. Hopefully useful. Thanks to everybody for tuning in. Do sign up for our newsletter at superset.com and listen to more episodes at www.theclosedsession.com.
Vivek: Thank you everyone. See you next time.