The {Closed} Session

Harvard Computer Scientist James Mickens on The Ethical Tech Project

Episode Summary

Are we walking a tightrope with AI, jeopardizing humanity's ethical core? Is AI more than just algorithms, acting as a mirror to our moral values? And when machine learning grapples with ethical dilemmas, who ultimately bears the responsibility? Harvard's Gordon McKay Professor of Computer Science, James Mickens, joins Tom Chavez and Vivek Vaidya on "The {Closed} Session." Together, they dive deep into The Ethical Tech Project (a think-and-do tank crafting blueprints for ethical data use), Harvard's Institute for Rebooting Social Media, the art of data stewardship, privacy engineering, and the evolving landscape of AI regulation.

Episode Notes


 

PLUS bonus content: super{set} Spotlight on Checksum.ai co-founder Gal Vered and his experience so far working alongside Tom, Vivek, and the super{set} team at Checksum.

 

Learn more about The Ethical Tech Project: www.ethicaltechproject.org

Learn more about The Ethical Tech Project's ThePrivacyStack: https://theprivacystack.org/

Learn more about James Mickens: mickens.seas.harvard.edu

 

Learn more about super{set}: www.superset.com

Learn more about Tom Chavez: www.superset.com/team-members/tom-chavez  /  Tom's LinkedIn

Learn more about Vivek Vaidya: www.superset.com/team-members/vivek-vaidya / Vivek's LinkedIn

Listen to previous episodes of The {Closed} Session: www.theclosedsession.com

Learn more about Checksum:  https://checksum.ai/

Episode Transcription

Welcome to The Closed Session: how to get paid in Silicon Valley, with your hosts Tom Chavez and Vivek Vaidya.

Tom: Welcome back to season four of The Closed Session podcast. My name is Tom Chavez.

Vivek: and I'm Vivek Vaidya. 

Tom: This is an exciting episode for us because we have a very important, notable guest with us. We're going to reveal him in a minute, but to set it up a little bit, we're going to look at a whole range of topics: machine learning, artificial intelligence, security, society, governance, all kinds of good stuff. We're going to give ourselves room to just swing a cat, stretch out. Our guest is James Mickens, distinguished computer scientist and Gordon McKay Professor of Computer Science at Harvard. As we were getting ready to get going here, James reminded me that he's been on sabbatical this last year, which I'm sure has given him even more time to roam widely and explore new things, but his central focus has been on distributed systems, large-scale services, and ways to make them more secure. He is also on the board of the Ethical Tech Project, where Vivek and I do some things, and he heads the Institute for Rebooting Social Media. James, welcome. 

James: Thanks for having me. Good to see both of you. 

Vivek: Good to see you as well. 

Tom: Well, let's jump in here, James, because you have a very interesting journey, and I was wondering if you could just back it up a little bit for us. You don't have to go all the way back to the dinosaurs but, you know, you, how'd you get here? You know, how did that all happen? 

James: Hmm. So difficult to answer such an important question concisely. 

Tom: Right? 

James: The shortest version is that, uh, I was born in Atlanta, Georgia, and then I, uh, lived there for, you know, the first part of my life. I got my computer science undergrad degree at Georgia Tech. Uh, then I went out to the University of Michigan to get my PhD. So at that point I'd experienced southern heat and midwestern snow. Then I decided to experience Pacific Northwestern rain, so I went to Seattle. And then I worked at Microsoft Research for about seven years. I was in the Distributed Systems Group. And so there I did research on large scale online services. So basically the pieces of software that run in data centers and that, uh, you know, act as the backbone for all the apps and the web pages that we all, um, know and partially love, partially hate. And then I decided in 2015 to come back to academia. So I joined the faculty of the Harvard computer science department in 2015, and I've been there ever since. 

Tom: So that had to be an interesting twist, right? Because I'm not aware of that many people who are happily ensconced at Microsoft Research or one of those large groups and then decide to go through all of the pain and tumult of tenure and all. How'd you make that decision? That's not a usual, everyday kind of thing. 

James: Well, yeah, it's true. I mean, I had a great time at Microsoft. And it was great, you know, when I was in a particular mindset, where I really wanted to be very close to the product groups. And so, increasingly, in machine learning in particular, as I'm sure we'll talk about later, you know, access to data sets, access to real user data at scale, that's important for doing certain types of research. And so it was super exciting to be, you know, adjacent to those groups, adjacent to that real data, real infrastructure. But I did miss teaching. I did miss working with students closely, and the mentoring aspect of things. So, yeah, I decided to come back to academia. And yeah, I did have to hustle for tenure, and that was existentially terrifying. It's very funny, whenever you talk to a professor who already has tenure and you say, hey, what's it like? They say, oh, don't worry about it, young person, you'll be fine; you know, back when I got tenure on a whaling schooner, it was a little bit scary, blah blah. It is absolutely terrifying; you get judged by your peers. But as one of my good mentors told me, look, just try to do good work. Yes, sometimes you'll feel scared, you won't know what's going to happen to you, but just try to do good work. And that's what I tried to do. And luckily, you know, Zeus smiled upon me and I got tenure.

Tom: That all worked out. Hey, well, listen, I mentioned ETP, the Ethical Tech Project, at the beginning, and I was wondering if we could talk a little bit about that. So for listeners who don't know what ETP, the Ethical Tech Project, is: it's, as we like to call it, a think-and-do tank, focused on enhancing web safety for consumers and guiding companies to be responsible data stewards. By the way, on a quick but relevant sidebar: many years ago, I worked in a think tank. This is a long time ago. And I had a friend who was just endlessly fascinated, like, okay, so Tom, what do you do all day? And I'd tell him, I think. And you think great thoughts, and then you publish some reports. So we're thinking great thoughts over here, but we're doing stuff as well. We're getting shit done at ETP. And so I was wondering, you know, if you could talk a little bit about what draws you to the Ethical Tech Project, 'cause I wasn't going to take it for granted when we asked you to join up. You've got a lot of important projects competing for your time. What draws you to the work we're doing at the Ethical Tech Project? 

James: I think it's the doing part. I mean, of course the thinking part is also important; let's not get trapped in epistemology and how we think... but the doing part, I think, is the most important. Because if you look at the landscape of people who want to do good at the intersection of policy and tech, roughly speaking, this is a very big set of ideas and people, so I'm not trying to disparage anyone. If you look at a high level at the people who understand that technology can have harms and want to make those harms go away or mitigate them in some way, there's a lot of good intentions. There's a lot of people who have various policy proposals to, you know, make cybersecurity better, make ML better, and so on and so forth. But from my perspective as a technologist, as someone who writes code, as someone who used to work with and, you know, currently still collaborates with big tech companies, there's always this challenge of implementation: how are you actually going to effect the change that you want to see? And one of the big challenges I see in the ethical tech space, writ very, very large, is that there aren't as many technologists having deep conversations with policy people as we might hope. And also, when we talk about these, you know, sort of attractive but nebulous concepts like privacy, how do we actually make those concepts real, from an engineering perspective at least? So, you know, from my perspective as someone who's done a lot of software engineering, the way you make it real is you create protocols, you create software frameworks that allow you to actually put into practice the policies or the ideals that you have. 
And so that's why I got, you know, interested in ETP because this is what, you know, in my opinion, it's trying to do, it's trying to create these artifacts, these reference architectures, these stacks that you know, real engineers and real people and companies can look at and say, ah, okay, this is a concrete example of a way forward.

Tom: Yeah. Look, I mean, there's a lot of paneling out there. And we're psyched to be doing the work and making it real and actionable and implementable for different organizations. So thanks for everything you're chipping in there. 

Vivek: Yeah, and what you said was interesting, James, which is that there's a lot of good thinking that's been done by policy people, and the challenge comes in how you turn all that work into protocols and frameworks and whatnot. Are there any other roadblocks or blockers that companies face when they try to implement these kinds of ethical data practices, whether it's data stewardship or even privacy engineering, that prevent them from employing or deploying these kinds of best practices, for lack of a better phrase?

James: Yeah, I think there are a couple of blockers. The first, as we just discussed, is the lack of infrastructure to, you know, sort of make these ideas real. Another challenge is that I think a lot of times engineers think that we are the anointed people, and that we don't really need any of these insights from these other sort of "soft" fields for, you know, the dilettantes. And that's super unfortunate, because, as it turns out, as the sort of wheel of time keeps spinning, humanity has been making progress not just on engineering, but on things like sociology, on things like psychology. And so I think that having an engineering-first-all-the-time approach to these types of problems is bad, because you should actually listen to, for example, psychologists and economists and sociologists. If you care about things like getting rid of bias in machine learning algorithms, you can't just define that sort of statistically and be done because you read your Bayesian textbook, or things like that. I'll say one last thing: another challenge I find sometimes is that people inside companies believe that doing the right thing either won't be rewarded by the market, or is not as profitable as doing the "wrong thing". And that's, I think, oftentimes an unexamined assumption that people should look at. I mean, you may be aware of some of the work that some economists are doing around targeted ads, which shows that targeted ads may not actually be as beneficial for anyone in the ecosystem except for the people who are, you know, running the ad targeting infrastructure. And so I think we really need to sort of step back and reexamine some of these assumptions we make about how we can make companies that, you know, make money, which is important, just to be clear; I'm not a Marxist. So people should be able to make money, but we should also be able to deal with these other sort of public-good issues. 
And I think we can do that in a way where those two goals aren't mutually exclusive, where we can achieve them simultaneously. 

Tom: Listen, I love... I mean, we subscribe, James, as you well know. It is interesting, right, to see a younger generation of engineers coming up whose only frame of reference, it strikes me, is sort of overreaching monopolists who have taken liberties and concentrated as much market power as possible, because it's worked, right? And so the question is, well, do good guys ever win? Does doing the right thing actually pay off, right? And so it's exciting to be trying to provide those counterexamples: no, you can actually do the right thing and participate in a large market and create a lot of wealth as well. I don't want to derail us, but why that preciousness, that hubris that you talked about? 'Cause I worry a lot about that, where engineers just go, well, I did all of this math and I write all of this code, I know the answers, and all of those weenies in the humanities over there, you know, adorable, but irrelevant. Any psychosocial theories? 'Cause, James, I know you have a lot of ideas. What's your take on how that happened? Why the preciousness, why the hubris? 

James: I think it arises because, at least superficially, the things that engineers have been able to build, particularly over the past, let's say, 40, 50 years, are just amazing. I mean, even just in my own lifetime, the fact that I can translate one human language to another automagically through something that I can hold in my pocket, that's magical. That literally used to be sci-fi that I would see in, you know, movies or cartoons or things like that. So at least at first glance, if you look at some of these technologies we've built, they are quite literally amazing. But then as soon as we think that, one of our instincts hopefully should be: amazing for whom? Is it amazing for everyone? Are there downside risks to those things? And so, to answer your specific question, I think it's sometimes easy for engineers to get caught up in the amazingness that they see directly for the target populations they're thinking about, but they don't think, you know, at what cost? Or who isn't getting access to these technologies? Or did I not consider someone when designing this, you know, ostensibly amazing new feature? 

Tom: Right. And now all of the reverberations of these technologies, right? In maybe ways... like, you can solve it in a silo, and from an engineering perspective it is unbelievably great, but then it reverberates, right? Which brings us to this question: I was wondering if you could talk about the Institute for Rebooting Social Media, a group that you head at Harvard. It dovetails with what we're talking about here. What are the goals and structure? What's that all about? What should our listeners know?

James: Sure. So the Institute for Rebooting Social Media is a group that I help lead alongside Jonathan Zittrain, Rebecca Rinkevich, and a bunch of other great folks. And the basic idea is... well, let's first start with the motivation, right? Let's do the origin story. The motivation is that, much like with ETP, we feel that we're kind of in this interesting and important, perilous but hopeful, moment with privacy, because of a lot of, you know, well-known privacy breaches or sort of, you know, bad happenings. We think that in social media there's a similar type of inflection point that we're possibly on, because of things like Frances Haugen and the whistleblowers inside these various companies, and because of the research coming out that shows that, yeah, social media can be very good in some cases, but it can have these really devastating impacts in terms of hate speech, misinformation, and so on and so forth. So what we want to do is basically look at social media and say, you know, what's working? Let's try to keep that. But then, what isn't working? How can we get rid of those things without destroying some of these sort of economic facts on the ground that I don't think we're going to be able to get rid of, and maybe we shouldn't? So as a concrete example of that, a lot of people will push on this idea of, oh, everything should be super decentralized, that's the only way to give users control over their data. But just for various reasons involving economies of scale and so on and so forth, data centers are here to stay. Breaking news! Congratulations, listeners of the podcast, you've heard your first breaking news of the day: data centers are here to stay, because they just make a lot of things efficient, and they will end up costing less, for various definitions of cost, than fully decentralized ideas. So, you know, that's one of the things we want to look at. 
How can we sort of give users more control over their data? How can we give them, in some sense, more of a sense of dignity online, while not trying to say we're going to go fully decentralized, or we're going to get rid of all kinds of ads? Another piece of breaking news: ads are not going away. I can just guarantee you that, because there's this weird tension inside everybody's soul, which is that we don't like ads, but we also want stuff to be free, at least as we directly perceive it. So, you know, ads are not going anywhere. So how do we wrestle with those tensions? And much like ETP, we try to be generative. So we don't just want to be issuing white papers; we want to be bringing together a diverse set of people: scholars, you know, civic activists, so on and so forth, and people from tech companies too. They're an important part of the solution. Unlike a lot of attempts to do "ethical tech", where we sort of say, ah, well, engineers are purely the enemy in some sense, and we just have to put them in a cage, we want to engage with them, because they're going to help us, we think, to make a better future for social media.

Vivek: Yeah, just switching gears now, James: AI, machine learning, generative AI has, of course, become a household word. My barber was asking me about generative AI, you know, and, uh... 

Tom: Did you school him? Give a little tutorial? 

Vivek: Yeah, I did. I did. I tried to make it as accessible as possible. 

Tom: We got to go to your barbershop.

Vivek: Yeah. This was actually in Boston... 

Tom: It's different from my barbershop. 

Vivek: This was actually in Boston. This was actually in Boston. The Marvelous Barber Lounge in... in Boston. But, you know, and rightly so, these technologies are being perceived as elixirs, in some cases, to solve the world's problems. But you've been a critic, a somewhat vocal critic, pointing out some of the challenges that come with machine learning. So can we go deeper? We talked about it a little bit in the beginning of the podcast, but can we go a little bit deeper and home in on what you believe are the challenges that arise from the widespread use, the unfettered use, of machine learning, AI, etc.?

James: I'd love to, I'd love to. This is great. This is like, you know, asking a coffee addict, well, tell me why you love coffee so much; asking a video game addict, tell me more about the world-building in Zelda. Yes. So for those of you who can't see me, my eyes are rolling into the back of my head, and now I'm going to enter sort of a trance-like guru state. So, at a high level, as we kind of hinted at before, there are some things that machine learning can do where, practically speaking, if we ignore sort of downstream effects and we're just focused narrowly on "does this app do something awesome?", the answer is yes. So I'm not against machine learning in the sense that I don't appreciate some of the goods that it's given us, but there are a lot of intrinsic problems that arise from machine learning that in part flow from the fact that we don't really understand how a lot of it works. And this is different, by the way, from the critique of, well, why do you fly in airplanes if you don't fully understand how the engines work? It's different because, at least in theory, there's a set of people at Boeing who understand how airplanes work, you know. But when you look at some of these models, they're just these sort of deep, profound mysteries. And I know some people in ML are groaning: no, no, no, look at this explainer, look at this Medium article. Yes, I understand that there are some sort of theoretical underpinnings for why we think these things work like they work. And yet, if you look at what happens with things like ChatGPT, or Bing's version of chatbots, or things like this, people put all this effort into making these guardrails, well-intentioned effort, by the way, so I'm not trying to disparage the work of those people in any way. It's very hard to make those guardrails, because we don't know how these things work, in some sort of deep sense. 
And so you see all this effort going into sort of putting these guardrails in, and then, still, there are these pretty easy hacks that you can do to turn off the safety features or to make it, you know, sort of misbehave in certain ways. So I think that's a huge problem, and it exacerbates another problem, because these models oftentimes encode biases in the training data. Now it's kind of like a double whammy: we've got these biases in the training data, we can't fully understand how the models ingest that data and represent those internal biases, and then people want to look at these models as magic. You know, it gives me the heebie-jeebies when I go on LinkedIn, I mean, that's my first mistake, don't go on LinkedIn, but when I go on LinkedIn and you see someone saying, hey, you know, don't get left behind by AI or machine learning, bring it into your business, look at all these great things it can do. Good Lord! If you were just walking on a beach, and some bedraggled person showed up and said, hey, guess what, I can evaluate resumes for you, and I can do it at a tenth of the cost, would you say yes? No, you would call a priest or the Ghostbusters or the National Guard or whatever. And yet that's essentially what we're asking people to do with some of this machine learning stuff. And so it's a weird position to be in as a technologist, because on the one hand I do appreciate the amazing things that it can do, but I do think that in many cases we're not being reflective enough about how it's being used, how we put those safeguards in, how we test this stuff, so on and so forth.

Tom: Yeah. Listen, I've been at this a while. You look back at these hype cycles and the hysteria that ensues, going back to, like, object-oriented programming, which was going to save the entire planet, and everything between object-oriented programming and web three slash blockchain. I know you're a big fan, James.

James: Yeah! 

Tom: Chum in the water. This one is uniquely hysterical, everything that's going on right now with AI and ML. I really appreciate your comment also about how scary it is that we really have no effing idea how it really works. Ask an honest researcher in that field, because, you know, you pick up some of these papers, I stay curious and I look at some of the summaries and other cutting-edge research, and you open it up and it's just a lot of notation hacking, right? People kind of trying to gussy it up with the patina of a lot of math and symbology. But honest researchers who do it will tell you: listen, dude, we have no understanding as to how a multi-layer neural network actually does what it does. Zero. And in that context, here we are, all abuzz, all aflutter everywhere: AI for your business, AI for your bathroom, AI for your car, AI for your kitchen. It's kind of crazy. God help us! 

James: It's completely crazy. And I'll just say, by the way, this is one reason why I think that people who dismiss the existential risk thing, they're going to be the first to go. Because it is both true that there are immediate harms for people whose lives are currently impacted by machine learning systems, just to be clear. There are immediate harms being done in terms of, like, deciding who gets parole, deciding whose mortgages get handed out, stuff like that. 

Tom: Right. 

James: But when we talk about existential risk, right? The fact that we already have so many, quote... I mean, I don't want to call them mundane, to dismiss them, but we have so many sort of immediate negative impacts of AI that we see already. Why wouldn't we think that, if we're not careful, oh, someone's going to hook up machine learning to, you know, the Pentagon's system for doing early warning, or things like this? So I think we should be concerned both about these sort of immediate, near-term harms as well as the existential risk stuff. I'll be wearing a sandwich board out by the Harvard T stop later on, if anybody wants to hear more about this exciting, positivity-filled philosophy. So cheers! 

Tom: Make sure to wear your tinfoil hat on top of the sandwich board. 

James: Hat, singular? Plural, my friend. You gotta have multiple layers to attenuate different frequencies. 

Tom: Of course.

Vivek: But just to play devil's advocate, right? The train's already left the station, right? You can't put the toothpaste back in the tube, you can't put the genie back in the bottle, pick your favorite analogy, right? Isn't our only recourse to figure out how we are going to build in the things we were talking about earlier, frameworks, policies, protocols, et cetera, so that all of this amazing technology is used, to the extent possible, in the "right way", in scare quotes, right?

James: Well, I mean, you're exactly right that, absent a time machine, we can't go back in time and whisper to Geoff Hinton and say, hey, maybe you should become an artist. By the way, Geoff's a great guy; I just had to throw in a time travel joke there. So yeah, I don't think we should outlaw the technology. But what's going to happen practically? Well, I think what history shows us is that there's going to be a disaster, like a very big disaster, if we don't sort of get ahead of it, and then we'll try to do more regulation, and then we'll engage in this sort of halting cycle: we'll deregulate, there's a problem, you regulate more. We see this in the financial markets to some extent, you know, where people say, oh, times are good, let's roll back some of these regulations we had before, and look how well that's worked out. So I'm not advocating for a maximalist position of, let's abolish this technology, because I do think you're right that, in some sense, information wants to be free. We're never going to be able to just completely make people forget about neural nets. But I think the core challenge for people like us, technologists who are interested in responsible technology, is trying to figure out what that balance is: wanting to foster progress while encouraging regulation that makes sure that some of the excesses, some of the obvious harms, don't come to pass. And it's a difficult needle to thread, but we have to try. 

Vivek: No, 100%. I think regulation is the way to go. The challenge, and there's a separate conversation to be had about what the challenges are with getting regulation like this passed in any way, shape or form. But yeah, I agree with you that it has to be a combination of tech, policy, law, all of it coming together. 

Tom: By the way, the irony in all of this is that AI, machine learning, data science, all of that butters our bread. So we're optimists, right? But in some sense, because we're in the boiler room and we see so much, I hope that we have a sharper understanding, not just of the wildly exciting opportunities, but also of these perils and hazards that we're talking about. When you look at regulation... you know, when Eric Schmidt talks about, okay, got to keep the government away from this, they don't understand it, you don't want the government or senators or members of the House of Representatives even thinking about this stuff, it reminds me of that preciousness and hubris we talked about earlier. But as I was saying to you earlier today, Vivek, when we were walking back from lunch talking about something related to this: look, I had chicken in my salad. I don't know how salmonella works, but I have high confidence that the government and the USDA have ensured that the chicken I'm about to eat for lunch doesn't have salmonella in it. And a senator or a member of the House of Representatives maybe doesn't know how to build a bridge, maybe they do, I don't know. But the point is that governance, whether it's governance mechanisms or governments, if we trust them, there's a long, productive history of them doing helpful things to harness and channel new technologies in a way that's responsible and good for everybody, not just robber barons. Anyway, hey, so let's switch gears, James. We have this thing on our podcast where we like to boost something that we dig, a totally unpaid-for promotion. It's very exciting. This will be the first time, I think, where we cede the floor, right? Okay, James, we're giving you the floor, totally unpaid-for promotion on The Closed Session. What do you got? 

James: It's awesome. Yeah, I want to boost Ensure, in particular Ensure Plus. It's a nutrition drink. The commercials oftentimes say it's not just for old people; I'm not an old person, and I can vouch for that. You'll find yourself sometimes in life in a difficult place, okay? You've been there before: you've been working all day, you skipped a meal, your body's kind of feeling like it's not in its tip-top shape. Get an Ensure Plus, man. It's got a perfect blend of just vitamins, minerals, flavor crystals. I don't know, it's just really good stuff. So I hope you all go out and drink Ensure. Not too many, you know; eat from the earth and all that kind of stuff. 

Tom: How many per day do you have? 

James: I try to put a cap of one per day, because although Ensure is delicious, it is completely synthetic. I believe it's created in particle accelerators. So, like, your body knows that. So after a certain point, if you drink too much of it, you don't die, it's like a fate worse than death: you're just fully nutritioned, but the psychological strength is not there. But yeah, one a day, that's the key to my success, trademark. 

Vivek: James, I'm a big fan. I also have regular shipments of Ensure coming from Amazon. What's your favorite flavor? 

James: Well, you know, it really just depends on the mood and where I'm trying to take myself. You know, it's kind of like being a DJ for your own, uh, inner, uh, Spotify channel. I usually go for the vanilla. People say, oh, it's vanilla, but I'm like, no, water doesn't have a taste. Vanilla has a taste, and that taste is delicious. I usually go for the vanilla Plus. 

Vivek: Big fan. Vanilla is the only flavor of Ensure Plus that I will drink. I don't like the chocolate, actually. 

Tom: Okay, you two. I'm feeling totally left out. I should have some Ensure. I got FOMO. 

Vivek: Yeah. 

Tom: You know what Imma do right after this podcast? I'm going to go buy some Ensure, right? I feel totally left behind. I'm going to go check it out. 

Vivek: Go to Safeway, and you can get a six pack. 

Tom: Bang. 

James: Do it. This is an example of, uh, this is what psychologists call positive peer pressure. We've just changed a person's life during this podcast. Tom's going to go out there. You'll be flipping cars over tomorrow after that first Ensure. 

Tom: Hallelujah. I need all the help I can get. All right. Ensure: this episode's totally unpaid-for promotion. You're welcome, Ensure. 

Vivek: Great. So with that done, let's pick up on the question of regulation that we were talking about, right? You recently had an article come out in Nature where you proposed creating an IPCC-like body to harness the benefits and combat the harms of digital tech, right? What does such an interdisciplinary meeting of the minds add, even if they, you know, lack the power to bind industry actors themselves? How would it be structured, in your mind? How do we get governments and firms, as Tom was talking about, to listen to people like you and others who are talking about these things? 

James: I think a big sort of, uh, advantage, or an attractive feature, of such a body is that it can get a bunch of experts in one place at one time, you know, abstractly speaking, and allow them to function as this singular advisory body. And you're right that they may not have the power to actually pass laws themselves, but as it turns out, it can be very helpful if you have a group of, um, scientists and other concerned policymakers who can say things like: we've done a meta-analysis of a bunch of different studies, and here's what they've all shown. Here's, like, a menu of possible regulations you could pass, along with the pros and cons. And I think that, um, we were inspired, and by we I mean the authors of that article, we were inspired in part by, you know, bodies that have been made for things like climate science. 

Vivek: Yeah. 

James: You know, to be clear, climate science is not a solved problem, but it's been great to have some international bodies that have been able to say, well, at a high level, here's what the research seems to say. And, you know, governments that are responsible can look at that research and then try to craft policies that they think will work for their particular countries. Um, but we think that would be a really helpful thing to help level-set the way that, you know, national governments, even state governments, small governments, can try to understand this complex topic. 'Cause it is complex, you know, in the same way that climate change is very complex. It is not simply that the world is getting hotter. Like, yes, at the highest level, that's true. But some places are going to get wetter, some places are going to get drier, you know? So it's this really, um, subtle interplay between, you know, individual actions, corporate actions, government actions. Those statements are true for both climate change and for what happens with technology. So we thought it was really a nice analogy. 

Vivek: Actually, I was just thinking as you were saying that, to come back to ETP for a second, this could be a project that ETP sponsors. To your point about doing the research, learning from all the other regulations, and building a framework, it could be a great research project for a master's thesis, or even a PhD, done in partnership with ETP. And then we could make a recommendation that could serve as guidelines or whatever to state governments, uh, federal governments, et cetera. 

James: I think so. I mean, I would say that the scale of it might, I mean, unless the master's student is very good. But, you know, I think the scale of this is going to surpass any individual's capacity to do it. But on the other hand, you know, this intergovernmental body we were thinking of, it's composed of individuals, it's composed of people. And I think that's a really sort of important aspect in all of these conversations we have about, you know, ethical technology and governance. And this is something I tell my students a lot, too. The technology world is pretty small; it's all people-driven. As much as we might like to tell ourselves that it's all about equations and, you know, code and stuff like that, it's all about people. It's all about the decisions that people are making to do or not do certain things. And so that's sort of daunting from one perspective, but also empowering from another perspective, because it means that if we want to see a change in the tech industry, then we as individuals in that industry have to act and do things to, you know, bring about the world we want to see. 

Tom: Absolutely! James, I love the way you, uh, think about all of these topics. I am now going to invite you to do a little bit of a victory lap, or maybe give yourself a high five. On that earlier topic, we touched on decentralization and I mentioned blockchain. I was chum in the water, but let's come back to this, because two years ago, I think we were at an event and you were speaking quite vehemently about the perils of blockchain and the problems that everybody was overlooking. And I love this because you never hold back and you always have vivid ways of expressing yourself. Holy shit! You're really right, like a hundred percent right. Now, what do you make of this whole journey? Because since that time, you know, there's been Sam Bankman-Fried and the collapse generally of that space. And you're too gracious to be smug, but, like, give us a recap. What the hell happened there? 

James: Yeah. By the way, I'm not too gracious to be smug. 

Tom: Well, then go for it. 

James: Yeah. Thank you for thinking that, but oh no, among my many character strengths, the ability to hold things in restraint is not up there. But I mean, at a high level, yeah, people say, like, sometimes, you know, your parents might say to you, I hate to say I told you so. Not me, I love it. I tell these crypto people all the time, I told you so. I told you exactly how you would be destroyed. I said you would be the instrument of your own destruction. I'm merely a chronicler, I'm like Herodotus or whatever, I'm just looking at this stuff. But so, at a high level, you know, what happened there, I mean, a lot of it was foreseeable. So, you know, when we look at, say, Sam Bankman-Fried, for example, that's just some old-fashioned fraud, you know, good old-time country store fraud. You know, and people sort of want to think that somehow there is something magical about cryptocurrencies because cryptocurrencies rely more on code. Well, there's a couple of problems with thinking like that. First of all, how do you think the Federal Reserve works? How do you think the modern banking system works? It's not like we're just trading, you know, Babylonian tablets with, you know, esoteric writing. The banking system is very computerized already. Now, a core difference, though, between, you know, what the Fed is doing and what SWIFT is doing, and, let's say, what people want to do with Bitcoin or things like this, is that they are trying to essentially create mechanisms that either implicitly or explicitly lie outside of traditional regulatory schemes. And everything that we know about human history and our desire for, you know, just greed, and hope, and optimism in some cases. Like, some people who got duped by these crypto people are good people. They're good people who saw a fad and didn't want to get left behind. But this has happened with tulips, this has happened with Beanie Babies. 
So to me, the reason why I think it was an easy call to make, that this was all going to end up in a ridiculous way, is that we've seen this before. One of the few advantages of getting older is being able to see patterns in things. And so if you look at the history of economics, the history of boom-bust cycles, and the history of what happens when you have these highly speculative, unregulated markets, everything we know about the human condition told us this wasn't going to work well. Now, note that I haven't even talked about, you know, the fundamentally decentralized aspect of things. I've only talked about how, you know, we're creating this sort of financial system that exists outside of the bounds of normal regulation. When we look at it from a technical perspective, in terms of, like, why hasn't someone made a killer app on Bitcoin or things like that: these fully decentralized approaches, they just don't scale that well. People oftentimes observe, oh, well, you know, there's a bunch of miners, for example, they're doing stuff. Well, first of all, there's not actually a huge number of mom-and-pop miners. If you listen to this sort of pull-yourself-up-by-your-bootstraps, Ayn Rand type, you know, all you people need to go find some different books to read, by the way. I don't understand how it is when you talk to these people. It's like, what's your favorite book? The Fountainhead. Well, when did you last read it? Oh, I'm reading it right now, I'm wearing some augmented reality, I'm reading it right now. Just find another book. Literally almost any other book would change your life for the better so much. But if you talk to them, they make it sound like, in theory and in practice, oh, you look at Bitcoin. 
It's just people like you and me, just, you know, normal everyday folks from Americana running these Bitcoin miners. Absolutely false. Absolutely false. If you join the Bitcoin network with your commodity laptop that you got from Best Buy or whatever, you're not going to make a Bitcoin in expectation. You're going to make an electricity bill, but you're not going to make a Bitcoin. Why is that? Economies of scale. This is one thing that's so interesting about these ostensibly decentralized mechanisms. When you go fully decentralized and you don't have authorities that are preventing consolidation of fiat power, in the cryptocurrency world, what do you end up seeing? You end up seeing people who are wealthy in the fiat world using that wealth to consolidate and get power in the crypto world. So, for example, when you look at who's mining in practice, who's getting most of these rewards, it's these huge crypto rigs. Who are they funded by? They're funded by big companies, rich individuals who have the yen, the euros, the dollars, all the fiat currency, to go build the data center and fill that data center full of Bitcoin toasters whose only reason to exist is to solve the Bitcoin mining puzzle. So this, once again, is just something that should be intuitively obvious to the casual observer. The example I always use is, look at what happened with Burning Man. Another sort of libertarian fantasy gone obviously awry. How great would it be to just have a bunch of people hang out in the desert, love, no rules. Certainly it will stay that way. And now these rich people come in and basically bring in, like, Atlantises on zeppelins that are air-conditioned and have private strike forces and stuff like that. It's like, who would have thunk it? Anyone who's literally read anything about economics or regulation. So, you know, the same thing ends up happening with crypto things. It's just silly. 
And, like, I try not to make fun of students. Um, but if they tell me things about crypto, I just tell them, I'll make fun of you directly, because it's just not a good dream. It's the same thing as if someone came up to me: oh, I have a dream, I have a glint in my eye. I have a dream. What's your dream? I want to become a crackhead. I'll make fun of them. That's a bad dream. Okay? So if your dream is to somehow, like, make cryptocurrencies, you should just change what you want. Okay. End of rant. 

Tom: James, James, James, your smack is so fresh. It's so on point. We salute you. 

Vivek: Are you like this when you're teaching as well, James?

James: Yeah, mostly. 

Vivek: I would love to, I would love to audit your class, actually. 

Tom: Listen, and for our listeners, as we close out here, go to YouTube, because you can find some of this banter there. I've listened. James, big fan. I follow you on YouTube. This is not just James, you know, on a lot of Adderall in the moment. This is James just on a Monday. 

Vivek: Ensure Plus, though. That's the secret. 

Tom: And Ensure Plus. 

Vivek: That's the secret. 

Tom: Obviously. And I need to go, I'm going to go buy some Ensure Plus as well. 

James: That's the secret. 

Tom: Done. James, thank you so much for joining today. This was a lot of fun. 

Vivek: Thanks, James. 

James: Yeah, thanks for having me. Yeah, I had a blast. Yeah, thanks so much. 

Tom: Thanks to our listeners. We'll see you next time. 

Vivek: See you next time.