The {Closed} Session

Philosophy, Data, and AI Ethics with NYT Best-selling Author + Data Scientist Seth Stephens-Davidowitz

Episode Summary

From unpacking Google search patterns to understanding the philosophical underpinnings of big data, Seth Stephens-Davidowitz offers a unique lens. As the NYT Best-selling author of “Everybody Lies” and a renowned data scientist, he delves into the ways data mirrors societal nuances and the vast implications for tech and its intertwining with everyday life.

Episode Notes

How does Google search data reveal hidden human truths and behaviors? What philosophical challenges arise when interpreting big data? In what ways does data reflect societal biases and preconceived notions? What's the potential of data science in revealing patterns that might be invisible to human analysts?

Seth Stephens-Davidowitz is a data scientist, New York Times bestselling author, and sought-after keynote speaker. His 2017 book, Everybody Lies, on the secrets revealed in internet data, was a New York Times bestseller; a PBS NewsHour Book of the Year; and an Economist Book of the Year. His 2022 book, Don't Trust Your Gut, on how people can use data to best achieve their life goals, was excerpted in the New York Times, the Atlantic, and Wired. Seth has worked as a data scientist at Google; a visiting lecturer at the Wharton School of the University of Pennsylvania; and a contributing op-ed writer for the New York Times. He received his BA in philosophy, Phi Beta Kappa, from Stanford, and his PhD in economics from Harvard.

Learn more about super{set} at superset.com

Find more episodes at www.theclosedsession.com

Episode Transcription

Welcome to The Closed Session, how to get paid in Silicon Valley, with your hosts, Tom Chavez and Vivek Vaidya. 

Tom: Welcome back to another episode of Season 4 of The Closed Session Podcast. My name is Tom Chavez. 

Vivek: And I'm Vivek Vaidya. Our guest for today is Seth Stephens-Davidowitz, a data scientist and author known for his work in big data and how we can use it to uncover insights about human behavior. Seth has worked at Google as a data scientist and authored some of the best books on the intersection of data and human behavior. One of my favorite books of his is Everybody Lies. If you haven't read it, you should all get a copy and read it. 

Tom: It's a must read. 

Vivek: It's one of those books which is chock-full of insights, but also makes you laugh a lot. So, uh, yeah, I highly recommend it. 

Tom: Absolutely. Well, listen, Seth, great to have you with us. Thanks for being here. You know, I was a little surprised when I was learning more about your background that you studied philosophy. I spent a bunch of time studying philosophy at one time, so it's always nice to meet another sort of refugee philosopher who takes a left turn and ends up doing a bunch of data analytics stuff. I'm just curious, how did that happen? What's the transition from philosophy to data analytics? Is there a steel thread that runs through all of it? 

Seth: I don't know if there's a thread in any aspect of my life. It's funny, because my last book is called Don't Trust Your Gut, but if I go through my entire career, I've just been trusting my instincts. I was interested in philosophy, then I started studying economics, and then economics kind of led me to: oh, data analysis is really interesting, and not just necessarily about inflation or interest rates, but about all these random questions. And there's this explosion of data that lets us understand human nature way better. In terms of big data, data scientists, the R statistical language, Python, I don't think any of these things... 

Tom: None of that existed. Yeah. Barely a state of mind. 

Seth: ...and moving forward, as people pick careers, who knows what the world's going to look like. The idea of majoring in something and that being your career for the rest of your life may be breaking down.

Tom: Well, so you're kind of a polymath and a journeyman, as you say, a pilgrim, you move from this field to other things. Is it just that you're a curious guy and you just wanna keep learning? Or is there...? 

Seth: More curious, and I'm kind of bored easily. So, yeah, you could say polymath. You could say flaky as well. You know, initially, when I got my PhD, the plan was to be an academic, but in academia, you have to specialize in this very narrow topic your entire life, and the thought of that just made me want to blow my brains out. I did a study of racism in politics, and my advisor is like, okay, well now you're a racism-in-politics expert. And I'm like, no, I said what I had to say about racism in politics, and then I moved on to the NBA and these other topics. So a little bit of a dilettante, I guess, and a little easily bored and restless. But so far, it's worked for me. Whatever skills I have, I think my number one unique quality is I'm just off-the-charts curious about things. I really, really get obsessed with questions and going deep down a hole. So... 

Tom: You know, you're harking me back to early on. Vivek's and my first company was a company called Rapt, and it was doing a lot of pricing and inventory optimization. I remember when Microsoft bought the company, there was a famous analyst who had been tracking the company, and she says, oh my gosh, Tom, this must be so sad for you, because you've become so expert in pricing, and now you're not going to be able to do pricing optimization anymore. And I remember telling her, Laura, do not worry, I'm good. I spent nine years doing pricing and it's time for something new. So yes. 

Seth: Yeah, I can definitely relate. And at the same time, I do admire people who've spent their entire life... 

Tom: Yeah. 

Seth: ...doing one thing. There's something I think deeply admirable about it, but it's not a trait that I have. 

Vivek: Let's actually pick that curiosity thread up. In your book, Everybody Lies, you discussed how Google search data is similar to a confessional, right? People reveal their secrets and thoughts that they might not openly express. So can you share what your initial inspiration was for analyzing all this Google search data? And what were some of your findings?

Seth: So I was doing my PhD in economics, and somebody just told me, there's a new tool, Google Trends, you should look at it. I was just playing around with it, and because I was from an economics background, I'm like, oh, maybe you can predict marketing trends, or how products are performing, or maybe there are standard economics uses, like predicting the unemployment rate. And then I just had this idea, something between being a curious person and maybe a little mischievous person. I'm like, wait a second: the most interesting thing I'm telling Google is not necessarily something in standard economics. What's interesting about Google is that people are just typing whatever pops into their head, their kind of secret thoughts. And right away, once I had that idea, I'm like, okay, I've got to do something there. So I just started playing around with this data. And the first thing I uncovered was, it's sad, the level of racism in the United States. I was just naive, clueless. Millions of Americans were making these horrible searches mocking African Americans, and this map almost perfectly predicts where Barack Obama underperformed, just an insane relationship. I'm like, oh my God, there's this secret uncovered by our Google searches, this collective nastiness and racism and racial animosity that we just had no idea about. And then the ideas just kind of wrote themselves: any kind of standard question that would be hard to answer with surveys. What percent of the population is gay? Some really important ones, like child abuse in the United States. And abortion, how many people are having secret abortions without telling anybody? Just all these topics that are really important but have been very difficult to study with traditional methods, because people don't want to answer them honestly. So it was kind of one after another, a five-year period of just exploring everything we could find about people in their Google searches. 
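The analysis Seth describes, comparing a region's search intensity against an outcome like electoral underperformance, boils down to a simple correlation. Here is a minimal sketch with entirely invented numbers; the real work used actual Google Trends data and 2008 election returns, not these placeholder figures:

```python
from math import sqrt

# Hypothetical, invented data for eight regions: rate of a revealing
# search term (per 10k searches) and the candidate's underperformance
# versus forecast (percentage points).
search_rate = [1.2, 3.4, 2.1, 4.8, 0.9, 3.9, 2.7, 1.5]
underperformance = [0.8, 2.9, 1.7, 4.1, 0.5, 3.6, 2.2, 1.1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(search_rate, underperformance)
print(f"correlation between search rate and underperformance: r = {r:.2f}")
```

A correlation near 1.0 is what "this map almost perfectly predicts where Barack Obama underperformed" would look like numerically; a serious version would also control for confounders and test significance.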

Vivek: And it's amazing, right? That you can get all this data. It's anonymized, of course, but the fact that you can get this data, and as you're saying, it's a secret, it's a confessional. People will ask Google things that they're not comfortable mentioning even amongst close friends, actually. 

Seth: Close friends, doctors, therapists. There's just something comforting about putting something into a box on the internet versus saying it out loud to even someone you trust really well, your partner. People might not say it, but yeah, it's a wild view into humanity, just unprecedented. And it's becoming more normalized now when people talk about Google Trends. 

Vivek: Yeah. 

Seth: You know, certainly at the time, most people didn't even know what Google Trends was. For the first time, there was this totally unvarnished window into the human psyche. It was just kind of wild and radical and fascinating. It was a wild ride that whole time, writing that book and then going around talking about it. It was crazy. 

Tom: Hey, so Seth, I'm wondering if we could maybe enlarge that idea a little bit and think about how it applies to businesses and organizations more generally, where everybody's sitting on really large swaths of data these days. Everybody wants to understand this gulf between what their customers say versus what they actually do. How do you think about that? Are there lessons that others can pick up and apply there? And then, related, and Vivek, I thought you were getting to some of this as well: what are the ethical issues, right? Because you can't just go hog wild with all of this, and there are privacy considerations and the like. How do you think about all of that? 

Seth: Yeah. Well, Google search in general can be a very powerful tool in just about any industry. After my book came out, for a while I was making my primary living as a keynote speaker to companies and industry groups, and a bunch of them would be like, can you tell us something from Google search about our business or industry? And sometimes I'm like, this isn't exactly a sexy or secretive business. I talked to a pension group, and people aren't secretly searching their weird thoughts about pensions, so I don't know that you need Google search data. But then almost every time, I found something interesting. For the pension group, I found that the number one Google search about pensions was "what is a pension," which is actually both amusing and really profound, because a lot of people sell their products and don't necessarily know what questions people have about their product. And sometimes the questions are so much more basic than you think. If you've been deep in pensions your entire life and you're trying to make a commercial for people to get pensions, you're like, my pension plan is the best because of this, this, and that, let me start explaining that. And people are just like, no, wait, what is a pension? Much more simple questions. And then there's all the data that companies collect on how people use their site and where they click. 
And obviously, many companies now are relying heavily on A/B testing, which is an incredibly powerful tool to see what actually works. Instead of guessing, or having a designer say this version of the website's best, you just throw all of your ideas at the wall and see which one actually leads to clicks. And I think A/B testing is one of the areas where I started getting uncomfortable even encouraging these methods, for the ethical reasons you talk about. We're getting so addicted to the internet, and I think the number one reason is the power of data analytics and A/B testing. Facebook and Google and Netflix, TikTok, Instagram, every company is knowing better and better exactly what will keep us hooked to these products, and just doing it, and there's no regulation. Cigarette companies aren't just allowed to do whatever they want, or not acknowledge some of the dangers of their products. So I've written a lot about A/B testing, because a lot of people don't even know just how powerful it is, explaining its power and why it's so powerful. And for someone curious, it is just amazing to see the results of some of these experiments: companies have gotten 30%, 40% returns from a test you'd never imagine would give those types of returns. But then, our companies are becoming too powerful at getting people to do things that they don't actually want to do, but that the data says they're going to do. 
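The A/B testing Seth describes, splitting traffic between variants and letting click data pick the winner, reduces statistically to comparing two proportions. This is a generic two-proportion z-test sketch with invented click counts, not any company's actual experimentation pipeline:

```python
from math import sqrt

# Invented example: variant A shown 10,000 times with 1,100 clicks,
# variant B shown 10,000 times with 1,230 clicks.
clicks_a, views_a = 1100, 10_000
clicks_b, views_b = 1230, 10_000

def two_proportion_z(c1, n1, c2, n2):
    """z-statistic for the difference between two click-through rates."""
    p1, p2 = c1 / n1, c2 / n2
    p_pool = (c1 + c2) / (n1 + n2)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_proportion_z(clicks_a, views_a, clicks_b, views_b)
print(f"CTR A = {clicks_a/views_a:.3f}, CTR B = {clicks_b/views_b:.3f}, z = {z:.2f}")
# |z| > 1.96 means the difference is significant at the usual 5% level
```

Large platforms run thousands of such tests concurrently, which is exactly the "knowing better and better exactly what will keep us hooked" effect: each individually small, statistically validated win compounds.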

Tom: And that we don't even know that we like, right? I have a sauna, and at night I like to get in the sauna and watch 10-15 minutes of YouTube. Some nights I've burnt myself to a crisp in that sauna because I get down a rat hole in YouTube, and I'm agog, like, oh my God, here I am watching professional rodeo. How did that happen? Right? And somehow they know us better than we do. 

Seth: And it's so weird to think of it like a data scientist: you are just a data point, and they're looking at that and predicting you as if you're, I don't know, an electron or something. Where are you going to be next? And we are just way more predictable than we sometimes think. 

Tom: Yeah. 

Seth: And you know, what's going to draw us in, capture our attention, capture our minds. 

Vivek: So Seth, I'm curious now. You mentioned racism in politics earlier, and you explore a lot of these themes in your book also. I'm curious as to what came first, the chicken or the egg? Did you start with the issues and then arrive at the questions and conclusions? Or did you have it in mind that, oh, I'm going to explore these issues like racism, sexual preferences, stereotypes, et cetera, and then go looking for the data? What motivated you to explore these specific areas? 

Seth: Once I knew I was going to look at Google searches, I just went after any topic that I thought traditional data sets wouldn't be good at capturing. Someone in Mississippi, are you gay? A lot of people are going to have a hard time answering yes to that question. So if the surveys say that 1 percent of males in Mississippi are gay, should we believe that? And I'm like, well, actually, you could probably see with internet behavior a better proxy of the size of the population. And then sometimes something would just pop up in the data where you're like, oh, that's interesting, or I'd have a eureka insight. One day, I was in the shower, and I'm like, it'd be very interesting to compare Facebook and Google, because Facebook's so public, everybody sees what you say, and Google's anonymous. It'd be funny to see how people describe their husbands on both sources. And that was something I included in Everybody Lies: on Facebook, when people post "my husband is...", it's my husband is the best, adorable, amazing, so cute, and all their friends see it and think, oh, they have the best marriage. And then you look at the Google searches for "my husband is..." and it's my husband is a jerk, my husband is annoying, my husband is a slob. That was just an idea I had, that it'd be fun to juxtapose our descriptions of husbands on public and private data sets. 

Vivek: Another kind of thing that comes to mind, especially these days, when you look at all the noise about Gen AI and the importance of data and the quality of the data, et cetera: how did you deal with or address issues like bias in the data sets that you curated and analyzed for your books? 

Seth: Well, there's no one-size-fits-all answer. Basically, every time you're doing a data analysis, you have to think about the data you're getting and how good it is. And I was using a lot of data sets that hadn't really been used much before by researchers, Google Trends or unstructured data from websites or other things. So the questions are, well, is this a representative sample? What about poor people who aren't using the internet, or older people? Or... whoa, you guys hear that? Yeah. There's a crazy storm here in New York City. That was wild. 

Tom: It sounded like a gigantic dish flew off your kitchen wall or something. That was shocking. 

Seth: Uh, that was thunder. So I don't know if it's a bad omen or something. 

Tom: Well, it's a lightning bolt. You're about to say something really heavy. Go. 

Seth: No, I don't think so. But when you're using a new data set, it's kind of obvious that you need to think through these questions. What people don't realize is that you also have to think through these questions when you're using an old, established data set. A lot of times, with the old established data sets, you look at how the sausage is made, and they're making a lot of assumptions, and there are biases in those data sets that we don't always think about, and they're not always getting a full sample. Surveys, for example, can be very biased. But if it's a data set that's been around for a while, people are much more likely to just say, okay, that's great data, that's gold standard, and if it's a new data set that hasn't been around, people are trying to poke holes in it. And that's great: you should always be thinking about the biases, thinking about how to correct them if you can, and how they might be influencing the analysis. But I encourage people to do that for every data set, not just a new, weird data set. 

Tom: Right. Well, you know, we've had Brian Christian on this podcast, the author of The Alignment Problem. And as he points out, exactly to your point, Seth, historical data sets can carry that. I'm thinking now about the sentencing recommendations algorithm, right, trained on historical sentencing data from judges and so on. Well, it's rife with racial bias, right? So that's exactly the wrong data to be training a new model on. 

Seth: Yeah, exactly. And this is a little bit outside my wheelhouse, but definitely AI and machine learning can, I think, help fight biases, because a lot of our biases we're not really aware of. But sometimes you can add a line of code to say, make sure races are treated equally in this system, and then it'll just do that; it'll be constrained in that way. Whereas human beings can be very, very biased, obviously. So I think, ideally, data analysis and AI could be very useful in fighting bias. 

Tom: Right. There's an interesting kind of impossibility theorem that's been proven in exactly that sphere, where the more you try to de-bias the data set, the further confounded you are: the more you try to unbias it, the more bias you can inadvertently introduce. So it's a hard, sticky problem. Hey, Seth, I couldn't resist. When you and Vivek were chatting earlier, I went and opened up a browser window and typed "why is my husband dot dot dot" into Google. To your point, here's what comes up: Why is my husband yelling at me? Why is my husband so mean to me? Why is my husband always angry? Always mad at me. Why is my husband sleeping so much? 

Seth: Yeah. So one of the things I said in my book is that a lot of us feel envy because we're on Facebook or Instagram and we see all our friends on a great vacation with their happy family. If you're looking at Facebook too much, go on Google Autocomplete and you'll see what's really going on... 

Tom: It's the opposite of a Hallmark card. Hey Seth, we have a little tradition around here where we like to do a totally unpaid-for promotion, and lately we've been giving the floor to our guests. So this is your opportunity to get a harrumph out there, or to boost a product or a thing, any old thing that you think is cool and worthy of a boost. What do you got? 

Seth: I have a product that I do not think necessarily needs boosting, but I'm still going to boost it, because a lot of popular things I think are unjustifiably popular, but some are legitimately that good. And for me, that is the Bruce Springsteen concert, which I recently went to. He's getting a little up there in years, and he's canceled the September portion of his tour because of health issues. This has to be a bucket-list experience for everybody, to see a Bruce Springsteen concert. It's the height of rock and roll. He has just mastered the craft at such an insane level that it's a life-changing experience, and I encourage anybody who has not been to make sure it's high on their bucket list. Make sure you see a Bruce Springsteen concert. 

Vivek: Have you, uh, seen the movie Blinded by the Light?

Seth: No, I actually have not. 

Vivek: You should. Since you're a Springsteen fan, you should definitely see the movie. It's based on a true story, about these kids in England who get hooked on Bruce Springsteen, and it's just a great movie. I highly recommend it.

Tom: Nice. Well, Seth, thanks for that unpaid-for promotion. I gotta say, Springsteen's on my bucket list. I have a list of concerts I've decided I need to go see, certain bands or acts I just need to see wherever they are on the planet, because I've missed a number of shows, and then the bands were no more or whatever, and I'm so frustrated after the fact. So yes, this is going on the list. 

Vivek: So let's, not shift gears, but come back to a point that Tom was making earlier about privacy and security. As more and more companies start to leverage data and put it into AI to do various things, like you're doing now with leveraging AI to help you in your creative process: how do companies balance the need for privacy and security as they collect and process all this data, while still capitalizing on the potential that Gen AI has? 

Seth: Yeah, it's tough. Just as a consumer, I always feel more comfortable with the enormous companies. They get some of the bad press, Facebook and Google, because they have a privacy leak or something, but they're the ones who can really afford the security. I feel a little more comfortable having my data with these enormous companies than with a smaller company that's going to have a harder time protecting the data. And not just smaller companies: even Fortune 500 companies' privacy and security teams aren't anywhere close to the size of those at Apple and Google and Facebook and some of the big tech companies. So every company is at a disadvantage relative to them in protecting data, and that's why I am a little more wary to put my data in the hands of smaller companies. I'm not a privacy and security expert, so I wouldn't say these are the precise tools a company should use to best do it. But users definitely care a lot about making sure their data is safe, particularly certain types of data. One of the things my book focused on is that we are leaving our secrets with certain websites, and you have to ask what actually counts as a secret of a user. Traditionally, we've thought about a social security number or a password and how to protect that. But if you're a dating site, the fact that someone clicked on it, which users they click on, that's very personal, private information. So I think it's also important for us to widen the notion of what's a private fact about a person.
It's not just the obvious, like identity theft. Some people are embarrassed about the type of music they listen to. I can relate: there are a couple of songs where, if my Spotify playlist were revealed to the world, I don't know that I want the world to know how much I listened to those songs. So companies need to keep that in mind as well. There are a lot of things people view as sensitive information, even if they're not the obvious markers of sensitive information that we've traditionally thought about.

Tom: Seth, Vivek is a huge Olivia Rodrigo fan, and he's been keeping it on the low, and I've been telling him, dude, just get out there with it. It's fine.

Vivek: I have, I have no idea who this person is. 

Tom: No one's judging you. 

Vivek: I have no idea. 

Tom: You have no idea who Olivia Rodrigo is? 

Vivek: No. I, I actually don't. 

Tom: Oh, that's even worse.

Vivek: Yeah. I don't... 

Tom: Jesus. Hey, so Seth, it strikes me you're coming at this the way a trained economist and data scientist would approach these kinds of issues, and we're asking these questions about AI and ML. Vivek and I have been at this for a long while. Actually, that first company I mentioned earlier in the podcast was an algorithms company before AI was a thing, and I'm not saying that to be precious and old-school. It's just that the line for me between what people think of as data science versus AI versus ML today is very, very smudgy, right? I'm curious if you'd like to weigh in on how you think about these lines, because a lot of the statistical techniques from data science are first-class elements of an AI/ML protocol for many engineering operations these days. Are there useful distinctions that we need to maintain, or do we just put it all in a jar, shake it up, and see what we get? How do you think about that? 

Seth: Yeah, I think for a while us economists were a little confused by the data scientists, because we had all these very simple terms, like, this is a regression, and then it seemed like they were just calling regressions machine learning. You do a little bit of dimensionality reduction on a machine learning model and it's even AI. And economists kind of started copying that, where in some journal articles they'd be like, we're doing machine learning, but it didn't seem all that different from stuff that had been in economics journals for 30, 40 years. 

Tom: There you go. 

Seth: That said, once these LLMs started coming out and Stable Diffusion came out, I think we all agreed this was totally next-level stuff, really harnessing neural nets and picking up patterns none of us imagined a neural net could ever pick up. So there does seem to be a difference when you start analyzing trillions of web pages and trillions of words and picking up patterns in them, with billions of neurons and many layers. That does seem to be a different class of analysis than a multivariate regression versus a lasso versus a random forest. Those all seem kind of small potatoes compared to these enormous neural net models that have totally shaken up our world in the past few years. 

Tom: Just to follow up then: as a trained economist, presumably you subscribe to von Neumann–Morgenstern utility theory and the like. The good news about prior techniques is that they're axiomatically grounded, right? Whereas an honest machine learning researcher will tell you they have no idea how a multi-layer neural network generates the answers that it generates. It's just this huge ghost in the machine, and it works from an engineering perspective, and we like that. But do you have any strong feelings about that, one way or the other? 

Seth: My general feeling is that it's awesome, and maybe we've been putting too much emphasis on trying to prove exactly why something works instead of just throwing data at it and letting it work its magic. These LLMs have just totally blown me away; I think they've blown everyone away at this point. And it does seem to be a lesson that maybe you don't need to fully understand things. I'm a big Nassim Taleb fan, and the way the world moves forward is through tinkerers, not through theorists. A lot of things, we haven't fully understood why they work, but they just work, and that's certainly true of human society more generally. I definitely consider myself an economist, and I think us economists have probably been mistaken in thinking that, for these complex systems like the economy, we're just going to draw models of how everybody behaves and prove the theorems of exactly how the economy is going to work. Frequently, things just work for reasons that are well beyond human comprehension. And I think that's definitely a lesson that all of us have to take from the success of these enormous neural nets. 

Tom: Hear, hear. 

Vivek: I guess, you know, the human brain is complex too, right? We don't know how we make decisions. We like to think we do, but we really don't. And so these neural nets, these LLMs, are in that category, at least as far as I'm concerned. 

Tom: Well, since we're talking to a recovering philosopher here, I remember taking a class on Cartesian philosophy. Descartes was the guy who said, I think, therefore I am. It's like the clarion call for rationalist first philosophy, for evidence-based thinking everywhere. And here we are in the age of neural networks that are mimicking human cognition, to your point, Vivek, and maybe a more apt articulation would be something like: I free-associate and I figure shit out on the fly, therefore I am. There's no God's-eye view. 

Seth: And these big questions have drawn me back to my philosophy days a little bit. What is consciousness? That's one of those big questions of philosophy that I'd kind of put aside a little bit. And now, as a society, we're all thinking: is there a point at which an LLM or a neural net might be conscious? What would that mean? What would that mean for ethical questions? It's pretty wild stuff. We're not very far away from these models being able to act in ways that would be indistinguishable from a conscious being. 

Tom: That's right. Oh my goodness, Seth, we could go on and on about all of these questions for hours and hours, but unfortunately we're getting close to time here. I really appreciate you joining us today on The Closed Session. For our listeners, you've got to check out all of Seth's writing. There's some spicier, more R-rated stuff that we didn't get into today, but I urge you to check it all out. There's a lot of depth, and, like the word polymath suggests, it's far-ranging, Seth, though I'm sure it's much more structured and disciplined than that word implies. It's really great to see you covering all of this ground. 

Seth: Thanks so much for having me.

Vivek: Thanks for joining us, Seth, and thanks to our listeners for tuning in. Don't forget to sign up for our newsletter to stay up to date on our latest episodes and news at superset.com. We'll see you next time.