Everyone’s talking about AI - so The Ethical Tech Project decided to listen. Joining forces with programmatic privacy and data+AI governance platform Ketch, The Ethical Tech Project surveyed a representative sample of 2,500 U.S. consumers and asked them about AI, the companies leveraging AI, and their sentiment and expectations around AI and privacy. On the latest episode of The {Closed} Session, get an inside look at the survey results in a deep-dive conversation with the team at The Ethical Tech Project.
Are Americans aware of the rapid pace of change in AI? How do they feel about companies mining their personal data and leveraging that data into LLMs and other AI tools? What are consumer expectations of the right to privacy alongside ethical data principles like transparency, fairness, agency, and accountability? In test purchase scenarios, will consumers choose products with ethical AI features?
Joining Tom and Vivek on The {Closed} Session are Slingshot Strategies partner and pollster Evan Roth Smith and Ketch Head of Solutions Jonathan Joseph to dig into the survey results and get us into the head of the typical American consumer. Find out just how much consumers will reward ethical AI - with their hard-earned cash and trust - when offered by the companies and products they love.
To learn more about The Ethical Tech Project, visit ethicaltechproject.org and subscribe to their Substack newsletter at news.ethicaltechproject.com.
Download the survey report at ethicaltechproject.com/survey.
Learn more about Ketch at www.ketch.com.
Learn more about Slingshot Strategies at slingshotstrat.com.
Welcome to The Closed Session: how to get paid in Silicon Valley, with your hosts, Tom Chavez and Vivek Vaidya.
Vivek Vaidya: Hello and welcome back to season four of The Closed Session. I'm Vivek Vaidya.
Tom Chavez: And I'm Tom Chavez.
Vivek Vaidya: So today we have something special for our listeners. We're joined by Jonathan Joseph from super{set}'s portfolio company, Ketch, and Evan Smith, a pollster and partner at Slingshot Strategies. JJ and Evan worked together to create the survey we're gonna dive into today. Evan is a partner at Slingshot Strategies, where he oversees international polling and data operations. He's an advisor to diverse campaigns and causes, and he's a co-author of a book called Putin's Master Plan. Actually, Evan, I've seen that book in bookstores, and I think I'm going to buy it, because I'm actually reading a fiction book right now called Moscow X, by McCloskey, a former CIA analyst. It's a spy fiction book. Of course Putin is there, and money people from Putin's circle who want to get money out, and all of that. So yeah, I'll check your book out as well. And then JJ is at Ketch; he's Head of Solutions and Marketing for Ketch.
Tom Chavez: Welcome to both of you. This is going to be a good conversation. So let's jump on in here directly; there's a lot of good stuff to cover. Let's back it up for our listeners and give them a quick sense of the story behind this survey. Why now? What's the point of this? JJ, how do we get going here?
Jonathan Joseph: You know, Tom, first of all, thanks for having us, Tom and Vivek. We just felt like nobody was really talking about AI, and we felt like we needed to kickstart that conversation, if you know what I mean. No, but seriously, we wanted to understand how consumers felt about AI, given that it was so present in the zeitgeist. But the way Evan and I wanted to approach it was to ask: if you did the right thing with AI, would consumers reward companies that approached it ethically, that approached it responsibly? And if so, how much? Would they spend more with you? Would they trust you more? So there was a quantitative piece here in addition to the qualitative. And that's really what drove us.
Vivek Vaidya: And this survey was done in partnership with The Ethical Tech Project, right, JJ?
Jonathan Joseph: Yeah, that's right, Vivek. The Ethical Tech Project is a convening of industry people, academia, policymakers, legal, and the idea is: how do you establish the frameworks for the responsible use of AI? First of all, what are good data practices? What does that even mean? But also, what does the technical footprint look like to actually execute on ethical data, on privacy by design? Those are a lot of words that get thrown around, and we wanted to put some real practical meat on the bones. And of course, as you know, Ketch is all about that as a privacy management software company. So it was a natural convening, and Evan runs some great surveys as well, and so we thought: this is the triumvirate.
Tom Chavez: So Evan, roping you in here. Back us up now. You do a lot of this kind of work, so maybe give a brief tutorial for our listeners. How do you think about standing up one of these studies from scratch? How do you design it in a way that leads to the greatest insight, that covers the most interesting or important swath of the topics at hand? And how did you approach it here specifically, with regard to these topics?
Evan Smith: Well, for this survey, as JJ was alluding to, there is this whole conversation going on around AI. When I hear these new topics breaking into the news, breaking into the zeitgeist, breaking out of the world of specialists and practitioners, with people making all sorts of assumptions and trying to react quickly, the first reaction I had, and the first reaction JJ and the folks at ETP had, was: let's get some data behind this, right? Show us the numbers. Let's really nail this down. Because otherwise these narratives spin up, and we're dealing with all sorts of potentially false assumptions that work their way into how people operate. So to stand this survey up, we wanted to tackle not just the multitude of questions that we would ordinarily ask, that we've even seen other surveys by other pollsters come out and talk about: who do you trust to get this right? How much do you value privacy? How concerned are you? We wanted to tackle this with some really heavy-duty statistical analysis. So as part of this survey, we did what's called a conjoint analysis, which is a statistical technique where you throw thousands of scenarios, in this case over 20,000 different tests of product comparisons, in front of respondents. And we had 2,500 respondents in the survey. All of these options that we tested, data privacy features, standard consumer features, price features, let us see how respondents and consumers reacted in a real choice and selection scenario. And that's what let us cut to the most biting, most fascinating conclusions out of this: going a level deeper than what we've seen a lot of other quantitative research do.
Vivek Vaidya: So as you were thinking about designing the questions and figuring out how you were going to do all this analysis, what were your expectations going in, both of you, and how did they compare to reality?
Jonathan Joseph: You know, I thought that people would be super worried and concerned about AI, just given that there seems to be a lot of negative sentiment around it, a sense that it's going to cause more harm than benefit. So I was a little surprised that it was more balanced than that, and also a whole lot more nuanced. There's a couple of pieces there. I would say consumers are cautiously optimistic about AI. They're pretty evenly split on favorability versus unfavorability, split on whether it will be a harm or a benefit. But when you dig in by the use case, it starts to get super interesting. For example, with things that are seemingly innocuous, that help us in our everyday lives, like translation or navigation, people were all about that: AI is going to do great for us there. But you start talking self-driving cars, and you start talking wealth management and healthcare, and people start to get really worried. So there is something about AI as it applies to the use case, and the worry kind of splits, if that makes sense.
Vivek Vaidya: How about you, Evan?
Evan Smith: One thing I was concerned about going into this: sometimes when you do survey research on something that is evolving so quickly, that people are learning about in real time, and that is as new to so many people as AI is, you get mush in your results, right? Sometimes people don't know what they think, they answer "not sure" and "don't know" to every question, people are too split to draw real meaningful conclusions. And that was a concern here, because AI is new. We didn't get that. We got really conclusive consumer sentiment here, this really burning desire for someone to go out there and get AI data privacy right, and get ethical data privacy practices right. A clear, deeply and widely held sentiment, even upon probing, that people are unhappy. Consumers aren't happy with the current state of data privacy. They feel violated in their privacy rights. They don't have one lodestar that they're looking to, that they feel will come and save them. They are waiting for someone to come out and get it right, and they're ready to reward whoever gets it right, whether it's the government, whether it's industry, whether it's advocates. Consumers are really strongly, passionately motivated for someone to come along and make sure AI is done right.
Tom Chavez: Well, so diving in here, because I think, as JJ points out, what the study seems to be telling us is that consumers are cautiously optimistic and at the same time worried. One of the first findings is that 75% of our respondents would go back to a time when companies didn't have so much of their personal data. So it's this kind of, I don't know, adorable, quixotic, what's the word? Like, really? You want to go back? Let's go back to a time when we had horses and buggies and we were smelting our own steel; that was beautiful too. I'm playing with this a little bit, but it feels bizarre to me that people want to go back to that time when this is so plainly where we live today. So what do we infer from this? Are people fed up? Is it just skepticism? What inference should companies listening draw from that 75% number?
Jonathan Joseph: Yeah, well, what are they really saying when they say they want to go back to this time when companies didn't have so much data, Tom? I think it's maybe a frustration, or a little bit of hopelessness that the cat's out of the bag. The companies have this data already, and consumers are maybe a little fed up about what's happening to that data, and really the lack of control over it, I think. Tie that statistic, that 75% of people want to go back to the time when companies didn't have so much data, to what happens when you ask people: well, we can't go back there, so what can we do about it today? The answer is, we want choice and control. We want to be able to choose as consumers what businesses are doing with our data. We need to be able to revoke their permission when we feel like it. We want brands to be responsible when they collect and use our data. And there's a ton of rules around that we can dig into later, on what they mean by that. But that's what I think it is. Evan, I'm kind of curious what you thought here as well, because "going back to a time," to me, just meant: we let brands have this data, and now we need to control this thing.
Evan Smith: Yeah, I agree entirely with JJ. If you can't go back to the horse and buggy, what do you do? You invent the traffic light, you invent the seatbelt, you invent the airbag, you invent the cloverleaf interchange, right? You do everything that gets people comfortable with the new reality and willing to engage with it and patronize it at scale. And that is what we found. We of course found this deep doubt and deep skepticism, and in many ways worry and concern, around personal data, business data, all these sorts of things. But then when we went ahead and tested, okay, if a company did X, Y, Z, consumers are responsive. They're ready for the seatbelt. They're ready for the traffic light.
Vivek Vaidya: Yeah. And I think that's consistent with what you and JJ were pointing out, right? Another finding was that most consumers, I think 90%, think that companies should give them choice and control over their data. My question, though, is: do they know what that means? Evan, what do you think? Do consumers know what choice and control over their data would look like?
Evan Smith: No, no, they don't. They don't. Again, they're ready for someone to ride in, the king over the mountain, right? To ride in and say, here's someone ready to get this right. And we shouldn't expect consumers to articulate the details of what good data privacy practices look like, or to understand the technical side of what's possible and what's not possible for a company to engage in. But what that leads us to is this huge first-mover advantage for whoever is willing, whether it's a regulatory mover or an industry mover or an advocacy mover, to come in and do data privacy, and AI data privacy specifically, in a way that makes consumers feel: yes, okay, finally, someone is responsive to this. There is a long history, prior to AI, prior even to the widespread adoption of the internet, of concerns around data privacy, personal privacy, how that's regulated, how industry treats it. And those fears have never really been fully addressed in the US consumer population. So the first-mover advantage is enormous, and I think it explains some of the findings in this survey, some of which are quite dramatic, that we see when consumers are presented with real data privacy options that they certainly wouldn't be able to articulate themselves. But when someone articulates it for them, they're responsive.
Tom Chavez: Bearing in mind what you just said, Evan, which is that people want this, they don't necessarily know exactly what it looks like, but they're receptive to a solution. It is still surprising, at least to me, that the respondents in this survey so overwhelmingly agreed with the idea that businesses should adopt ethical data practices. Maybe I'm naive, but certain days I think a lot of us connected to these issues are a little disheartened. Like, are we just slogging away in the wilderness? Who's with us? Here the results were just unassailable, right? They're so clear. Were you guys surprised by this result? Because I certainly was.
Jonathan Joseph: I was surprised at the extent of it, for sure. I mean, above 90%, overwhelming agreement on most of these issues. I think generally we don't give consumers enough credit for how much they understand that their data is the currency, right, that fuels the data economy. They really get it. They understand that in exchange for their data, they get better things from businesses: they get personalization, they get discounts, they get whatever. I don't think we've given them enough credit. And tie that to what they've seen over the past five or six years, with data breaches and Cambridge Analytica and the whole long list of things, right? When you think about it in that context, well, yeah, of course it's unassailable that they would want some control over this.
Evan Smith: And I think part of what's working here is the self-interest that consumers have, right? They're not necessarily viewing this as ethical data practices. They're viewing this as data practices that protect me, right? The abstraction around ethics isn't as salient. What's salient is: my company was targeted by a phishing attack, my parents or grandparents were targeted by this or that, my friend was involved in a data breach and lost all their passwords, right? All of these things, horror stories that you read about, personal experiences that people have. To them, it's not necessarily the ethics that motivate the consensus. It's the self-interest: I don't want bad things to happen to me, and if these are the data practices that will prevent that, then I endorse them fully. And we see consumers endorsing them fully.
Vivek Vaidya: Yeah. And building on that, another interesting finding, for me at least, was that businesses implementing ethical data features lift purchase intent among consumers by double digits above baseline. So what was the process to get to this finding, Evan?
Evan Smith: Yeah, so this is the finding of the conjoint test. We tested six feature sets: privacy features, fairness features, agency features, accountability features, transparency features, and then standard consumer incentive features, as well as price. Consumers saw different business types: cable TV providers, online retailers, travel packages. They were presented with product pairs, about 900 different product types randomly assembled from all those different feature sets, in, as I mentioned, 22,500 different pairings. And they were asked to select between them: which one of these two products are you going to buy? And which one do you trust more, from a brand trustworthiness perspective? We're then able to take all that data together in the conjoint analysis and establish what the presence or absence of a certain feature does, in terms of how likely a consumer is to purchase a product with or without it, and how likely a consumer is to trust a brand that has it or doesn't. And across everything we tested, we saw an average 15.4% increase in purchase intent when a product had at least one of these ethical data practices. Of course, some are more impactful than others; we had a bit of a scale. But they all outperformed traditional incentives like discounts, easy-return guarantees, customer support guarantees, or whatever the case may be for the product. We went in and tested literally tens of thousands of scenarios here, sifted through the data, and found, yeah, these incentives work when consumers are confronted with this stark choice between having good data practices and not having them.
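[Editor's note: for readers curious how a conjoint-style choice exercise works mechanically, here is a minimal, hypothetical sketch in Python. It is not the survey's actual methodology or data: the feature names mirror the episode's categories, but the appeal weights, simulated respondents, and counting-based lift estimate are all invented for illustration.]

```python
import random

random.seed(0)

# Feature categories echoing the episode; weights are purely hypothetical.
FEATURES = ["privacy", "fairness", "agency", "accountability",
            "transparency", "discount"]
WEIGHTS = {"privacy": 1.2, "fairness": 0.9, "agency": 0.8,
           "accountability": 0.7, "transparency": 0.6, "discount": 0.4}

def random_product():
    # A product bundle is a random subset of features.
    return {f for f in FEATURES if random.random() < 0.5}

def appeal(product):
    # Simulated respondents pick the product with higher total appeal.
    return sum(WEIGHTS[f] for f in product)

# Simulate thousands of pairwise choice tasks, as in a conjoint exercise.
pairs = [(random_product(), random_product()) for _ in range(20000)]
choices = [(a, b, a if appeal(a) >= appeal(b) else b) for a, b in pairs]

def win_rate(has_feature, feature):
    # Share of shown products that were chosen, split by feature presence.
    shown = won = 0
    for a, b, chosen in choices:
        for product in (a, b):
            if (feature in product) == has_feature:
                shown += 1
                won += product is chosen
    return won / shown

for f in FEATURES:
    lift = win_rate(True, f) - win_rate(False, f)
    print(f"{f:15s} lift in choice share: {lift:+.1%}")
```

In this toy setup, the estimated lift per feature recovers the ordering of the invented weights: bundles carrying the higher-weighted "privacy" feature win noticeably more often than those carrying only the "discount" feature, which is the shape of comparison the survey's real conjoint analysis makes at much larger scale.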
Vivek Vaidya: Yeah. And JJ, one thing that's fascinating to me in this whole discussion, and specifically in the answer to this question: in our work at Ketch, privacy, privacy compliance, ethical data practices, and so on are always viewed as a defensive move, right? So this finding tells an alternative story: that businesses can, and perhaps should, play offense by adopting ethical data practices. What do you think about that?
Jonathan Joseph: Yeah, 100% agree, Vivek. You can't just approach privacy, for example, as a compliance thing, as a defensive compliance thing. First of all, you won't even get budget within your own organization if you approach it that way. It has to be an offensive move. And it's funny, of all the angles we tried here, one of the ones we wondered about is: well, won't businesses just do the right thing, just approach data ethically? And, sad to say, not necessarily. Not all of them will, maybe we should say. So that's one of the key reasons we wanted to do this study: how do we put a number behind ethical data use? And as Evan was saying, it's off the charts. People will buy more from you when you do this. In a lot of ways, as we think through other values that consumers share with brands, like sustainability, like diversity and inclusion, this isn't that different. Just look at Patagonia and everything it's done in creating equity value as it plows into sustainability. And I think, as you say, Vivek, for businesses who do privacy right, who do ethical data right, who do AI right, it's not just because you're a good guy. There's cash on the other end of it. There's revenue. There's brand growth.
Tom Chavez: Speaking of cash and brand growth, Evan, you do a lot of these studies, so you're, you're accustomed to different levels of brand and purchase intent response. Can you kind of calibrate the 15.4% here relative to some of the other tactics that you see out there?
Evan Smith: Sure. I mean, we have to be clear about what that finding means, right? This is an experimental environment, and consumers are asked to choose; they're being presented up front with these choices about data privacy. And they've just been asked a whole bunch of questions about data privacy, so they're thinking about it as they're making these choices. But what it suggests to me is that foregrounding this stuff, and we did of course foreground it in this study, foregrounding these choices is, as JJ said, a sales incentive, right? We compared this stuff against discounts and all these other things that companies are used to doing, and used to running an ROI calculus against: okay, if we discount our product by 10%, that costs money on our revenue, but we're seeing some sales improvement out of it. So this is a way to understand ethical data practices in a similar frame. Okay, yes, there might be some costs or internal process sacrifices around, for example, limiting how long you keep customer data, or allowing consumers to revoke access to their data (these were two of the most popular things), or issuing transparency reports to your customers about how you're using their data. Of course there are costs associated with that, just like there are costs with any other sort of incentive you might offer a customer. But there are also clearly upsides in purchase intent. And we tested those traditional incentives in the exact same environment, in the exact same context, side by side with these data practices. And the data practices outperformed the traditional stuff.
So as consumers start to think about this stuff more and more, as it's in the news constantly, as scandals happen around companies that do this the wrong way and put customer data at risk, and as people hear about these things, it will become more and more top of mind. And consumers will look for these more ethical, more self-preservationist (again, from the consumer standpoint) approaches in how they make purchase decisions and in which brands they trust when they're choosing who to buy from or engage with.
Vivek Vaidya: Yeah, I want to come back to one thing that you mentioned earlier, JJ, and you and I have talked about this in other contexts as well. You guys also tested support or opposition across various use cases, right? And I found it fascinating that for use cases like translation, navigation, and virtual assistants, as you were calling out, JJ, there's overwhelming support. But when it comes to use cases like customer service, investment advice, and self-driving cars, there isn't that much support. In fact, there's overwhelming opposition to the use of AI in these use cases. So Evan, what do you think is behind that difference across use cases?
Evan Smith: I think it has to do with finding the balance between helpfulness, because all the use cases we tested are more or less helpful, and accountability, right? Because there is a trust deficit right now around AI. It's easy to hold translation software accountable. You can cross-check against your Google Translate or your pocket dictionary or your friend who speaks Spanish or Chinese or Russian or whatever you're trying to translate to. It's harder to hold an AI car that T-bones you accountable, right? Do they have insurance? Consumers haven't wrapped their heads around what that looks like yet. Same for things like medical diagnosis, which was another one that had a middling outcome with consumers compared to some of these other things. What tested at the top was translation, autocorrect and spelling, navigation, fraud detection, right? Things where it's almost all upside and there's very little risk of having to hold an AI accountable. Take any scenario, and we've seen this around writing articles and essays: who do you blame if someone asks an AI to write something and it gets something wrong? We've seen high-profile cases where AI-written pieces are wrong. Who's accountable? The person who put in the prompt? The prompt engineer? The developers? How do you hold something like that accountable? Meanwhile, something like translation software or autocorrect is easy to hold accountable in the consumer's mind. That's at least my perspective on what's motivated some of these preferences. Not necessarily how helpful it is, but how easy it is to hold accountable, or to have a backstop against the AI.
Vivek Vaidya: Yeah. JJ, what do you think?
Jonathan Joseph: I go back and forth on this one, Vivek. It's interesting; I know we've talked about it a lot with Evan. There's something to the downstream effect of the result that AI gave you, and some kind of innate understanding, I think, that consumers have of the risk. So translation: you get a translation wrong, what's the worst that can happen? You get a navigation wrong and, hey, you're five minutes out; you're not even really going to know that you're five minutes out. You're just going to go the way that AI told you to go. But then there's something about customer service and investment advice; I think they're different. Customer service: you're pissed off and you call a helpdesk. You need help, you want help. The last thing you want is to talk to an AI. You're pressing zero, you're getting a customer representative as soon as you can. Investment advice and self-driving cars, I mean, one's touching your money and the other one's potentially touching your health, right? The consequences are huge. And I think people naturally worry about AI making decisions there. Especially, Vivek, we've talked about this, given the way AI works, the result could be different every time. So you ask it a question about investment advice, and you ask it again, and you might not get the same answer. Does that give you confidence?
Vivek Vaidya: Yeah, definitely. I found that breakdown by use case very interesting. One thing that struck me was also that for things like translation and navigation, you don't know the answer, right? That's why you're asking the machine. But with investment advice and with self-driving cars, it's more like: well, I can drive a car better than an AI, right? So I think there's some of that that plays into it as well, in my opinion. But I found that finding quite fascinating. So we're coming up on closing, but before we close out, we have this thing we do, which is a completely unpaid-for promotion. And we're gonna do something different over here. We're gonna give you guys some prompts and ask you to choose one or the other. So I'll start. JJ, should we leave AI regulation to big tech or to DC?
Jonathan Joseph: I would lean towards DC on that one. What do you guys think? I mean, only because we've seen data regulation being left to big tech, and we kind of all saw how that worked out. So I think I want to give DC a try on this one.
Tom Chavez: I think you've got major fox-guarding-the-henhouse issues if you leave it to big tech. Evan, you've worked on a number of campaigns. I don't know if we adequately underscored that at the beginning, but thank you again for all of the help with this survey. You've worked on polls; you're a professional pollster, I don't know if that's the right way to characterize it. You've been on lots of campaigns. What's the favorite one you've worked on?
Evan Smith: I would say Congressman Pat Ryan, Congressman from upstate New York. Terrific member of Congress. Very bright future. Very excited to see him win re-election and continue on. And outside of electoral campaigns, I'm very proud of some of the work we've done around closing 13th Amendment loopholes around unpaid or low-paid labor in prisons.
Tom Chavez: Wonderful. I got to ask though, as a, as a segue, most chaotic campaign? Come on.
Evan Smith: I was, I was the pollster for the Andrew Yang 2021 New York City mayoral campaign.
Tom Chavez: Oh!
Vivek Vaidya: Wow.
Evan Smith: You, you and some of your listeners may, may have closely followed some of the chaos surrounding
Tom Chavez: that had to be sparty.
Evan Smith: It's hard to top it. Hard to top it
Tom Chavez: Yeah. For both of you, who's your, who's your favorite politico? Who's the most intriguing political figure you're, you're looking at right now in, in US?
Jonathan Joseph: Oh, geez. That's probably a better question for Evan. I don't know if I want to weigh in on that one, Tom, but you know, I'm a capitalist in outward appearance, but actually I'm quite a socialist on the inside, so I feel the Bern from time to time. I like what he was trying to do.
Tom Chavez: How about you, Evan?
Evan Smith: I'm quite interested in the political future of Gretchen Whitmer. That's not a particularly revolutionary answer, but she's someone who's gotten the electoral and political mechanics quite right in a state where it's tough to do. She's gotten the governance mechanics right, and she runs a really tight ship within her state party. It's very frustrating when politicians win elections and do nothing. She manages to win elections and then do something. So I appreciate that about her, and I'm curious to see where she goes. She's not a client or anything. I'm just excited about...
Tom Chavez: 100%. She is super intriguing to me. It's amazing what she's achieved in Michigan under really fraught circumstances, but, but getting results, right. Even amidst the chaos there. Yeah. She's she's remarkable. She's got a big, interesting future ahead of her.
Vivek Vaidya: Evan, what's your, uh, best Indian dish?
Evan Smith: I don't know if I possess the qualifications to really... I mean, the easy answer: an onion chili uttapam from Saravana Bhavan. Is that a fair answer?
Vivek Vaidya: Wow, that's very specific. That was not what I was expecting, but an onion chili uttapam from Saravana Bhavan, wow. That's...
Tom Chavez: Damn, you nailed, that's like a 10 from the Russian judge, Evan, because, uh, Vivek over here is a serious, serious chef. Top notch Indian cuisine.
Evan Smith: Okay, well, I'm glad I passed the,
Vivek Vaidya: I can't make uttapams, but I, I love, I love to eat them.
Tom Chavez: Well, the thing you got to love about Indian food is it only takes about 10, 15 minutes to whip it up.
Vivek Vaidya: Yes. It does actually just take 10, 15 minutes to whip up an Uttapam. It's south Indian food, Tom.
Tom Chavez: I knew that. I was just testing.
Vivek Vaidya: I know. I know. I know. I know. Is there anything about Indian culture, food, etc. That you don't know?
Tom Chavez: If you have any, any questions, I'm here for you.
Vivek Vaidya: Yeah. Okay.
Tom Chavez: Always.
Vivek Vaidya: Okay.
Tom Chavez: All right, everybody. That is going to be our wrapping up point before Vivek and I launch into a bunch of extra unnecessary tomfoolery. Let's call it there. JJ, Evan, really such a pleasure to have you both with us today in the Closed Session.
Vivek Vaidya: Thank you so much guys.
Evan Smith: Thank you.
Jonathan Joseph: Thank you.
Vivek Vaidya: All right, everyone. Thank you for listening in and don't forget to sign up for our newsletter at superset.com. We'll see you next time.
Tom Chavez: See you soon.