The {Closed} Session

AI and Bias in Hiring with Frida Polli, CEO and co-founder of pymetrics

Episode Summary

We're kicking off the fourth season of the {Closed} Session podcast with a great topic and guest: Frida Polli, CEO and co-founder of pymetrics (recently acquired by Harver), joins us to talk about the critical role that technology, and specifically AI and neuroscience, can play in eliminating bias in hiring and beyond.

Episode Notes

The {Closed} Session - Season 4, Episode 1

Listen to the episode and read the transcript at superset.com

***

Listen to more episodes at www.theclosedsession.com

Twitter: @closedseshpod

Learn more about super{set} at www.superset.com

 

Guest: Frida Polli, CEO and co-founder of pymetrics (recently acquired by Harver)

Twitter: @fridapolli

Instagram: https://www.instagram.com/fridapolli/

LinkedIn: https://www.linkedin.com/in/frida-polli-phd-03a1855

Harver: https://www.linkedin.com/company/harver

Twitter: https://twitter.com/harverhrm

Episode Transcription

Announcer:

Welcome to The Closed Session, How to Get Paid in Silicon Valley, with your hosts, Tom Chavez and Vivek Vaidya.

Vivek Vaidya:

Hello and welcome to season four of The Closed Session podcast, where we talk about company building and entrepreneurship and examine the latest trends and insights in technology and business. In this special episode to start off season four, we are going to be discussing with a special guest, whom I'm going to introduce to you momentarily, the critical role that technology, and specifically AI and neuroscience, can play in eliminating bias in hiring and beyond. Our guest today is Dr. Frida Polli. She's the chief data science officer at Harver. Harver recently acquired her company, Pymetrics, which uses AI and neuroscience to help companies make fair and efficient hiring decisions. Dr. Polli is a leading expert at the intersection of AI and behavioral science, with a PhD in neuroscience and an MBA from Harvard University. We're thrilled to have her here today to share her insights and experiences with us. Welcome, Frida.

Dr. Frida Polli:

Thank you. Thank you so much for that wonderful introduction. Glad to be here.

Vivek Vaidya:

Do you prefer Dr. Frida, Dr. Polli,-

Dr. Frida Polli:

No.

Vivek Vaidya:

... Frida? What do you-

Dr. Frida Polli:

No. Frida's fine. I will only make you call me Dr. Polli if I'm mad at you.

Vivek Vaidya:

Oh, okay. There we go. There we go. My wife also has a PhD in English literature and so far, I haven't had to call her Dr. Gupta.

Dr. Frida Polli:

There you go. Good.

Vivek Vaidya:

So I guess I'm doing something right.

Dr. Frida Polli:

Perfect.

Vivek Vaidya:

As we start, Frida, can you share your journey? What made you start-

Dr. Frida Polli:

Sure.

Vivek Vaidya:

... Pymetrics and then what led to the acquisition by Harver and everything else-

Dr. Frida Polli:

Sure.

Vivek Vaidya:

... that happened in between?

Dr. Frida Polli:

Yeah, happy to. I was a very content academic neuroscientist at Harvard and MIT for about a decade. Really enjoyed the science we were doing. Ultimately realized that it didn't have as much real-world application as I had initially hoped or thought it would, not through any fault of the particular lab I was working in, but just because with cognitive neuroscience, unfortunately, it's just hard to translate the learnings into the clinic. So I became a little bit disillusioned, because I like to do as much as I like to learn, and so I transitioned out of academia through the MBA program at Harvard and really didn't know what kind of entrepreneur I was going to be. I thought I was going to do something in the life sciences. Then as I sat there at HBS, I had a front row seat to recruiting for two years, because that's what MBA students do, and I just saw all of the flaws inherent in the process.

One of the biggest flaws is that people weren't really able to understand who a person really was from what we call a soft-skill perspective. They had their hard skills, what their resume said, but they wanted to understand, is this person a team player or more of an individual contributor? Are they motivated by intrinsic or extrinsic rewards? Are they risk-taking or more risk-averse? There was really no way to understand any of those things. People were resorting to what I call [inaudible 00:03:13] on resumes, as well as coffee chats and recommendations from your section mates, just all these very archaic and not very scalable methods. The proverbial light bulb was, oh, but we know how to measure all these traits in people using these cognitive science techniques. So that's how the idea for Pymetrics came to be.

Then we were off to the races because HBS was a great testing ground for a lot of our early tech. Then we raised outside money in 2013, built the company, had tons of successes, worked with some incredible clients, many of whom saw massive reductions in not only the traditional types of bias that you might think of, gender bias and racial bias, but also socioeconomic bias, which is inherent to a lot of our processes. You only recruit from certain schools. You only recruit from certain employers, and unfortunately, those tend to be less socioeconomically diverse pools. So a lot of elimination of socioeconomic bias as well. And in conjunction with that, very strong gains in retention and employee performance, not to mention obviously efficiency. But I think that at the end of the day, if you can improve retention and employee performance while also greatly improving all types of diversity, that's just a huge win.

So it was incredibly successful. I think the pandemic really was a wrinkle that no one in HR expected, and it was a bit turbulent for many HR companies. So at that point we realized, oh, we're still subscale relative to where we'd like to be. We were fortunate that somebody saw the value in the technology and really wanted to buy it so that they could offer it as part of a broader platform, and that's why the acquisition in August of 2022 happened. But again, it was an incredibly great ride that we had, with incredible outcomes that we saw with this technology. So we're just very excited to continue to grow the platform within a broader company.

Vivek Vaidya:

Yeah. That was going to be my follow-up question, is you are continuing the build-out and growth of-

Dr. Frida Polli:

Absolutely.

Vivek Vaidya:

... the Pymetrics platform within Harver as well.

Dr. Frida Polli:

Absolutely, yeah.

Vivek Vaidya:

That's awesome. That's awesome. Congratulations.

Dr. Frida Polli:

Yeah, and now I'm their chief data science officer, so I get to do a lot more cool data science stuff, again, on a bigger scale, and it's always really exciting to work on new projects. So I couldn't be happier. I think the HR field has changed so much in the 10 years that I was a part of it, and it was just very exciting to see that. I honestly think there's a tremendous amount more growth to come. We might get into this later, but I think some of the challenges that lie ahead of not only HR, but any field that is considering using artificial intelligence or algorithmic decision-making, are how do you deal with the concerns that the public has, that regulators have: how can you really know if something is lacking in bias, how do you test that, how do you protect consumer privacy?

There are just all sorts of data privacy and bias issues that I think this nascent field is really going to have to address in a much more thorough way than it has historically. Those are some of the challenges that we've already seen play out and that I think will continue to play out in the next five to 10 years or so.

Vivek Vaidya:

Yeah. I've had a couple of my own personal experiences with bias in hiring and whatnot. One of the more interesting things for me happened actually in early 2019. 2019 was when a lot of attention started to get paid to bias in hiring, specifically gender bias and racial bias as well. We were trying to hire people, as were so many other companies. I was using a platform that showed candidate profiles. You type in your search, I'm looking for JavaScript, ReactJS, whatever your technical criteria were, and out popped a list of candidates. Some of those candidates had uploaded their photographs. Sometimes you can tell gender and/or socioeconomic status, racial ethnicity, whatever, by the names and whatnot. So there was a checkbox at the top right which said, eliminate bias, so, remove bias.

Dr. Frida Polli:

Geez. Okay.

Vivek Vaidya:

When I checked it, it did not show me any names and it removed people's pictures. Okay, that's progress. Then I said, okay, now let me uncheck it, right?

Dr. Frida Polli:

Right.

Vivek Vaidya:

Because I-

Dr. Frida Polli:

Oh, my God.

Vivek Vaidya:

... have some experience with mapping people's names to their racial identities and whatnot.

Dr. Frida Polli:

Sure, sure.

Vivek Vaidya:

So now, this is where it got interesting: if I picked people at random, I got a biased result.

Dr. Frida Polli:

[inaudible 00:08:27], yeah.

Vivek Vaidya:

I had to look at their names. I had to look at their photographs to actually create diversity in the candidate group.

Dr. Frida Polli:

Correct, yeah.

Vivek Vaidya:

That was for me a light bulb moment. Actually, one of the companies in our super{set} portfolio is a company called Eskalera, which uses diversity, equity, and inclusion principles to measure things like employee engagement. We've had Dane Holmes, who's the CEO of Eskalera, as one of our guests. So I'm curious, how does your technology work? How does the Pymetrics technology work in-

Dr. Frida Polli:

Sure.

Vivek Vaidya:

... one level more detail than you've described so far?

Dr. Frida Polli:

Yeah. Let me just start with your example. I think that bias of all kinds, whether it's gender, socioeconomic, racial, disability bias, really can creep in at any stage in the funnel. I think what you're describing is that the sourcing, because at that point you were sourcing candidates, was quite biased. If you did a random selection, you were more likely to get one gender or one ethnicity, right?

Vivek Vaidya:

Correct.

Dr. Frida Polli:

That is one type of bias. The next type can be in selection. Let's say you had no sourcing bias. Let's say you had equal numbers of men and women, equal numbers of different ethnicities. A product that shows selection bias would be oversampling one gender, oversampling one ethnicity, which unfortunately can happen often for a variety of different reasons. I'll give you some examples. Some of the more old-fashioned cognitive tests, which are still quite in use, meaning they're used by about 50% of companies, are quite racially problematic in the sense that they use questions like, what is a cul-de-sac, as a test of IQ?

Well, I would put forth that asking someone what a cul-de-sac is isn't measuring their IQ, it's measuring, again, their socioeconomic background. Because race and socioeconomic background are so tightly tied, some of these tests really have strong biases against candidates of color such that for certain tests, some that are quite well-known, only three African Americans and four Latinos pass for every 10 Caucasians. That's selection bias. That's saying, look, we're passing through a lot more of this category than that category and that's just an example. The argument that these test providers will make, which is a spurious one, is like, oh, well, but these cognitive tests predict performance so well that they're fine, which by the way, legally is true. So if you have this type of bias, but you can prove that, well, the candidates I picked performed better, they're legally defensible is what it's called.

We think that's a spurious argument because as I just mentioned to you, Pymetrics has had many examples of improving diversity of all kinds and also showing that people perform better. So we think that relying on these older tools that have a tremendous amount of often racial bias is just not fit for the times. We can move beyond that. And again, it's been shown with just pure cognitive tests as well. What we do, and the way we can show and prove basically everything I've just said, is we built a technique; it's an ensemble method. So instead of building an algorithm, we essentially build thousands of algorithms when we're doing the training. This is not rocket science. Anyone can do this really. Then what we'll do is we'll plot these thousands of algorithms across two axes. One is lack of bias, essentially, where a value of one is perfect and zero is what you're trying to avoid, and the other is accuracy, some measure of accuracy or model performance.

What we'll do is we'll just select from that upper right quadrant, where the algorithm is as close to one as possible in terms of gender and race ratios and also as high-performing as possible, and then we'll further refine the algorithm. But that's not that hard to do. The method I just described is quite simple, but very few people are doing this. What they're doing is they're building one algorithm, and if that algorithm shows bias, oh, well, they're not doing a whole lot to fix that. But because it's predictive of performance, they're saying, well, it's defensible. It's like, sure, but that's like saying, "Well, I developed this one drug. Yeah, it has terrible side effects, but it works." It's like, well, why don't you try?

But now with AI, we can test thousands of molecules and pick the drug that has the best performance and the best safety profile. We haven't really transitioned that model of thinking to these types of algorithm-building processes. So it's not hard. It's literally just like testing the safety profile and the performance of a drug. At the same time, I would say the vast majority of companies in the HR space, at least the more traditional ones, have not adopted these methods. I think society is the worse for it.
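
To make the pass-rate arithmetic above concrete, here is a minimal Python sketch of the adverse impact ratio, the standard selection-bias measure behind the EEOC's four-fifths rule of thumb. The 3-in-10 and 4-in-10 pass figures are the ones quoted above; the group labels and totals are otherwise illustrative.

```python
# Adverse impact ratio: each group's pass rate divided by the highest
# group's pass rate. Under the four-fifths rule of thumb, a ratio below
# 0.8 flags potential adverse impact (selection bias).

def pass_rate(passed: int, total: int) -> float:
    return passed / total

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    return group_rate / reference_rate

reference = pass_rate(10, 10)  # 10 Caucasian candidates pass per 10, per the example
for group, passed in [("African American", 3), ("Latino", 4)]:
    ratio = adverse_impact_ratio(pass_rate(passed, 10), reference)
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio = {ratio:.2f} -> {flag}")
```

The ratios come out to 0.30 and 0.40, both far below the 0.8 threshold, which is exactly the selection bias being described.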
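And here is a minimal sketch of the two-axis, ensemble-style selection described above. This is not Pymetrics' actual implementation: the synthetic data, the feature-subset sampling used to generate candidate models, the fairness metric (a ratio of positive-prediction rates where 1.0 is parity), and the quadrant thresholds are all illustrative assumptions.

```python
# Train many candidate models, score each on accuracy and on a fairness
# ratio (1.0 = parity, 0.0 = worst), then select from the models that land
# in the "upper-right quadrant" of the fairness/accuracy plane.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))            # stand-in trait measurements
group = rng.integers(0, 2, size=2000)      # protected attribute (not a model feature)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

def fairness_ratio(pred, g):
    """Ratio of positive-prediction rates between the two groups."""
    r0, r1 = pred[g == 0].mean(), pred[g == 1].mean()
    return 0.0 if max(r0, r1) == 0 else min(r0, r1) / max(r0, r1)

candidates = []
for _ in range(200):                       # "thousands" in practice
    feats = rng.choice(10, size=5, replace=False)  # vary the feature subset
    model = LogisticRegression().fit(X_tr[:, feats], y_tr)
    pred = model.predict(X_te[:, feats])
    candidates.append((fairness_ratio(pred, g_te), (pred == y_te).mean(), feats))

# Keep only the upper-right quadrant, then take the best of those.
quadrant = [c for c in candidates if c[0] >= 0.9 and c[1] >= 0.7]
if quadrant:
    best = max(quadrant, key=lambda c: (c[0], c[1]))
    print(f"{len(quadrant)}/{len(candidates)} candidates in the quadrant; "
          f"best: fairness={best[0]:.2f}, accuracy={best[1]:.2f}")
else:
    print("No candidate cleared both thresholds; relax them or train more models.")
```

The point of the exercise is that fairness and accuracy are scored together at selection time, rather than building one model and defending its bias after the fact.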

Vivek Vaidya:

Yeah. The way you're describing it, it sounds like it's a clever application of algorithms and approaches that are already out there.

Dr. Frida Polli:

100%.

Vivek Vaidya:

It's the assembly that's the novel part that you did at Pymetrics. A lot of-

Dr. Frida Polli:

Yeah. Look, the truth of the matter is I think some of the challenges are less in the algorithms and more in the data.

Vivek Vaidya:

Yeah, I was going to get to that.

Dr. Frida Polli:

Yeah. So basically, what we did at Pymetrics is we developed a novel data source. I was telling you that part of the light bulb moment was realizing you can measure risk tolerance, or you can measure team behavior. You can measure all these things. So that's where our data is novel and proprietary, in the sense that we collected that data. That data is unique to us, and we were very careful in using that data: we knew that it was very linked to things that were important on the job, but that it also didn't have gender and ethnic bias in it, if that makes sense. So we shied away from any data sources that we knew were more problematic. You couldn't do that entirely, but we tried to do that as much as possible.

The challenge with a lot of people using existing data sets is that, let's say you get an existing dataset that has resume data or experience data, whatever, it already has so many proxy variables built in. Men and women have different experiences, and so do different races. So then when you try to use what we call our debiasing technique, this dual optimization method, it's quite true that once you eliminate the "bias," you can often lose the performance of the algorithm. But that's a bad data problem. It's not an algorithm problem.
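
As a companion sketch to the proxy-variable point: one common diagnostic, an illustrative approach rather than necessarily the one used at Pymetrics, is to check whether a classifier can predict the protected attribute from the candidate features. If it can, proxies are baked into the data, and debiasing the downstream model will tend to cost signal. Everything here is synthetic, including the deliberately planted zip_code_income proxy.

```python
# Proxy-variable check: how well do the features predict the protected
# attribute? AUC near 0.5 means they carry little information about it;
# AUC well above 0.5 means proxy variables are baked in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, size=n)                        # protected attribute
trait_scores = rng.normal(size=(n, 4))                        # bias-free trait measures
zip_code_income = protected + rng.normal(scale=0.5, size=n)   # planted proxy

for name, features in [
    ("traits only", trait_scores),
    ("traits + proxy", np.column_stack([trait_scores, zip_code_income])),
]:
    auc = cross_val_score(
        RandomForestClassifier(n_estimators=50, random_state=0),
        features, protected, cv=5, scoring="roc_auc",
    ).mean()
    print(f"{name}: protected-attribute AUC = {auc:.2f}")
```

Expect the "traits only" AUC to sit near 0.5 and the "traits + proxy" AUC to land well above it, which is the "bad data problem" in miniature.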

Vivek Vaidya:

Correct.

Dr. Frida Polli:

It's like you've just been a bit lazy and you haven't really tried to look at some of the patterns in your data and say, what types of data should I be using that will help me solve this problem?

Vivek Vaidya:

100%.

Dr. Frida Polli:

Then you get all these silly comments, in my opinion, that are like, oh, well. Literally, I just saw one on LinkedIn yesterday where it's like, "Oh, well, yeah, you can have an algorithm that's unbiased, but you can also have a random number generator too," meaning basically, it doesn't predict anything.

Vivek Vaidya:

[inaudible 00:15:31], yeah.

Dr. Frida Polli:

Yeah. And you're like, no, that's just because that's been your experience because you've been using bad data and that's been your experience in trying to remove the bias, but that is not what de facto happens if you have a good data set and robust machine learning methods.

Vivek Vaidya:

Yeah, no, this discussion reminds me of a quote that I use often these days, is more data beats better algorithms, but better data beats more data.

Dr. Frida Polli:

Totally, 100%.

Vivek Vaidya:

You see it in this whole data-centric AI movement that's going on these days.

Dr. Frida Polli:

Yeah.

Vivek Vaidya:

100% agree with you that the underlying data that you use and the amount of effort you put in to procure and source that data is what's going to differentiate-

Dr. Frida Polli:

And clean that data. Look at the data and make sure that you don't have massive proxy variable problems. Again, I think unfortunately, it just comes down to, and you can find all sorts of reasons for why this happens, but a lot of times when there's just this push for, oh, grow the business, grow revenue, people are not necessarily very careful in what they're doing. They're just throwing everything at the wall and seeing what works. That's when you get into these problems. Then people sell that tool or sell that algorithm, whatever, and then they get into these situations where they don't like what they're doing, but then they have to come up with these arguments of like, oh, well, if you remove the bias, you're also removing the signal. It's like, well, that wouldn't have been true if you'd done things differently in the design process, but yes, where you are now, that probably is true. So again, it's just, unfortunately, not enough thinking going into what you're doing in the design process.

Vivek Vaidya:

Yeah, and I think now with GenAI and tools like ChatGPT, it's going to shine an even brighter light on these data issues, because the data that you use to train these LLMs and whatnot,-

Dr. Frida Polli:

100%.

Vivek Vaidya:

... and fine tune them also-

Dr. Frida Polli:

100%.

Vivek Vaidya:

... is going to result in differentiation.

Dr. Frida Polli:

100%, yeah. No, for sure. In the process of building Pymetrics, we also were fairly helpful, I think, in passing the first law in the nation, actually, that has any kind of oversight of AI. It goes by the Automated Employment Decision Tool moniker; it's Local Law 144. New York City's Bias Audit Law is probably what people know it by. The reason I mention it, well, it's for another podcast, but it was quite a struggle to get the law implemented. It was a four-year process. However, in this process, I came to meet a lot of AI auditing firms, and my best quote from one of them came recently, when the whole ChatGPT thing started making headlines. This one auditor that we know is like, "I'm going to consider sheep farming because this space is getting so crazy." I just had this image of him off herding his sheep on some mountain where no one could reach him.

Because I think a lot of people that are thinking through these things carefully are quite concerned about ChatGPT. You saw the open letter about ChatGPT and just all of the problems that this very powerful and exciting, no doubt, technology could unleash on the world. So yes,-

Vivek Vaidya:

Actually,-

Dr. Frida Polli:

... back to your point.

Vivek Vaidya:

... since you brought it up,-

Dr. Frida Polli:

Sure.

Vivek Vaidya:

... as a practitioner, what are your views on the open letter?

Dr. Frida Polli:

I agree with it.

Vivek Vaidya:

Yeah?

Dr. Frida Polli:

Honestly, I do. Because, again, maybe you think I'm a negative Nelly or whatever, and maybe this is getting into controversial territory, but there's the whole Timnit Gebru, Margaret Mitchell scandal that happened at Google and all of the flaws that they had pointed out in these large language models, and these are the same ones that we're adopting in ChatGPT. So just from that perspective, we already know that there are issues in these large language models, and that should give us pause. But then we've seen, and there's been reporting on this, there was a great piece by Kevin Roose in the New York Times, I don't know if you saw it, but he basically purposefully had a very long discussion with one-

Vivek Vaidya:

I heard that.

Dr. Frida Polli:

... of these [inaudible 00:19:56], yeah. Exactly. The AI started telling him, "You should leave your wife. You're not happy." Then last week, I think it was reported that some person who was majorly depressed had a chat with the ChatGPT, this was in Belgium. He was concerned about climate change, and the AI basically told him, "Yes. Well, the world would be better off without another human," and the guy-

Vivek Vaidya:

Oh, wow.

Dr. Frida Polli:

... killed himself. Now,-

Vivek Vaidya:

I didn't read that.

Dr. Frida Polli:

... again, we can't be like, well, it was the AI's fault. But at the end of the day, this technology is very powerful, especially on humans that are already susceptible. Kevin Roose is not going to leave his wife because the ChatGPT tells him to, but unfortunately, people who are in vulnerable places are much more susceptible. So just from that perspective, I don't think we have enough guardrails to protect very vulnerable humans. That's one concern I have. Then the whole deep fake issue is another, I think, quagmire that we are not ready for. We already have so many people that believe in QAnon without deep fakes. Can you imagine the craziness that will be unleashed in terms of deep-fake misinformation? So from my perspective, I think it's an incredibly exciting technology, and I don't think it's ready for prime time. That's my perspective. We should have some kind of a moratorium. We should have some kind of whatever. Again, I'm an AI enthusiast in general. I'm a technologist. I'm a technophile, but I do agree with the open letter that more guardrails, I think, are warranted.

Vivek Vaidya:

Yeah. I think people tend to take these binary views. If you say anything in support of a thing like the open letter, then oh, you're a technophobe. You don't believe in AI. You're hindering innovation and all of that. Whereas what you're calling for really is balance.

Dr. Frida Polli:

Right.

Vivek Vaidya:

Right?

Dr. Frida Polli:

Yeah.

Vivek Vaidya:

And you're not saying that technology-

Dr. Frida Polli:

Exactly.

Vivek Vaidya:

... should not be used, but it should be used with guardrails and the right checks and balances.

Dr. Frida Polli:

Correct.

Vivek Vaidya:

My wife and I are big espionage and mystery buffs, those kinds of things.

Dr. Frida Polli:

Yes.

Vivek Vaidya:

There's a show on NBCUniversal on Peacock called The Capture.

Dr. Frida Polli:

Yeah. Okay.

Vivek Vaidya:

To your point about deep fakes, the second season of The Capture has just this mind-blowing concept where they show you what's possible, and it's scary.

Dr. Frida Polli:

Yeah, it's very scary. And again, we're going all over the place, but just in the last week, there was the news of the leaked classified documents, people wondering if they're true. The whole point is we are in such an era of mistrust and disinformation, and I think unfortunately, this type of technology will only put that on steroids. We already have way too much of that, and that's why I think we need to be even more cautious, when democracy, one could argue, is certainly being imperiled by some of this disinformation and things like that.

Vivek Vaidya:

What role do you think entrepreneurs like us and startups can play in combating this?

Dr. Frida Polli:

I'll just speak from my own experience.

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

I think the challenge that entrepreneurs have, in my opinion, is that a lot of entrepreneurs want to "do the right thing." I think a lot of us want to design things the right way, put on guardrails. I think the challenge is that the venture model is what it is. The venture model doesn't say grow moderately and don't break things. The venture model says grow fast and break things, or move fast and break things. Again, it's just because, well, that's their model. That's how they make money. It's not that they're bad people, it's just that their model of how they make money is not necessarily always compatible with designing products in a way that is more careful. So I think that's the tension, honestly, just truthfully. I don't have a magic answer as to how we solve for that, but I think that there's an inherent tension: the funding model wanting one thing and the entrepreneur being pushed in that direction without necessarily thinking that's the right way to go.

Vivek Vaidya:

But don't you think that-

Dr. Frida Polli:

That's my perspective.

Vivek Vaidya:

Yeah. One of the things that we say at super{set} is that you can do well by doing good. One of the ways that we think you can thread this needle is by creating what we call ethical tech. We're sponsors of this project called the Ethical Tech Project, and privacy and responsibly gathered data, responsible data practices, responsible data stewardship, I think, are part of the solution, at least from our perspective. How do you think that plays out?

Dr. Frida Polli:

Well, I couldn't agree more, I just don't think that's necessarily the norm. I think what you're describing, you guys have clearly been very careful, thoughtful, and that's amazing. I don't disagree with any of that. I'm just saying that from my experience and the experience of others, I don't think that every single investor that is investing in technology necessarily has that kind of lens. That's all I'm trying to say. So therefore, when the option presents itself to maybe remove brakes, or not necessarily even brakes on growth, but just brakes on doing different things, trying out different products that you might otherwise caution against, I think that the decision isn't always made to put on those brakes. Again, it's not just tech investing. I think it's just in general, right?

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

There is this conflict and I think it can be carefully threaded. We obviously did that at Pymetrics, but I think it's not always the easiest path-

Vivek Vaidya:

Sure.

Dr. Frida Polli:

... and I think that it can present challenges.

Vivek Vaidya:

Like you, we're also optimists, right?

Dr. Frida Polli:

Yeah. I'm a total optimist.

Vivek Vaidya:

Yeah. No, I think-

Dr. Frida Polli:

Again, you guys have that very specific belief system, design philosophy, whatever you want to call it. I just don't think that every single person that builds tech or designs tech or invests in tech has that for better or worse, whatever reason that is, and therefore, I don't think everyone benefits from that. Does that make sense?

Vivek Vaidya:

Yeah. That's true. That's true. So then one question, a related one: do you see the regulatory landscape changing in the coming years as this unfolds?

Dr. Frida Polli:

I do.

Vivek Vaidya:

Yeah. In what ways?

Dr. Frida Polli:

Yes, I do. Well, it's already changing. I talked to you about New York City's Bias Audit Law. There are three or four other bills underway in California, New Jersey, and DC, and that's just the US. Then you have the EU AI Act, which most people think is going to go into effect either this year or next year, which will be much more comprehensive and sweeping. Canada is either passing or has passed some laws. It's just happening everywhere. I think part of it is, I don't want to say it's a reaction, but it is a reaction to... what we saw, what I saw in the 10 years of building Pymetrics, is that on the one hand, I was shouting from the rooftops, AI is amazing, it can debias human processes, I was the biggest advocate, and I was constantly being asked, well, prove it.

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

Honestly, there isn't a way to prove it unless there are some standards. You can't prove something, because if I prove it this way and you prove it that way and nobody has to adhere to any kind of standards, then it's like, oh, it's my word against yours. So people just become very skeptical, remain skeptical.

Vivek Vaidya:

Right.

Dr. Frida Polli:

I think it's just like climate change: there has to be some kind of reporting standard structure in order for folks to gain confidence in these systems. That's my long-held belief, that there has to be some slight level of oversight. It doesn't mean you have to start talking about making things legal or illegal. I think we start with mandated reporting, which is the lightest-touch oversight. You start with that just to even get a sense of the ground truths. Because so many conversations I would have with people or employers or journalists or whatever would be like, well, I'd say, "Oh, well, Pymetrics is less biased." "Well, how do you know?" And I'm like, "Well, how do you know that the process you're describing is unbiased when you don't even know what the ground truth is?" So if you don't even know what the ground truth is, it's hard to then say something is better or worse.

Just as a historical precedent, before we started having pollution standards in the US, they literally did a 10-year study of air quality, and that's a little bit like what we need, I think, in the algorithmic space, because we have zero idea as to how these things perform, what bias is in them. So we need to at least have the mechanism in place where we can interrogate, investigate, and report, so that we can gain a sense for what is ground truth. Once we know what ground truth is, then we can start talking about, okay, does this need to be changed? Is this okay? Because right now we're literally just flying this massive airliner blind.

Vivek Vaidya:

Right. Yeah.

Dr. Frida Polli:

So that's my perspective: we just need to understand ground truth, and the best way to do that is to just collect data and get some reporting on it.

Vivek Vaidya:

Yeah. It's fascinating. As you were saying, giving the example of air pollution, there are so many of these examples and precedents you can find in other industries where these-

Dr. Frida Polli:

Completely.

Vivek Vaidya:

... similar problems have been studied and approaches have been defined. It's time to apply those same frameworks to AI at large.

Dr. Frida Polli:

Totally. But I think the problem is, and again, without naming names, I think that we saw this in New York. Big tech companies came in and lobbied extensively, and I mean extensively, to not have this bill pass. And literally, this bill just asked for reporting. We're not making things illegal. We're literally just saying, "Hey, report on the levels of gender and ethnic bias that are present in your algorithms," and there was so much lobbying and so much pushback. We saw this happening with a bird's-eye view. It's just very unfortunate. The whole mantra is like, oh, we're self-regulating. We're fine. We'll do it ourselves, which, by the way, has been the mantra of any industry that has known that some of its products have had potentially some negative externalities and didn't want any kind of oversight.

Again, it's a very tried-and-true playbook. I think it's just very unfortunate because, as a technophile and as somebody that loves technology and wants technology to flourish, I think this is setting up for a battle: the fear and the negative viewpoints of technology will just continue to increase when you have folks on the other side resisting any effort at light oversight and understanding of what their technology does. So I think it's a poor dynamic to be setting up.

Vivek Vaidya:

Yeah, and I think it just exacerbates the are you with us or against us kind of dynamic when-

Dr. Frida Polli:

Completely, yeah.

Vivek Vaidya:

... there are so many people just advocating for there to be these standards and these fair practices and whatnot, whereas the other parties are saying, "No, not going to do it." Then if you challenge them, then you're just against it all, right?

Dr. Frida Polli:

Completely. Yeah, you're anti-business. You're anti-capitalism. You're all sorts of things.

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

Literally, we would have these accusations thrown at us, and I'm like I've been running a for-profit enterprise for the last-

Vivek Vaidya:

Right, exactly.

Dr. Frida Polli:

... 10 years. None of those things are true. I'm not some picketing person with a sign who's not interested in making money. That's not true at all. There have been lots of examples of this. There was the Fair Credit Reporting Act in the 1970s or '80s, I can't remember, that basically decided who was on the hook if credit cards were used in a fraudulent manner. That change allowed the credit card industry to flourish tremendously, because prior to that, everyone was like, "Well, I don't know. Maybe I'm on the hook. Maybe the bank's on the hook." Who knows, right?

Vivek Vaidya:

Right.

Dr. Frida Polli:

Once that was put to bed, then the industry was able to flourish, and that's a perfect example where regulation actually helped the industry flourish. There are many examples like that, and I think it's just very unfortunate, and sorry, now we're getting on a totally different soapbox, but that any type of regulation is viewed as anti-business because-

Vivek Vaidya:

Support.

Dr. Frida Polli:

... that's not what the facts point to. Again, it's back to these black and white ways of thinking that are unfortunate.

Vivek Vaidya:

Looking forward and just trying to look optimistically now, as you think about the future, and you think about AI, and look, as you were saying, these technologies are going to flourish,-

Dr. Frida Polli:

100%. As they should.

Vivek Vaidya:

... yeah, and they are going to impact how we change work,-

Dr. Frida Polli:

Yeah, 100%.

Vivek Vaidya:

... what skills do you think will be in high demand as AI becomes more and more prevalent? How can individuals, entrepreneurs, employees prepare themselves for these changes that AI is going to bring?

Dr. Frida Polli:

Honestly, I think the biggest skill, and I wouldn't call it a skill, is adaptability. Because I think life is literally constantly, constantly changing and I think the ability to adapt is the most critical "skill" that we can have.

Vivek Vaidya:

Yeah. People are predicting that, oh, all jobs are going to go away and all of that. That's like two-

Dr. Frida Polli:

No.

Vivek Vaidya:

I don't know.

Dr. Frida Polli:

No.

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

We've been saying that for, I don't know, honestly, it feels like almost 10 years, and look at where we are now. There aren't enough workers to do the jobs that are needed. We're in a situation where, quite frankly, if some machines were to take some of those jobs, that would be a benefit-

Vivek Vaidya:

Benefit, yeah.

Dr. Frida Polli:

... because we cannot find people to do those jobs. I think what's unfortunate, and I was at Davos a couple of years ago and saw Eric [inaudible 00:34:03] speak to this, which I thought was very telling, is that basically, AI and machine learning, the technologies are there. What is lacking right now is the will to implement some of those, for fear of some of these nightmare scenarios coming to be, all jobs going away. But unfortunately, the losses in productivity that are occurring because we're not fully adopting these new technologies are not trivial. I think there's just a lot of resistance. There's a lot of gum in the wheels, whatever the expression is, that is hindering this progress. Again, I'm just going to go back to the fact that I think a lot of this is stemming from fear of the technology, concerns around how it could be used poorly.

The more technologists continue to want to hide what they're doing and obfuscate, the less we're going to be able to work through this resistance that exists in the public and the media. So it just feeds on itself. The public is scared, but then the tech companies don't want to say anything, and so we're at these loggerheads where unfortunately, we're not able to use this technology in the way that we would like. The reason that I became interested in regulation is because I want the technology to expand, not because I want it to contract. Does that make sense?

Vivek Vaidya:

Yeah.

Dr. Frida Polli:

I don't think we're going to be able to expand the technology as well as we would like to without-

Vivek Vaidya:

Some form of regulation.

Dr. Frida Polli:

... some meaningful oversight, yeah,-

Vivek Vaidya:

Yeah. The final question for you-

Dr. Frida Polli:

... or oversight.

Vivek Vaidya:

Yeah. Final question for you. What advice would you give a budding entrepreneur right now?

Dr. Frida Polli:

Honestly, I think, no matter the ups and downs of entrepreneurship, it's still the best job you can have. My husband looks at me like I'm crazy because he's like, "Oh, my God. That was so much work. You sure you want to do that again?" I was like, "I'm ready." I don't have the next idea I want to pursue and I have a job right now, but yeah, I think it's the best way to be employed, honestly. I would just say go forth and prosper. Because I think entrepreneurs are the kind of crazy, optimistic dreamers that we absolutely need more of in the world. And yes, they're going to fuck things up, excuse my French, sometimes,-

Vivek Vaidya:

It's good.

Dr. Frida Polli:

... but for the most part, they're really going to... I think they honestly, many times, exhibit the best of what humanity has to offer, which is optimism, trying to solve a problem, doing the impossible against all odds. I don't know. I think it's a great way to earn a living.

Vivek Vaidya:

Yeah. There you go. You heard it, folks. It's the best job you could have and go forth and-

Dr. Frida Polli:

It is.

Vivek Vaidya:

... prosper.

Dr. Frida Polli:

Go forth and prosper.

Vivek Vaidya:

Sage advice from a fellow entrepreneur. Frida, thank you so much for joining me here today on the first episode of season four of The Closed Session.

Dr. Frida Polli:

Absolutely. I'm so excited to be your first guest, so thank you for having me.

Vivek Vaidya:

That's a wrap for this episode and we'll see you soon.