Engelberg Center Live!

The Equality Machine FUNTIME BOOK PARTY

Episode Summary

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law & Policy at NYU Law. Today's episode is a FUNTIME BOOK PARTY presentation by Professor Orly Lobel. Professor Lobel discusses her new book The Equality Machine. The discussion is led by Professor Jeanne Fromer. The episode was recorded on March 23, 2023.

Episode Transcription

Announcer  0:00  

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. Today's episode is a FUNTIME BOOK PARTY presentation by Professor Orly Lobel. Professor Lobel discusses her new book The Equality Machine. The discussion is led by Professor Jeanne Fromer. The episode was recorded on March 23, 2023.


Jeanne Fromer  0:29  

Hi, welcome, everyone. Thank you for coming out on this rainy spring day. So I'm gonna give a brief introduction to an amazing, amazing woman, scholar, person, but I want to keep it brief so we get to hear about her fantastic book. We're delighted and lucky to have Orly Lobel with us today. Orly holds every distinguished title from her home university, the University of San Diego. She's the Warren Distinguished Professor of Law. She's a university professor, big deal. And she directs the Center for Employment and Labor Policy. I've known Orly for a long time, and I feel like Orly has such a constellation of talents that are going to be reflected in what we hear about today. She has intellectual curiosity, creativity, drive, her finger on the pulse of things before anyone realizes that there's something going on. And so today, we're lucky to have her talk about her latest book, The Equality Machine. She's a wonderful writer, and you should definitely be reading this book afterwards if you haven't already. She's written two other fantastic books, and non-lawyers will love this one as well; my husband saw it sitting at home, picked it up, and couldn't put it down. One of them is You Don't Own Me, which is about Barbie and Bratz and some litigation that happened. But anyway, without further ado, I'm gonna turn it over to Orly to talk about The Equality Machine.


Orly Lobel  2:20  

That was really, really moving, Jeanne, because coming from you, such a wonderful, meaningful introduction is especially moving for me, because I see you as a role model and an inspiration, and I learn from you every single time, from your research and mentorship and network. So I was telling Jeanne and Michael (Michael, thank you, thank you for having me) that even, you know, on a rainy day, I will always come to New York City, which I love, and specifically to NYU, which I very, very much love; I'll put on my purple dress and show up and, you know, stay here as long as you'll have me. I'm particularly excited to talk about this book, because I see it as both a very personal book and a universal book, a book that should be personal to all of us, and a book that was written to be the beginning of a conversation and not the end. And as we all know, even since the book came out, it's not like nothing has been happening on the AI front and technology. So it's really been so rewarding to have these exchanges, and, you know, the smart readers we have. I see Tomer, who co-authored one of the essays that came out last week; there was a symposium in the Yale Journal on Regulation continuing the conversation, and I learn from each of these exchanges. I said it's a personal book, and I start with one of my personal stories of how we are a family that has experienced quite recently how life-saving, life-changing technology, and particularly machine learning, data, biometrics, and really kind of autonomous decision making by machines, can be just what any family dreams of. My middle daughter has type one diabetes.
And just two years ago, the FDA approved for the first time a closed circuit of autonomous decision making with data that's continuously taken from her body on her CGM, her continuous glucose monitor; it then makes decisions with the insulin pump that's separately attached to her body. So two different machines that are collecting data, improving, learning, and being much more accurate and quick than anything that we have been able to do. And when I set out to think about how we should think about technology, or, as I said here, what we talk about when we talk about AI, how we think about technology that's changing every aspect of our lives, which is why I think it's so personal for all of us, I really wanted to look at all of these different sectors: the workplace, as Jeanne said, employment and labor law being one of my primary research fields, but also schools, education, care, health, relationships in the family, and even intimate love and sex. And it's funny, you know, Tel Aviv University did an event for the book, and there was a big panel of speakers who talked about different aspects of the book, how they see it, and how it relates to their research. And when I came up to thank everybody, I said that, yes, this was really a wonderful dream team of commentators, but maybe because my parents were there in the audience, nobody talked about the sex part. My parents are not here now, so you can definitely ask me about sex robots; there's a whole chapter about that. I mean, this is important, because of the way that I approach each one of these areas and sectors of our lives, where I wanted to come with fresh eyes.
And I wanted to tell something about what is being called in the reviews a counter-narrative, contrarian, and I find it a little funny to be contrarian in 2023 by actually wanting to think about the costs and benefits, the potential and risks, the opportunities and the problems, and not just about what I had seen, which had really been a big motivation for writing this book: a lot of negative reporting, really kind of a disproportionate emphasis on AI as wrong, tech as dangerous, risky, harmful, and disrupting our lives in scary ways. And part of the investigation was, where does that come from? I saw it as a cycle, a trajectory, where there was a lot of optimism, and then, and I want to be clear, I never in the book deny that there are risks, and there are problems, and there have been wrongs. But I saw a lot of conflation between our techlash, which was initially associated with big tech, with very concentrated markets and specific corporate behavior that was problematic, and the technology itself, the indictments of what the technology is, whether we need it or not. And we've had that both in the scholarly writings and in the public imagination, and then also translated into policy, and I'll talk about that too. We've had this wave capturing our imagination, even just in the titles of how we think about AI and algorithmic decision making. So just the titles: Algorithms of Oppression; Weapons of Math Destruction; Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech; Race After Technology, with the subtitle The New Jim Code; Automating Inequality; and then kind of the overarching Surveillance Capitalism.
So you see all of this specific emphasis that there's exclusion and inequality and bias embedded in these new digital technologies that have AI, which is a very broad set of capabilities to begin with, but also the overarching claim that any extraction of data is going to be harmful, and we should be really afraid of it. So, you know, one response, and I think this is actually happening, and it has been really exciting to see, definitely since the book came out, in conversations that I have with both private industry and policymakers, and also in the research field, is this understanding that when we point out the problems, it's just not enough. Because the first thing is that the train has left the station; AI is here to stay. That's actually the nature of technology in general, and kind of the story of human progress, that we do have these cycles of fear. You know, the printing press; Jeanne knows all of these very well, and Mark, and Amy, all these kinds of questions of, is creativity gonna go away when we have new technology? I'm looking at Jessica too: like, we have digital photography, and are we going to not have that profession anymore? So we've been in those stories before, and the technology is not going away. But for me, it was more important than just saying the train has left the station to actually look at how we're talking about the harms that we are talking about right now, and to say, beyond just, oh, we don't have any choice, let's actually think about next steps. I talk a lot about skin in the game, about not critiquing from the outside but actually thinking constructively about how to direct the


tech and the digital data collection and the deployment of AI in good ways. But also, even on its own merits, looking at whether what we have right now is algorithmic bias and automating inequality. When I look through a lot of the reports, government reports, congressional hearings about what is happening with algorithmic decision making, I see, or I saw in the book in a lot of different chapters, and I continue to see, a lot of logical fallacies, our own biases in how we are interpreting what is happening. I have a set of what these fallacies are, and I'll unpack that a little bit more, but the overarching one is that we have this double standard where we demand from AI perfection, when we're not asking, compared to what? And for me, especially as somebody who studies behavioral policy, behavioral economics, who co-authors with social psychologists, with people from the business schools, who does experimental research and thinks about decision making in hiring and firing and pay and a lot of different aspects, it was always really important to ask, compared to what? What's the status quo? Do we have some perfect decision making, or do we have human decision making? And we know so much about the cognitive failures and biases that we have. And what I really wanted to look at with each shift is, are these shifts, are the new decision-making processes, outperforming what we had before, not simply, are they perfect, are they without any inaccuracies?
I think the most intuitive one, the one that really drives me nuts, is when we see these reports about robotaxis and self-driving cars, and we see a report on one accident that happened on the road, and immediately there's the reporting of, oh, they're unsafe, we can't have them. And I wanted to investigate more: why are we not asking about the lifesaving potential of a technology that already drives better than error-prone humans? And it's an empirical question on each one of these fronts. In the health chapter, I talk about mammograms and radiology screenings and introducing bots, and we can talk empirically about whether we are there yet. But I don't think we've developed enough of a language for when we become comfortable taking the human out of the loop and actually insisting on algorithmic decision making, on AI, to do a lot of these good things that we want to achieve, like wellbeing and equality and the scaling of all of that, access to healthcare and to jobs. And I'll just say one more thing about the comparative advantage; in the book, I unpack this much more. So there's the comparative advantage in performance, and there's also a comparative advantage in the trajectory of performance.
That was, I think, already embedded in what I was saying: that we as humans have the capacity to learn, we have the capacity to improve. But as someone who, again, teaches employment discrimination, for example, and researches how corporate culture can change or not change when there's a discriminatory culture, a culture of bias that pervades the workplace, there are a lot of things that are very difficult to change, even if somebody goes through litigation and you point to an executive who has made biased decisions on pay for women, for example. And so the ability to improve an algorithm that's found to be imperfect, versus to improve our own decision-making capacities, is also part of this equation of thinking about comparative advantage. And the third type of comparative advantage that I try to unpack is this question of transparency and auditing and the detection of problems. I don't have that book on the slide with the other story books, but our friend Frank Pasquale has a book called The Black Box Society, and we've really become accustomed to talking about algorithms as black boxes. But again, I think here we have to ask the comparative question: are we humans so transparent in our own decisions? Do we even understand our own mechanisms, what has impacted our perceptions, our preferences, what shapes us? I know Guy also has a book coming out about how technology is affecting our behavior, the addictive qualities of it. But the point here is that we know, and I know this from a series of experimental research that I have, particularly with Yuval Feldman, where we show that people are really bad at understanding their own motivations and their own decision making.
And again, if I use the same example of an executive who made some discriminatory decision in the context of the workplace, oftentimes it really is unconscious, or there are a lot of things in play that are very difficult to unpack, and there's not this paper trail of how that decision has been made. And it's true that with algorithms, too, it's very difficult. This is where the language of the black box comes from: it's very difficult, probably impossible in a lot of cases, to understand, with so much data and so much computational capacity going into finding the patterns and making the decisions. But what we do have is a digital paper trail that can be audited at the output stage. And so some of the book is focusing on how we, in the law world, focus a lot on the inputs, like what are prohibited considerations, factors that go into our decisions, what should be considered. And what I'm arguing is that we really need to rethink that. And in general, and I'll talk about this more, I've always been on the side of wanting to know more on everything, and then regulating what we do with that knowledge, rather than not knowing things. So it's both on the anti-discrimination front, but also more generally; this is my stance on privacy. I'm taking on the privacy field in some ways, you know, data minimization and all that, which has been shaping our policies on both sides of the pond, in Europe as well. I see my two Europeans here.
Okay, so, having said this, one of the things that I want readers, and people who are continuing the conversation, to be very sensitive to, and if there is one message I can bring to everybody with the book, it's that a lot of the same stories are being told. It was part of my research to look, even in the scholarly articles, and you see it's kind of sloppy; the same stories of why algorithms are problematic are rehashed. So one of the main stories, and it was just told by John Oliver, like last week, when he did a segment on AI and how dangerous it is, is the story that Amazon tried to build a hiring app, to automate their hiring processes and take humans out of the picture. This, by the way, is being done by basically every single Fortune 500 company these days; they get


thousands and thousands of resumes all the time, and there's no other way than to automate some of that process. And John Oliver, and everybody else, reports that what Amazon did, of course, completely failed, because it turned out that the algorithm just told them to hire white men named Jared who play lacrosse, or something like that; I'm sure you've heard this story. And what is not told in this story is that Amazon never deployed this software, exactly because of what I said before: it was able to check whether the algorithm works well; it had all these outputs; it could audit it in advance. And that's really not the gold standard anymore of how you do algorithmic hiring; I have a whole chapter about how there have been so many advances. So in the computer science world, we talk about exploitation algorithms, which just look at the past, at what had been done before, and just replicate that. That's the white-men-named-Jared result. But there are exploration algorithms, which really try to find new skills, undervalued talent that we have not seen before. And the gold standard has become to have two algorithms operating together, one that is creating some results and one that is checking for diversity and balanced results. So there have been a lot of advances, and there's a key argument here that I think is important for policymakers. And this goes to questions that I'm anticipating in misreadings of the book and in subsequent articles I'm writing: if I'm saying, oh, machines can be equality machines, and AI for good is great, they're like, oh, so we don't need regulation, that's what you're saying. And I'm actually saying exactly the opposite.
I'm saying that the story of the techlash has actually prevented us from having the richer type of regulation that we really need: governance, public governance, of these newfound capabilities that we humans have. So, perversely, we've been so focused on creating these safeguards of, like, do not use biometrics, do not collect our data, and we have very little language in the law and policy world about how we create incentives to deploy the better types of algorithms, how we differentiate between them. So that has been important to me. And another story, so I think the twin leading stories that I found were the hiring one and then the facial recognition one. And again, I think everybody is aware of the story, which is very problematic. But think about the timeline: this is like 10 years ago, research coming out of MIT that showed that facial recognition technology at the time was less accurate on women and people of color. And this translated into a documentary called Coded Bias, and I think it very much shaped some of our fears and reluctance to accept facial recognition. But that's a good story for me to differentiate between different fears that we have. Do we fear the technology itself, or the technology at its imperfect stage? In the book, I show that there's a lot of conflation: do we fear the technology because it's too powerful or because it's not powerful enough, because it's perfect or because it's imperfect? And this first problem, of it being imperfect and imperfectly recognizing certain demographics, is actually the much easier problem and the one that is transitional; it's one that really requires more data, more representative data.
And I argue, and I don't think we debate this enough, that there is going to be tension between this kind of stance on privacy and the desire to have more accurate, more powerful, more complete, and more representative algorithms. And we have to really talk about these trade-offs. Throughout the book, I want to be very open about the normative trade-offs that we have. And I say that these are not new problems; in a democratic society we have always had a lot of normative principles and values that we care about, and there are tensions between them, and we have to be very open about that: between safety and privacy, between free speech and equality, between different freedoms and individual liberties and collective goods and distributive justice. And I argue that technology can actually help make a lot of these tensions salient, can give us more language and more ways to, in some ways, quantify them, recognizing that some of these won't be quantifiable. But it makes us, I think, have more frank conversations, and especially actually unpacking, you know, it's called The Equality Machine, but I also recognize that the concept of equality itself is very rich and has some internal tensions. So technology can help us talk about this in better and more open ways. And also it can mitigate, and this is very important to me, mitigate some of these tensions, so you can have both; I always want to look for the win-wins, and there are ways to leverage the technology for that. So, a couple more examples; I want to be quick, so I will not really go through them. But I also think about content moderation in that same way: algorithmic moderation and its combination with human moderation. I'm very critical.
Actually, if you're interested, I have a new article called The Law of AI for Good, where I'm very critical of the Federal Trade Commission report that says that Congress shouldn't do anything on the front of content moderation and using AI for online harms, because the technology is so rudimentary and new and it can be biased. It's really not taking any stance, which is just wrong, because it leads to how we talk about Section 230: should we ban it, should we repeal it, or should we just allow complete freedom, rather than actually giving guidance to all these companies that are already doing so much in practice. So again, that paradox of not having enough regulation because we are focusing so much on the problems with the new technology. Same with health. Anyway, I'll be quick. I'll just say that I actually want to introduce into our conversations new rights, kind of more radical possibilities: not just having the human in the loop as a solution for everything when we fear AI, and I show how that can be really problematic, but actually, at some point, a recognition that we will need a right to having AI in the loop, or the human out of the loop. I can talk about that more. And similarly, I just talked about this, but not just having the language of do not collect, do not use my information, do not store my information, you know, the right to privacy, but actually a right to be included. And specifically here, again, it very much infuriates me when there is this assumption that more surveillance, if you want to use that term, or more data collection, will not only be harmful as a kind of deontological privacy harm in itself, an intrusion upon seclusion, but specifically the assumption that it will harm the more vulnerable. I show in every single chapter, I think, that sometimes that will be right.
But a lot of times, it's also actually false, because collecting more about, you know,


the health of people who are traditionally excluded from clinical trials, collecting more about poor neighborhoods and where we need to invest in infrastructure, including people, creating more digital literacy, making them more connected (Guy and I might have something to talk about here), but I think we have to talk about how being not connected, especially when we now move to the global arena and not just our narrower world here, is at least as significant a problem, in terms of equality and financial independence, as the problems of connectivity, of algorithms creating harms; again, I don't want to minimize that in any way. But we need to talk about that. Okay, last thing: I actually really think that this will become a big field, this question of policy being much more involved in thinking about trust. Not just, is AI trustworthy, but how do we learn to trust the right kind of AI? So I spend quite a bit of time in the later chapters of the book thinking about differences in cultures. I kind of traveled to Japan during COVID (it was remote), but actually I'm going to Japan in a couple of weeks, and it's quite exciting, because I was arguing in the book that the Japanese, for a lot of different reasons that I unpack in the book, have a much more comfortable relationship not only with digital technology, but specifically with embodied AI, in the form of robots and humanoids, and with bringing them into their homes and all that. And now I'm, you know, a G7 representative to the Japan task force on digital governance, reporting to the WTO, and I'm going to be in Tokyo talking about all these things. And they're like, why are the Europeans not getting it? So it's been quite exciting to see all that.
Last thing: the elephant in the room is that we need governance, as I said; we need public-private partnership; we need regulation; but we also need a competitive market. So I wanted to bring back my previous books, which really are about competition, and I have some thoughts about how we continue to think about antitrust and competition in that sense. So, yeah, thank you.


Jeanne Fromer  33:51  

Sit down, of course, take a sip of water. Thank you so much for the really illuminating talk. So to kick things off, and I hope we'll have time for questions from everyone else, let me throw out the AI that everyone's been talking about for weeks on end, which is ChatGPT, because there are so many things related to that, and I'd love to hear how you think about different trade-offs and law and policy interventions in that context. I'll throw out a bunch of things; feel free to respond to the pieces that intrigue you. So first of all, one of the threads I think we're hearing in the context of ChatGPT and other forms of generative AI is that certain groups of workers that didn't really fear so much for their jobs are all of a sudden really nervous, agitated: artists, visual artists, for example, perhaps writers as well. They're worried that you could ask it to create things in their style and it will put them out of work eventually, and it may even discourage people from going into this space. This is a theme we've heard in lots of conferences, as you brought up. And at the same time this is going on, there are a number of reactions, and one of them is technological in nature: there are groups of researchers that have done this in the privacy context already, right, trying to block faces from being recognized, to block images from being read by these models, as a way to protect the artists. So that's one response to this economic, let's say, concern that's going on. And so I'd love your thoughts on how you think about this: is this technological warfare, this arms race, ultimately a good thing? Or is it actually a bad thing that interferes with the technology doing something good?


Orly Lobel  35:53  

Yeah. So these are all tough questions. It's not that I think there's some magic wand saying, yeah, sure, let's just accept all of it and not care about the consequences. There is disruption, for sure, that's happening. And, again, it's exactly like you presented it: there's a lot of good that can come out. I actually think that, with GPT, here in our world in a law school, you know, I think about access to justice and the cost of legal representation. And I really think there is huge potential here for having much more distribution of who gets to look at their documents and draft contracts and be represented. And I do think that some of this is the story of human progress in general; again, Jessica can speak to these questions very much, about people fearing for their jobs and how they will use their creativity when technology is doing creative things. Also, I think the story that I hope stays with us, and I thought about this in the context of law, is that we can at least embrace stripping away the more mundane things from a lot of jobs and focus our time on the things that are more creative. Having said all that, I don't think what I just said is completely right either, because, exactly like you said, I don't think we can any longer say the machine is better at this and humans are better at that; I think machines are taking on creativity and the softer skills and emotional stuff. And so this brings me to a whole other world of my own research: I think this is a moment when we will have to grapple with what work looks like, which we have had to do anyway.
I mean, there's this anomaly that especially developed in the United States, but has kind of spread around the world, of attaching so much to work in terms of your welfare and social benefits. And I get a lot of questions about the gig economy and the contingency of work and what will happen with platform work, and everybody's focusing on Uber, but it's going to happen to everybody: it's going to be the programmers and the architects and the attorneys. There's no denying that there's going to be a lot of dislocation. We need to be much better at re-skilling and anticipating where the jobs will happen, and it's much faster than before. But also, I think it's a moment for more radical thinking about, you know, universal benefits, and thinking about new modes of working that are sustainable. And I know I didn't answer the question that you're most interested in, which is the intellectual property rights one. So there, again, I think with ChatGPT and the related DALL-E and all that, the artistic creations: for me, not giving them access to gobble up all human creativity is, I think, problematic. I think it's great. It's funny, because we're all inspired. I mean, all of us who do this kind of work, we know that's how we stand on the shoulders of giants; we're always inspired. So if ChatGPT is inspired by millions and millions, versus each one of us who has been inspired by just a handful of artists that we learned from, well, I mean, let's give it that. You know, I don't want to say "her" or whatever, but I do want to put them a little bit more on the same standing.
But I think let's give ChatGPT kind of what we can and think about, you know, do we still need the same kinds of protections that we needed before? Let's see what happens.


Jeanne Fromer  40:55  

Yeah, I really think, I mean, it's almost like a mirror aspect of what you're talking about with humans and machines. This is really a moment to reflect on humanity as well as on what machines do, right? And I really like your notion of progress, not perfection, in this context, because there's a lot we don't want to look inwards about and explore about our own humanity. But I'll stop because I want to leave time for questions from other folks here. I could ask more. But do folks out here have any questions they want to ask, or comments? Go ahead, Matias. If you could use the mic so everyone can hear you.


Unknown Speaker  41:38  

Two questions. I totally agree that, of course, we've seen this quite a number of times: the printing press, radio, TV. We've also seen it in the Gallup polls, which for the first time allowed us to really calculate what people really think, and then we were suddenly scared of what they really need and what they really think, in the '20s. But all of us have even seen it twice personally: the internet in the '90s, and now AI, biotech, and so forth. And at least from my personal experience, my personal view on this is that, surprisingly, the internet in the '90s was actually received in quite an enthusiastic way: sharing, having more contacts, having more communication. At the end of the day, it was also a two-sided technology in many ways, but there wasn't that kind of weird skepticism which we see now about AI. And I'm really asking myself why that is, why people are so skeptical towards AI when they welcomed the internet. And this brings me to these mistakes, which quite certainly you discuss in the book at length, and I haven't read it yet, I have to admit. Is it perhaps that AI tends to make weirder mistakes? Human mistakes are just human, and so we tend to understand them, whereas AI makes weirder mistakes. So for example, a traffic crash: if someone at dusk overlooked a truck in the street and then ran into it, we would have understanding. However, if we can then document that the AI saw it but thought it was a road sign, we think: how could you do this? It's amazing that you could see it as a truck at all. So we underestimate the strength of the AI, that it could see it, and then we overestimate what we could have done in actually evaluating it. So is it just that it makes weirder mistakes, which we can't understand? And how would we deal with this? How would we educate people, what would we tell them? Because it's very hard to debias, as we all know. And the second question is on policy.
I couldn't agree more with you that we are too skeptical towards certain of these technologies, in particular in Europe, I don't have to tell you. And biometrics as well as content moderation are obviously excellent examples, because it is about comparative advantage, that's true. Now Europe tries to regulate that human-AI interface, and what comes out is strikingly naive. And so I'm asking myself: is it just that we are still in the skeptical period, and we just have these black-and-white pictures instead of approaching regulation? Or isn't it perhaps that we just don't know how to regulate this yet? Because Europe tries to do it in a more specific way, and it pretty much seems to me regulation before the tech. They're fantasizing about things like red buttons to switch off the AI, which reminds me of all the Kubrick movies. They're fantasizing about the AI explaining its rationale and so forth. So all this strikes me as regulation before the tech. So I'm asking myself: isn't the under-complexity of current regulation approaches rather about knowing too little at the moment, and about this fundamental skepticism?


Orly Lobel  44:58  

Yeah, no, these are all really great questions, and they're actually a lot of the motivation for writing the book and doing the research, those kind of two fronts of, you know, what is it that we misunderstand, and what leads us to misunderstanding. At some point I quote Danny Kahneman, you know, the behavioral social psychologist, where he talks about this actually with vaccines, but in this context of how we think about technology: a lot of times we have this feeling that anything that we do is kind of in our control. And there are a lot of biases that we can unpack, like the status quo bias, and the kind of default or inaction by us, versus if we're actually doing something, like inserting something that's human-made; the human-made versus the natural. We have these kinds of fears. And actually in the book, and I hope you'll see, I go back all the way, I think, to King Solomon, and, you know, a lot of ancient Chinese history: we've always had this ambivalence, the fantasy of the machines, but the fear of, you know, losing control. And so there's a lot of that cultural thing. I think there are ways that we can debias; I'm very optimistic that we can debias. And it goes to your second question about that button that can, you know, stop everything. So a good example that I like to bring is where we all as a society have agreed for years that there's a human out of the loop exactly at the riskiest times, which is exactly the reverse of what the EU's draft AI bill actually says right now, which is: highest risk, let's, you know, not have AI; lowest risk, have it. So we've agreed as a global community that in commercial aviation, in the highest-risk circumstances, when there's bad weather, the pilot has no control over the plane.
But we actually don't make it so salient each time, right? Like, you don't have an announcement: just so you know, I can't land this plane right now, and it's gonna be a robot that's doing it. There's one central computer, from what I understand, one in the world, in Los Angeles. And soon it will be the case that, just like cars, it will happen, I can't tell you exactly what year, but it will happen that we'll have completely autonomous planes. And, you know, they'll be safer. But the EU policy is not just what you said about the red button; it's also the policies, and here in California it's now on the books, and there are bills before Congress right now, of notifying people that decisions are being made by an algorithm, you know, kind of this understanding of transparency and accountability as this moment of notification of the consumer. And I open it up to say there's a tension there: if you're having to tell a patient, the reason I'm giving you this recommendation of this treatment is because the algorithm told me so, and we know from the research that it's better than me, the physician, at calculating that, then it's an empirical question whether that's going to be better or worse in terms of adherence to care and, you know, acceptance, and overriding by the physicians themselves. I go through research on that; I think we need more research, but we already have experimental research that sometimes that's a really bad idea. So I think we have to be very frank about the kinds of policies that we need to get to the goals that we want, and if the goals are both, like, let's be more transparent and let's have better health, and they're in tension with each other, then we have to make those decisions.
You know, I know what decision I would make, but, you know, I don't want to give all the solutions, and so, you know, democratic processes, but we're not even having those conversations.


Jeanne Fromer  49:46  

So we have time for maybe one very quick last question, Harry.


Unknown Speaker  49:51  

Well, it's a quick question; I'm not sure about the answer. So I have, sort of, as I'm listening, and maybe it's in the book, Orly, I honestly don't know: if you want to regulate artificial intelligence, or AI, what would you write down, exactly? Does it have a definition? Or is AI a metaphor that sort of covers a lot of different things? Because you sort of go back and forth talking about algorithms; I finally figured out what an algorithm is. So is AI sort of this general metaphor for, oh my God, the machines are taking over? Or, if I were a legislator, or if I were an antitrust person, and I said, oh, I want to split AI off of Google, what would I do? So that's, yeah, and we don't have any time.


Orly Lobel  50:44  

Yeah, this is so excellent. It's a perfect question to end with, because I think it goes back to the other question of, like, how do we do this better in terms of regulation? First of all, we need so much more expertise in government itself. And, to your question, I actually don't think that it's useful, and I've been critical of this; I actually have a new paper that is critical of how the Biden administration has an AI Bill of Rights, you know, separate from all the legislative bills. There's the AI Bill of Rights, and it talks about it in kind of this bundled way. I actually think that we need regulation for each one of these technologies. Like, you know, what would be the gold standard for hiring algorithms, or for video, you know, emotion detection of candidates? And we actually do need to flesh out, you know, I talk about content generation and facial recognition, and when the FDA approves autonomous decisions in radiology; we need regulation on all of those. And from what I see, I would probably agree that it's not really useful to have just this overarching bundling of everything that is possibly AI, whether it's truly machine learning, like, you know, neural network technology that is making completely autonomous decisions, or not. Because these are different technologies with very different contexts and goals and sectors, I think it's doing probably more harm than good to have this, like what the EU is doing, telling us: this is what AI is, and just be really, really careful.


Jeanne Fromer  52:47  

And on that note, please join me in thanking Orly.


Announcer  52:49  

The Engelberg Center Live! podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law. It is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.