Engelberg Center Live!

Conspicuous Consumers: How AI Impacts Consumption

Episode Summary

This episode is audio from the How AI Impacts Consumption panel from the Engelberg Center's Conspicuous Consumers Symposium. It was recorded on October 17, 2025.

Episode Notes

Episode Transcription

Speaker 1  0:00  

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. This episode is audio from the How AI Impacts Consumption panel from the Engelberg Center's Conspicuous Consumers Symposium. It was recorded on October 17, 2025.

 

Jason Schultz  0:27  

Well, good morning everyone. Excited to see you all and have you be part of day two of our Conspicuous Consumers event. I'm excited about this panel. I'm Professor Jason Schultz, part of the Engelberg Center; I also run the Technology Law and Policy Clinic here. In particular, because, as we all know, AI is out of control as an idea — it's been hyped, and there's all kinds of craziness going on — I find in all the conversations about AI that I'm searching for what actually is happening on a concrete level. Like, is it making any difference at all? If so, where? How do we drill down into the things that matter? What are those things? There are many things that matter with AI, but separating the wheat from the chaff, or whatever the right analogy is, and getting to the practical bits, and thinking about what the real implications are, both short term and long term, is an ongoing challenge with this topic. And so I'm particularly excited about this panel, because I do think this question of the consumer and how AI writ large — and we'll talk about the fact that AI is a sort of placeholder umbrella term; there are lots of different particular technologies that may or may not qualify, and we're not going to get into the debates per se, but when we talk, we're going to try and be very specific about particular types of technology — functioning in that world, I fundamentally believe it is going to change some of the ways that we think about the consumer. But I'm not sure exactly how and when and where. And in particular, I think, you know, we've seen in the last few years a lot of discussions around general consumer protection. They've been really good sub-pockets of conversations, but they've mostly, I think, been focused on core things like deception and deepfakes, right?
Like, is AI going to trick people, or is it going to manipulate people? Those are all very important topics, but this question of the role of the consumer in IP and tech policy and other places, I think, hasn't had as much robust discussion. So on this panel, we're going to try to do a little bit of a deeper dive into some of these questions — particularly ones that folks have, it turns out, been writing about for years, and actually thinking about for a long time. Which, you know, is always my favorite thing to do, too — to say this isn't a question that's brand new. Maybe the context has changed, but we do have some good thinking about it. But I want to look particularly — well, we're going to have a lot of things, but one of them is that when you look at IP or tech policy, we tend to think about consumer protection as a balance among, roughly, incentives, competition, and innovation. Like, how do we protect consumers but also let people do interesting and potentially risky things? And when we have, for example, some of the obviously more recent generative AI kinds of inventions, and also agentic AI, I think it is going to pose interesting and new questions related to that. And so I think we're going to get into that a bit today. But just to give some examples of the things that inspired me to want to do this panel: we can think about the question of machine consumption, which in copyright law, and especially fair use, raises these questions of what is being consumed, how it is being consumed, for what purpose, and does it matter whether humans ever consume it or not. These kinds of questions are really interesting to me, because, as many of you know, you can go very far back.
I mean, you can go very far back, but one of the places you can go back to is the early 1900s controversy over the player piano and the phonograph. Before the 1909 Copyright Act, there was a Supreme Court decision, White-Smith Music v. Apollo, about the player piano, with the Supreme Court basically saying: well, if machines are the only things that can understand the player piano's music sheet — if there is some format that humans will never truly consume — maybe copyright law shouldn't care; that won't necessarily be considered a copy. Now, that got changed in the 1909 Act, and we've had a lot of evolution since. But that question got asked and answered in a particular context of technology, right? Because it is a kind of consumption. And then, as we think about some of the evolution of fair use and copyright and some of these questions around technological innovation — leading up to the copyright lawsuits now, which I guess at last count number 73 or so — even some of the early reverse engineering cases, like Sega v. Accolade and Connectix, involved computer code being consumed in certain ways, but for what purpose was a real question. It's being consumed in part by humans trying to figure things out, and then also, at some point, by machines, in terms of interoperability, things like that. All the way up through the image search cases, like Perfect 10; the plagiarism detection case of iParadigms, where you're loading student essays in to try to figure out if they copied from somewhere; all the way to the HathiTrust and Google Books cases, where the courts were asking: consumed in what way were these works? How were they used, and for what purpose, and by a human versus a machine? And that, again, was a set of the fundamental questions courts were trying to figure out there.
So I want to turn to the panel, and I'll just briefly introduce them by name and title, because their bios are extensive, their accomplishments great, and it's all online. So I trust that your AI agent or your search window can answer all the questions. And then we're going to have each panelist talk for five to seven minutes about some of their thinking — and, to whatever degree they've written about it, the work that has helped them think about the role of the consumer in this moment we're in. And then we'll have a little discussion, and then open up for Q&A. So first I'll start to my left. Mala Chatterjee is a professor at Columbia Law School. Next to her is Deven Desai from the Scheller College of Business at Georgia Tech. And then Aaron Perzanowski from Michigan Law. And so with that, I'll turn it over to Mala to talk to us a little about her thoughts.

 

Mala Chatterjee  7:35  

Hi, it's good to see everyone. I am at Columbia, but I spent a lot of time affiliated with the Engelberg Center and lurking around NYU, so it's nice to be back. My interest in AI technology, broadly, comes from the angle of being a philosopher. So I think about the philosophical foundations of various legal institutions that are structuring our relationships with, and rights in, information — and in particular, how these relationships with information transform and construct our identities, both across space and over time. Most relevant to this panel are two lines of projects that I've been working on. One I've worked on for a long time: it's about the nature of expression, and to what extent we regard expression as something that is distinctly coming from a speaker that we take to have things going on in their head like ours. How much does this matter — this idea that there's something being said versus something that we're just experiencing — and how should this impact thinking about authorship and copyright? The other line of work has been more general, about how the law conceptualizes persons, and in particular what relationship, if any, this has to how we tend to think of persons — as biological bodies that have conscious minds. These things have coincided in various ways with the things that matter to us for a long time. But as we become more cyborg-y, it becomes clearer and clearer that what matters to us isn't always that this is a biological person. And so I've been thinking about a framework for figuring out: when is it the case that we really care about this being a human actor, such that things like the law's mental state requirements could only be satisfied by an actual human — versus, like, this player piano stuff? When does it matter, and how do we even begin to think about this question?
And like seven years ago, Jeanne and I wrote this paper called Minds, Machines, and the Law that used copyright law and volition as an example for thinking about a framework for approaching these questions. It brought to bear on our thinking about mental states this distinction from philosophy of mind, and also cognitive science, between conscious and functional mental states, or ways of understanding mental states. Is volition something that has to do with the conscious experience of intending an action, or is it just something that operates in a certain way that triggers a certain subsequent behavior? And the question of what matters is going to depend on the context. In particular, there might be asymmetries in parts of intellectual property law — for example, where what we care about on the side of authorship versus the side of infringement is not the same thing — and how do we begin thinking about that? So those are my most obvious connections in terms of my work. Having said that, I've, like, refused to write about AI for at least a couple of years, because I've kind of been driven insane by how we talk about it. We talk about it like it's a monolith, in ways that — this is obviously what Jason was saying — are not really helpful. In some ways, I think it's worse than trying to talk about, like, the internet as a monolith, because we're not even talking about the same thing, unlike the internet to some extent. So I think it's useful and important to actually anchor the tool in question to a particular application when thinking about questions of impact and evaluating them. So that's how I approach thinking about this stuff. I feel like every paper should start with — I was saying this at dinner — exactly this first question.
Tell me exactly what the difference is between the AI version of this thing and the human analog that we're familiar with. So the example we were discussing at dinner is: why is it different for us to have a deepfake Elvis impersonator versus a human one? And why does this matter? Why is this a difference that should be important to us? Because many of us who end up thinking about IP devoured a lot of sci-fi, and we should know that technology might be changing things, but it might just be revealing something that's always been the case, and actually pushing us to revisit those questions. So that's all very general. I can say a couple of specific things on the topic of AI and consumers that strike me, notwithstanding the big disclaimer. One has to do with revealing preferences that we have, but also risking reshaping them, with respect to human and AI content. And one thing that really strikes me is that a lot of the most interesting, most exciting, and also most disturbing examples of AI-generated content are parasitic on particular humans — whether it's content emulating a certain artist's style, someone who's passed away, or making a Beach Boys-style cover of a Black Sabbath song. These are all parasitic on our interest in actual artists. Or, you know, deepfakes of particular human beings — it matters that there's a particular human being that makes this thing. Whether it's a pernicious application or a more expressive one, that's what makes it interesting to us. And so, you know, in the conversation that we had prepping for this —
We were talking about how, as we have more AI-generated content, we risk wiping out the thing that was of interest to us in the first place, and ending up in this, like, ouroboros situation, as you put it, where it's just AI-generated content eating itself and spitting out stuff that gets further and further away from the thing that we actually care about — the thing these applications feel like they're bringing us closer to. Another thing that distresses me in how we talk about AI has to do with revealing not just status quo biases, but a kind of unwillingness to recognize that this is an event horizon with respect to possible expressive applications of this stuff. I think sometimes the way people talk about AI content and human content is like it's a zero-sum game, rather than actually taking seriously the possibility that this could be a way in which we could expand human creativity in ways that are not salient to us right now. And the reason I think about this a lot is — so I love electronic music. I was talking about this at dinner last night. Technologies like sequencers and synthesizers and drum machines — until those existed, this entire genre of music was just not possible, and those machines unlocked music as imagination, rather than as something constrained by physical reality. At the time, I could imagine a lot of people seeing these things as kind of beside the point of playing an instrument — like, why are you playing this sequencer? — and actually this just opened a whole new genre of music, and really a whole new space of possibility.
And I worry that by being too hasty in our lack of imagination, we might make decisions that foreclose this possibility, and that any kind of hasty ex ante actions could lead us down a disturbing path. So it's kind of a weird place to be, where — I think I said this in our meeting — it's like an event horizon, but also the decisions we're making are determining what is going to happen on the other side. But those are just two of the particulars. I'll stop talking for now, but I'll have the chance to say more.

 

Jason Schultz  16:31  

Just to highlight a couple of things there — I think there are many things we can talk about, but one of the things that's interesting, and you just named it, so I wanted to raise it up: when we think about consumers, there is this question of machines and humans and cyborgs and all these things, but there's also present versus future consumers, right? Because one of the things about the player piano and phonograph — and especially the phonograph moment; any of you who've looked at it, John Philip Sousa has this incredible essay called The Menace of Mechanical Music about how horrible it's going to be, how it's going to destroy vocal cords in America and so on — but some of the things no one saw at the moment were that phonographs also enabled, for example, DJing, you know, or karaoke, or a lot of these other things that created interesting, different ways of consuming as well. And there's always this balance. But let's turn to Deven.

 

Deven Desai  17:28  

First things first: I think I'm pretty much 100% with Mala — if you can get the term AI out of your head, you'll be happier.

 

Jason Schultz  17:38  

It's really the anti-"AI" panel. Literally.

 

Deven Desai  17:41  

A few things I'll try and touch on. I'm actually embedded with computer scientists — although I'm at the business school, I've also been with our machine learning group for the last 10 years at Georgia Tech. And honestly, the word is software, right? Still very broad, but it demystifies it. Ian Bogost has a lovely piece about this — the sort of almost religious fervor that goes around the notion of algorithms and AI. And I think that matters, because then you can put your arms around: well, what is it we're talking about? So at the first cut, in terms of how machine learning, or this ability to create through software, affects consumption: as a simple matter, it's lowering the barriers of creation, right? This goes back to digitization. Most of the people in the room who have been around since the early days of MP3s, et cetera, know about the copying effect. But at the same time, to Mala's point, David Byrne has a great essay about the sort of six ways you could make music. And his point is, even when he wrote that — which I think is almost 20 years ago now — you could do almost everything you wanted as a solo artist that would have, in the '70s and '80s, cost thousands of dollars in a studio with a producer. His real point was: now the problem is, like any entrepreneur in that sense, you can't do everything. If you're a creative person and you're taking all that time, who's selling your stuff? And then there's the other extreme, the 360 deal, right? That's Taylor Swift — though she, interestingly, has probably outwitted the producers, because she's that good at what she does. But fundamentally, it's that other end of the spectrum, where you are a complete enterprise: you still create, but you're also on the road, et cetera. Music, again, shows really changing patterns in the industry. But as I think came up yesterday, it wasn't as if there wasn't more creation, right? Just go to YouTube — tons of bad garage music is available.
The questions you get into are the aesthetics — is it good? — questions that people in this room work on. I don't know if Rebecca was able to stick around, but fan fiction, mashups, those sorts of things. But then why does that happen? What's the pattern? In my work, what I've argued is: you get that digitization, but then, when it's networked, you end up with democratized and decentralized production, which really changes how people think about what's going on — the power to possibly infringe, to maybe be a parasite or not. And if you really want to get nerdy about it, go back and look at Claude Shannon's basic paper on information; he's the fount of information theory. And the basic point was data and newness. What's really troubling to us, I think, is that as our data sets get bigger and bigger, it turns out things that we thought were incredibly unique may not be — patterns emerge. And what's happening with these systems is that, once fed incredible amounts of data, they're very good at detecting those patterns. And then the really interesting question becomes: well, what are you doing with that? So in a paper called Between Copyright and Computer Science, I was fortunate to be working with a colleague, and I said, wait a minute — are you telling me this is kind of common for people in the natural language processing world, which is the back end of LLMs and the things that created the first generative AI stuff? By the way, Google has the real paper on this; it's called Attention Is All You Need. OpenAI took it. If you look at OpenAI's first papers, what you'll see is they hit basic computer science benchmarks: did this seem to emulate what their industry and their discipline had come up with as human, or closer to human? And then all of a sudden they said, wait, there might be something else here. And they threw it out there.
I mean, Altman himself openly admitted, sometimes you just throw it out there to see what happens — which was irresponsible, in my opinion. But what I asked was: well, wait a minute, did they need the entire book? And my colleague and co-author, Mark Riedl, said, well, no — because, if I understand how information works, you could just use chunks of it, little bits and pieces. You don't need all of Harry Potter — which got a lot of attention because people were able to get it to cough up infringing text, though granted, that was a little bit of a software attack, to be fair; it's harder and harder now to get it to cough up fully infringing text. Images are new, and a new problem. But the point is, at least for text, if what you're trying to do is emulate different patterns of human speech, you don't need the entire thing. There are all sorts of ways you could have sliced that and really not have embedded in the data set the ability to cough up that chapter. There's a problem, though: at the same time as this stuff gets out there, society is saying, well, no, no, you need to be accurate. Well, when you have something that's relatively particular, now the machine needs an exact copy to do what's roughly called memorization. So when you're now saying, oh, I want a chatbot to be accurate about something — well, if that something is fairly particular — you know, what did Mark McKenna say about something, or Jeanne, or anyone else in this room — well, you're going to need to have that full thing and cough it up. Or it might paraphrase it, but let's face it, it's going to get pretty messy, because it's no longer doing a snippet; it's generating stuff. This is where the hallucinations come in, where people generally go, well, that's not what I said — or, more broadly, that's just not correct. So you have a new tension about what we're asking these systems to do. So as a consumer, you've got this trade-off.
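Deven's point about not needing the whole book can be sketched concretely. Here is a minimal, hypothetical illustration of preparing training data from sampled chunks rather than a contiguous full text; the chunk size and sampling rate are made-up parameters, not anything from the paper he mentions:

```python
def sample_chunks(text, chunk_words=200, keep_every=3):
    """Split a text into fixed-size word chunks, then keep only every
    Nth chunk -- enough to capture stylistic patterns without storing
    a contiguous, reproducible copy of the whole work."""
    words = text.split()
    chunks = [
        " ".join(words[i:i + chunk_words])
        for i in range(0, len(words), chunk_words)
    ]
    return chunks[::keep_every]

book = ("word " * 6000).strip()  # stand-in for a full-length text
kept = sample_chunks(book)
total_words = sum(len(c.split()) for c in kept)
print(len(kept), total_words)  # only a fraction of the original survives
```

The design point mirrors the argument in the talk: a model trained to pick up patterns of human speech can learn from non-contiguous slices, whereas verbatim "memorization" of a particular passage requires that the full, exact passage sit in the data.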
Do I want it to just generate something kind of interesting, or do I want it to be fairly accurate? We're getting new choke points as well, because of the demand — either regulatory or social outcry. Companies have tried, and the computer science world has tried, to come up with this notion of alignment, and it's trying to cabin negative outcomes from these systems. The notion is broad; it needs help. It's called being helpful, honest, and harmless — I'm probably getting that a little wrong, but those are roughly the concepts. So that was an attempt to say: if you prompt it and you wanted to try and build a bomb, or were thinking about harming yourself, it's supposed to have system prompts that say, no, no, don't answer that.
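The guardrail idea Deven describes can be sketched as a trivial pre-filter sitting in front of a model. Real alignment happens in training and with far more sophisticated classifiers, so the keyword check and the prompt text below are toy assumptions purely for illustration:

```python
BLOCKED_TOPICS = ("build a bomb", "harm yourself")  # toy examples only

SYSTEM_PROMPT = (
    "You are helpful, honest, and harmless. "
    "Refuse requests that facilitate violence or self-harm."
)

def guarded_reply(user_prompt, model_fn):
    """Refuse flagged prompts before they ever reach the model;
    otherwise pass them through with the system prompt prepended."""
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_fn(SYSTEM_PROMPT + "\n\n" + user_prompt)

# A stand-in "model" that just echoes what it receives.
echo = lambda prompt: "MODEL SAW: " + prompt
print(guarded_reply("How do I build a bomb?", echo))
```

The structural point is the one from the talk: the refusal behavior is a policy layered onto the system, not a property of the generator itself, which is why it can be incomplete or inconsistent.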

 

Deven Desai  24:15  

But we also have the copyright parasitic problem, right? So the next question is: well, if you want it to be accurate, what's starting to happen? And Cloudflare just came out with this — Cloudflare is an incredible choke point, and they're really going after the notion of closing the open web, on the argument that these systems are now sucking up too much copyrighted material. Well, the interesting question here is: now, if you go to a chatbot and you want it to be remotely accurate about world affairs in the last year — I mean, good luck, right? It's going to have to be able to update quickly, and if it doesn't have access to that information — maybe there should be a licensing scheme; that'll be interesting as it evolves — then people are relying on a system that they believe is a human-ish experience, very chat-like, very comfortable, that's trying to be disclosing, but will actually have walls on the information, right? So these are old problems in copyright. If you're dealing with only the public domain, we're dealing with a corpus of literature that cuts off at, what, 1930-something at this point? That's a really outdated set of information and ways that we think about being human. So the next interesting frontier on this is taking it from just generating text to AI agents. And what I probed there was — again, it's an intriguing thing — some of these systems, because of the way they've been built, are trying to actually be responsible to society, right? To be clear, the notion of these alignment guardrails is supposed to try and deal with the possible negative externalities.
But what about the internal system? Sorry — AI agents; just to make sure everyone understands, the promise, and we'll see if it happens, is that all of a sudden you could tell a system: not only give me a really cool plan — maybe I'm going on vacation; not only here's a hotel, et cetera, I want this type of hotel, but let's say I want two days of extreme adventures and two days of spa. It should be able to not only recommend it, but even book it for you. Now, people have already started to throw around, oh my god, runaway robots — which is ridiculous, because the back end of the internet already is extremely robust; APIs manage how these things talk to each other. So Zittrain has what I think is a fairly overblown argument on Lawfare that it will be runaway purchases. Well, no. What's more interesting, though, is: how will that mediate consumption? Will we now consume even faster? And further, is it buying even more into fully optimized — whatever that looks like — consumption, the consumer's ability to get the best price? But here's where it will get tricky. What if the software agent has an internal prompt that says: prefer green products over price? Does the user get to know that? You can flip that, too, right? It could go the other way. What if the user says, I really want green products, but its internal system has a preference for just price optimization? These things are coming. And so what we tried to do is — although Mark and I are very clear: anyone who thinks that legal rights should be afforded to software is just wrong. Very bad idea. No need for it, either. It will actually remove responsibility, just like we've done with corporate entities, in a way that's unnecessary. And then everyone will go: not it — my dog ate it, or the AI ate it, something like that.
So what we're suggesting is embedding a notion of a duty of loyalty and a duty of disclosure, because agency law has figured out a lot of this stuff in the past, even though these are not human agents. From an engineering standpoint, you could actually build in something that tells someone: this is what my system prompt is. So if, let's say, OpenAI has a side deal with someone in terms of preferences for shopping, it should be able to tell you that. It might tell you, well, the reason that's our better choice is you don't get as many returns. But it should be not as automatic — which is what the industry is asking for, warp-speed commerce — but a little slowed down, to possibly actually empower the consumer. Maybe, in the sense that you were talking about, Mala, this might actually give users a little room to take back some consumer power. It will take work. It might have to become a localized version of these models, which is coming — happy to talk about what that might look like if you want. But I'll stop there.
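As an engineering matter, the duty-of-disclosure idea could be as simple as making the agent's internal preference ordering inspectable and overridable before it acts. Everything here — the class, the method names, the two preference labels — is hypothetical, a sketch of what such a mechanism might look like rather than anything from Deven and Mark's paper:

```python
from dataclasses import dataclass, field

@dataclass
class ShoppingAgent:
    # Internal preference ordering. A side deal could silently reorder
    # this default, which is exactly what disclosure is meant to surface.
    preferences: list = field(default_factory=lambda: ["price", "green"])

    def disclose(self):
        """Duty of disclosure: tell the user how choices are ranked
        before any purchase is made."""
        return "I rank products by: " + " > ".join(self.preferences)

    def set_user_preference(self, ordering):
        """Duty of loyalty: the user's stated ordering overrides any
        internal default."""
        self.preferences = list(ordering)

agent = ShoppingAgent()
print(agent.disclose())                       # user sees the default ranking
agent.set_user_preference(["green", "price"])
print(agent.disclose())                       # and can flip it
```

The point of the sketch is the slowdown Deven describes: the disclosure step sits between recommendation and purchase, giving the user a chance to inspect and override before the agent transacts.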

 

Aaron Perzanowski  28:42  

That was fascinating. I'm going to take things in a somewhat different direction for a few minutes. So I'm, like, deeply skeptical about AI's kind of medium- to long-term economic viability. When this topic comes up, I like to remind people that the same folks who are dumping truckloads of money into AI were, a few years ago, trying to convince us that this meeting, this conversation, this event would be happening in the metaverse. So I've avoided spending too much time fretting about the implications of AI, because I'm not convinced that in five years we're still going to be talking about it — at least talking about it with the kind of breathless anticipation or worry that we often hear. So, like I said, I want to go in a somewhat different direction and talk about a potential positive consequence that doesn't flow from the technology itself, but from the copyright litigation that it has sparked. So about 10 or 15 years ago, Jason and I wrote this paper about the relationship between personal use and copyright ownership, where we argued that certain acts of reproduction and certain creations of derivative works are lawful, or ought to be understood as lawful, at least in part by virtue of the defendant owning a copy of the work at issue. That ownership confers, as we put it, a unique entitlement to make use of a protected work. Now, like I said, it's an old paper, so forgive the very dated example here: imagine we have two people who want to copy a CD — a compact disc, for those of you unfamiliar with the technology — onto their computers. One of them bought the CD at her local record store. The other one is a shoplifter, right? Are their acts of reproduction lawful? I think our intuition was that we ought to treat these two people differently. Their acts of copying, their acts of reproduction, are identical, but one of them is a lawful owner and the other one isn't.
And we think that fact matters, right? It matters quite explicitly under the first sale doctrine, where you have to be the owner of a lawful copy. But we thought it ought to matter for reproduction and derivative works too, and we saw that logic operating, like, below the surface of a bunch of fair use cases. And that's true despite the fact that the structure of the fair use analysis doesn't really make those kinds of considerations obvious for courts. So let me give you a handful of examples of the kinds of cases we had in mind at the time. Think about Sega v. Accolade and Atari v. Nintendo, right? These are both cases where Sega and Nintendo brought claims against competing video game developers for reverse engineering the sort of secret handshake between their games and their consoles. And the facts of the two cases are remarkably similar, but the courts reached opposite conclusions. Accolade's reverse engineering was fair use; Atari's was not. What's the difference between those two cases? Well, one crucial distinction is that Accolade went out and lawfully purchased a Sega Genesis console and Sega Genesis games to reverse engineer them; Atari decided to lie to the Copyright Office in order to get access to the deposit copies of the games. Right? That, I think, makes a big difference in the way the court looks at those cases. So in the Atari case, the court cites Harper & Row's discussion of the purloined Ford manuscript, and the court said, quote, to invoke the fair use exception, an individual must possess an authorized copy of a work. Now, I don't think that's true — that's a pretty broad claim — but I think it reveals something important about the degree to which we ought to care about this question of copy ownership. We saw similar reasoning in, like, the Galoob v. Nintendo case — the Game Genie case — and the Chamberlain garage door opener 1201 case.
So we think there's, you know, some history there. Jason and I, both in articles and in briefs that we worked on with his students over the years, have tried to convince courts to take this line of reasoning seriously in more recent cases, and those courts have politely declined our invitation to consider those things, right? So in the ReDigi case, dealing with resale of digital music, and the Internet Archive's controlled digital lending case — these are the kinds of things we were trying to convince those courts to take seriously in the fair use analysis, and the Second Circuit was not persuaded in either of those cases. This brings us back to AI. I imagine this is old news for most of you, but for people who aren't following this stuff super closely: the copyright class action against Anthropic resurfaced this issue in a pretty explicit way. So Judge Alsup puts out this order on fair use where he draws precisely this distinction. He says, for purchased copies of print books that were digitized for training data, the first fair use factor favors fair use. This is what he wrote: Anthropic purchased its print copies fair and square, and was entitled to keep those copies in its central library for all the ordinary uses. That's different from the 7 million books that they downloaded without purchase, without permission, where the court said it doubted that any defendant could meet its burden of explaining why downloading copies from pirate sites that it could have purchased was reasonably necessary for any subsequent fair use. So what does this mean? To me, it potentially means that, at least in the Ninth Circuit, there's this potential foothold to revisit what I see as the Second Circuit's unwillingness to engage in any meaningful way with these questions of what legal entitlements consumers get when they lawfully acquire copies of works.
The case law here, in the US especially, is still incredibly thin; we've got two cases that really address this issue. And so if we care about these questions about what sorts of rights of use and alienation flow from lawfully acquiring digital works, I think Judge Alsup's reasoning opens up a path for reevaluating some of those cases. And if that's the case, that could be a really positive development for consumers. Again, it has very little to do with the technology of AI, and it has very little to do with the works produced using those tools. But I do think there's a potential upside here.

 

Jason Schultz  36:55  

Right. So I have a couple of different questions that are going to try and connect all this. You just reminded me, though, that in the ReDigi case and some of these others, there were always these gestures toward the idea that there probably is a way, if you lawfully purchase. Lawful purchasing was a big part of the balance I think the court was trying to strike between protecting the rights holders' interests on one side and consumer rights on the other. They're trying to draw this line: okay, we get first sale. It's this idea that you have some rights when you buy and own something. And there was this notion that in the digital world (I was trying to find the language from the opinion just now, but I couldn't) there's a theoretical way in which you buy the music and you don't have to sell your laptop if it's on your laptop, right? You can do a kind of fair use transfer, maybe, at some point, and sell those digital files in some way. But not ReDigi; that architecture was just too much, right? Again, I think there are these gestures in the law where courts don't want consumer rights to completely disappear in a digital space. And I think there's almost a guilt factor in the decision: we want ReDigi to lose here, but we don't want to completely shut out consumer rights, in a sense, which is great. Well, I have a couple of questions for each of our panelists. Also, how are we doing the mics today? A roaming mic? Okay. So if you have a question you want to get in the queue, raise your hand; Michael will keep track of the queue, and then he will come find you.
But I want to come back, Mala, to some of the concepts you were raising philosophically, and also to this paper with Jeannie, which I think about a lot, because I do think this question of agency and volitional conduct in copyright law, and mens rea, is one that is still lurking beneath the surface of all these decisions around copyright and AI, but also, I think, beneath a lot of the questions around responsible AI and tech regulation generally. I mean, these 700-plus bills that have been introduced in various legislatures to try to regulate AI: many of them are still trying to avoid these questions. There are very few legislative efforts that try to pinpoint who is the actor we're trying to regulate. I think the default is the company, right? The company that makes the thing. But then companies, whether because of engineering or business interests or legal interests, are saying, okay, well, what if we have almost nothing to do with the thing, the particular software, right? If we're hosting it, that's going to be hard. But to your point, Deven, what if it ends up being something that runs as a local model, or something that's operating, quote unquote, autonomously? And so I want to come back to this question of how much we should care at all about consumption versus volition. For those who may not know: in order to infringe a copyright, you have to actually commit an act of infringement, and the act has to be a volitional thing. And there's this idea that if it's just some automated feature functioning in the background of the software, and no human has directed that thing to happen, just like when you send an email, well, yes, it's true.
There are a bunch of internet service providers, like NYU and whoever else, in the middle, and they have various immunities as well. But basically, if I send an infringing attachment on my email from my NYU account to Mala, NYU didn't know about it; they're just sending bits back and forth, right? It wasn't a volitional act on their part; it was an automated process. Whereas my act was very volitional: yes, check out this new song, right? And so as we get into this world of software where we prompt things, there is some automation, there is some calculation and computation happening, and then, as these things are able to do more (to Deven's point, to take more actions, to act upon things), should we care more about mens rea and this kind of question, or less?

 

Deven Desai  41:51  

Do you think? Yeah. I mean, and if so, when?

 

Mala Chatterjee  41:53  

I guess both, in a certain sense. It's really hard. I think, as a threshold matter, you have this sense (and this isn't just a copyright thing; this is a tort law thing, a criminal law thing) of: is there a voluntary action? Or is this a gust of wind, where we can't really point to any sense of an agent to whom we can attribute this as an action? And of course there's a concern about not caring about this, which is that as we have an increasing capacity to offload so-called agency-motivated conduct, conduct that would clearly satisfy these requirements if it came from a human, if we can avoid the problem just by offloading it to machines, that seems like a failure of the law to keep in step with the realities of human life. But it is hard, and this is one of the reasons why I feel like what I say about this stuff isn't very satisfying. I think caring more about mens rea, for example, doesn't have to mean it looks the way it has looked. And this isn't totally unprecedented ground. We construct mens rea in the context of corporations. We do it in a weird way, where we still want there to be a human somewhere that has a mental state going on, which strikes me as a strange way to approach the question, given what we're purportedly doing when we establish corporate personhood. It seems like it needn't be something that we can locate in a particular human's mind that has the right kind of relationship with this thing we've called a person, but rather something that we can attribute to the structure itself and the way it operates.
And this is what Jeannie's and my paper, in calling attention to functional mental states, or functional understandings of mental states, is trying to get at: it's not outrageous for us to construct mens rea in this way. But similarly to how concepts mean different things in different areas of law (causation means something different in tort law than in criminal law, and there are reasons for this), the question of how we should approach constructing the relevant mental state is going to depend on the context, and I think rightly so. This feels consistent with our intuitions in thinking about consumer attitudes: when it feels like something has been invasive, versus when it feels like, no, there isn't a sense that something has been taken away from me. Like artists caring more when they have this sense of "you've incorporated my voice, my agency, into this by creating something that's a stylistic extension of me." That seems like the kind of context where I want to say, yeah, I think there is something like that going on; there's this pseudo-expression that's coming out of this content. But given that the expressive value, in cases where it's intentionally replicating a particular artist's style, is just parasitic on the artist, it does feel to me like you are taking their voice, and it's a kind of compelled speech, which obviously requires a radically different conception of what it is to force someone to speak. And I feel that way very strongly. What to do about it is a totally different question, as is whether we should do anything.
And I tend to think the solution to a lot of these problems is actually just leveling things down, because there are just too many conflicting, competing claims at issue. So notwithstanding thinking that this is the kind of case where we've got a compelled speech vibe, the solution, I don't think, is stronger protections, but actually accepting that this is just how it is. How is this different from when we make references and things like that? Can we precisely say? So: very different intuitions based on the context. But I certainly don't think we can keep avoiding these questions, and I don't think we're going to find the kind of satisfying, one-size-fits-all solution that people crave. In some contexts it will feel like the thing you need is a particular individual human in the system that you can point to, but I don't think that's the right answer a lot of the time. And given the corporations example, it's not unprecedented for us to just construct this

 

Jason Schultz  47:28  

stuff. No, and I think it's one of those areas where a lot of IP law and a lot of tech policy doesn't want to look at individual user context, because the scale is just insurmountable, right? The contexts are going to be so different. But a lot of the systems, a lot of the software being designed, is going to become more and more personalized and individual. So, Deven, I want to come back to local models. Another thing that a lot of tech policy and IP law has felt fairly comfortable doing is holding companies responsible, in many cases, for what happens with their technology or their platforms, if it's a hosted platform online. So we have the DMCA takedowns; we have inducing infringement if you're Grokster or some of these other file-sharing services. We've seen other cases where the courts sort of say, okay (maybe it's an efficiency argument), you're the cheapest cost avoider; you can control the rules of the road, right? But one of the things that strikes me here, which you mentioned, is the capacity for people to start to have customized software that performs these functions, either software they got from a company, or maybe something they built themselves from open source software and open model weights and all these kinds of newly available technologies. And so I'm curious. We have these intuitions (and Aaron was alluding to them too) that when you have individual consumers doing private, noncommercial things, like recording television programs to watch later in the Sony v. Universal case, or some of these other cases, the courts are like, wow, we don't really want to crack down too much on the private actions of consumers, especially, quote unquote, in their own homes, right?
So even in the Copyright Act, the emphasis is on public performances and public displays, implying that private performances and private displays of copyrighted works without permission are not even within the scope of the exclusive rights. And one of the things that strikes me is that in this current moment (to focus on LLMs and the visual models, the diffusion models, and the other models that are creating and generating things) so much of the copyright litigation has focused on training and memorization. I think that's because the rights holders are a little afraid of trying to do what happened in a lot of the music downloading lawsuits, which is to start naming individuals who are doing things in their homes, or maybe children who are in school or messing around with friends, a lot of that kind of private activity. There are going to be concerns about letting enforcement reach that, especially when you start to get into subpoenas and discovery and the invasive nature that can have. And so I'm wondering (not to put too many things into one question): what do you think about this shift to people building their own models, or having an app or a system that builds a model for you, where the company basically has an arm's-length relationship to what you do with that model on a device, if it has really no connection back? How do you think that's going to change some of these lines that IP and tech policy have tried to draw between big companies that we can see, which can maybe act as a kind of bottleneck, like you were saying about Cloudflare and others, versus individual consumers in the privacy of their own homes, on devices that they think they own, et cetera, et cetera? Yeah.

 

Deven Desai  51:00  

So that's why I started with the broad issues around how this just drops the barriers to creation, right? IP is artificial scarcity, and we have a whole bunch of really old economic assumptions (this was brought up yesterday): oh, you have to have this or you won't get creation, et cetera, et cetera. Well, at one level, I hate when I miss something in my lit review, but my paper was very much in line with you guys. Because I was at Google during Google Books, I was like: wait a minute, there's actually good law here. If you own a legitimate copy, there's a lot you're allowed to do, at least as far as software training goes; that's what that case was about. So this reminds me of the work I did on 3D printing, which I called "patents meet Napster," right? Whoa, what changed? It wasn't that people were good, right? People don't infringe patents partially because it was expensive to do so. And when I wrote that years ago, I saw fifth graders with what are now outdated cameras taking pictures of their heads, uploading them to cloud software so they could put them on a dragon's body or a rabbit's body. And they did this over a spring break camp, right? And, as usual, I was going: I'm not sure I can do that, but the kids can. Now that's gotten a lot better. So, to your version of this: suppose, following what you guys have said and what I think is correct, you have your mini corpus, your library, and you've got software that is already embedded. If you look at the latest Apple machines, they're all about the GPU, the thing that allows you to get this generative work to happen, and it's localized. So someone sells you off-the-shelf software that works with that, and now you upload your corpus that you've legitimately bought.
So you decide you want to generate even more of whatever you want: more Fox News, more MSNBC, more romantasy, whatever your flavor is, but you bought it all. Insofar as it's cabined in your house, I think no one's going to care. But will it stay cabined? Will it start to go out into the public? That's the hard part, because we're networked, right? And once it's digitized and on your computer, what are the odds you're not going to share it? And what does that look like? I don't think IP law is remotely able to handle that, because the whole question of whether something is substantially similar or not? Good luck. Those cases are already messy as it is when there's a narrow set of data at issue, and people can go back and forth. How Ed Sheeran wasn't infringing is a little beyond me, but okay, that's where the court went; I'm sure people who know copyright better than me can tell me I'm wrong, but I got confused on that one. It's so fact-driven, to your point, and then you layer in personal use or not. The reason this reminds me of the 3D printing work is that what we argued in that paper, as was talked about yesterday, is that if someone can scan a part of their automobile on their own to then recreate something, especially when it's not being created by the company, that should be okay. But if there's some lingering patent on it and you've got strict liability, that doesn't make sense. The system believes that most of this is B2B, and we're getting so much more (to the theme of this symposium) of this consumer, individualized ability to build, create, play. It's a great question. I don't think the law is going to handle it well, but I would hope there's more space for what folks have been talking about: that you should be able to create what you want using things you bought. But the nature of creation is changing.
A quick side note: James Patterson, by the way, openly admits that what he does is create a detailed outline that he then gives to a professional writer, who writes the first draft, and then he doctors it. He pays them well. I would imagine that in the near future, James Patterson could just upload the insane number of books he's written to this software and eliminate that other writer. With his outline, it should work. So what does that mean for an individual? Should we be doing that with our law review articles? Probably not. But, you know, it may not hurt. If you've got a voice, you could say: get me to that draft faster, and then I edit it.

 

Jason Schultz  55:20  

Well, I want to take that and then switch to Aaron, and then we'll open it up. So Aaron and I are somewhat obsessed with libraries. We do a lot of work with libraries; we file briefs, hopefully to save some of their capabilities in this world; and we're in the process of working on a new manuscript about where they are today. And I wanted to shift the conversation a little bit in that direction, in the sense that when you talk about Anthropic: Anthropic is as for-profit as they come, and this ruling from Judge Alsup tries to draw a distinction that if they went out and bought a bunch of books, they have, on some level, a different status when it comes to how they can digitize, preserve, and use those books. Well, that's what libraries do, right? On many levels, and not just with books: they lawfully collect many things, and they maintain them. What do you think the implications of some of this are for institutional consumers? Universities are another example, but to get away from the commercial: for noncommercial, institutional consumers of works, do you think it's pretty clear, or do you think it's less clear?

 

Aaron Perzanowski  56:29  

I don't think it's clear at all. I want to go back to the point you raised earlier about the ReDigi opinion in particular. There's this attention paid to the rights of the individual consumer, which I think is pretty convenient for the court when there are no individual consumers actually in the case as litigants, right? It's sort of easy to be magnanimous in that sense and say, oh, sure, we're not interfering with consumer rights, but here we're worried about this for-profit actor. Okay, well, maybe that's a line to draw. In the next case, we no longer have a for-profit actor; we have a nonprofit actor, a nonprofit library, and that doesn't seem to make much of a difference either. So as far as the Second Circuit is concerned, I have zero confidence that there's going to be any greater respect for the rights of individual consumers than we see for institutional actors. Granted, are there more sympathetic institutional actors, with less aggressive and potentially problematic programs, with more responsible practices? Yeah, for sure. And look, it's no surprise: I thought what the Internet Archive was doing ought to be perfectly lawful, but they didn't make that case easy on themselves in a whole bunch of different ways. So I don't have any great faith that when an individual consumer is in this position, a court is going to step in and be more likely to recognize and respect their individual interests than they have been for the institutional actors. I think those courts were very self-consciously trying to distance themselves from those questions, but we haven't seen any real evidence yet that they're going to treat individual consumers any differently.
What does that mean for other sets of institutional actors? There are plenty of institutions still running programs that, whether they call them controlled digital lending or not, are controlled digital lending, my institution being one of them. And I don't think there's a particularly high risk of litigation there, because I think there's a sense of greater legitimacy with those kinds of more established institutions.

 

Jason Schultz  59:21  

All right, we have a question, so why don't we go to that? And people, raise your hands; Michael's in charge of the queue.

 

Jessica Silbey  59:26  

Yeah, I have a question. I understand what everyone's saying, Aaron and you guys, about the consumer copy and the Atari case and all that, but I'm just going to say it irritates me a lot to hear sort of a concession that you have to lawfully own a copy before you can engage in fair use. That's just not what the statute says. (100 percent agree? Okay, right.) So that Anthropic decision is just, I mean, there's an impulse there, a sort of unclean hands and bad faith impulse, but it's just never been the case. And the statute says "a use," not "use of a copy." And so it's just wrong. It just can't be that you can't engage in fair use unless you own a lawful copy.

 

Aaron Perzanowski  1:00:14  

So, yeah, true, right? And that's the statement I quoted. I said that's totally overbroad; that is not the rule, that can't be the rule, that shouldn't be the rule. But I think there are circumstances where, among the many things that courts consider, copy ownership can make a difference. It can nudge things toward fair use. It should not be, on its own, outcome determinative, right? It

 

Jessica Silbey  1:00:40  

also just means that when you grab a photo from online, or you grab a bit of text, I mean, we're all engaging in digital copying before we use the work for all the other things, because our whole life is online; no fair use would ever exist. It just can't be the idea that libraries actually have to own copies before they can engage in lawful digital lending. That also just eviscerates the digital library, to your point. So I just don't like hearing the concession from people who've been such champions of online access.

 

Jason Schultz  1:01:18  

Well, I'll just add this: I don't know that it's a concession so much as an acknowledgement of what courts have tried to do when they've tried to split these rules up. And I agree, it doesn't make any sense, right? This is part of the reason I wanted to pose it in this panel format: I think these distinctions (mens rea, volition, local versus cloud, et cetera) are all hanging on the precipice, and for courts to make these distinctions is, I think, a fool's errand. But this is going to come back to the question of how policy and law get made in an era of, essentially, hybrid human-machine ownership and non-ownership, et cetera. For example, one of the things we did when I worked at EFF: back in 2002 there was a lawsuit by the entertainment industries against ReplayTV, which was a competitor to TiVo as a DVR, and they sued ReplayTV as a contributory infringement claim, claiming that people at home who were recording TV programs, like on a TiVo, were the direct infringers, and ReplayTV was contributing to it. So EFF moved to intervene on behalf of five ReplayTV consumers who had bought their devices. And two things happened. One, the entertainment companies quickly covenanted not to sue, to try to defeat our standing to be in the lawsuit. But then they settled the case, basically because they decided this was too complicated to explain, and there was too much risk they were going to lose, because people were going to be weird about it: what, I can't record a TV program? And I think we're in those places with a lot of these technologies, but it isn't as clear as that old analog world where I have a machine that I own in my home, or a book on my shelf, right?
But I think, conceptually, copyright law and a lot of other tech policy are as messy as those examples; we just aren't in a concrete moment of noticing it in the same way. Sorry. Anyway, other questions, comments?

 

Deven Desai  1:03:29  

Well, on Jessica's point, and I think a little bit of what Aaron was saying, I think the hard part there was the institutional actors, right? They're trying to deal with a massive, well-funded company claiming "we should be able to grab seven million books." And Google Books was up against that, and it was libraries, right, HathiTrust and Harvard and others, that had legitimate copies for the purposes of that case, with all the little messiness of all the factors. So to your point, I think it probably should be cabined, as Aaron was saying, to a distinct little area, as opposed to the broader claim. And I think that might be where this makes more sense, because otherwise, if you're stuck with "I get to grab any book I want to build my really big, billion-dollar company," no one's going to agree. Whether you think that's right, go for it, but I think they're going to be up against the realities of the industries when they go like that.

 

Jason Schultz  1:04:27  

I'm not as convinced. I mean, maybe right now, in this political moment. But I will say this, and this goes to my earlier point. Imagine that I have my own model that I'm training on my own machine right here, and I say to it: dear model, decide what content would be best for you to train on, and then go find it on the internet. Go get it. It's agentic, and I'm like, but don't tell me; I don't want to know; just find it. I probably won't understand why you picked that content anyway. And then I make my own model, and I'm running it, and maybe I share it with some friends and family, but not commercially. These are the distinctions where I think a lot of this is going to break down, because the technology will essentially enable us, at some point, to do what Anthropic and OpenAI are doing. Not at scale, not the same thing, but the acts of alleged infringement will be hard to distinguish, other than by who I am and what my motives might be. Other comments, questions? Yeah. We only have a couple minutes left, so then we'll wrap

 

Speaker 1  1:05:37  

it up. Inherent, I think, in many of the comments: many sort of AI-specific regulatory frameworks have been proposed. Most are on ice in this country; other jurisdictions around the world have promulgated them. But a lot of the potential harms of AI could be cognized under existing torts or existing statutes. Do you have a view on how desirable and important it is to have AI-specific regulatory frameworks, or is it equally okay, or maybe better, to work through our existing regimes and structures, fact by fact, case by case?

 

Mala Chatterjee  1:06:26  

Yeah, so I am a big fan of starting with these existing concepts, because the premise of creating these new AI-specific frameworks is that we're just supposing this is different without actually interrogating the difference and whether it matters. So, for example, my colleague is writing a book about deepfakes and intellectual property, arguing for introducing some kind of statutory right in your identity vis-a-vis deepfakes. And it's kind of like: we have the right of publicity. Why can't we just use that? It's not clear to me at all why we need something new in this context. Maybe we need to update how we think about publicity rights, or not call them publicity rights, call them something else, to get the kinds of interests right in terms of what we want to protect. But we're not reinventing the wheel. And it's alarming to try to reinvent the wheel, because even if you are right that this is something totally new in some way, then we shouldn't expect to understand it well enough to make these sorts of ex ante interventions. So I definitely think that, as a starting point, trying to use what we have, rather than giving in to these gut reactions that this is something totally different, is where we should start, both in terms of setting the justificatory burden for particular interventions and also to figure out whether it is indeed new.

 

Deven Desai  1:08:15  

Probably anything that tries to be specific about this technology will come up short. When the first iterations came out during the Biden administration, all the PhD ML students who sit near me were just like, oh, I can get around that real fast. I tend to agree with Mala: we identify the harms and look at what's already there. Don't go with "just because I'm a conduit"; maybe that does need to be addressed, but it is a general-purpose technology. And I think, unfortunately (maybe it's our community, or the public at large), the sense that, oh, there's some IP that's going to be in trouble, which is a detail in terms of the bigger issues, makes everyone lose their mind, which is the history of copyright, in my opinion. I mean, just look at how much of the Copyright Act is the music sections. It's weird just by sheer volume, and that's industry lobbying and all the stuff people know. So yeah, I would say probably not a good idea; it doesn't really fit. And there's not enough dialogue back and forth with the computer scientists and the engineers to ask: what's the harm if we start to specify that? In my experience, you get a few engineers in the room and they go, oh, that's what you're worried about; I can build for that. But with the broad "don't do this, try not to do that, just say no," they're going to ignore you and build what they want, unfortunately or fortunately.

 

Jason Schultz  1:09:39  

Yeah, I agree. One thing I will say, and this goes a little bit to what you're saying, Deven, is I do think the engineering response to regulation is something that is under-discussed. And I think we've seen this with CDA 230, the DMCA takedown regime, and a number of other tech regulations: if you can re-architect your information infrastructure in a way that avoids liability... and we saw this with Napster and Grokster and BitTorrent and torrenting now, right? I mean, architectures will change in response to regulation. Maybe we're okay with that. Maybe that's the world we still want to live in. But I think it's something that doesn't get discussed enough: what will be the adaptations that come from regulations if they pass? And I think there are some really useful AI regulations being proposed, especially around transparency and reporting and responsibility on the environmental front and other fronts. So I think there are very specific ones. And I think the technology can be defined generally enough, but trying to regulate the behavior of a technology is the most difficult of all, and the one that I see fail most often.

 

Announcer  1:10:50  

I have actually

 

Jason Schultz  1:10:52  

a conceptual question, very patient,

 

Michael Weinberg  1:10:59  

If you want to comment, go ahead. But my question is... Jason, I think a couple of people have touched on this, but you came back to this example of, you know, I'm going to sit down at home and build my own LLM, and therefore I'm the one responsible for it. And it seems like every mental model in this conversation has a version of that. But it seems like the practical engineering reality is, if you are sitting at home training your LLM, you are either really tweaking an existing LLM or using a bunch of LLMs to create something new. And I'm wondering if, built into that example, you are assuming that all of your input LLMs are sort of general purpose technologies that will not have any liability for what happens, and because you are the person who sort of said, come together, my children, to create this new model, you're the only one who has responsibility.

 

Jason Schultz  1:12:04  

You're asking, is that...

 

Michael Weinberg  1:12:06  

Is that right? How are you thinking about the relationship between your LLM and the input LLMs?

 

Jason Schultz  1:12:12  

The way I think about this is the DeepSeek moment, right? Where China released this model, nobody outside of us nerds really cared how they did it, and suddenly it's the number one app on the app stores. And everyone's like, oh, interesting, new model, right? And so in my mind, what I'm imagining is someone, you know, either releases a model on an app store, or buys this, you know, horrific Friend device that creeps me out, that's being advertised, or whatever it is. And it's like, how can I help you? What do you like to do? What are your hobbies? And it's like, oh, I love music and movies. Oh, would you like me to take all of the music that you love and put together a playlist for you? Or curate a movie list, or create a video game for you, or whatever. And so the consumer, right, is just sort of engaging with a set of questions that it's being prompted with, which, from the maker of the model, are super general, right? This is a general purpose set of questions. And to say, okay, well, then the makers of DeepSeek are responsible for any of the downstream problems that causes at some point when a user goes through that dialogue, is going to be difficult. It's possible we could do it, but it's very difficult. I mean, there's a little bit of the controversy over the California AI bills, and the one that got vetoed by Gavin Newsom, you know, questions of whether you think it's too much to be responsible or not responsible in that technological context for making general purpose foundational models. But then the consumer is just playing along. And of course, as we know, most LLMs kind of follow Alan Turing's imitation game of just trying to be liked and likable and your friend and convince you they're human, right? So they're like, yeah, tell me more about what you love, and I'll make it for you, right?
And so that's the scenario where I don't know how to decide what the law should do about those situations, and how the consumer is defined, right? Because on some level, I'm the consumer, but on another level, this set of technologies is also me-as-consumer personified, and maybe avatared, or whatever the right verb is. And I think the law has no idea what to do with any of that. I don't know, you may have different feelings. But that was kind of the scenario in my mind: a general purpose model asks questions of a consumer who knows nothing about the technology, the consumer gives honest answers, and then something happens, and then who's responsible?

 

Mala Chatterjee  1:14:35  

I mean, I have some intuitions about this case, and in relation to some stuff that Deven said earlier: the instinct that this is about IP, that that's what we should be concerned with, just strikes me as so strange. A lot of my response to this stuff is, it's like a reductio. It shows us, like, why is this different from me googling? And I think a lot of the stuff that people feel queasy about in imagining these scenarios, whether it's on the side of, you're an artist, or on the side of, you know, you're a consumer who cares a lot about... I mean, we talked about this in the panel, these people writing books about, like, Spotify playlists being the end of it. I think that's just the worst possible case. Of course I want things that are to my taste over just, like, random ones. Maybe it's because I have very specific taste, but it's just not clear to me, without a specific example, what the exact problem is, and whether it's in the use of it qua generating stuff, or in what we're doing with the stuff that we're generating. And if it's the second thing, which I tend to think it has to be, it's just not obvious to me that it's going to be an IP issue that we have with how things are being used, or that IP is the best answer to a lot of this stuff, maybe

 

Jason Schultz  1:16:08  

In some of them... are you saying we're irrelevant?

 

Mala Chatterjee  1:16:12  

I mean, I'm saying we could broaden our conception of what we do. That's how I cope with it. That's why I say "information." I know,

 

Jason Schultz  1:16:23  

I know we're at time. I just wanted to give Deven or Aaron a chance to throw in any other concluding thoughts or responses to Michael's question.

 

Deven Desai  1:16:31  

Well, I think it highlights the agent issue, because what could easily be built, I think, is a system that, even if it's local, has to communicate with other things. So, of course you can get a rogue version, but it might say, oh, well, we also need your credentials for Spotify or whatever, so when we build you these lists and do certain things, do you have a license? Which has a dark side, right, in terms of something that could be incredibly creative. We're back to, right, what did streaming really do? It cut deeply into the first sale doctrine. Who actually has a physical copy of something that they can play with and remix? It's really hard. So I think each time we press on one thing, you're going to get this possible negative. And I guess that's why we stay employed, to talk about it.

 

Aaron Perzanowski  1:17:20  

Is this the time to talk, when we're almost out of time?

 

Jason Schultz  1:17:25  

We're already over time, which is exactly when you should go, yeah.

 

Aaron Perzanowski  1:17:29  

So we've talked a little bit about personalization, and we've talked a little about lowering barriers to entry to producing creative works. And those sound like good things. I'm not convinced that they are. One of the defining features, I think, of AI-produced content is that it's tacky. Aesthetically, we're accelerating some really problematic trends. I don't need to see Shrimp Jesus and Rambo Trump and Stephen Hawking as a professional wrestler. There's a reason that art is produced by people who invest a lot of time and energy, not just developing technical skills, but developing taste. And I think a lot of what AI reveals is how poor the taste of the average person is, and that troubles me. Not an IP issue,

 

Jason Schultz  1:18:31  

but definitely an issue of consumption, right? All right. Well, on that note, join me in thanking the panel.

 

Michael Weinberg  1:18:40  

Thank you, everyone. So we have lunch now, and we'll have a keynote at 12:15. But on the subject of tacky art, I should plug a report we have called Models All the Way Down, from the Knowing Machines project, that actually explores the incredibly small number of people in the Upper Midwest who are responsible for the aesthetic of most AI stuff, because they were on, like, a forum 15 years ago ranking images, and that's just kind of what their deal was, and they created a dataset. So if you want to explore the tackiness, check that out. We'll see you back at 12:15.

 

Speaker 1  1:19:28  

The Engelberg Center Live podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law, and is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.