Engelberg Center Live!

A New Global Copyright Order? The EU Directive in a Global Context (Episode 2)

Episode Summary

In this episode, we talk through the so-called “unanticipated consequences” of the 2019 European Copyright Directive. How will the directive impact individual content creators, users, and small and medium-sized UGC platforms? What about those that host non-audiovisual content? From automated copyright filters to blanket licenses for UGC platforms, the directive is a springboard for addressing a range of issues at the heart of current online copyright reform debates.

Episode Notes

On today’s episode we discuss the range of questions that the Copyright Directive raises. While the possibility of global contagion of the EU Directive remains uncertain, we speculate on the (im)possibility of transplanting the Directive's licensing mandate into the US context.
As filters are promoted as a silver bullet for copyright enforcement, we ask whether they even work, and whom they work against. We discuss the particular impacts on the users and individual content creators who lose out most in this negotiation between institutional copyright holders and large UGC platforms.

Our guests for this episode:

Kat Geddes, PhD candidate, NYU Law

Jamie Greenberg, Corporate counsel, Wattpad

Meredith Rose, Public Knowledge

This episode - along with other episodes in the series - has been approved for one CLE credit in the Area of Professional Practice category. The credit is appropriate for both newly admitted and experienced attorneys. Please email engelberg.center@nyu.edu to obtain CLE credit and for an accessible version of the transcript that includes CLE codes.

 

Episode Transcription

Amba Kak (00:03):

Hi, welcome to this podcast mini-series, hosted by the NYU Engelberg Center on Innovation Law and Policy. My name is Amba Kak and I'm a fellow at the Center. Now, even if you weren't paying close attention, it was hard to miss the controversy around Europe's new Copyright Directive. It was first proposed in September 2016, but in the weeks and months leading up to the final vote last year, there was a crescendo of advocacy and that familiar rhetoric about how nothing less than the future of the internet was at stake, reminiscent in many ways of the 2012 SOPA-PIPA protests here in the US. The Directive on Copyright in the Digital Single Market, or "the Copyright Directive" for short, was approved and adopted in 2019, and member states of the European Union now have two years within which to implement it in national law. This is a three-part podcast series that examines the potential impact of this new law.

 

Amba Kak (01:04):

On the first episode, we got into the weeds of what this directive actually says and what it leaves to interpretation with two leading copyright experts from Europe. We concluded, among other things, that while the final version of the directive does make many of the right noises, especially with the proportionality safeguards that run through it, what this will all mean in practice remains a whole different matter, especially given the legal uncertainties and risks for both platforms and users that it leaves in its wake.

 

Amba Kak (01:36):

So that's what we're going to discuss today. Our guests for today's podcast bring very different perspectives to that question of the follow-on impact of this directive in a global context, looking at how it relates to debates that are ongoing in the US and elsewhere. We have three guests today. First, Meredith Rose, Policy Counsel at Public Knowledge, which is a consumer advocacy organization based in Washington, D.C.

 

Amba Kak (02:04):

And she focuses primarily on copyright issues. Jamie Greenberg, who is Corporate Counsel at the platform Wattpad. Wattpad, for those that don't know, is one of the world's largest social storytelling platforms for readers and writers who share written-word content, and Jamie deals with all manner of legal issues for the company. Last but not least, Kat Geddes, a PhD candidate at NYU Law School. She first became interested in the topic of the European Copyright Directive while she was teaching the CopyrightX course at Harvard Law, and it has now made its way into her current thesis work on algorithmic governance. Now, although Article 17 (previously Article 13) of the directive became infamous as the "upload filter" provision, the final text, curiously, puts primary emphasis on best-efforts licensing: a requirement for UGC platforms to try to license the content that users upload. If, after diligence, that doesn't happen comprehensively (as, inevitably, it won't), there is then a requirement to take down content in response to notice from rights holders, but also (and here's the kicker) to make sure that content stays down, that is, to prevent the reappearance of such copyrighted content. And here is where the requirement of content filtering technologies, although not explicitly mentioned, kicks in.

 

Amba Kak (03:38):

So I'll start with you, Jamie. Wattpad is the kind of medium-sized platform that was certainly not the target of this directive, but will arguably, or at least potentially, still be subject to it. Can you explain how you see these dual requirements of both licensing and filtering applying to a platform like yours?

 

Jamie Greenberg (03:56):

Yeah. I think an interesting place to start is that you introduced Wattpad as a platform. We are active in every territory in Europe, which means we have uploads in every European language. And the idea of "best efforts", and maybe this is a North American legal idea (we're a Canadian company), is an interesting one. The problem we face is that best efforts is often interpreted as doing the utmost, even to the detriment of your own best interests. So the idea that a user-generated content platform would need to filter content before it's uploaded, on a best-efforts basis, is a pretty untenable prospect for a platform like ours. And again, for context, we are active in every territory in Europe. But more than that, you mentioned that this law is focused on the large content companies like Google or Facebook, which have tens of thousands, if not hundreds of thousands, of employees and offices around the world. We're still a scale-up despite our relatively large size, with 200 employees. The copyright takedown process we abide by is the DMCA, and we have a small but pretty agile copyright team that is able to execute those DMCA takedown requests pretty quickly. So the idea that we would be able to filter content, and written content more specifically, before upload would mean that we would have to employ a very sophisticated algorithm. We have a division in our company which focuses on machine learning and AI, and we've done a deep dive. The idea that we can ourselves create a content filter is premised on the idea that we can actually build that filter and have it accurate enough to actually filter content.

 

Jamie Greenberg (06:40):

And the truth is that we basically found that creating such a filter is near impossible: the best machine learning models could achieve about 70% accuracy, and in order to do better, we'd almost have to have perfect information, in terms of a licensed pool of content that we can match against to see if there are any unique matches, and we'd also have to be able to match for context. So when we, as a platform focused on the written word, look at the idea of a content filtering model, we have to really point that out, because regulators might say, well, online plagiarism checkers like turnitin.com can achieve near one hundred percent accuracy. But when you're dealing with fiction, it's often a matter of context. That's why the DMCA takedown process works so well: we're able to look at the context of any potentially infringing material, whereas a machine learning filter can't do that, which makes it inaccurate. So it's not analogous to the image or video algorithms that can do this very efficiently.

 

Amba Kak (08:07):

Jamie, if I may, before we open the "filtering" can of worms, I wanted to go back to the licensing requirement. As I mentioned, Article 17 ends up putting primary emphasis on licensing as a first-order requirement. Now, somewhere in your response, you alluded to the fact that creating a filtering system requires, as a foundation, that the company has a clear licensed pool of content to match against, and that that is possibly unrealistic for the written word. Can you explain that?

 

Jamie Greenberg (08:45):

Yeah, it's a very good question. There is no current licensing regime that would apply. We would need a pool of every published work by every publisher, effectively, in the world to match against. That pool of content would have to be extensive, and it would have to be almost perfect, because you need buy-in from every publisher of every published book, or else there wouldn't be the ability to match. So, as you got to in your question, the very idea that we would have a pool of content to license against is a massive issue in and of itself.

 

Amba Kak (09:38):

Yeah, I think that really highlights the fact that different content industries are really at different levels of maturity when it comes to copyright management structures and institutions. But why don't we take the paradigm case of the music industry here in the US. Meredith, maybe you can help us understand what you see as the challenges with a Copyright Directive-style licensing mandate, based on the environment here in the US.

 

Meredith Rose (10:07):

Yeah. So I think one of the biggest hurdles to this is just the sheer complexity of the licensing environment within the United States. Even just limiting it to music, for example, which is one of the sort of hot-button topics, right? For one thing, sound matching on an algorithmic basis is just technologically very, very difficult to do. Audio fingerprinting is very difficult, and when you get into certain kinds of music like classical, you can have a real problem with high-similarity tracks that are actually different tracks with different copyright statuses and different rights holders. But even setting that aside, the legal framework for licensing in the United States, of music specifically, is this wildly complex and arcane system. So, let's say you actually do find and license individually with every single rights holder that has had some input into music that could conceivably end up on your site.

 

Meredith Rose (11:12):

And that you've preemptively tracked down every one of these people and licensed with them. First, that cuts out any site that doesn't have the time or resources to devote to this huge search-and-license obligation. Any smaller site that can't engage in this in-house is going to be cut out of the ability to do so. Even taking just a song, no video to it, just music: under the American system, that's two different copyrights you have to preemptively clear, the composition copyright and the actual sound recording copyright. On the composition side, you frequently have multiple songwriters on each song. I saw a statistic that the top 10 songs of 2017 had an average of eight songwriter credits on each track.

 

Meredith Rose (12:06):

So you have to clear every single one of those, which can be assigned and sub-assigned and reassigned, and often the rights holders themselves, or the songwriters, are not sure who is supposed to be administering different sub-rights. That may become a little easier with the advent of the Mechanical Licensing Collective, which was just established under the Music Modernization Act in 2018; they're still in the process of setting it up. But that's just for one of the bundle of rights that makes up copyright in the United States. You also have to clear it with ASCAP or BMI, these things called performing rights organizations. So that's just on the composition side. On the sound recording side, you would have to clear it with the label, which seems relatively straightforward until you start digging into older tracks where labels have been bought up and split and the catalogs have been shuffled around during acquisitions.

 

Meredith Rose (13:04):

So that's just for a track, assuming there's no video involved. As soon as you have to sync it with video, you have to license a different set of rights, and there's no collective to go through for that; it's a total free-market Wild West. You would just have to directly find everybody that you need to license with and license directly with them, and/or their assignees or their heirs, who may not even be aware that they hold the rights. Then multiply that by the fact that everyone can now create content: to be fully in compliance with this, you would have to preemptively license with essentially anybody who could make a song that might end up on your platform in the future. The flip side of this, as I pointed out in the beginning, is that this is difficult to impossible for small platforms to undergo.

 

Meredith Rose (13:55):

You know, if you're not a Google, if you're not an Amazon, you're not going to have the budget to do this in-house. So what do you do? Well, you can have some kind of intermediary spring up that could handle this sort of blanket licensing for you. But then the question is: who controls that intermediary?

 

Amba Kak (14:09):

And do those intermediaries exist today?

 

Meredith Rose (14:12):

They do, for certain kinds of rights and certain kinds of contexts. So ASCAP and BMI are two famous examples of what are called performing rights organizations, or PROs. They administer a very specific right: the public performance right in musical compositions. They don't administer any of the other sub-rights or any other kind of copyright; it's just musical compositions and the public performances thereof, except for what are called sync rights. So there are even some kinds of public performance rights that they are not legally allowed to administer. And they are able to go in and issue blanket licenses.

 

Meredith Rose (14:50):

So they issue licenses to bars, to music venues, to radio stations, to any place where this kind of huge variety of music could come up. But those have been under antitrust consent decrees since the 1940s, because when they began there were only two of these organizations administering the entirety of published music in the United States, and you have real competitive concerns about what they can use that market power to do. So if you say, well, we'll just create this sort of intermediary who can do blanket licensing, then you have real governance questions: who oversees that intermediary entity? Who puts a check on their ability to use or misuse their market power? You can do everything from creating a sort of public-private hybrid entity, like we have right now with SoundExchange, which is an organization that administers some kinds of licenses for sound recording copyrights.

 

Meredith Rose (15:51):

You can do something like the ASCAP-BMI model, where it's technically a private organization but it's overseen by antitrust authorities. You could have totally free-market solutions, which, again, have real anticompetitive problems. You could do compulsory licenses under the statute, which are fundamentally set by law, but at the same time are very difficult and slow to respond to changes in market conditions. So it becomes very, very complicated. And assuming we're looking at the sort of model envisioned by the Copyright Directive, there's no mandatory licensing scheme provided for in the directive. So presumably they think that there's going to be some kind of private sector solution that springs up to solve this problem. But then you have to ask the hard question of how you oversee that.

 

Amba Kak (16:41):

Thanks, Meredith. I think that gives us a real flavor of the very fragmented and fraught context of collective management in the US, and I know that there are equivalent horror stories all over the world. So I think it underscores the point that before you put all of your eggs in this kind of elusive licensing bucket, in the way that the European directive envisions, we need to confront the question of what incentives would even drive this kind of blanket, collective license regime for UGC websites, and how to govern and discipline the bargaining power of the organizations that will eventually administer these licenses.

 

Amba Kak (17:23):

Let's move on to the filtering mandate. Kat, I wanted to start with you. Now, Article 17, as we mentioned, doesn't explicitly require filters, but it definitely nudges in that direction. So Kat, I'll pose the question to you: how should we understand the use of filtering tools, particularly in terms of their impact on users?

 

Kat Geddes (17:44):

Sure. So, as you've alluded to, there will be an amount of pressure on platforms to adopt filtering tools like Content ID, to avoid the kind of direct liability that the directive will impose on platforms just for having copyright-infringing content on their platforms, since they're no longer protected by the DMCA-style safe harbour that was previously available under the eCommerce Directive. As you mentioned at the outset, filtering technologies like Content ID have been the subject of extensive advocacy and criticism because they are very crude technologies. Basically, they match the digital fingerprints of uploaded content against a reference database of copyrighted content, and there's a high incidence of false positives. There have been many famous examples that people point to of the algorithm clearly not working. For example, when Justin Bieber uploaded the music video to his song "Pray", I believe it was, it was flagged as a copyright infringement, because the default response from Universal Music Group was that any Content ID match should be flagged as infringement.

 

Kat Geddes (19:10):

And the use of these tools, and their increasing use under the Copyright Directive, is really concerning, not just in terms of how they entrench the market power of existing players (YouTube, for example, was able to spend a hundred million dollars developing this algorithm, and smaller players will not be able to do that), but also in terms of the significant impact on users. It basically shifts the burden from copyright holders, who previously had to prove that content was infringing, to users, who now have to prove that any content flagged by the algorithm is not infringing. And this burden shifting means that a lot of content is removed that is not infringing, because many users won't appeal algorithmic claims of infringement. This has a serious chilling effect on their freedom of expression and results in the suppression of a lot of legitimate content.

 

Kat Geddes (20:05):

It also denies users the benefit of years of carefully and very expensively litigated interpretations of the copyright exceptions and limitations that are available to EU member states under the InfoSoc Directive. So what constitutes caricature or parody or pastiche? An algorithm can't engage in any of that kind of nuanced balancing analysis, because it's just a crude digital fingerprint matching tool. It will flag as infringing content that might be in the public domain, or content that might fall under one of the exceptions in the InfoSoc Directive, like parody, or the user might have obtained permission from the rights holder, but the algorithm has no idea. And the final thing I want to mention is that, depending on the particular policy of the platform, if an algorithm flags content as infringing, it may deny the user who uploaded that content the ability to monetize it, which means that the ad revenue from that content will go to the copyright owner even though the content itself is not actually infringing. So the burden shifting that occurs as a result of using automated filtering tools is deeply problematic, not just for the market share of small platforms, but for the impact on users.

 

Amba Kak (21:30):

Yeah, that was really useful, because we discussed last episode how recent European Court of Justice jurisprudence, too, is increasingly framing the concerns with filters as a negotiation between the rights of UGC platforms and the rights of content owners. So I think arguments like the one you just made really help recenter users of these platforms as a primary stakeholder, and potentially the ones who have the most to lose from these kinds of algorithmic filtering solutions. Jamie, you had something to add?

 

Jamie Greenberg (22:03):

Yeah, I'd love to tie Meredith's and Kat's points together. I found it really interesting to hear Meredith's summary of all the collection societies in the US, and I'd like to point out that, of the drafts of the EU member state laws implementing this directive, none of them make any nod at all towards creating these collection societies in Europe for this law. I just look at it for text: there are no provisions around a licensing regime for text. And then, going to Kat's points around Content ID: even if you have perfect information through some type of public-private partnership collection society, the idea of applying that to parody or pastiche through the use of an algorithm or machine learning is nearly impossible. So we're in a situation in Europe where these laws are being implemented, but the follow-through of those laws, how they'll actually be applied and what that will do to user rights, has been sorely missing in each specific jurisdiction.

 

Amba Kak (23:32):

Right. Now, the EU Copyright Directive is not the first endorsement of upload filters. Even in the EU, we've seen the terrorist content regulation, which suggested the use of filters to remove extremist content. So Meredith, how about here in the US? Can you help us historicize and understand the use of filters as a copyright enforcement solution?

 

Meredith Rose (23:56):

It comes up in a couple of different places. One is, I think, the idea of notice-and-staydown, which has long been part of the conversation around copyright enforcement online. It's mostly been a pipe dream, frankly, because it would require these large-scale filters. In the last few years, as Content ID specifically has become more robust, I think it's reentered the mainstream in a big way. And I think the response to the numerous and very well-documented limits of even Content ID is instructive: the response from content owners has generally been just "nerd harder". You know, be better, make a better algorithm, do this, that, and the other thing. And we've seen all kinds of iterations on this, from the encryption debate to hate speech to even something as mundane as filtering for swear words on kids' websites.

 

Meredith Rose (25:08):

So filtering itself is not ubiquitous, but people see these smaller-scale applications of filters in a lot of different locations in their lives. You also see issues like payment processors. If you've ever tried to send money to a friend using Venmo, for example, or another payment processor like that: I've had friends who are of Persian heritage, and when they tried to send money for Iranian New Year, their payment got held up by an automatic scan that essentially finds the word "Iranian" or "Persian" and flags the payment as potentially funding terrorism. So these are very basic and often over-inclusive algorithms. And I think that because a lot of people have gotten used to seeing very small-scale, very simple algorithms in various corners of their lives, it's led to this impression that, oh, well, clearly, if they could pick this up, then they can identify not only this Justin Bieber track, but every modification of this Justin Bieber track that people are invariably going to make to try to get around the existing filters. So they think it's a winnable arms race, which I think technologically is a big question mark.

 

Kat Geddes (26:25):

Yeah, so I wanted to jump in here, because my comment was sort of triggered by a comment that Meredith had made, about a very specific recent development which is contributing to platform reliance on filtering algorithms. YouTube announced in March that they were increasingly going to rely on technology to automate content removals because of the coronavirus pandemic, replacing some of the review that was previously done by human reviewers. And I think this is really interesting as we move more and more of our lives online, because it speaks to a very significant clash between the Directive on the Digital Single Market and the GDPR. Article 22 of the GDPR specifically states that data subjects have a right not to be subject to a decision based solely on automated processing. But if platforms increasingly rely on automated content removal, that comes into conflict with the obligations they will also bear under Article 22.

 

Amba Kak (27:31):

Right, and I think that's a crucial point: the opposition to the use of these kinds of automated tools like filters is going to come from many quarters. We've seen this in the pushback to the Copyright Directive, where the concerns raised range from privacy to freedom of expression. But increasingly, as you point out, these tools will also come up against evolving ideas, principles, and rules around algorithmic governance. Jamie, did you want to come in?

 

Jamie Greenberg (27:59):

I think it was Meredith that mentioned that you should just "nerd harder". It's interesting that that seems to be the response, but when we bring the technological challenges around "nerding harder" to regulators, and what that will mean for platforms outside of YouTube and Facebook, the large tech players, sometimes we see their virtual eyes glaze over a little bit, because it's not that interesting to them. But the fact remains that lawmakers are oftentimes so focused on limiting Google's impact, particularly, on their national media players that they tend to ignore how this will affect different platforms. And when we bring the conversation to these governments and explain, hey, we have X million users who are engaged, who are writing that next level of fiction on our platform...

 

Jamie Greenberg (29:17):

And, oh, by the way, these are often young, often marginalized users; we have a majority of users who are women, who wouldn't necessarily have a voice on the larger media platforms. We often get a very sympathetic response. So we've been pushing hard to have text excluded from the law, based on (a) the need to promote diverse voices in different countries in Europe, and (b) the technological impediments to implementing a licensing and filtering regime that is actually effective.

 

Amba Kak (30:02):

Yeah. The Directive also has a safeguard for small and medium enterprises, but it's defined as companies that are less than three years old and with an annual turnover of less than 10 million euros. So that's arguably a very small slice of SMEs...

 

Jamie Greenberg (30:23):

I wonder how many companies actually fit that definition and have a real impact on the audiovisual or written-word sectors in each country. I mean, it's a pretty...

 

Amba Kak (30:37):

Yeah, it's a narrow definition. I think it also raises the issue that while the big tech companies, so to speak, were created in an era when these constraints did not exist, the constraints are now being put on newer players, potentially making it even harder for them to compete, especially when they're trying to build out this kind of niche for themselves.

 

Jamie Greenberg (31:01):

And I would actually argue that a law like this further entrenches the power of large media companies, who can manage this law, versus smaller platforms, who either are going to have a lot of trouble managing the requirements or may never start up, because what they're doing is so prescribed under the law.

 

Kat Geddes (31:28):

Yeah, I just wanted to add that the three-year grace period for small startups in subsection six is probably going to be unhelpful, for all of the reasons that you have laid out, but also because startups will really struggle to raise funds if their business is expected to become unviable after three years. And on the point about entrenching the existing market share of big players: big platforms like YouTube, or companies like Audible Magic, that are able to collect enormous licensing revenues from licensing out their filtering technologies are also probably going to get an enormous amount of extremely valuable data about the users of the platforms to which they license those technologies. And that makes the anticompetitive effects of this directive even worse.

 

Meredith Rose (32:19):

Yeah, I think one aspect that we sort of touched on, but I just wanted to make sure we talked about a little bit, is that not only does this sort of entrench the power of larger media companies, but it also runs the risk of really creating some perverse incentives on the part of platforms. So there was, somewhat famously, a few months ago an instance where Lindsay Ellis, who's a popular YouTuber who does a lot of commentary on Disney movies specifically, did a whole video called "Woke Disney," in which she sort of criticized some of Disney's moves towards social-justicey kinds of messaging. In it, she played some clips from some songs. She has a contract with an outside company, and part of that contract is that she's not allowed to monetize any of her YouTube videos, which is fine.

 

Meredith Rose (33:16):

And she never has. But Universal came in, I believe, and was able to sample-match a clip that she used in one of her videos and then monetized her video. And so they started getting ad revenue off of its use, even though it was very clearly a fair use. Lindsay Ellis objected to YouTube, and YouTube ultimately just decided that they were going to let the monetization stand, at least initially. I haven't followed up on this, and it may have changed since. But YouTube very much has an interest in not aggravating Universal as a major content input to its site. And so you have these kinds of situations where you have users very much at odds both with the platform and with the media company whose content is being put up, and that can put folks in a real bind. And so to the extent that these rules are forcing platforms and content conglomerates to work together, hand in hand, that seems fine on paper until you realize that there are user interests that may not be best served when the platform is forced to choose between the rights of an individual user and the rights of a massive multinational entertainment corporation that it has to play nicely with.

 

Kat Geddes (34:37):

Yeah, just following on from Meredith's comment about the "Woke Disney" video, there has been a lot of scholarship about the user-labor exploitation that occurs on big platforms that use automated filtering technology. An extreme example of this: a lot of fan-made meme videos of Baauer's "Harlem Shake" were flagged by Content ID and actually earned his label, Mad Decent, over $4.5 million in ad revenue. And that was all derived from the user labor that had gone into these videos, most of which probably would have qualified as fair use. That's a pretty extreme example, but it touches on the fact that users really are the underdog in this situation, because, as Meredith said, copyright owners, particularly institutional copyright owners, and platforms have this perverse incentive to work together at the expense of user labor.

 

Amba Kak (35:40):

Right, let's move on to the other big elephant in the room with the Copyright Directive, which is how this kind of direct liability regime for UGC platforms would impact or interact with the safe harbor, the DMCA-like safe harbor that has pretty much been the global standard till this point. Now, the directive itself does some legal acrobatics to suggest that it creates an exceptional regime which does not totally eclipse, but only modifies, the scope of the existing safe harbor. And actually, as we discussed on the last episode, this modification was already sort of getting codified through recent Court of Justice jurisprudence on the distinction between active versus passive UGC platforms. Now let's bring this to the US context. Meredith, would you see this kind of regime as a clear break from the DMCA safe harbor? And have we seen the same kind of chipping away at the standard, or attempts to do that, here too?

 

Meredith Rose (36:45):

So, yeah, I would say so. The Copyright Directive fundamentally just imagines a different kind of framework than Section 512 of the DMCA, which is the law that we're currently operating under in the United States. Having said that, there has been a push over the last several years to revisit Section 512, which has thus far functioned as sort of the best solution to a problem that is fundamentally intractable, which is sort of what we've been talking about this whole time. There actually was a report just released, the morning that we are recording, from the Copyright Office in the United States, talking about Section 512 and their thoughts on how, in their mind, the law would need to be tweaked. Fundamentally, there's been a lot of push to answer this question of how US-based tech companies, which are the majority of them, a far greater number than European-based tech companies...

 

Meredith Rose (37:51):

How do they thread this needle? And so there have been suggestions to include in a trade agreement language that says, well, as long as they're complying with US law and the DMCA, then they can't be held liable for potential copyright problems in Europe; they would be, I guess, immunized from suit under the Copyright Directive. So it's very much an open question. But because the two laws fundamentally just rely on such different frameworks and points of burden-shifting, I think it's something that folks are just going to have to grapple with.

 

Amba Kak (38:30):

All right. And as this reform process kickstarts, do you see the content industry, for example, endorsing Europe's Copyright Directive as an example that the US should draw from, or as better than what currently exists?

 

Meredith Rose (38:47):

Yeah, it's definitely been held up. I know the Copyright Office had roundtables about this intermediary liability question and about the EU Copyright Directive. I think the issue with that right now is that, because it hasn't yet been implemented on a national level, we just really don't know how it's going to work, or if it's going to work, in practice. And so that's really been the Achilles' heel of those suggestions thus far: we just don't know. So I think one of the suggestions that has been floated around is, well, why don't we just use this as kind of a petri dish? Why don't we sit back, see what happens, see if this actually works in practice or not, and then come back and have that discussion later.

 

Meredith Rose (39:37):

You know, I think there are very few people, even frankly from the content lobbying side, who would say we absolutely need to do this right now, we need to follow Europe's lead, we need to immediately harmonize, because they don't know if it's going to backfire. I mean, I remember hearing that Poland has made some statement that they weren't going to implement it at all. So because it is so uncertain right now in Europe how this is going to work, if it is going to work at all, I think the mood on the content side is one of sort of hopeful optimism that this will work out well for their interests in Europe; from the tech side, it's a little more trepidation and skepticism.

 

Amba Kak (40:19):

Yeah, but even setting aside the complicated and, at this point, pretty uncertain framework of the directive, it is clear that the promotion of automated enforcement solutions is on the rise, and that this approach is increasingly being sold as a kind of silver bullet for a lot of content regulation problems, both within copyright and outside of it.

 

Meredith Rose (40:40):

Yeah, absolutely. I mean, that's been a talking point for a very long time, so there's nothing particularly new about this; it's just that it's obviously found a second wind. And, like you said, it's been held up as a silver bullet. But again, it's very easy to push back and say, well, we don't actually know if it works; now we're going to see how this actually plays out in practice. And so it's sort of a little bit of anxious waiting, right?

 

Amba Kak (41:06):

Yeah. Speaking of "in practice": like I mentioned at the start, member states have about two years to implement. I was trying to ask our copyright experts on the last episode what countries they think will come up with more restrictive interpretations versus less. Their opinion was that the political compromise reflected in the directive as it stands, which is in some ways making the right noises but on the other hand extremely vague and difficult to implement, is a result of that negotiation, and it is unlikely that member state governments will do much to clarify it, because they don't want to take a position one way or the other. It is true that Poland has challenged the constitutionality of this directive; there too, it feels like recent Court of Justice jurisprudence doesn't inspire too much confidence. But all of that to say, I just wanted to ask Jamie, as someone who's probably watching this from a more practical and legal-risk perspective: have you gotten wind of certain countries that are already in the process of implementing national laws? What have you heard, and what are you worried about?

 

Jamie Greenberg (42:11):

Yeah, undoubtedly the leader in implementation is France. I think they released draft regulations to transpose the law, and prior to COVID, I think they were supposed to be debating it in the national assembly right around now. That's been delayed; from what I can see, there hasn't been any news out of France on that for the present time. So I imagine the two-year implementation may be drawn out a little bit due to the pandemic. But France is definitely the one that's taken the lead in implementing, or transposing, the law. And they've also been quite aggressive in their interpretation of it. I think it was Julia Reda, who's a copyright scholar in the EU, who wrote an article comparing France's transposition to the directive, and they've taken a relatively strict approach to it.

 

Jamie Greenberg (43:28):

From our perspective as a UGC platform, you know, we're concerned. Ironically, we have a studios deal with Lagardère, which is one of the largest media companies in France, so we actually work with the media companies. But the focus seems to be, for the most part, on audiovisual works in France. You know, we won't know that until the final text is released, and that hasn't happened yet.

 

Amba Kak (44:06):

Okay. I think we'll stop there. Thanks, all of you, for being part of this conversation. The Copyright Directive in many ways, as we saw today, is somewhat like a springboard to address a whole range of issues that are central to online copyright reform efforts globally. Now, we've heard a lot about the risks of the global contagion of this directive, but in some ways its complexity and its vagueness seem to limit its own ability to be transplanted anywhere else. As Meredith pointed out, the licensing environment for music content in the US is extremely complex and fragmented, like elsewhere, and the incentives for the kind of blanket UGC license that the directive might encourage don't seem to exist today. And even if they did, they would raise some serious governance challenges. Jamie pointed out that, moving out of music, when we look at the written word, the licensing paradigm is even more elusive. When it comes to the filtering mandate, we heard in today's conversation the kind of "greatest hits" of why copyright filters cannot be a silver bullet, and how it's often users and individual content creators that lose out most in this negotiation between institutional copyright holders and large UGC platforms, both of whom benefit financially from the exploitation of what Kat termed "user labor" on these platforms. The future that the Copyright Directive nudges towards is one that sort of invisibilizes or sidesteps the harm to users and to individual content creators. So as other jurisdictions draw from its example, it will be crucial to recenter this impact. Stay tuned for the next episode. This podcast is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license. Theme music by Jessica Batke, under a Creative Commons Attribution-ShareAlike 4.0 International license. [inaudible]