Engelberg Center Live!

Fake Symposium: Fake Social Media Movements in Culture, Advocacy, and Policy

Episode Summary

Today's episode is the Fake Social Media Movements in Culture, Advocacy, and Policy panel from the Fake Symposium. It was recorded on September 22, 2022.

Episode Notes


- Chris Lewis, Public Knowledge

- Michael Livermore, University of Virginia School of Law

- Andrea Matwyshyn, Penn State Law

- Michael Weinberg, Engelberg Center on Innovation Law & Policy at NYU School of Law (moderator)

Episode Transcription

Michael Weinberg  0:01  

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law & Policy at NYU Law. Today's episode is the Fake Social Media Movements in Culture, Advocacy, and Policy panel from the Fake Symposium. It was recorded on September 22, 2022. 

This panel is focused on fake social media movements, and social media movements in general, and how they connect with policymakers. Not because that's the only way to look at it, but as a jumping off point to understand larger societal engagement with these sorts of online movements: understanding their purpose, their value, what we can learn from them. We have a fantastic panel. I have heavy, long biographies of all of these folks in front of me, and I could read them all, but I think we would just be here all day and night. So I will instead read the first sentence of everyone's biography, and then I will have a little bit of a framing setup and jump quickly into questions, where I will stop talking. So first, to my immediate left is Christopher Lewis, who is the president and CEO of Public Knowledge. Period. Chris is also a great, great, great friend and a former colleague; we used to share the same office. Next to Chris is Andrea Matwyshyn, who is a professor in the law school and engineering school at Penn State, the Associate Dean of Innovation at Penn State Law, University Park, and the founding faculty director of both the Penn State PILOT Lab, the Policy Innovation Lab of Tomorrow, an interdisciplinary technology policy lab, and the Manglona Lab for Gender and Economic Equity, a technology equity lab and clinic. So, a longer one, for example. And Michael Livermore is a professor at the University of Virginia School of Law, and serves as the director of the Program in Law, Communication and Environment (PLACE), also here. So thank you all for joining us. This is probably a blanket statement, but I will just make it for the panel that I am moderating, which is: thank you for joining us for a panel that probably felt very amorphous when you got the email, and that hopefully will open up some very interesting lines of discussion. 
So as I alluded to earlier, I think one of the things that I want to try and tease out in this panel is a kind of two part question, right? We're using mass public comments as an example of sort of broader internet fakeness issues. There are all sorts of social media movements online, all sorts of groups online, and they can be hard to think about as an amorphous group. And so thinking about them in the context of specific advocacy is a way to think about them, but certainly not the only way. I don't think that this conversation will be strictly boxed in by that; instead, we're using it as an anchor. And so if you are thinking about these groups, I think there are at least two big questions that you need to struggle with. The first is understanding which ones are real and which ones are fake.


And that's a spectrum, right? It is easy to imagine the ends of that spectrum. There is the citizen who is deeply informed, read up on the issues, and files by their own pen a 20 page comment with citations tied to facts and law. This is a person who probably exists in the world, and their comments are probably easy to categorize as real, right? It's a real person who has done real research and is really engaged. On the other side of the spectrum, you have a robot that has procedurally generated a name and a set of comments that are completely disconnected from the issue or the question or anything else. This is clearly fake, and probably in many ways can be dismissed. But when you're in the middle, you have, I think, a little bit of complexity. You are in a place where there are organizations that will set up a form that will allow a commenter to enter their name and use the text provided by the organization to serve as their comment. There are organizations that do that, but allow users to edit the comment before it goes in. There are organizations that will just, based on their email list, send a bunch of comments in on behalf of the people on that email list. There are organizations that will pay to advertise and reach out to people who had no existing connection to the organization. All of these are kinds of comments from a person with some kind of connection to something. And so understanding what we're thinking about, what is real and fake on this spectrum, can get a little bit complicated, and it brings in a little bit of nuance. And these kinds of labeling issues are, again, examples of the broader dynamics that consumers face on the internet, or anyone faces on the internet, when they're trying to understand a mass online phenomenon. So first, you have this sort of categorization question: Is this real? Is this fake? Is it something in the middle? Why do I care? What's going on? 
And then the second question is, you know, if you're an agency trying to understand all this incoming information, what does it mean to assess and consider it? What do you think you're trying to get from all of these public comments? What do you imagine is the information you're trying to get? What does the system expect you to do with this information? And this becomes especially important as agencies, but even, you know, companies, are doing mass sentiment analysis, any of these efforts to take large amounts of information and synthesize and distill it. The question is, what are you trying to synthesize it for? What is the purpose of your summary? How are people going to use it? And these questions are also very much tied up with larger questions around, you know, computer security, risk, and trustworthiness online generally, like how you think about the purpose of the systems that you are building. So this is a very broad and blue sky framing setup. I have a bunch of questions I want to ask the panelists, who can take them wherever they see fit, and we'll also have questions from you all at the end. But the first question I have for you is kind of about the very premise. Are these distinctions that I'm describing even meaningful? Is it actually hard to determine which internet content, which comments, which anything else, is real or fake? Or am I just creating a problem that doesn't actually exist when you look at it more directly? And should anyone, should decision makers, should agencies, should someone on the internet be spending any time trying to categorize this? Is this a worthwhile enterprise at all? As with all these questions, I will open them to the floor, and I invite anyone to jump in who has a thought.


Michael Livermore  8:20  

Sure, from one Michael to the next. So it might make sense to take a step back, given that we have a crowd here that may have different understandings of the administrative process: what is the role of these public comments anyway? What are we talking about? So just generally, what we're talking about here is, you know, in the US broadly, administrative agencies have a very important role in making lots of profound public policy decisions, including things like how much we're going to control greenhouse gas emissions and how we're going to regulate internet service providers. The way they exercise that control, the policymaking function, is under statutes where they are delegated authority. One of the most relevant statutes for this process is called the Administrative Procedure Act, which sets up a way for agencies to exercise their authority through something called notice and comment rulemaking. So the notice and comment rulemaking process has evolved to include a process whereby agencies solicit public comments from anybody who wants to send them in, and agencies are actually obligated under law to consider what the comments say. Okay, so that's the process. And over the years, what happened is, you had a fairly insider process, say in the 70s, where only a handful of interest groups would participate, and gradually that evolved into a really broad mass process where hundreds of thousands and even millions of people participate in notice and comment rulemaking. And many of you may have actually seen emails like this, where an organization that you're part of, maybe even Public Knowledge, sends you an email and says, look, there's a rulemaking process, it's very important, you know, please comment, make your voice heard, kind of thing. Okay, so that's just a little bit of background on what we're talking about. 
This question of fake comments came up after the FCC's major rulemakings on net neutrality. So the Federal Communications Commission had several rulemakings that touched on the question of internet governance. And in that process, the agency received many millions of comments, and it became clear that some of those comments, let's just say, were a little strange. I don't want to say fake, because that's a loaded word here, but they were odd. And what ultimately turned out to be the case is that a large number of them were computer generated; you know, kind of like the stereotypical guy in the basement in Queens sent in like a million comments. And not only were they bot generated, but many of them had essentially gone through the phone book and put random people's names on the comments. Okay. So as a consequence of that FCC rulemaking, there's been lots of concern about this in DC, and there have been concerns in other rulemakings; there was an SEC rulemaking that had misattributed comments. And so Congress has paid some attention. And then the Administrative Conference of the United States, which is this kind of entity that worries about these things, convened a group of experts to do a report, and I worked on that report. So, with those framing remarks, I'll just say a couple of points on the question. For that report, we disaggregated the issue into three categories. One is the bot generated comments; that presents one set of issues that's worth thinking about separately. Another category is misattributed comments, which could be bot comments, but don't need to be. I could actually misattribute comments to you, if I wanted to; any individual could do that without computer help. And then finally, there are mass comments, where comments come in en masse. 
So it could be campaigns, they could be form letters, they could be solicitations; there are lots of different things those can be, but what characterizes them is that there's a lot of them. And you can imagine, you know, these are sets that could potentially overlap. And I think, just to stick with the question: even if we're thinking all of these are fake in some general sense, it's very useful to consider these distinct categories, because they present different challenges, different problems, and we might want to think about them very differently. So the final, maybe just opening, remark that I'll make on this is that one of the things that struck us during this process was that all of these new kinds of comments really provided us an opportunity, in a way, which I think is broadly true with the whole notion of fakeness, of thinking about: what is the thing that you care about? Like, why do we have this public comment process? What's the utility function here? And that might help us to consider how to deal with these new challenges. So I'll just give you my thoughts on that, again briefly. The public comment process is usually thought of by experts, insiders in the world of administrative law, folks at agencies, courts, for the most part, as a technocratic process. The goal of the public comment process is to get information that agencies can use to improve their rulemaking. That's like the paradigmatic thing. And that's often contrasted with the idea of a broad vote. So courts say the public comment process is not a vote; you can find courts that will say that, and people in administrative agencies say that all the time. If you ask them, oh, what do you do with mass comments, they will often say, oh, we ignore them, and just kind of with a straight face. And so that would be one way of thinking about this stuff. 
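The three categories described here, and the fact that a single comment can fall into more than one of them, can be sketched as a tiny data model. This is purely an illustrative sketch of the taxonomy as discussed on the panel; the flag names and the example comment are mine, not anything from the ACUS report itself.

```python
from dataclasses import dataclass, field

# The three (possibly overlapping) categories discussed in the panel.
BOT = "bot-generated"
MISATTRIBUTED = "misattributed"
MASS = "mass"

@dataclass
class Comment:
    text: str
    flags: set[str] = field(default_factory=set)

# A single comment can carry several flags at once: a bot-generated
# form letter submitted under a real person's name hits all three.
c = Comment("I oppose the proposed rule.", {BOT, MISATTRIBUTED, MASS})
print(sorted(c.flags))
```

Treating the categories as a set of flags, rather than one exclusive label, mirrors the point that the sets overlap and may need to be handled differently in combination.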
And if that's true, it suggests a different way of thinking about mass comments, or bot comments, or even misattributed comments: maybe they're not such a big deal, because it's just about the information content. And, you know, whether it's from a computer or not, who cares? A mass comment can just be de-duped; that's no big deal. Even a misattributed comment, there's probably not a victim most of the time. On the other hand, you know, you might think that the public comment process is about getting a different kind of information, which is political information. What do people in the community actually think about this? And if agencies are supposed to, or do, care about that, then you will think very differently about bot generated comments, because then they're creating a misimpression within the agency about support one way or the other. So that's problematic from an information perspective. Another way, which is very uncommon but I don't think is a bad way necessarily to think about the public comment process, is that it's a form of participation. It's a form of exercising political power for people in the community, broadly for Americans. And if that's true, then these raise even more serious problems, right? Because you don't want a bot exercising political power, and you don't want someone exercising political power illicitly under another person's name. That's a big problem. At the same time, the agencies de-duping mass comments is a huge problem if they're supposed to be taking into consideration, you know, that everyone gets a voice and they all get weighed. And so I think what's interesting about these kinds of fake comments is that they require us to think deeply about what the process is for, on some questions that, frankly, we've fudged for the last 50 years. 
And then, if we can get that at least somewhat clear in our minds, it can give us a framework for thinking about how to address these different challenges.


Chris Lewis  15:22  

I'm gonna go? Alright. That was really good. You actually covered some of what I wanted to talk about. So, given the background that Michael just gave, let me suggest an answer to the options that you offered, since I don't have to explain them. Our experience at Public Knowledge comes from communications and technology law: we largely work at the Federal Communications Commission, the Federal Trade Commission, the Copyright Office, and the Congress. And I like the way you set up the options of looking at it, Michael, because I believe that the system was set up for a reason. We have agencies and regulatory bodies there with delegated authority to do the work that Congress can't do. At least in technology policy, we really feel that way about it: there's a level of expertise that is needed to get policy right that Congress does not have the time, or the expertise, to dive into. And so there's a real purpose, to use your word, Michael, in having agencies with narrow authority but real power. And communications, and the net neutrality proceeding that really kicked off these concerns, is a good example. I would suggest that the problems with the comments in the net neutrality proceeding were only problematic from a political perspective, because the agency's authority is limited by statute to specific decisions, and only so many options within those decisions. If you take net neutrality, for example, they are only allowed to create rules that deal with certain types of entities. They're only allowed to create rules that address certain concerns about those entities, say nondiscrimination, which is at the core of the net neutrality rules. And that's it. And so when Congress says, this is what you're allowed to work on, but you're the experts, figure it out, a comment process is meant to help them be as smart as they can, as the experts. 
And so, you know, if fake comments can be de-duped, and if comments that appropriate people's names fraudulently can be weeded out, then by and large you're getting what you want out of the process. That's the argument I would make. Because the rest of the comments are from folks who are experts, be it my experts at Public Knowledge or folks from industry who disagree with us, but they can lay down the legal and technical arguments that empower the agency to do its job and make a decision one way or the other. I'm really interested in ways to hold folks accountable for how they use tools to generate mass bot comments or, you know, misappropriated comments, because it will make the agency able to do its job better. But in the end, policymakers, especially people in Congress, do often just put their finger up in the air and say, this is what I think is the right thing to do based on what I'm hearing from my constituents. And so that information about high volumes of comments is instructive to them as the oversight body for the expert agencies, so that if they don't like a decision that the agency made, they can, and are empowered to under the Constitution, make changes and supersede the expert agency based on political reasoning. The agency is not really supposed to be doing that. When these cases inevitably get reviewed by the courts and challenged, it's based on the record that the agency had in front of it and how it justified using that record for the decision that it made. And so there's a system and a role that makes sense here: the purpose of the agencies, the purpose of the courts, and the political purpose of the Congress that is accountable to you, the public.


So that's just thinking about agencies. I think it gets far messier when you talk about comments in other contexts, because, Andrea, I know you study this more than me, when you get into contexts about people who are lying about themselves on the internet broadly, or in situations that are far more important to the functioning of a society, that's where the ability to have, you know, real accountability for folks who create fake things is going to be more important than at an agency, where they can weed them out or ignore them.


Andrea Matwyshyn  20:46  

So to start with answering the question: I think these distinctions are very important, for many reasons. But before I go into that, I must include the obligatory disclaimer: anything I say here today comes from me in my capacity as a law professor, and should not be deemed to be attributable to any agency that I'm appointed to or work with. This is just me being the law professor me. Okay, so, I'd like to offer a little bit of a framing, taking a step back. This is primarily a set of comments derived from my Cardozo Law Review article with my co-author, computer scientist Miranda Mowbray from the UK. And also, the arguments will be a bit of a preview of the article that I'm finishing now, called Superspreaders. So I'd like to start by offering perhaps an articulation of some of the sociotechnical ways that we might frame the experiences of harm that we're seeing in internet contexts, and in technology contexts broadly. What my co-author and I argue in Fake is that you might be able to categorize all of the technology harms that we currently see today into essentially four buckets. The first is manipulation of content and authenticity. The second is impersonation. The third is sequestration, meaning algorithms nudging us into dark corners where we aren't necessarily sure of what other people are experiencing. And the final one is toxicity; so that would be things like brigading, or DDoS attacks, etc. So why does this matter? Well, these experiences are the sociotechnical descriptions, but they're not necessarily mapping cleanly onto our existing legal categories of redress, or the paradigms that judges are used to working with. So what's a challenging undertaking, but a worthy one, I think, is to try to map these two experienced realities, one sociotechnical and one legal, into a step toward a workable framework, in particular one that is First Amendment sensitive, so that we can unpack what's actually new here. And where do we need legal tweaks? 
And where do we perhaps merely need to reengage with traditional frameworks, in order to apply them in ways that assist with remedying some of these four problems of MIST that I just listed? The acronym is MIST, and we have a whole spider theme in the article, but I'll just skip over that. So what's new? Two things, I think, are new here. The first is the ability of technology to amplify and merge the current always-on reality with the in-the-background, machine learning enabled databases of digital dossiers on people, where the sets of assumptions that are made about people, whether they be correct or incorrect, are working in tandem to allow for the creation of a circumstance that enables an internet long con, as we call it, meaning that people are able to get a foot in the door at time one and then, as my hacker friends would say, laterally move into another point of exploitation at time two. And the way that these databases and algorithms talk to each other enables this kind of a reality, which is slightly different in its amplification and speed from what we had before, allowing for new kinds, or at least morphed kinds, of con artistry and fraud. The second is borrowed from the little cited, but I think very interesting, half of the Eisenhower quote about the military-industrial complex. The second part of that quote, which I frankly didn't know existed until I started doing this research, was that Eisenhower cautioned about the emergence of a scientific-technical elite with its own set of interests that could do damage to our society. And this admonition is consonant with some of the dynamics, again, from the world of information security, which is my primary domain. Shout out to the hacker peeps, I guess.


It is the emergence of the combination of high-end advertising techniques with high-end techniques from military psychological operations. There's a revolving door of personnel from some of the more clandestine organizations into certain types of commercial enterprises that engage in very tailored content creation and engagement, both with respect to selling products and with respect to changing political opinions. And so we termed this dynamic the PSYOP-industrial complex. And we go through some of the history of PSYOP, and we go through some of the perhaps uncomfortable convergence, even from the very beginning, of the use of the skills of advertising to help change political hearts and minds, and how that has evolved over time. So those are the two dynamics that are different. So we have the MIST harms, we have these two dynamics that are different, and what we offer is a framework. There's a chart, if you like boxes with things written in them; there's a chart that offers basically a three-pronged... there's a triad of prongs... well, there's a cube, which doesn't really have prongs either. Anyway, there's a chart that has three types of elements in it. The approach is called the NICE evaluation. So N stands for the nature of the fakery. And here we reached into the philosophy of trust: what is it that philosophers of trust have thought about in terms of the things that make people and products trustworthy? And also the philosophy of lying: what are the categories of lying? Depending on the category of lying, you have a potentially different set of legal consequences, which I'm happy to go into, but for the sake of moving things along, I'll stop there for now. The second piece of this three-pronged approach looks to the intent of the faker. And this is central to the way that the First Amendment analyzes these issues, and intent is something that courts have generally been comfortable deducing. 
And so there we have the ability of courts to kind of up their game, to engage with these issues in ways that are understandable and can be brought along as a sort of scaffolding from time to time. And then the final piece in this framework looks to the sensitivity of the context: there will be certain contexts where the same type of fakery will cause a significantly different quality of harm. And so the punchline of our article is that you look to these three variables, the nature of the fakery, the intent of the faker, and the context sensitivity, in order to map whether we should most appropriately have a criminal intervention, a civil intervention, or a regulatory intervention, and what kinds of case law consequences can spring from this. So I'll stop there for now.


Michael Weinberg  29:07  

Thank you. Chris, I want to follow up with you first. Public Knowledge is an interesting organization, especially in this context, because it both files substantive comments, right, pages and pages and pages, and also, sometimes itself and sometimes in coalition, works with organizations that are pulling together mass comments. And so I'm curious how you think about the different purposes. You've spent a lot of effort in the advocacy world, spent a lot of effort, on both of these things. So what is the reason for doing them? And I'm also curious, in the end, how do you think agencies think about them? How should they think about them, especially in the context where you mentioned that maybe it makes sense to de-dupe mass comments? If they're just going to be de-duped, why have them in the first place?


Chris Lewis  30:09  

Right? Okay, I'll start with the last question. It makes sense to de-dupe them, and to tell the agency how many people provided mass comments, because agencies are small; they don't have a lot of staff. If you can find systems to de-dupe and deliver a message, and say this many people delivered the same message over and over again, okay, message delivered. And we understand the volume, and the political context or political importance of that volume, both at the oversight level of Congress and at the agency, if they even want to factor that in, which, really, I would say they shouldn't. But it's there. So for efficiency's sake alone, I think that's important.
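The de-duplication Chris describes can be sketched very simply: normalize each comment's text and tally identical copies, so that the agency sees one representative comment plus a count. This is an illustrative sketch, not any agency's actual tooling; the normalization rules (collapsing whitespace and case) are my own assumptions, and real systems would also need to catch near-duplicates with small edits.

```python
from collections import Counter
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivially edited copies match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def dedupe(comments: list[str]) -> list[tuple[str, int]]:
    """Return one representative per distinct comment, with a tally,
    most common first."""
    counts = Counter(normalize(c) for c in comments)
    return counts.most_common()

comments = [
    "Please protect net neutrality.",
    "Please  protect net neutrality.",  # same form letter, stray space
    "I support the proposed rule because of reasons X and Y.",
]
for text, n in dedupe(comments):
    print(n, text)
```

The point of the sketch is the shape of the output: a short list of distinct messages, each with a volume number that can be reported to the agency or to Congress, instead of millions of raw submissions.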


You know, I subscribe to the category, obviously, that Michael is setting up: that there's a political importance, and a free speech importance, and a participatory importance, to allowing anyone in the public to comment into these spaces. And we encourage it for the political reasons. So when we work in coalition, you know, the record-setting comments in the net neutrality proceeding, a lot of them came from our coalition. And it demonstrated to Congress, because they had not acted for years and had left the decision to the agency, that there's great interest in these protections, from our perspective, and that they had an interest in making sure that any weakening or wavering of the agency on net neutrality protections should be looked at seriously by the Congress. And lo and behold, after the 2015 net neutrality rules, which we were in favor of, they were repealed by the next FCC, by the next administration. So there was a wavering in the agency based on political elections. And, you know, the fact that millions of people weighed in showed folks that it was important. I think it was a factor in why, when the Senate looked at overturning the repeal of net neutrality, you saw a bipartisan vote to overturn that decision. Unfortunately, we couldn't get one in the House. So the politics matter. The expertise of the agency matters, too, because too often we see the policymakers that have delegated authority, in Congress, not have the time or the capability to really, you know, get into technology at the level of an engineer, or, I'm sure, get into environmental issues at the level of an environmental expert. And yes, it means that there are folks in the advocacy community, like Public Knowledge, who develop an expertise to support the rulemaking processes at the agency level. But I think that's important to have there, on behalf of the public, to match the political use of comments. 
And I like the fact that the agency, and the courts that review its decisions, are set up to deal with the expertise part, and Congress is set up to deal with the political part. I think that's intentional. And when those bleed together, I think you get less quality in your policymaking.


Michael Weinberg  33:47  

What I'm thinking of in my head is the sort of Mike Masnick "nerd harder" question. You've set up two different frameworks, right, for thinking about these types of comments, either in the agency context or in a broader internet context: the MIST framework, and the NICE framework, that sort of nature, intent, context framework. And, you know, Michael set up this sort of information gathering, participation, political information framing. When faced with a lot of information, many people have an instinct to kind of reach for a robot to help, regardless of whether or not that robot is going to be helpful. How do you think about, you know, using any one of those frameworks, or another one that makes sense to you: which of this stuff lends itself to leaning on automation or AI or, you know, robots as a category of assistance? And for which of it is it foolish to think that this is something that can just be fixed with the right algorithm, or the right software assistance?


Andrea Matwyshyn  34:55  

So I think, technologically speaking, we're dealing with a moving target in all cases, but we don't live in the world yet, at least, where you can get perfect moderation with algorithms. There's just no substitute for the more powerful processing power (if we want to sort of objectify the human being as a machine, which, you know, we should be careful with) of the human brain, which is still more powerful than any computer that people have invented at this point. And it's no substitute for understanding the nuances of cultural context to have an algorithm programmed by a small team who have undoubtedly done a good job within their experience; but the world is a complicated place. So I think this gets to asking questions about which kinds of information inputs, and which of the people or bots that we meet on the internet, in the sometimes not so friendly neighborhood of the internet, we are deeming to be trusted, meaning that we are relying on them whether we should or not, and which we deem trustworthy, which, in the way that the philosophers who have looked at this view it, is an analysis of the skill set of a person in context. So when we're assessing expertise, or assessing whether someone is worthy of our trust, it is a combination of looking at both the individual and the particular context the individual is operating in. And I think that's reasonably true for looking at technological interventions as well.


Michael Weinberg  36:51  

Is that something that can be done technologically, or is it such a human experience that to try and automate it at scale...


Andrea Matwyshyn  37:00  

There are certain things that can be automated. Here's an example. There are definitely circumstances where the owner or operator of a system has superior knowledge about, say, the country of origin of a person who is posting something on the internet. It may not be accurate 100% of the time, because the person may be using Tor, there may be 20 proxies, right. But there is a baseline of information that the person operating the system has that an average user, subsequently looking at that posted information, does not necessarily have. And that is an example, hypothetically, of the sort of automated labeling that in theory could be created, or in some cases does already exist.
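As a purely illustrative sketch of the kind of operator-side labeling the speaker describes, and not anything an actual platform is confirmed to run, one might combine an IP-to-country lookup with an uncertainty flag when the address appears in a known proxy or Tor exit list. Every table and value below is an invented placeholder:

```python
# Hypothetical sketch: operator-side provenance labeling with uncertainty.
# The lookup tables below are invented placeholders, not real data sources.

KNOWN_PROXY_OR_TOR_EXITS = {"203.0.113.7", "198.51.100.42"}  # placeholder set
IP_TO_COUNTRY = {  # placeholder geolocation table
    "203.0.113.7": "US",
    "192.0.2.10": "DE",
    "198.51.100.42": "FR",
}

def provenance_label(ip: str) -> dict:
    """Return a coarse origin label plus a confidence caveat.

    The operator knows more than a downstream reader (it sees the IP),
    but the label is only a baseline: proxies and Tor mean it may not
    reflect the poster's true location.
    """
    country = IP_TO_COUNTRY.get(ip, "unknown")
    uncertain = ip in KNOWN_PROXY_OR_TOR_EXITS or country == "unknown"
    return {
        "apparent_country": country,
        "uncertain": uncertain,
        "note": "apparent origin only; relays/proxies not resolved",
    }

print(provenance_label("192.0.2.10"))   # a direct-looking address
print(provenance_label("203.0.113.7"))  # a known proxy/Tor exit, so flagged
```

The point of the sketch is the caveat field: an automated label of this kind can only assert an apparent baseline, matching the speaker's point that it won't be accurate 100% of the time.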


Michael Weinberg  38:04  

I have a follow-up question to that, but I want to jump to Michael first. You laid out this framework of the sorts of roles agencies see for comments: the information gathering, the participation, the political information. Chris commented on how he thinks it is true that they balance all those things, maybe with a preference away from the political and towards the more substantive areas. What is your sense of how agencies are balancing those things? And, if you want to editorialize, how they should balance those things. And finally, whether there's a role for technology that is separate in those categories, or some grand unified vision of technological support.


Michael Livermore  38:53  

Great. So, yeah, there's a couple of dimensions


Michael Weinberg  38:58  

of a hard question. I'm sorry. Sure.


Michael Livermore  39:00  

So there's three things: there's what agencies do, there's what agencies say they do, and then there's what agencies should do, maybe. And then, yeah, there's the interesting technology question. So as part of this report for ACUS, we talked to a lot of agency officials, and they tended to say, oh, we take an entirely expert view, like all we're doing is looking for substance, that's the only thing that matters to us. And so mass comments we ignore, or we look at the one that's representative, and if there's anything of substance, we take it into account, but otherwise it doesn't matter how many times we receive something. We're not worried about bot comments, because they're mostly kind of unsophisticated, and if we noticed that a bot gave us a good comment that was substantively useful, we would take that into consideration. We don't care. That's kind of the idea.
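The "look at the one that's representative" move can be sketched mechanically. This is a toy illustration added here, not ACUS's or any agency's actual tooling: cluster near-duplicate comments by overlapping word shingles, then keep one representative per cluster along with a count.

```python
# Toy sketch of mass-comment deduplication: group near-identical comments
# and keep one representative per group. Invented for illustration only.
import re

def fingerprint(text: str) -> frozenset:
    """Normalize a comment into a set of 3-word shingles."""
    words = re.findall(r"[a-z']+", text.lower())
    return frozenset(zip(words, words[1:], words[2:]))

def jaccard(a: frozenset, b: frozenset) -> float:
    """Similarity between two shingle sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def dedupe(comments, threshold=0.8):
    """Greedy clustering: each comment joins the first cluster whose
    representative is similar enough, else it starts a new cluster."""
    reps = []  # list of (fingerprint, representative text, count)
    for c in comments:
        fp = fingerprint(c)
        for i, (rfp, rtext, n) in enumerate(reps):
            if jaccard(fp, rfp) >= threshold:
                reps[i] = (rfp, rtext, n + 1)
                break
        else:
            reps.append((fp, c, 1))
    return [(text, n) for _, text, n in reps]

form_letter = "Please protect net neutrality because the internet must stay open."
comments = [form_letter, form_letter, form_letter + " Thank you!",
            "I object to this rule for detailed technical reasons stated below."]
for text, n in dedupe(comments):
    print(n, text[:40])
```

Run on the sample above, the three near-identical form letters collapse into one representative with a count of three, while the distinct substantive comment stays separate, which is roughly the triage the officials describe.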


There's two parts to that. So one is, it's relevant what level of the agency we were talking to. When we were interviewing folks, we were mostly talking to career civil servants who have been at their agencies for many years. And, you know, their bosses are the political appointees; these are the career civil servants. And so if we think about what legitimates someone like that in their environment, it's expertise, it's recourse to this notion that they're neutral experts who are just providing, you know, kind of an impartial perspective. And so it's very sensible for them to then make a kind of claim that all they're interested in is this substantive, neutral information. Right? It's very much fitting with their role. When you talk to people, or you get feedback from people, who are higher up, more kind of political appointees, more on the level of, like, an administrator or a commissioner if we're talking about the FCC, or an associate administrator at another agency like the Environmental Protection Agency, they're gonna say, oh, yeah, of course we care about the number of comments that we received, right? Like, that's just part of our job. And they would never say publicly that they don't take that into consideration, because that would be a crazy thing for them to say, like, oh, yeah, we just don't care how many people like or don't like what we do. They don't say stuff like that. Nobody says that on the Hill. And so, yeah, they would of course say, we take the weight of opinion very seriously, it's part of what we care about, we want to do things that are good for the American public. And what they would do is they would kind of merge that, they would say, look, when we're doing our job correctly, it should work both ways: what we decide to do in applying our expertise will be what is politically popular, because people want us to do smart things. Right.
So that would be kind of how they merge that together. There's another piece of this, which is the role of courts. So as I said, agencies are legally obligated to consider the comments that they receive, and courts have articulated what that means over time. One of the things they've articulated, and it's pretty clear, although maybe not crystal clear, is that agencies can take the balance of opinion into account; they're not legally barred from that. Some people have kind of claimed otherwise, but that's wrong. If an agency were to say, look, we got a lot of negative comments on this rulemaking, we didn't get very many positive comments, and that's caused us to rethink a lot and make some changes, that's going to be an acceptable answer. Agencies can say something like that. Showing that an agency made a change in response to lots of comments is not going to undermine the legitimacy of an agency decision. That having been said, what courts generally do is focus on substance, the substance of comments, and the courts will say agencies are not obligated to take the balance of opinion into account, right. So you can't go to court and say, look, there were more negative opinions about this than there were positive opinions; that doesn't get you anywhere with a court. The court is like, we don't care. And so the way litigation will work is you go into court and you say, we presented this substantive argument to the agency, and they didn't respond to it appropriately, they didn't make changes that would be appropriate, and they didn't explain why they didn't make those changes. And so given that that's what courts are looking for, you know, it's very sensible for agencies to focus on those kinds of comments.
So what agencies tend to actually do in practice, putting aside whatever they say, is they focus on the comments of the actors that they think might sue them. Right, they have the resources to take those comments and then turn them into litigation. And the way they take them into consideration is, if there's something that they find useful, they will take it into account. But for the most part, what they want to do is make any changes they need to, plus make any responses they need to, in order to essentially bulletproof the rule from litigation challenge. They know what they want to do, right, and it's just a matter, for the most part, maybe some tweaks aside, of making sure that it survives the judicial challenge. So there's a way in which how we talk about the public comment process, as this neutral thing where we collect information, doesn't necessarily fit with the reality, which is that it's much more about interest group clash and litigation and so on. That's also the reality. Okay, so that's kind of what agencies say and what agencies do. Then one other thing that I'll just put on the table, and this moves us into what agencies maybe should do: we should be very careful about how we think about the public comment process, because even putting aside misattributed comments, bot-generated comments, and questions around the validity of mass comments, cabining those for a moment, we still have to recognize that the folks who comment in a rulemaking are totally non-representative. Right? It's not a random group; random is exactly what it's not. It's the opposite of that. It's totally selected.

So we have to think about how we feel about that. That doesn't necessarily mean that we ignore them.


And so one way we might analogize it is, you know, there's juries, right. People have talked about using juries in the rulemaking process, to kind of select a small number of people. Of course, jury members are somewhat randomly selected, but they're far from representative. You can have polls, right, where pollsters really try to get a representative sample. If what you really want to do is tie your rulemaking to public opinion, to what people actually thought out in the world, polling would probably be your best way of getting that information, not a public comment process. But you can also analogize this to voting. And voting, of course, which we think of as very democratic and legitimating, is based on a totally non-representative sample; the people who vote in elections are very, very different from a random selection of people. They have more money, their age is non-representative, there's all kinds of ways that they differ from the public at large. So, you know, that's a reality. And we might think that the public comment process could be legitimated on kind of similar grounds. It's this participatory ground: the idea isn't that what we really want to do is tie the rulemaking to, say, majoritarian preferences or something like that; what we want to do is create an opportunity for people who want to be politically engaged to be politically engaged, in a way where they actually exercise power over the government. And if that's our goal, then actually the non-representativeness of the participators isn't necessarily a problem. But then the agency can't do kind of what it currently does, which is, with respect to many things, essentially ignore the number of people who are participating and ignore what most people have to say. So I think agencies are in a real bind, and have been for a long time.
Again, the fake stuff that we've seen in the last few years, the different kinds of fake comments, really only exacerbates, and focuses attention on, this really long-standing tension in the administrative process. One other thing I'll just put on the table, too, that's an important nuance: there are lots of different kinds of rulemaking. We've been talking about the FCC net neutrality rulemaking, which John Oliver talked about on television, and Burger King did a commercial about. I don't know if you guys remember that, but there was literally a commercial on television about, like, hamburgers and net neutrality together. I don't know if Public Knowledge had anything to do with that, maybe. But in any case, that was super high profile. And I mentioned greenhouse gas emissions, and so on. But there's only a handful of those rules, every once in a while, not even every year. Most rules, and there are hundreds and hundreds, if not thousands, of rulemakings, even major rulemakings with millions or many millions of dollars of consequences every year, are very technical. They're not the kind of thing that anyone would get, you know, really all that interested in. And so we just have to recognize that the kind of process and the kinds of concerns that we bring to bear for the mega rulemakings, the really important stuff, are very different from the kinds of concerns we're going to have when, even within the FCC, it's tweaking the permitting requirements around this or that, or the EPA has a technical rule having to do with a particular technology for water pollution control. These are all important things, but they're not the stuff of John Oliver or Burger King commercials. And so the things that we're worried about are quite different.
So with respect to the "should" question, I do think that we really need to tailor our understanding of what agencies should be doing: the mix of politics and technocratic understandings, the reality of partisanship in the administrative process, which is pretty under-theorized. Administrative law people tend to ignore that every few years the agencies just switch to, like, the other side, and they switch positions. And that's clearly not just a technocratic thing. So thinking a little bit more deeply about the role of partisanship, not just politics, but specifically partisanship, in the administrative process, I think, is also kind of on the table. And so, anyway, those aren't so much what they should do as what they should be thinking about when they decide what they should do.


Michael Weinberg  49:16  

Andrea, I think one way to understand your most recent paper is as an attempt to bring thinking from the information security world into this world of understanding what is real and fake on the internet. And so I wonder, how prepared are policymakers writ large, right, policymakers, agencies, or government, however broadly you want to define that? How prepared are they to bring those frameworks in? And maybe more importantly, how does the world look different as those players in the space, or advocates, or anyone else, begin to incorporate these kinds of understandings into how they see the internet, and the internet intersecting with their world?


Andrea Matwyshyn  50:08  

So let me connect that with another bit of philosophy. There are folks in other fields that have given these kinds of questions a lot of thought, in admittedly slightly different contexts. But I think that by learning from the work that's already been done in other fields, we sometimes can see ways of looking at our own field anew. So let me connect some useful frameworks that were created by the philosophers Helen Longino and Ian Hacking. And my version is in the spirit of their work; I'm trying not to do violence to their thinking, but I'm only willing to say it's in the spirit of their insights, not that I'm qualified to be applying their frameworks directly. The spirit of the insights of those two philosophers points us to, potentially, a way to merge some of the core First Amendment concerns that do impact both the private sector and the public sector with these questions of fakery. So in particular, there is a debate and discussion in the philosophy literature about how to develop cultures of criticism that lead us to better places. As we all know, philosophers have been debating the definition of truth for a very long time, for thousands of years. So I really appreciated the great framing comments that we had at the beginning of this conversation. What can we do in light of that reality of thought? Well, we can ask ourselves, how do we nevertheless create shared baselines, shared baselines of understanding? And there are at least four ways that we could identify shared baselines that show up in society, in court discussions, etc. The first is hierarchy: we have certain designated points for determining certain questions decisively. What am I talking about? NIST. NIST sets standards; NIST has NIST cheese and NIST chocolate. It's not asserting that that is the best cheese or the best chocolate, it's just a baseline. And so you can assess your cheese or your chocolate based on how much it deviates from the NIST standard. So that's hierarchy.
The second, and this is the acronym HELP, H-E-L-P, is expertise. With expertise, we have the question of how we determine expertise; generally, it's some combination of years of work in a particular field, credentialing, etc. We decide how we generate that category, but it's something that certainly we see in courts, and in the way that people defer to each other in conversation. The third category, which I find the most engaging and interesting, is legacy processes. And when I'm talking about legacy processes here, I'm thinking about Punxsutawney Phil. Everyone recognizes a weird form of legitimacy in Punxsutawney Phil. It's not clear that we trust Phil's prognostication abilities on the weather, but it is a community-building experience, and people love that groundhog. They love him. There's a whole festival around him. Anyway, I digress. So that is a legacy thing that is kind of a shared joke that we're all part of. And yet we're not going to let Phil the groundhog give us financial advice, necessarily, either. And Phil's track record of prediction is, I think, worse than 50% correct on whether spring is coming. And then the final P is process. Here I'm thinking about things like civil procedure and criminal procedure, the way that we establish trustworthiness of systems by assigning beforehand the flow of the information and the way that things will work. So those four categories of baselines can potentially help us create hooks of stability in an otherwise tumultuous reality, and also merge with various strands of First Amendment doctrine. Part of what makes this whole set of questions so interesting for me, and so complicated in my mind, is that you're dealing with five to ten different strands of First Amendment


cases here. So you're dealing with, for example, Alvarez's discussion of the fact that false speech should primarily be regulated in contexts leading to fraud, lying to public officials, perjury, impersonation, defamation, but the court is going to be hesitant to accept other contexts. Citizens United, even that case, which certainly reasonable people will disagree about the consequences of, nevertheless upheld disclosure-based requirements. So that gives us an insight into some of the approaches that could work in these contexts. Ward v. Rock Against Racism dealt with amplification requirements; that line of cases is directly applicable to some of the things that we're talking about. And perhaps most creatively, Lowe v. SEC deals with personalization requirements, and that's a path that we haven't necessarily engaged with yet. And then there's a second bundle of moderation issues from New York Times v. Sullivan and Reno v. ACLU, and that's coming attractions in my next article.


Michael Weinberg  56:15  

Thank you. I have lots more questions, but I think we're at a point where I want to invite questions from the audience. I will say that a tic that I developed when I was in Washington is that whenever I open the floor to questions, I have to remind people that questions end with a question mark, and not a period. But with that caveat, please feel free to come on up. We've got two mics here, and I welcome questions. Don't be shy. It's always great to be the first person. All right. Everyone, I guess, has heard that joke over and over and over.


Unknown Speaker  56:56  

So, you know, I run an organization that occasionally participates in these kinds of comments. I do want to say, this happened on something my organization and other organizations were working on: people flooded the Copyright Office with, basically, click-the-button, random-person comments on this very specific thing. It was terrible. It was embarrassing. And it didn't feel right; it was not what the Copyright Office was looking for, and not appropriate. So I want to preface my question with that strong opinion. Before I ask it, thank you all so much. One of the things that I think about a lot is this comment from the author Jia Tolentino, where she said that one of the issues with internet activism is that it collapses the difference between saying something and doing something. And so what I wonder with some of these public comments is whether or not you feel that theater happens because, (a), there is little space beyond social media to comment on things in a technocratic or institutional way, and, (b), comment systems are pretty difficult to use: you have to show up, or you're just kind of ignored, or it's very difficult to weigh in on local topics, or people don't actually want to invest the time to go and participate. So it's easier to click a button and say, well, I did my thing, I made a comment, because Public Knowledge told me to, or my library told me to. How do you feel this collapse between saying and doing, this, quote unquote, slacktivism, a word I don't like, plays into this when it comes to public comment? Thank you.


Andrea Matwyshyn  59:05  

So I think that one of the first cuts on your excellent question is to determine which comments are from actual humans, and which comments are potentially driven by some of the dynamics, for example, that Michael was mentioning, where people's identities were impersonated and stolen. That identification of a security problem, in essence, would be the first cut, and it's something that in any context we can engage with. After that first cut on the security issues, and I have to be a little restrained here, I think looking at the question of whether content is offensive, problematic, etc., which is to say the ideas, that's a different bucket than the question of whether there are problematic things from the standpoint of, for example, are there malicious links in the content? Are there ways to disrupt and damage the comment process through the content itself? So that's one cut. I'm very protective of the First Amendment. I think, you know, people deserve to have a voice, so I'm a fan of hearing from all parties. And I should probably cabin my comments there; happy to talk more when I'm not constrained.


Chris Lewis  1:00:56  

Let me add in some thoughts, because I wasn't familiar with who you quoted, but when you talk about online activism, I would differentiate between, say, hashtag activism, you know, the hey-everybody-tweet-about-this-today sort of activism, versus agency commenting, or even contacting your elected official. Because the first one, a tweet, is speech, but not really action. When we as a society set up structures to facilitate the public's opinion being shared with their elected representatives, or with the agencies that our elected representatives created on our behalf to craft policy, that's much closer to action in my mind. And it's what makes it so important. It's what makes what happened to you so important, and why, you know, I started out by saying that misappropriated comments, those sorts of things, are serious, and we need to find tools to make sure that we weed those out and identify them. It's noteworthy that the FCC at times has fined companies in some of the proceedings Michael was talking about that don't get a lot of attention, fined them because they gave false information, because it undermines the process and what the process was created for. So I would just draw a distinction between some of those. I worked as a local elected official; I understand the challenge of getting your opinions to elected officials. And so the more we can set up structures that facilitate that, the better. The onus is on the elected officials, and on us as the people they represent, to value setting up those structures in ways that make them productive. You know, when I was on a local school board and we took public comment on a topic, those stories, the qualitative information that we collected, were always best used when we could cite to them in justifying our vote. Whether you agree with us or disagree with us, you would know: oh, Chris voted this way because he put more weight in these arguments, these facts, or these stories. And then you hold him accountable. And that's why we have elections.


Michael Livermore  1:03:37  

Yeah, just to add a couple of thoughts to that. To pick up on one of the points you raised: I think it's very true that many people are kind of hungry for opportunities to participate; they're really interested, beyond voting once a year, or maybe once every couple of years. And so one of the things I think we should really take a step back to appreciate is the way that technology in the last 20 or 30 years has radically lowered the cost of becoming aware of what these agencies are up to. You used to have to go to the law library to read the Federal Register, which nobody is going to do; occasionally a law student might do it, right. But now you can just Google what the agency is up to. It's on their website, there's all kinds of information that you can get your hands on, and then it's just a matter of writing up what you want to say and clicking submit. So this is really amazing, and we should celebrate the fact that it is so much easier to participate, in a potentially meaningful way, in government decision making. Now, that having been said, one of the things that's happened as a consequence is, yes, we've lowered the cost, the actual financial cost barrier, of accessing information. But these rulemakings happening at the federal level are still incredibly sophisticated. So when we say anyone in theory can access the information, you need to know a lot, you have to have a lot of background knowledge, you have to have specific expertise, you usually have to have a graduate degree in something to make heads or tails of these things. And so that's the new barrier to participation: inasmuch as you want to participate in a really serious, substantive way, you have to essentially be an expert.
Now, that's true at the federal level. I really like that you shifted the conversation to the state and local level, because there's a lot more play there, I think, for people to participate in. One hope, I would just say, is that the same way the federal government has adopted these tools to facilitate participation, maybe that's something that over time can filter down to state and local governments. Because in a way, for folks who aren't necessarily going to be technocratic experts, really what's happening, I think, is that they're weighing in with values. They're saying: these are my thoughts, I care a lot about climate change, I care about our democracy, right? Or, I think you're crazy for caring about climate change, or, what are you worried about, our democracy is great, corporations should just tell us what to think. Whatever people's values are, right, they're looking for opportunities to express those. And then I think another thing that we've really not figured out, and I think you have to look globally for examples of alternatives here, is a process that's not merely about, okay, we're going to accept comments and do something with them, right? This kind of one-directional thing: the government puts out some information, says, okay, this is the sort of thing we're thinking about doing, we will listen to you, we will make a final decision, and that's the conversation. Because especially when we're talking about values, I think people are fundamentally interested in communicating with each other, and I think a lot can be gained socially if people are communicating with each other. And so thinking about structures of communication that are not just this kind of two-way conversation at best, but actually allow for more lateral communication on these matters of public concern, would be really wonderful.
And there are examples of that outside the US. The one problem I will just note, because nothing's ever easy, is that it's much more time consuming, right? And so it's going to affect who can participate as well. For example, in Taiwan, they've had public deliberative processes on things that have had more of these kinds of lateral conversations. But, you know, I can imagine that it's going to be a self-selected group of people, and so probably not representative. So none of this stuff is easy. There are no really easy solutions. But I do like the idea of more state and local experimentation. It would be wonderful to see some federal money to provide grants for that kind of thing.


Michael Weinberg  1:07:47  

I thought we were going to end on the inspiring note that it's fantastic that so many people want to be involved in democracy, but I think it is more appropriate that we end on the note that nothing is ever easy. So with that, please join me in thanking our panel. Thank you all.


Announcer  1:08:10  

The Engelberg Center Live! podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law. It is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.