Senator Klobuchar ‘pushing’ social media companies to improve content moderation


Sen. Amy Klobuchar’s new bipartisan social media bill, which she introduced Feb. 9 alongside Republican Sen. Cynthia Lummis, is effectively two bills in one. It is a thoughtful and promising attempt to develop content-neutral ways to reduce “social media addiction and the spread of harmful content.” It is also by far the most ambitious attempt in the United States to require detailed transparency reports from major social media companies. As such, it deserves careful consideration from lawmakers on both sides of the aisle, including attention to important First Amendment issues, followed by swift congressional action.

Nudges and Interventions

S. 3608, the “Nudging Users to Drive Good Experiences on Social Media Act” or the “Social Media NUDGE Act,” requires the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine to conduct an initial study, and ongoing biennial studies, to identify “content-agnostic interventions” that large social media companies could implement “to reduce the harms of algorithmic amplification and social media addiction.” After receiving the report on the initial study, expected one year after the law is enacted, the Federal Trade Commission would be required to begin a rulemaking process to determine which of the recommended social media interventions should be made mandatory.

What interventions do the authors of the bill have in mind? The bill lists examples of possible content-neutral interventions that “do not rely on the substance” of published material, including “screen time alerts and grayscale phone settings,” requirements for users to “read or review” social media content before sharing it, and prompts (which are not further defined in the bill) to “help users identify manipulative and micro-targeted ads.” The bill also approvingly discusses “reasonable limits on account creation and content sharing,” which appear to refer to kill-switch techniques for limiting content amplification.

In addition, the bill goes into detail on transparency, requiring social media companies to publish public transparency reports every six months and focusing on correcting some of the weaknesses that reviewers have noted in current transparency reports. For example, it requires large social media companies to calculate “the total number of views for each piece of publicly viewable content posted during the month and randomly sample from the content.” It would also require information about published and viewed content that has been flagged by users, flagged by an automated system, deleted, restored, tagged, edited, or otherwise moderated. This focus on reporting detail is a welcome advance over other approaches that remain at a higher level of generality.

Critics blame algorithms for many social media ills, and policymakers around the world are seeking to hold social media companies accountable for the online harms they algorithmically amplify. But no one at this point really knows how social media algorithms affect mental health or political beliefs and actions. More importantly, no one really knows what algorithm changes would make things better.

Skeptics of an algorithmic solution to the ills of social media focus on the difficulty of disentangling cause and effect in the world of social media. “Does social media create new types of people?” asked Joseph Bernstein, BuzzFeed’s senior technology reporter, in a 2021 Harper’s article, “or just reveal long-obscured types of people to a segment of the public unaccustomed to seeing them?”

Other skeptics point to a real weakness in an algorithmic solution to the problems of disinformation, misinformation, and hate speech online. “It’s a technocratic solution to a problem that’s as much about politics as it is about technology,” says New York Times columnist Ben Smith. He adds: “The new right-wing populists fueled by social media lie a lot and stretch the truth further. But as U.S. reporters who interviewed Donald Trump fans on camera discovered, his audience was often in on the joke.”

Even though cause and effect are difficult to discern in social media, it is undeniable that algorithms contribute to hate speech and other social media information disorders. The problem is not that the algorithms have no effect and that we are imagining a problem that does not exist. Nor is the problem that nothing works to counter the effect of misinformation and hate speech online, or that we know nothing about effective interventions. The problem is that we do not know enough to impose algorithmic solutions or require specific technical or operational interventions, especially those that over-surveil certain populations.

Until much more is known about the extent and causes of online problems and the effectiveness of remedies, legislators should not seek to impose specific techniques in legislation. What is needed is experimentation and evidence, not hunches about what is most likely to work.

The NUDGE bill takes this evidence-based approach. It requires government science agencies, drawing on the expertise of the academic community, to take the lead in generating recommendations for algorithmic interventions. To prevent the FTC from improvising on its own, it explicitly prohibits the agency from mandating any intervention that has not been addressed in the national academies’ reports.

Some Improvements Needed

Several improvements to the bill seem important to me. The first is to give researchers working with national science agencies full access to all the information they need to conduct their studies. The bill improves existing public transparency reporting, but it does not provide the necessary access to internal social media data for approved researchers. What the transparency reports mandated by the bill make available to the public may not be enough for researchers to determine which interventions are effective. They should have broad and mandatory access to internal social media data, including internal studies and confidential data on how content moderation and recommendation algorithms work. Only with this information can they empirically determine which interventions are likely to be effective.

The bill is careful to require the scientific bodies to conduct ongoing studies of interventions. A second improvement would be to require the FTC to update its mandatory interventions in light of these ongoing studies. The first set of mandatory interventions will almost certainly be only moderately effective, at best. Much will be learned from follow-up evaluations after the first set of interventions has been implemented. The FTC should have an obligation to update its rules in light of new evidence it receives from the scientific agencies.

The Cloud on the Horizon

As promising as it is, there is a cloud on the horizon that threatens the entire enterprise. The bill’s goal of reducing harmful content is at odds with its mechanism of content-neutral interventions. How can the scientific agencies and the regulatory agency determine which interventions are effective in reducing harmful content without passing judgment on the content? As Daphne Keller has noted, it is actually not that hard to slow down social media systems by inserting circuit breakers such as limits on the “number of times an item is displayed to users, or an hourly rate of viewership growth.” Such rules would restrict any speech that exceeds these limits: major breaking news, such as the videos documenting the death of George Floyd, as well as the latest piece of viral COVID misinformation.
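To make the circuit-breaker idea concrete, here is a minimal sketch of an hourly viewership limit; the threshold, names, and data structures are my own illustrative assumptions, not anything specified in the bill or in Keller’s analysis. It shows why such a rule is content-neutral in design but not in effect: any fast-growing post, whether vital breaking news or viral misinformation, trips the same limit.

```python
# Hypothetical sketch of a content-agnostic "circuit breaker": once a post's
# hourly view count exceeds a fixed ceiling, further algorithmic amplification
# is paused regardless of what the post says. The ceiling and names below are
# illustrative assumptions, not figures from the NUDGE bill or any platform.

from dataclasses import dataclass, field
from collections import deque
import time

MAX_VIEWS_PER_HOUR = 50_000  # illustrative threshold only


@dataclass
class PostMetrics:
    view_times: deque = field(default_factory=deque)  # timestamps of recent views

    def record_view(self, now: float | None = None) -> None:
        """Log one view of the post."""
        self.view_times.append(now if now is not None else time.time())

    def views_last_hour(self, now: float | None = None) -> int:
        """Count views in the trailing hour, discarding older timestamps."""
        now = now if now is not None else time.time()
        while self.view_times and now - self.view_times[0] > 3600:
            self.view_times.popleft()
        return len(self.view_times)


def amplification_paused(post: PostMetrics) -> bool:
    """Content-neutral check: stop recommending any post whose hourly view
    growth exceeds the ceiling, whatever its subject matter."""
    return post.views_last_hour() >= MAX_VIEWS_PER_HOUR
```

The point of the sketch is the policy tension, not the engineering: nothing in `amplification_paused` inspects the content, yet its practical burden falls on whatever happens to be spreading fastest at a given moment.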

But the more fundamental concern is that policymakers do not want rules that are neutral in their effect. They want interventions that enable the rapid dissemination of genuine breaking news and insightful new commentary on issues of public importance while preventing hate speech, terrorist content, and content harmful to children’s health. They want, in other words, technical proxies for harmful speech, not interventions that slow everything down.

Keller is rightly concerned about whether neutral circuit-breaker rules “would have a neutral impact on user speech,” because she believes the First Amendment might frown on rules that have a disproportionate effect on some content, even if the rules do not evaluate the content itself. For this reason, it is important that the policy community engage in a thoughtful assessment of the First Amendment implications of the NUDGE bill. My gut feeling is that just as the courts have allowed race- and gender-neutral proxies to achieve disproportionate gains for minorities and women in affirmative action cases, they will allow a similar recourse to content-neutral proxies to filter out harmful online content. But supporters of this bill need to consider how to position it for an inevitable First Amendment challenge, even as they begin advancing it through the legislative process.
