The Battle Over Free Speech Online Is a Volcano That’s Ready to Blow


Yves here. Given the routine inaccuracies in traditional media, and their near pervasive refusal to issue corrections and make retractions, the uproar over social media is not about accuracy. If you have any doubts, see how many publications have eliminated the ombudsman, the historical venue for making complaints and itself only a Potemkin form of accountability. Recall how the Washington Post refused to correct or retract its PropOrNot piece which attacked this site and other alternative media venues. The Post instead issued a bizarre “We don’t stand by the accuracy of our reports” disclaimer, which was ridiculed by the Columbia Journalism Review.

Traditional media regularly publish quotes that are factually challenged as part of their fetishization of accuracy. It is hard to see what the beef is about when the source of allegedly inaccurate information discloses his real-world identity, or it is widely known.

By Marshall Auerback, a market analyst and commentator. Produced by Economy for All, a project of the Independent Media Institute

Donald Trump threatened to close Twitter down a day after the social media giant marked his tweets with a fact-check warning label for the first time. The president followed this threat up with an executive order that would encourage federal regulators to allow tech companies to be held liable for the comments, videos, and other content posted by users on their platforms. As is often the case with this president, his impetuous actions were more than a touch self-serving and legally dubious absent a congressionally legislated regulatory framework.

Despite himself, Trump does raise an interesting issue—namely, whether and how we should regulate social media companies such as Twitter and Facebook, as well as the search engines (Google, Bing) that disseminate their content. Section 230 of the Communications Decency Act largely immunizes internet platforms from any liability as a publisher or speaker for third-party content (in contrast to conventional media).

The statute directed the courts to not hold providers liable for removing content, even if the content is constitutionally protected. On the other hand, it doesn’t direct the Federal Communications Commission (FCC) to enforce anything, which calls into question whether the FCC does in fact have the existing legal authority to regulate social media (see this article by Harold Feld, senior vice president of the think tank Public Knowledge, for more elaboration on this point). Nor is it clear that vigorous antitrust remedies via the Federal Trade Commission (FTC) would solve the problem, even though FTC Chairman Joe Simons suggested last year that breaking up major technology platforms could be the right remedy to rein in dominant companies and restore competition.

In spite of Simons’ enthusiasm for undoing past mergers, it is unclear how breaking up the social media behemoths and turning them into smaller entities would automatically produce competition that would simultaneously solve problems like fake news, revenge porn, cyberbullying, or hate speech. In fact, it might produce the opposite result, much as the elimination of the “fairness doctrine” laid the foundations for the emergence of a multitude of hyper-partisan talk radio shows and later, Fox News.

Given the current conditions, the Silicon Valley-based social media giants have rarely had to face consequences for the dissemination of misinformation, or outright distortion (in the form of fake news), and have profited mightily from it.

Congress and state lawmakers have made various attempts to establish a broader regulatory framework for social media companies over the past few years: proposals to impose existing TV and radio ad regulations on social media companies, privacy legislation in California, and congressional hearings featuring Facebook, Twitter and Google, where their CEOs testified on social media’s role in spreading disinformation during the 2016 election. But an overarching regulatory framework for social media has seldom found consensus among the power lobbies in Washington, and, consequently, legislative efforts have foundered.

As the 2020 elections near, the GOP has little interest in censoring Donald Trump. Likewise, Silicon Valley elites have largely seized control of the Democratic Party’s policy-making apparatus, so good luck expecting the Democratic Party to push hard on regulating big tech, especially if their dollars ultimately help to lead the country to a Biden presidency and a congressional supermajority. As things stand today, there’s not even a hint of a regulatory impulse in this direction in Joe Biden’s camp. As for Donald Trump, he can fulminate all he likes about having Twitter calling into question the veracity of his tweets, but that very conflict is red meat for his base. Trump wants to distract Americans from the awful coronavirus death toll, which recently topped 100,000, civil unrest on the streets of America’s major cities, and a deep recession that has put 41 million Americans out of work. A war with Twitter is right out of his usual political playbook.

By the same token, social media companies cannot solve this problem simply by making themselves the final arbiter of fact-checking, as opposed to an independent regulatory body. Twitter attaching a fact check to a tweet from President Trump looks like a self-serving attempt to forestall a more substantial regulatory effort. Even under the generous assumption that social media giants had the financial resources, knowledge, or people to do this correctly, as a general principle, it is not a good idea to let the principal actors of an industry regulate themselves, especially when that arbiter is effectively one person, as is the case at Facebook. As Atlantic columnist Zeynep Tufekci wrote recently, “Facebook’s young CEO is an emperor of information who decides rules of amplification and access to speech for billions of people, simply due to the way ownership of Facebook shares are structured: Zuckerberg personally controls 60 percent of the voting power.” At least Zuckerberg (unlike Twitter’s Jack Dorsey) has personally acknowledged that “Facebook shouldn’t be the arbiter of truth of everything that people say online… Private companies probably shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”

One thing we can quickly dismiss is a revival of the old fairness doctrine, which, until its abolition in 1987, required media companies holding FCC broadcast licenses to allow the airing of opposing views on controversial issues of public importance. That doctrine was challenged on First Amendment grounds in the 1969 case of Red Lion Broadcasting Co., Inc. v. Federal Communications Commission. Dylan Matthews explained in the Washington Post that “[t]he Court ruled unanimously that while broadcasters have First Amendment speech rights, the fact that the spectrum is owned by the government and merely leased to broadcasters gives the FCC the right to regulate news content.” In theory, the idea that the broadcast spectrum is still owned by the government and merely “leased” to private media could arguably be extended to the internet broadband spectrum, so that social media companies and digital platforms, like broadcast media companies, would have to abide by a range of public interest obligations, some of which may infringe upon their First Amendment freedoms. “However,” Matthews went on to point out, “First Amendment jurisprudence after Red Lion started to allow more speech rights to broadcasters, and put the constitutionality of the Fairness Doctrine in question.” It is unlikely that this would change, especially given the configuration of the Supreme Court led by Chief Justice John Roberts, which has tended to adopt a strongly pro-corporate bias in the majority of its rulings.

The FCC still retains some discretion to regulate conventional media on the basis of public interest considerations, but Philip M. Napoli, James R. Shepley Professor of Public Policy in the Sanford School of Public Policy at Duke University, has argued that “the FCC’s ability to regulate on behalf of the public interest is in many ways confined to the narrow context of broadcasting.”

Consequently, there would likely have to be some reimagination of the FCC’s concept of the public interest, so as to justify expanding its regulatory remit into the realm of social media. Napoli has suggested that:

“Massive aggregations of [private] user data provide the economic engine for Facebook, Google, and beyond.…

“If we understand aggregate user data as a public resource, then just as broadcast licensees must abide by public interest obligations in exchange for the privilege of monetizing the broadcast spectrum, so too should large digital platforms abide by public interest obligations in exchange for the privilege of monetizing our data.”

That still would mandate changes being initiated by Congress. As things stand today, existing legal guidelines for digital platforms in the U.S. fall under Section 230 of the Communications Decency Act. The goal of that legislation was to establish some guidelines for digital platforms in light of the jumble of (often conflicting) pre-existing case law that had arisen well before we had the internet. The legislation broadly immunizes internet platforms from any liability as a publisher or speaker for third-party content. By contrast, a platform that publishes digitally can still be held liable for its own content, of course. So, a newspaper such as the New York Times or an online publication such as the Daily Beast could still be held liable for one of its own articles online, but not for its comments section.

While the quality of public discourse has suffered mightily from the immunity granted by Section 230, the public doesn’t have much power to do anything about it. There is, however, a growing coalition of business powers that have bristled for many years at their inability to hold these platforms accountable for the claims made by critics and customers of their products, and that want to prevent the expansion of Section 230 into international trade agreements, as it has already seeped into parts of the new USMCA agreement with Mexico and Canada. A New York Times story about the fight explained that “companies’ motivations vary somewhat. Hollywood is concerned about copyright abuse, especially abroad, while Marriott would like to make it harder for Airbnb to fight local hotel laws. IBM wants consumer online services to be more responsible for the content on their sites.” At this point, one should expect sophisticated and well-resourced business lobbies in Washington to use a national controversy, like the recent clashes between Trump and Twitter and Facebook, to serve their long-term regulatory goals.

Oregon Senator Ron Wyden, an advocate of Section 230, argued that “companies in return for that protection—that they wouldn’t be sued indiscriminately—were being responsible in terms of policing their platforms.” In other words, the quid pro quo for such immunity was precisely the kind of moderation that is conspicuously lacking today. However, Danielle Citron, a University of Maryland law professor and author of the book Hate Crimes in Cyberspace, suggested there was no quid pro quo in the legislation, noting that “[t]here are countless individuals who are chased offline as a result of cyber mobs and harassment.”

In addition to the issues of intimidation of targeted groups cited by Citron, there are additional problems, such as the dissemination of content designed to interfere with the functioning of democracy (seen in evidence in the 2016 presidential election), that can otherwise disrupt society. This is not a problem unique to the United States. Disinformation was spread during the Brexit referendum, for starters. Another overseas example is featured in a Wall Street Journal article that reported in June, “[a]fter a live stream of a shooting spree at New Zealand mosques last year was posted on Facebook, Australia passed legislation that allows social-media platforms to be fined if they don’t remove violent content quickly.” Likewise, Germany passed its NetzDG law, which was designed to compel large social media platforms, such as Facebook, Instagram, Twitter, and YouTube, to block or remove “manifestly unlawful” content, such as hate speech, “within 24 hours of receiving a complaint but have up to one week or potentially more if further investigation is required,” according to an analysis of the law written by Human Rights Watch.

It is unclear whether Section 230 confers similar obligations in the U.S.

Given this ambiguity, many still argue that the immunity conferred by Section 230 is too broad. Last year, Republican Senator Josh Hawley introduced the Ending Support for Internet Censorship Act, the aim being to narrow the scope of immunity conferred on large social media companies by Section 230 of the Communications Decency Act. The stated goal of the legislation was to compel these companies to “submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.”

Under Hawley’s proposals, for example, Google or Bing would not be allowed to arbitrarily limit the range of political ideology available. The proposed legislation would also require the FTC to examine the algorithms as a condition of continuing to give these companies immunity under Section 230. Any change in the search engine algorithm would require pre-clearance from the FTC.

Hawley’s proposal would fundamentally alter the business models of social media companies that today depend on huge volumes of user-generated content. But it certainly will not solve the problem of fake news, which has emerged as an increasingly controversial flashpoint in the discussion on how to regulate social media. The problem with Hawley’s proposal is that it could potentially require digital platforms to engage in further content moderation. Ironically, then, efforts to retain the platform’s political neutrality could well create disincentives against moderation and in fact encourage platforms to err on the side of extremism (which might inadvertently include the dissemination of misinformation).

Public Knowledge’s Harold Feld noted that Section 230 does not exempt the application of federal or state criminal laws, such as those concerning sex trafficking or illegal drugs, with respect to third-party content. But he recognized that it by no means constitutes a complete solution to the problems raised here. In his book The Case for the Digital Platform Act, he proposes that Congress create a new agency with permanent oversight jurisdiction over social media. Such an agency could “monitor the impact of a law over time, and… mitigate impacts from a law that turns out to be too harsh in practice, or creates uncertainty, or otherwise has negative unintended consequences.” To maintain ample flexibility and democratic legitimacy, Feld proposed that the agency have the “capacity to report to Congress on the need to amend legislation in light of unfolding developments.”

Regulating the free-for-all on social media is unlikely to circumscribe our civil liberties or democracy one way or another, so the First Amendment enthusiasts can breathe easy. The experiment in letting anybody say whatever he/she wants, true or false, and be heard instantly around the world at the push of a button has done less to serve the cause of free speech, or enhance the quality of journalism, than it has to turn a few social media entrepreneurs into multi-hundred millionaires or billionaires.

We managed to have the civil rights revolution even though radio and TV and Hollywood were regulated, so there is no reason to think that a more robust series of regulations of social media will throw us back into the political dark ages or stifle free expression. Even with the dawn of the internet era, major journalistic exposés have largely emerged from traditional newspapers and magazines, online publications such as the Huffington Post, or curated blogs, not random tweets or Facebook posts. Congress should call Trump’s bluff on social media by crafting regulation appropriate for the 21st century.

That may have to wait until after the 2020 election, but it is a problem that won’t go away.


42 comments

  1. S.

    I could see this being printed in the NYT. It rehashes Russiagate narratives, which warps the issue of monopolies over public dialogue and expression into criticism of these monopolies for not censoring their content in the right way, and also completely ignores media consolidation when it invokes the civil rights era. Media back then was regulated, yes, but an environment with thousands of newspapers and magazines is not in any way comparable to one where most people get all of their information from Facebook/Instagram, Twitter, and YouTube.

  2. Yeh

    This problem isn’t a function of social media, it’s a function of reduced trust in society and how to make sense of a world where there’s so much information from so many sources, plus an alienated society and our desire to know more than we can. Steve Pinker is right to a large extent. News outlets are incentivised to produce outrage. Combine that with the sheer reach and power of the internet and you got a machine that continuously rouses people’s worries for profit. There are only so many things EVERYONE needs to worry about and not constantly. There are only so many friends you can keep in touch with.

  3. divadab

    This article is advocating censorship, not by the corporate platforms, but by an as yet unformed govt oversight agency according to a vague idea of future legislation.

    Censorship is a fraught area, full of “I know it when I see it” logic since so often political “truth” depends on your point of view. And malign parties will use censorship to suppress views they don’t like regardless of the truth – or, more accurately, because they don’t like inconvenient truths to be widely disseminated.

    And the wild west of the internet is threatening to people used to managing and controlling mass information through the legacy media, now of diminishing relevance – I haven’t watched the teevee news for over 15 years except in a hotel or airport and I know I am better informed as a result of scanning a wide variety of global web sources, including NC!

    SO I am leery of censorship – my inclination is let er rip and the truth will emerge – but clearly when irresponsible parties are inciting the stupid and easily-led with BS a la Pizzagate (Alex Jones – what a loon but sometimes dangerously yelling fire when there is not only no fire but not even a risk of fire) – then to me THAT is the point of censorship.

    But what the platforms have done is not to censor particular articles, but rather to silence the person deserving of censorship entirely and forever. This seems to me to be really stupid and heavy-handed and doesn’t really solve the problem but rather creates more problems and makes heroes of the merely hysterical and demented.

    SO what is the mechanism which provides a remedy to online fakery and incitement in the particular but does not ban and suppress and confine to outer darkness people with unpopular or unconventional or uncomfortable views?

    I hate the idea of an agency constantly monitoring – this is anathema to our legal system, which is complaint-based – so why not set up a complaint-based system with the power to review and censor? It could be a quasi-judicial setup, funded with a tax on the larger internet platforms, for example – mostly automated but with a process of human review for items garnering heavy complaint volume. It seems we cannot rely on the platforms to do this without actually suppressing non-inciteful free speech by banning and deplatforming and creating new problems. And the quango setup would have the power to fine purveyors of falsehoods and incitements – I don’t see this working unless people can be fined heavily enough to be a deterrent to future Pizzagates.

    1. Marshall Auerback

      Well, 1st Amendment rights are not absolute and as I pointed out in the piece, we didn’t devolve into a fascist state because Hollywood or broadcast media was regulated. So that’s a red herring.

      You “hate the idea of an agency constantly monitoring”. Again, a mischaracterisation of what these agencies do already. In fact what you go on to propose is very similar to what Harold Feld and I suggest! But you can’t have such a system set up in the absence of having the agency in place in the first place.

      1. Carolinian

        Oh I have to agree with divadab that the “fake news” problem is a fake problem and indeed an effort to censor inconvenient dissenting views on the internet. And who is pushing the “fake news” meme? The MSM of course. Given their own irresponsible way with the truth these days perhaps they are the ones who should be regulated but that would be wrong too. You either have free speech or you don’t.

        The real problem is concentrated power in the communications platforms that really have influence–the broadcast networks and cable news outlets and those remaining “national” newspapers. If we are going to have a new fairness doctrine then perhaps it should be applied to cable companies where the deregulation of the 1990s was a big disaster. If there was a genuinely progressive news channel on cable rather than the faux version–MSNBC–would someone like Sanders or Gabbard be able to get a fair hearing?

        What our media landscape needs is more competition, not less.

        1. Marshall Auerback

          Conventional media concentration is a problem in regard to getting the full spectrum of views, as opposed to those generally limited by the MSM. But competition, as I pointed out, does not necessarily solve the problem. More market competition gave us Alex Jones. How far does one want to take that? Nazi Radio?

          For the record, I don’t like concentrated markets and monopolies in general. At the same time, I’m not a Cato Institute/Chicago School libertarian. I follow Joan Robinson, Edward Chamberlin, Schumpeter, JK Galbraith, Alfred D. Chandler and William Baumol in distinguishing between traditional competitive markets with constant or diminishing returns and imperfect markets, mostly in the traded sector and infrastructure, characterized by economies of scale and scope and/or network effects (social media falls into the latter category).

          In competitive markets with no increasing returns, I do not favor “concentrated markets and monopolies.” If a monopolist tried to buy up all the shoe shine stands or taco stands in the US in order to jack up prices, more competition is definitely the answer, and I personally would have no trouble with a DOJ antitrust lawsuit.

          It is my belief, however, shared with the economists I mentioned, that in the increasing returns sector any predatory behavior by oligopolies or monopolies (a real danger at times) should be checked by the countervailing power of government regulation and/or labor unions, not by breaking Facebook into 20 mini-Facebooks.

          As far as broadcast media goes, even when we had the “fairness doctrine” the big 3 networks didn’t compete for viewers on the basis of which could be “more fair”. When the doctrine was eliminated, we got more competition, but did this create a new problem of misinformation? I believe so. Competition is not the panacea you think it is.

          1. Carolinian

            Almost all of the recent big media mergers have required government approval. So not only is the government not breaking up media concentrations, they are allowing them to become even more concentrated. And I disagree with your notion that “misinformation” is a problem. If we had Nazi radio then nobody would be listening to it and even if they did then shutting it down would be the Nazi solution, not the American one.

            If the elites are worried about rightwing extremists–and from where I sit I’d say that fear is quite exaggerated–the solution is to reestablish some sort of social contract that will keep such extremism marginalized. But I suspect they are more worried about the reasonable voices that oppose them rather than the nutcases. “Extremism” and “hate speech” are just an excuse.

        2. norm de plume

          ‘What our media landscape needs is more competition, not less’

          What it needs, like health and banking, is a public option.

      2. Divadab

        Well thx for yr frank response, Marshall – “devolved into a fascist state” I think overstates the case where the mechanisms of democracy still exist tho attenuated. (Not to do the Pollyanna dance or anything). Yes, we have a federal government controlled by cartels and where foreign client states overtly control foreign policy. And we’ve had decades (at least since Reagan – I never understood why that actor they hired to play President was, and remains, so popular) of intense propaganda denigrating government and weakening and destroying aspects of government that benefit citizens universally. All this you can learn with a few hours of internet browsing – could you say this in a true fascist state like China?

        The problem of course with your proposal is that the federal government is so systematically corrupt that any Censorship agency could and would be neutered by the cartels even if it were enabled by legislation which is itself unlikely. Much easier to accomplish at the State level where citizen initiatives can be used as for (Re)-legalization of cannabis.

        The one good thing about a two-party system is it’s better than a one-party system, but that’s it. The USA needs a Berlin Wall moment but I’m conflicted about the chaos and misery this necessary thing will cause – in the meantime the best approach is to disconnect, live local, and build a strong network to get stuff done – the Trump show is at best a distraction and the created hysteria very worrying.

        In any event thx for putting your views out there and for the dialog.

        1. witters

          Lambert, we got an ostensible definition of Fascism here: “… a true fascist state like China.” Clarifying.

      3. Anarcissie

        Well, I for one was astounded by your ‘Regulating the free-for-all on social media is unlikely to circumscribe our civil liberties or democracy one way or another, so the First Amendment enthusiasts can breathe easy.’ If one kind of speech can be shut down by some authority, so can another, and given the regulatory capture customary in our plutocratic social order (I am writing from the US) one can pretty well guess which kinds of speech are going to be targets. On the other hand, if as you say your proposed regulation is unlikely to circumscribe, etc., etc., then there is no need for the agency in the first place, since regulation will have no significant effect.

        If the Internet media were held to be common carriers (which they’re fighting against so they can squeeze certain of their customers) they would have to treat all customers equally, and (as with print media) the producer of certain kinds of expression, like libel, would be legally responsible for it. The current regulation and the additional regulation proposed seem to be designed to preserve certain people’s monopoly powers, not to protect the public from anything but less expensive communications. Fake news is not a problem when large corporations do it, is it?

        The idea of ‘fake news’ being a problem is laughable when one considers, say, the terribly respectable New York Times and its adventures with instigating the invasion of Iraq. Or just consider the torrent of falsity which television stations emit every second. It’s called ‘advertising’. We’re used to it, aren’t we?

        1. Divadab

          Regulated common carriers yes!!!! This way they don’t get to be editors or censors but rather provide infrastructure within which libel and incitement can be regulated (policed?) and the poster holds liability, not the carrier. But this is rather in conflict with the social media business model, no? Could you imagine Biden voting for something like this let alone championing it? Beholden to the monopolists as he is like the rest of them?

      4. nn

        Well, in Czechoslovakia we had a revolution despite state censorship. Still, I don’t think it’s a good idea to go ahead with censorship just because it doesn’t confer total power and the one holding the power may not use it to immediately build concentration camps.

      5. occasional anonymous

        “Well, 1st Amendment rights are not absolute”

        They should be. Unless you are directly advocating violence, anything should be permissible.

        1. Marshall Auerback

          Had to laugh a bit at this one:

          First Amendment rights should be absolute!

          And then introducing qualifications to circumscribe that absolute right.

          Once you accept that free expression, like liberty itself, is not absolute, then it’s perfectly legitimate to have a debate on where and how we draw the boundaries.

          1. occasional anonymous

            No, it isn’t up for debate. Direct threats of violence, but that’s all. Once you start saying it’s acceptable to create boundaries outside of that context, it is a slippery slope (and I don’t care if you consider that a fallacious argument).

            NC would be high on many people’s chopping block, I imagine.

    2. jonboinAR

      I think if you want to support the existence and dissemination of “unpopular or unconventional or uncomfortable views”, even if just passively by allowing them access to your platform, you’re kind of stuck WITHOUT a “remedy to online fakery and incitement”. It comes with the territory. The definition of these latter is to a large degree in the mind of the reader or listener. As soon as you task an individual or organization with policing “fakery and incitement” you become subject to some extent to their whim. I don’t think it a coincidence that we lacked access to much of the information that we have now at our fingertips before we entered the wild west territory of the modern Internet. I don’t think that it was just that a platform was lacking to put that heretofore missing info out. The traditional news platforms had built-in filtration systems that were probably inevitably controlled by people with agendas. You try to reinstitute in the new social media what we consider the responsibilities to “The Truth” that the old news-type organizations fulfilled, and you’re back where we were then.

      1. Divadab

        Well, libel and slander have well-established legal definitions, as does incitement. Slap a libel and incitement case or two on Alex Jones and the cost of dealing with it would be very educational to young Alex, no? Instead they ban the yahoo and make a hero out of him – what a useless bunch of maroons, playing whackamole and shrieking like my great aunt discovering a mouse in her flour bin.

  4. Larry

    Matt Stoller’s proposal to treat social media like telecoms is appealing. Facebook would have to charge for access to its platform and would not be able to sell ads. So it would become more like your phone line potentially. This might not tamp down the fires to engage audiences, but it would certainly kill off bots and restore ad revenue to more localized sources.

    Splitting up social media could be effective in a competition sense. Instagram is principally an image-driven social media platform that was built with privacy controls (unlike FB). Splinter FB, WhatsApp, and Instagram and you instantly reduce Zuckerberg’s power regardless of what other changes are made. FB is a dying social media platform in the sense that no young people use it (at least in the US). FB had to buy WhatsApp and Instagram to remain relevant. Allowing these giants to buy any nascent competitors is a disaster that lets one person have immense control over the landscape and competitive environment.

    1. Geof

      Facebook would have to charge for access to its platform and would not be able to sell ads.

      This is key. Treating users as products rather than customers is at the root of many of the problems with social media. Advertising has progressively ruined one medium after another. It lobotomized TV, spam wrecked email, robocalls corrupted the telephone, and ads rendered social media stillborn. It is only recently that the Internet has gotten so bad – we used to have a mass Internet that worked. I think that social media monopolies funded by advertising are the main cause of the failure.

      Another major cause of that failure is political dysfunction resulting from oligarchy and inequality. We have seen consistent drum-beat messaging from media marching in lock-step attempting to locate the blame elsewhere – Russia, racism, trolls. The idea that regulating media can make up for system collapse elsewhere is misbegotten. The problem isn’t there, and the solution isn’t either. Censorship treats the symptoms, not the disease, and it’s liable to make the disease worse.

      As to censorship, I see no signs that any authorities would do a good job. We have YouTube censoring anyone who contradicts the advice of the World Health Organization, even though WHO got the virus wrong and advised against masks. And while governments twiddled their thumbs, 4chan, of all places, apparently got the virus right.

      What sense does it make to concentrate power in authority when the political hierarchy is headed by a barbarian? Eric Weinstein has made an interesting comparison between the chaos of the Internet and the lock-step of political and media authorities. The Internet is full of truth and falsity – but flawed diversity is a lot less dangerous than a messaging monoculture (especially one embedded in the interests of a single class). We saw that monoculture during the primaries with black-outs of many candidates. These days, consistent messaging across authoritative sources is as likely to be an indicator of falsity as truth. I’ll take diversity and making up my own adult mind over child-like deference to conformity any day.

      P.S.: Knocking out advertising would have other salutary effects, like reducing over-consumption without picking market winners. I think that a massive reduction in advertising would be one of the top ways to address climate change, and numerous other social issues besides.

  5. Amfortas the hippie

    I’m pretty much an extremist regarding the First Amendment.
    Can’t go very far past “yelling ‘fire’ in a theater”…or “kill them all”…as limits, without breaking out into hives.
    But that sense applies to actual flesh and blood humans, in my book.
    Not immortal legal fictions.
    I think that’s somewhere near the root of this…”Corporate Personhood”.
    the Internet Backbone, much like the “airwaves”, belongs to Us, as near as I can tell.
    and that means it’s a Public Utility, and should be nationalised(and, yeah, ain’t no way that’s gonna happen without major changes in our political system)

    1. Alternate Delegate

      Seconded.

      As opposed to “money talks”, actual free speech is very much about the ability of individual people without status to be able to say “bad things” about others, especially about corporations and people with power and status.

      I’m very critical of the speech obstacles built into our current system, from national security and intellectual property to hate speech and libel laws.

      As usual, the answer to bad speech is more speech.

    2. Susan the other

      Free Speech is fine. We can and should say all the nasty, obnoxious one-liners we want in response to any given situation. It’s the same thing as a screech or a growl. It’s a reaction. But when we print our comments on a web site like this one, they take on a different obligation. Public influence should be rational, not reactionary. That’s the nuance that gets free speech all tangled up. Freedom of the press isn’t actually free speech. MA’s explanation of the situation is pretty good. I was thinking Trump was shrewd to threaten to withdraw Twitter’s indemnification because Trump’s tweets themselves remain in a grey area. The merit of his comments could go either way – it depends on the listener. So as far as regulation goes, maybe Trump can’t win this one. It seems quite reasonable to have a quid pro quo for the indemnity for 3rd parties – and I don’t think that’s going too far because we all have a sense of clarity on what is a good way to think about most things. That instinct is pretty amazing when you think about it. And, like it or not, we’ve got the Church Lady spying on us most of the time anyway. Under the rationale that they are maintaining law and order, no doubt.

  6. OpenThePodBayDoorsHAL

    He concludes with “that may have to wait until after the 2020 election,” but he doesn’t go on to mention that the current media and social media regulatory, access, financing, and fairness situation will be the main factor in deciding that election.

  7. Rick Shapiro

    The deterioration of public discourse in America and the world is a direct result of media giants whose business models depend on facilitation of slander. Well, burglars have a business model too. We should not only remove any sort of shield against lawsuits, but also forbid platform dissemination of anonymous statements. If that should destroy the business model of the media giants, the world would be a better place.

  8. Barry Winograd

    Online entities in the business of distributing content for public consumption should be held to the same standards of defamation and invasion of privacy as other publishers and broadcasters. This approach still permits significant leeway for what at times is extreme commentary about public figures under the standard established by the unanimous landmark 1964 decision in New York Times v. Sullivan. The case dealt with a lawsuit by Alabama officials against the NYT for publishing an ad seeking contributions for Martin Luther King to assist his defense against Alabama perjury charges. The NYT v. Sullivan standard allows legal action for reckless disregard for truth or actual malice in the publication of false statements. Even if there was a public policy reason to distinguish the online industry at one point in time to allow for growth, that day has certainly passed. For those interested in delving further, here is a link to the Oyez page for the NYT v. Sullivan case: https://www.oyez.org/cases/1963/39.

    1. Yves Smith Post author

      You are way way behind the state of play in defamation case law.

      It is virtually impossible to win a defamation case in the US (and I have a top lawyer who specialized in that area). Gawker was a real outlier and I have no idea how that came about. I think the key fact was they were able to get Florida as a venue.

      For instance, in most jurisdictions, you are subject to anti-SLAPP, which means you have to effectively plead your case twice, which means twice the legal bills. The resulting delay is deadly to plaintiffs, because memories fade and thus any testimony the plaintiff would rely upon can be more readily picked apart.

  9. Senator Blutarsky

    The article uses a general approach with regard to the problems with Section 230. This section has been in effect for a while, but the current uproar in the internet community isn’t primarily about Section 230.
    Rather, the main complaint content providers on YouTube have is that there are more and more restrictions, that content is taken down arbitrarily, that there are many non-transparent rules and that YouTube doesn’t apply its rules consistently.
    At least that’s what I hear YouTubers mention as their main issue with it. Same with Twitter.

    The fact that you get a warning after publishing a video in which someone says “covid” or “corona virus” is, from my point of view, almost bizarre. Or that the video is taken down because the content is not in accord with alleged “established facts”.
    I think the violation of the First Amendment is blatant in this case. You call that democracy?
    It makes it impossible to lead a civil discussion about issues, even without hate speech, racism or anything else that might warrant action from the content platform.

    The actions against alleged “fake news” are also a slippery slope.
    You need to provide clear criteria for identifying content as fake news. The content providers don’t have that, and there is no way to dispute a decision to take down content or add a “fact check” to it.
    Is some media outlet going on and on about “Russia collusion” real news even after the Mueller Report was published? If you take the available facts or proof as a basis, I don’t think so.
    So, many people view the categorization of fake news as biased.

    The restrictions on content publishers on social media have increased in recent months, and they aren’t enforced in a consistent manner. And that is perceived as less and less democratic.
    The more intrusive a social media platform becomes about its content, the more it has to justify the exceptional treatment it receives through Section 230.

    1. Michael von Plato

      My Dear Senator,
      RE: “The fact that you get a warning after publishing a video in which someone says ‘covid’ or ‘corona virus’ is, from my point of view, almost bizarre. Or that the video is taken down because the content is not in accord with alleged ‘established facts’.”

      I searched youtube.com for videos mentioning both “coronavirus” and “COVID-19.” There are hundreds of videos available.

      https://www.youtube.com/results?search_query=coronavirus
      https://www.youtube.com/results?search_query=covid-19

      Doesn’t that make your claim that “you get a warning after publishing a video in which someone says “covid” or “corona virus”, uh, BIZARRE??

      1. Senator Blutarsky

        Thanks,
        I gotta be more careful with my sources, sorry. That won’t happen again.

        Every time I look at a video and the person is referring to covid, that person (not always the same person) says, “I’m not supposed/allowed to say the word” and says something like “current health situation”.
        That gave me the impression that they’ll get into trouble when they say that.
        Maybe, they just don’t want to take the risk of being demonetized.

        Sorry for being factually wrong, I jumped to conclusions. I’ll stop doing that.

        1. Anarcissie

          Youtube is somewhat automated, and things that are taken down are often quickly replaced by another copy from the same or another account, or by something similar. Eventually this procedure will be completely automated, and one can imagine that as in China certain phrases will become unsayable. I was especially impressed by the famous instance of the two doctors who said something against lockdowns and were taken down almost immediately.

      2. Senator Blutarsky

        Now that I had the time to check the links, I understand the problem.
        Of course, there are tons of videos on covid on youtube. They are videos by news channels, medical infos, documentaries and current updates.

        I meant something completely different and I didn’t make myself clear.
        I was talking about podcasts, reviews on movies, tv-shows and so on.
        These content providers mostly make a living from monetizing their videos.
        And they are very cautious to use the words “covid” and “corona virus”.

        For instance they make a remark like “I know it’s hard these days because of …”
        and then they say “current health situation”.
        But I might be wrong about the videos being taken down after all.
        It may just be their fear of demonetization that makes them so cautious.

        So, all in all, it was too much assuming on my part.

  10. FedUpPleb

    We managed to have the civil rights revolution even though radio and TV and Hollywood were regulated, so there is no reason to think that a more robust series of regulations of social media will throw us back into the political dark ages or stifle free expression.

    You are forgetting the increasing monopolisation of media, including the internet, over the last several decades.
    You are also underestimating how much even independent voices are at risk of censorship online. The technology and infrastructure on which they rely – hosting, DNS, DDoS protection, virtualisation, search, and ad tech – is controlled by an ever smaller number of big players, who increasingly throw in the towel and adopt a take-down-first strategy.

    Matt Taibbi’s recent article covering the censorship of Michael Moore’s ‘Planet of the Humans’ is illuminating in revealing the scale of privatised corporate censorship. Facebook’s large ‘deletion centres’ point to an institutionalization of private censorship offices for online platforms. Presumably, they maintain additional centres for filtering content in various ‘markets’ like China, Turkey, or maybe even the good old US of A. Left unregulated, this portends inevitable disaster for online discourse, and I cannot fathom how civil rights orgs and especially lgbt people are not worried about this.

  11. TheCatSaid

    It’s even worse than the post indicates. A web developer posts about having been censored after posting sourced links–and then they actually seized her domain name! See here.

    Another tech person responded saying perhaps the only way to avoid censorship was by using a distributed Internet platform such as zeronet.io

    There are other such platforms (FreeNet etc), using peer-to-peer hosting. If any NC readers have had experience with such approaches I would be interested to learn more.

  12. Marshall Auerback

    I think your concluding line is exactly the argument I was making.

    The point of referencing Section 230 is (1) because it largely provides the legal foundation today, and (2) because its inadequacies are highlighted (much as you do here).

  13. The Rev Kev

    I heard this discussion about social media described as the difference between a bookstore and a publisher. A bookstore cannot be held to account for the books that it is selling, as it has no control over the contents. A publisher, on the other hand, can be held to account, which is why publishers have teams of lawyers checking out new books before they are published.

    To date, social media companies have claimed that they are like bookstores and so have immunity for what they display in their media. But now social media has taken to monitoring and censoring the contents, banning some people, demonetizing others. That definitely makes them publishers, and their free ride may be coming to an end. So it is not so much about free speech but about aligning with other formats of publishing. We know that social media is censoring people already, so this would make it official.

  14. TheCatSaid

    [The links in my earlier comment didn’t come out right because of the device I was using. I’m posting again now, with more context and working links:]

    Censorship is also being applied by suspending domain names. The web developer in question said she had never encountered this before. She had experienced social media censorship of posts with linked sources–but never suspension of a personal domain.

    In the same twitter thread, another tech person suggested people might have to start using a decentralized internet platform such as zeronet.io. These seem to be peer-to-peer systems that don’t rely on a specific hosting server. It appears there are several platforms like this, presumably using blockchain technology. Could this be a viable approach to getting around censorship?

    1. Alternate Delegate

      Hard to tell, of course, but it sounds like this is related to a discussion about off-label use of ivermectin. Normally I’d be at least somewhat sympathetic to the idea that poor medical advice can be harmful.

      But we are also in a situation where expensive drugs – cough, remdesivir, cough – are being pushed hard by the usual suspects (et tu, Fauci?), while the discussion of alternatives is being given the internet equivalent of the tear-gas-and-watercannon treatment. Who’s doing this? How would we be able to find out if this is being coordinated by PhRMA or PhRMA-adjacent forces? An absence of evidence is not innocence.

      In the absence of transparency: By their deep pockets ye shall know them.

  15. Pelham

    American taxpayers paid for the development of the internet, therefore it belongs to us even more than the airwaves. With this standing, we can demand whatever we want of those entities using the internet. And if they don’t like it, like newspapers and magazines they can go out and create their own medium to express themselves.

    Separately, I propose simply repealing Section 230 for any online service that in any way edits, censors or “fact checks” the content generated by its users. Users, however, would be required to verifiably identify themselves so they could be held legally liable for damages from any defamatory content they post.

    Sadly, one result of such unrestricted content would likely be more presentable versions of 4chan and its murderously dangerous ilk. With that in mind, I hereby propose that the internet be closed to the public as a clear and present danger and restricted to its original role linking research institutions for scientific purposes.

    I’d be truly sorry to lose NakedCapitalism, but I’m willing to make the sacrifice because as things have been going, none of us are headed anywhere good.

  16. rjs

    What would this lead to? I remember the ’60s, when censorship was used to throw politically incorrect poets in jail for using forbidden four-letter words…

  17. crosslakeJohn

    We have pissed away imperfect but established processes for social discourse so that a few platform owners can sell ads.
    It’s that simple.
    Google digital ad revenue in 2019 was ~$135 billion.
    What a waste.

    Related — I am continually amazed at the number of well-intentioned, highly educated people whom I am fortunate enough to meet who do not understand the distinction between reading the news and opinion pieces in a newspaper vs reading them online.
    It is so significant that it simply cannot be overstated; so here goes for clarity — when you buy a newspaper or magazine, nobody knows which articles you read or skip or indeed if you line the birdcage with the purchase. When you read the same content online, you must click through a headline or teaser in order to get to the content of interest to you. This is tracked. The owners of these online platforms have a mandate to maximize shareholder value, and articles that get high z-scores (lotsa clicks) generate more ad revenue than those that do not, and the owners know which ones those are moment-by-moment. If an article that asserts/reports “X” as true generates ad revenue, then the platform owners will create and publish an article that asserts the negative of “X”, on the gamble that if “X” generates revenue, then the negative of “X” has high odds of generating a lot of revenue too.
    This is one explanation for why the “truth” seems so blurred to normal citizens who are trying to understand the world by reading the news online. For every article that asserts “X”, you will have no trouble finding an article that asserts the negative of “X”, and that article seems just as credible.
    Ultimately, news articles online about benign events like the “Opening of the Flower Exhibit at the Dallas Zoo” generate trivial revenue and cannot be manipulated into “X” vs negative “X” in order to maximize ad revenue.
    Please I beg of you NC Readers, if you do not get this, please read it again. We have destroyed civil discourse so that a handful of our leaders and tech oligarchs can sell ads.
