Can you take down web content democratically? If so, under which conditions?

Last week, I gave a presentation at the VOX-Pol Workshop on Countering Violent Extremism Online and the Impact on Civil Liberties at Harvard University's Berkman Klein Center for Internet & Society.

It covered the basic requirements for democratically legitimate content regulation (including take-down), especially:

  • the justification of content regulation: concrete and specific norms (e.g. criminal law), and
  • procedural requirements to prevent errors and abuse (which depend on technological and business developments).

I illustrated these theoretical points with the details of the recent German “hate speech” law, which obliges social media platforms to take down illegal content upon user request within 1-7 days, under threat of severe fines. I emphasized that the law not only delegates content regulation to self-regulation by platforms (a privatization of law enforcement), but also covers 22 complex offenses rather than incitement to hatred alone, e.g.:

  • encouraging the commission of a serious violent offense endangering the state,
  • treasonous forgery,
  • forming criminal or terrorist organizations,
  • insult and defamation, and
  • forgery of data intended to provide proof.

I finished with an overview of the main criticisms of the law, especially the lack of:

  • user rights of objection (counter-notice procedures for cases in which legal content was taken down by mistake), and
  • meaningful public oversight of private content regulation.

All of these aspects need to be part of the emerging discussion of automated, AI-assisted content regulation. A paper is to follow; the presentation can be downloaded here.
