Only a few external links this week, as there already seems to be a lot to dig into here, even if the articles themselves are shorter. Xiao Wu's post in particular is part of a larger paper (linked at the top of the article).

  • Redress/Repress: What the "War on Terror" Can Teach Us About Fighting Misinformation
    Harvard Kennedy School. Alexei Abrahams & Gabrielle Lim (2020)

    The authors here take a materialist approach to advocate "redressing" the sociopolitical root causes of both terrorism and misinformation. They argue that people flock to conspiracy theories (a prime example of politicized misinformation) when they feel that the "traditional process" is not working for them. When the game is rigged against you, do you continue to believe the ruling elite who tell you to ignore your experience and just "keep working hard"?
    I actually think the authors' connection between terrorism and misinformation is correct, with the caveat that it depends on what exactly counts as [mis]information. Conspiracy theories, such as those surrounding the Plandemic nonsense, have a habit of resulting in real-world violence and even stochastic terrorism (Wired).

  • How Not to Know Ourselves (Platform Data and Technological Determinism)
    Data & Society. Angela Xiao Wu (2020)

    We can only tell you what we know; we cannot tell you what we do not know (Rumsfeld's famous unknown unknowns). The technological determinism of platform technologies (and the corporate sinew that animates them) creates the "Us" that fits into the categories they create. OKCupid and PornHub are the most salacious, and somewhat easiest, examples. Do PornHub's categories truly reflect a "human sexuality"? Where do their trending searches come from, and do they mean anything? PornHub argues its data reveals something about our "desires," but those desires do not exist in a vacuum, nor are they innate or unchanging. If we take Foucault seriously in his «History of Sexuality», then sexuality itself is a created and social phenomenon.
    Applied to politics, to policy, to the wider world, these technologies reify and reinforce the existing outlook and status quo if left unchecked. See bias in facial recognition, bias in AI lending, racist sentencing algorithms, sexist Facebook recommendations... Technology will not free us if we don't do the work first.

  • The Problem of Speaking for Others (PDF)
    Cultural Critique. Linda Martín Alcoff (1991, 1996)

    Alcoff also wrote «The Future of Whiteness», which I reviewed here a long time ago.
    Alcoff investigates the moral questions that come with "speaking for others." And now here we are, decades later, with issues of racial representation (see whitewashing in BoJack Horseman, The Atlantic), and of course BLM and questions of white allyship.
    Difficult questions, ones which I think the so-called "purity politics" of the Moral Left is ill-equipped to answer, at least when its moralizing isn't itself generating even more problems and tension.

  • OpenAI's New Language Generator GPT-3 is Shockingly Good—And Completely Mindless
    MIT Technology Review. Will Douglas Heaven (2020)

    The machines are coming. How do you even know a human wrote this? Or your next email, a viral NYTimes article, a pop song? The era of copywriters is ending. All hail our silicon overlords!
    Ironic that we'll put them to use writing useless marketing emails that nobody but other AIs (spam filters) will ever read. The bot inflection point looms ever nearer (NYMag).