“Ethical Markets recommends this round-up of articles by the Columbia Journalism Review on the need to address the weaponizing of algorithms on social media platforms, which addict users through “engagement” and “recommendation” features that rapidly circulate falsehoods, conspiracy theories, death threats, and hate speech, all to drive profits in their advertising-driven business models. We welcome this from CJR, although they are late to making the kind of policy recommendations we have been advocating since the seminar we held with our partner, the World Academy of Art & Science, November 19, “Democracy Challenged in the Digital Age” (see my paper for this seminar at www.worldacademy.org).
Our recent editorials on this growing threat to democracies worldwide include “Steering Our Powers of Persuasion Toward Human Goals”, “Steering Social Media Toward Sanity”, and “Let’s Train Humans Before We Train Machines”; my book reviews of Shoshana Zuboff’s “The Age of Surveillance Capitalism” (2019) and Rana Foroohar’s “Don’t Be Evil” (2019); as well as our TV shows with Constitutional lawyer and former Speaker of Florida’s House of Representatives, Hon. Jon Mills: “Privacy in the New Media Age, pt 1” and “Privacy in the New Media Age, pt 2”; and “The Future of the Internet” and “Artificial Intelligence”, both with NASA Chief Scientist Dennis Bushnell. All are distributed globally to colleges and libraries by www.films.com or free on demand at www.ethicalmarkets.tv.
We have proposed that all personal information be secured by grounding ownership rights in English settled law since 1215 in the Magna Carta’s writ of “habeas corpus,” which assures that all people own their own bodies. We have updated this as “information habeas corpus,” assuring that we also own our brains and all the information they generate, at our URLs info-habeascorpus.com and i-habeascorpus.com.
Feedback and cooperators welcome.
~Hazel Henderson, Editor”
What should we do about the algorithmic amplification of disinformation?
By Mathew Ingram
The results of the 2020 presidential election. The alleged dangers of the COVID vaccine. Disinformation continues to have a significant effect on almost every aspect of our lives, and some of the biggest sources of disinformation are the social platforms that we spend a large part of our lives using—Facebook, Twitter, and YouTube. On these platforms, conspiracy theories and hoaxes are distributed at close to the speed of light, thanks to the recommendation algorithms that all of these services use. But the algorithms themselves, and the inputs they use to choose what we see in our feeds, are opaque. They’re known only to senior engineers within those companies, or to malicious actors who specialize in “computational propaganda” by weaponizing the algorithms. Apart from hoping that the companies will figure out a solution, even if that goes against their financial interests, as it almost certainly will, is there anything we can do?
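To make the amplification mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of how an engagement-driven feed ranker might score and order posts; the field names, weights, and example posts are hypothetical and do not describe any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

# Hypothetical weights: shares and comments count for more than likes,
# since they push a post to new audiences and keep users on the platform.
WEIGHTS = {"likes": 1.0, "comments": 3.0, "shares": 5.0}

def engagement_score(post: Post) -> float:
    """Score a post purely on engagement signals; accuracy is never an input."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed so the most 'engaging' posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Local weather update", likes=120, shares=2, comments=5),
        Post("Outrageous (and false) health claim", likes=80, shares=60, comments=90),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):8.1f}  {post.text}")
```

Nothing in a scoring function like this measures truth, so a false but provocative post can outrank accurate reporting simply because it provokes more reactions; that, in essence, is the opacity and amplification problem the researchers below describe.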
We invited some veteran disinformation researchers and other experts to discuss this topic and related issues on CJR’s Galley discussion platform this week, including: Joan Donovan, who runs the Technology and Social Change research project at Harvard’s Shorenstein Center; Sam Woolley, an assistant professor in both the School of Journalism and the School of Information at the University of Texas at Austin; Anne Washington, an assistant professor of data policy at New York University and an expert in data governance issues; Khadijah Abdurahman, an independent researcher specializing in content moderation and surveillance in Ethiopia; Irene Pasquetto, who studies information ethics and digital curation as an assistant professor at the University of Michigan’s School of Information; and Lilly Irani, an associate professor in the department of communication at the University of California, San Diego.
Donovan, whose specialty is media manipulation, disinformation, and adversarial movements that target journalists, says she believes the US needs legislation similar to the Glass-Steagall Act, which put limits on what banks and investment companies could do. This kind of law would “define what these businesses can do and lay out some consumer protections, coupled with oversight of human and civil rights violations by tech companies,” Donovan says. “The pandemic and the election revealed just how broken our information ecosystem is when it comes to getting the right information in front of the right people at the right time.” The way that Facebook and other platforms operate, she says, means that “those with money and power were able to exert direct influence over content moderation decisions.”
The broader problem is not just the amplification of conspiracy theories about vaccines or other beliefs, says Woolley. “In Myanmar and India, we’ve seen disinformation on social media grow into offline hate, violence, and murder,” he says. The difficulty in proving that statement X led to the murder of Y shouldn’t stop us from holding the platforms to account, Woolley feels. “We know billions of people use these products, that they are key sources of information. If they prioritize, amplify, and massively spread content that is associated with offline violence and other societal harms, then they must be held accountable.” Washington notes that “users have no control, as in zero, about who they are compared to and what features are selected for that comparison. Instead of blaming people who take the wrong turn, I would instead suggest that we consider why we are building roads like that in the first place.”
Facebook, Twitter, and YouTube are often seen through a primarily Western lens, because they are based in the US and events like the 2020 presidential election draw a lot of attention, says Abdurahman, but the algorithmic amplification of disinformation is a global problem. And while Facebook in particular is happy to brag about how many users it has in other countries, it doesn’t always follow through with the services that might help with moderation of content in those countries. For example, Abdurahman says, “in Ethiopia there are 83+ languages. As of June there was only automated translation available for Amharic (the national language) and neither human nor automated translation for any other languages.” In other words, she says, Facebook, Twitter, and other social media companies “literally could not see/hear/understand the content circulating on their platform.” Facebook in particular, she says, has reached the point where it should be treated—and regulated—as a public utility.
Below, more on algorithms and disinformation:
- Public interest: Pasquetto says we got where we are because “we have been prioritizing platforms’ interest (i.e., profiting out of the attention-economy business model) rather than the public interest (i.e., curating and giving access to good quality, reliable information over the Internet).” Also, she says, we have a cultural problem that involves “blind faith in the positive power of innovation,” best embodied by Facebook chief executive Mark Zuckerberg’s famous motto “Move fast and break things.” The other reason we are where we are now, she says, is that certain people figured out how algorithmic amplification works “and made use of it to amplify their own perspectives.”
- Bias to action: Irani says her study of innovation showed her the mechanics behind this drive to move quickly. The source of this ethos, she says, “is a management theory response to a couple of changes that made long-term planning less effective,” including the rapid rate of technological change and the globalization of trade. “All of this led to this idea that McKinsey consultants called ‘the bias to action,’” she says. “This crucially assumes the organization doesn’t face the consequences of their mistakes or bear responsibility for the social costs of all that pivoting.” In other words, companies get to move fast and break things without being accountable for what they broke.
- New rules: India is implementing new restrictions on social-media companies like Facebook, Twitter, and WhatsApp because of what that country says is a growing problem of hoaxes and hate speech. The new guidelines say that in order to counter the rise of problematic content, the companies must establish “grievance redressal mechanisms” to resolve user complaints about content, and share with the government the names and contact details of their “grievance officers.” These officers must acknowledge complaints within a day and resolve them within 15 days. Russia, meanwhile, has said it is “slowing down” Twitter because the company won’t deal with negative content.