From election misinformation to GameStop: how to stop social media algorithms from hurting us

Misinformation, outright lies, conspiracy theories, and fringe movements have always had real-world consequences. Fascists in Italy and Germany, once a small band of pariahs and gadflies who sported funny looking hats and mustaches, managed to hijack those countries’ political systems after World War I, putting them on a collision course with the globe’s liberal democracies. We might be at such a crossroads once again.

Small groups of committed enthusiasts are using the power of social media and its algorithms to push their otherwise quixotic and bizarre ideas into the mainstream. These kinds of movements have become more commonplace, and their velocity has increased. The most recent case: Reddit’s WallStreetBets group of merry men (and women) driving GameStop’s share price into the stratosphere in a bid to squeeze hedge funds out of short-selling positions. While the first set of folks who pumped up the stock did so without algorithmic complicity, the buying frenzy quickly spread beyond their circle thanks to AI selecting and recommending stories, news, and testimonials that glamorized the populist campaign.

Mom-and-pop investors are already getting hurt as GameStop’s market value falls like a stone and once again reflects its book value. The lies spread online about the “stolen election” will further curtail the Republican Party’s appeal in the suburbs, making it less likely to win presidential elections and weakening a democracy that hinges on the balance provided by two competitive parties. This is on top of the toll the Big Lie has already taken, including the Capitol riot.

So what to do about the collateral damage that sometimes occurs when social media amplifies lies and fringe ideas through its use of algorithms? Thus far, the solutions that lawmakers and pundits have advanced are heavy-handed and often centered on the outright ban of innovative technology. They run the risk of making mis/disinformation and conspiracy theories worse.

The problem of algorithmic amplification

Understanding why these solutions fall short requires us to reframe the problem itself. Users of social media, both those who post content and those who consume it, benefit from their exchange of information — whether it’s real news that informs them about the world or conspiracy theories that indulge their fantasies and basest desires. While this interaction might prove relatively harmless to the individuals involved, it creates what economists call a negative externality: the actions of two or more parties to an economic exchange produce harmful spillovers that affect other people in society. Think of a real-world protest organized on a conspiracy theorist’s Facebook page. The negative externality occurs when the protest turns violent and results in property damage and fatalities.

There are several ways that we deal with minimizing negative externalities in the real world; the digital world is no different. (We’ll get to some of those potential fixes in a moment.)

Whatever the ultimate fix, we need to first understand what many smart techies claim is the source of the harm to society from the spread of digital lies: algorithmic amplification. To maximize engagement on their sites, social media companies have to figure out how to share content with their users instead of putting the onus on them to intentionally seek it out. Digital platforms tend to do this in a way that generates more ad revenues; advertisers in turn seek more views and clicks. Platforms employ strategies that show users content they will find relevant and interesting, and that serves as a gateway to more content.

Enter artificial intelligence (AI): It selects and recommends content personalized to each user (whether posts from the user’s connections, posts her connections have liked, or content from people she follows). The idea is that consumers will be more likely to click on that material and share it. Consider YouTube: While its community standards prevent its algorithm from recommending so-called borderline content (e.g., lies about Covid-19), the platform is designed to maximize engagement, both in terms of the time users spend on the site and their overall interaction with what they watch.

Since YouTube tends to recommend videos with more likes, more comments, and longer watch times, it may feed users ever stronger and more extreme content. Because the most engaging content is often the most polarizing, sexualized, or otherwise extreme, the YouTube algorithm may recommend videos that glorify violence and espouse conspiracy theories. A person may start by viewing “alt-light” content questioning the veracity of the 2020 election and, in short order, be exposed to “far-right” videos praising neo-Nazis. Radicalization and polarization may ensue.
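To make the dynamic concrete, here is a minimal sketch of engagement-driven ranking. It is our illustration, not YouTube’s actual system: the video records, weights, and scoring function are hypothetical. The point is simply that when ranking rewards engagement signals alone, the most provocative item rises to the top regardless of accuracy.

```python
# Illustrative sketch of engagement-based ranking (hypothetical weights,
# not any platform's real algorithm). Items with more likes, comments,
# and watch time float to the top, whether or not they are accurate.
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    likes: int
    comments: int
    avg_watch_minutes: float


def engagement_score(v: Video) -> float:
    # Hypothetical weights: engagement signals only, no quality signal.
    return 1.0 * v.likes + 2.0 * v.comments + 5.0 * v.avg_watch_minutes


def recommend(candidates: list[Video], k: int = 2) -> list[Video]:
    # Rank purely by predicted engagement and return the top k.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]


videos = [
    Video("Local news recap", likes=120, comments=15, avg_watch_minutes=2.0),
    Video("The election was stolen?!", likes=900, comments=400, avg_watch_minutes=11.0),
    Video("How mail-in ballots are counted", likes=300, comments=40, avg_watch_minutes=4.0),
]
for v in recommend(videos):
    print(v.title)  # the conspiratorial video ranks first
```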

Why killing algorithmic amplification isn’t a fix

It is no wonder then that some people working in the digital space point to algorithmic amplification as the ultimate culprit of harm produced online by social media. They therefore want to ban it, or at least impose a moratorium. But it has yet to be established that algorithmic amplification is in fact the source of the problem and, even if it is, that banning it would be the right solution.

First, it’s not clear that algorithmic amplification is the cause of the spread of mis/disinformation. Conspiracy theories long predate digital platforms and the internet; they are as old as the written word. Politicians who have spread conspiracy theories and incited violence through modern means include Mussolini (radio/film), Hitler (radio/film), Perón (radio/television), Milošević (television), and Rwanda’s Hutu Power (radio). We also learned on January 6 that when politicians and their tagalongs deliver speeches in the flesh, they can spread lies and inspire chaos. Their ability to amplify conspiracy theories the old-fashioned way may be more powerful than any algorithm.

Besides, people susceptible to believing conspiracies could also be the type of people more likely to stay on sites such as YouTube for a longer period, in which case they would actively search for hardcore content without an algorithm’s help.

Second, even if algorithmic amplification is responsible for the spread of falsehoods, it is not obvious that the costs of AI-aided content selection outweigh its benefits. All manner of businesses that market and sell their wares on Facebook rely on its algorithm to capture eyeballs for their ads and drive traffic to their sites. A ban threatens millions of jobs and consumer satisfaction, since AI can also surface truthful content that is not only highly valued by users but also socially beneficial.

Third, there are always unintended pitfalls to banning behaviors even when they clearly contribute to social harm. Take narcotic drugs. Treating drug addiction brings public health costs, regardless of whether the drugs are legal. But there are additional costs if they are outlawed, from enforcing bans to violent cartel turf wars.

Similarly, banning algorithmic amplification on mainstream media sites would create incentives for wildcat suppliers of conspiracy theories to avoid regulation by launching new platforms that would use outlawed algorithms with reckless abandon. This could fuel even stronger lies through AI unhindered by community standards and moderation. Hardcore addicts will follow in their wake. Parler and Gab are living proof.

Moreover, even if we could say with certainty that algorithmic amplification creates a net social harm, it’s not clear that a ban is the best way to address the problem. Policymakers have additional tools for limiting “social bads” that, to the best of our knowledge, have not yet been discussed in relation to big tech but that might provide better solutions.

More promising solutions

Regulators can put a limit on the quantity of the “social bad” produced and allow the market to allocate its use. How? By setting a cap on the overall amount of bad content, allocating the right to distribute it, and then allowing market exchanges to decide who exercises this right. This mirrors a cap-and-trade system that limits carbon emissions to a set amount and then allows polluters to trade emission permits. With online platforms, this might involve capping algorithmic amplification. Tech platforms that don’t mind paying top dollar could purchase “AI permits,” while other platforms would have an incentive to invest in new ways to select content – involving more human discretion – much like cap and trade in carbon emissions drives innovation in clean energy.
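As a rough illustration of the mechanics, consider the sketch below. It is purely hypothetical: the platform names, permit counts, and the idea of one permit per algorithmic recommendation are our assumptions, not features of any existing or proposed regulation.

```python
# Hypothetical sketch of "cap and trade" for algorithmic amplification.
# A regulator allocates permits; each algorithmic recommendation consumes
# one; platforms that value amplification most can buy permits from others.
class AmplificationPermits:
    def __init__(self, allocations: dict[str, int]):
        # Regulator's per-platform allocation for the period (hypothetical).
        self.remaining = dict(allocations)

    def record_recommendation(self, platform: str) -> None:
        # Serving one algorithmic recommendation uses up one permit.
        if self.remaining.get(platform, 0) <= 0:
            raise RuntimeError(f"{platform} has exhausted its permits")
        self.remaining[platform] -= 1

    def trade(self, seller: str, buyer: str, amount: int) -> None:
        # Market exchange decides who ultimately exercises the right.
        if self.remaining.get(seller, 0) < amount:
            raise ValueError("seller lacks enough permits")
        self.remaining[seller] -= amount
        self.remaining[buyer] = self.remaining.get(buyer, 0) + amount


permits = AmplificationPermits({"PlatformA": 1_000_000, "PlatformB": 250_000})
permits.trade(seller="PlatformB", buyer="PlatformA", amount=100_000)
permits.record_recommendation("PlatformA")
print(permits.remaining)
```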

Policymakers could alternatively impose a tax on AI content selection, raising its cost. The “social bad” would become more expensive, reducing its quantity. “Sin taxes” on cigarette sales have worked to reduce smoking among more casual smokers. Such a tax not only mitigates harm to the individual smokers who quit but also reduces secondhand smoke and the costly healthcare associated with lung disease.

How would such a tax work? Most simply, tax social media companies on each use of artificial intelligence to identify and recommend content. Platforms would probably pass the tax on to their customers, either via a paywall or, more likely, via more expensive advertising. In turn, this would give tech platforms an incentive to prioritize content suggestions made by editors who select and recommend high-quality news. There is already a precedent for this: the excise tax imposed on the purchase of financial instruments like stocks, bonds, and derivatives. Best of all, that tax works by exploiting the transactions’ digital footprint, which creates a workable model for Big Tech.
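A back-of-the-envelope sketch shows how simple the arithmetic could be. The rate below is entirely hypothetical (no such tax exists); the point is only that a small per-recommendation levy scales into a meaningful incentive at platform volumes.

```python
# Hypothetical excise tax on AI content selection, loosely modeled on
# per-transaction financial taxes. The rate is an assumption for
# illustration, not drawn from any statute or proposal.
TAX_PER_RECOMMENDATION = 0.0001  # hypothetical: $0.0001 per algorithmic pick


def quarterly_tax(algorithmic_recommendations: int) -> float:
    """Tax owed on the number of AI-selected items served to users."""
    return algorithmic_recommendations * TAX_PER_RECOMMENDATION


# A platform serving 50 billion algorithmic recommendations in a quarter
# would owe $5 million, nudging it toward human-curated alternatives.
print(f"${quarterly_tax(50_000_000_000):,.0f}")
```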

Digital platforms’ efforts to report AI content selection do not have to be onerous. Companies could track their use of algorithmic amplification and report it to the IRS, similar to the value-added taxes (VATs) in European countries, where businesses record and ultimately report each transaction in a value chain to tax authorities (often electronically and in real time). Fortunately, social media companies probably already track their use of algorithmic amplification in some manner, and occasional IRS audits could keep them honest.
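The record-keeping itself could be as lightweight as appending one line per amplification event to an audit log, in the spirit of electronic VAT reporting. The sketch below is our assumption about what such a record might contain; the fields, file format, and filing workflow are hypothetical.

```python
# Hypothetical per-use audit log for algorithmic amplification, analogous
# to electronic VAT reporting. Record format and filename are assumptions.
import json
import time


def report_amplification_event(platform: str, item_id: str,
                               log_file: str = "amplification_log.jsonl") -> None:
    """Append one algorithmic-recommendation event to an audit log that
    could later be filed with, or audited by, the tax authority."""
    record = {"platform": platform, "item_id": item_id, "timestamp": time.time()}
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")


report_amplification_event("PlatformA", "video-123")
```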

Finally, the dynamics that translate algorithmic amplification into negative real-world effects could resemble a liquidity crisis or a bank run, where self-reinforcing feedback effects amplify misinformation. Things that are not true may get more attention than things that are. If so, then instead of cap and trade or taxation, the best regulatory instruments could be closer to those used by the SEC and Federal Reserve: requirements to file algorithms before they are used, circuit breakers when misinformation goes viral, and a centralized information depot as a “truth teller of last resort.” It could be as simple as adopting a rule where, once a piece of content reaches some “sharing threshold,” it is subject to regulatory approval before social media companies can continue recommending it to their users.
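Such a circuit breaker reduces to a single rule check. The sketch below is a minimal illustration of the threshold idea described above; the specific threshold and the notion of a regulator approval flag are our assumptions.

```python
# Hypothetical viral-content "circuit breaker": below the threshold,
# algorithmic recommendation proceeds; above it, amplification pauses
# until the content clears review. Threshold value is an assumption.
SHARE_THRESHOLD = 100_000


def may_keep_amplifying(share_count: int, approved_by_regulator: bool) -> bool:
    """Return True if a platform may keep recommending this content."""
    if share_count < SHARE_THRESHOLD:
        return True
    return approved_by_regulator


print(may_keep_amplifying(5_000, approved_by_regulator=False))    # True
print(may_keep_amplifying(250_000, approved_by_regulator=False))  # False: breaker trips
```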

Legal experts, lawmakers, everyday citizens, and big tech companies can all play a role in improving online discourse. But whatever ultimately happens with the regulation of algorithmic amplification or any other attempt by the government to influence the tech platforms’ business models and behavior, it is imperative to use a systematic approach rooted in the political economy study of externalities.

James D. Long is associate professor of political science and co-founder of the Political Economy Forum at the University of Washington. He hosts the “Neither Free Nor Fair?” podcast about election security and global democracy; he has observed elections in Kenya, Ghana, Afghanistan, Uganda, Egypt, and South Africa.

Victor Menaldo is a professor of political science, co-founder of the Political Economy Forum at the University of Washington, and the co-author of “Authoritarianism and the Elite Origins of Democracy.” He is currently writing a book on the “Fourth Industrial Revolution.”
