Are you afraid of the threat of AI? These are the big tech giants that need to be tamed | Devdatt Dubhashi and Shalom Lappin
In the 2021 Reith Lectures, whose third episode airs tonight, artificial intelligence researcher Stuart Russell takes up the idea of a near-future AI so ruthlessly intelligent that it could pose an existential threat to humanity: a machine of our own creation that could destroy us all.
This has long been a popular topic with scholars and the press. But we believe that an existential threat from AI is unlikely, and in any case remote, given the current state of the technology. By contrast, recently developed AI systems, though operating on a much smaller scale, have already had a significant effect on the world, and their use poses serious economic and social challenges. These challenges are not distant but immediate, and they must be addressed.
These include the prospect of large-scale unemployment due to automation, with the ensuing political and social upheavals, as well as the use of personal data for commercial and political manipulation. The incorporation of ethnic and gender bias into the data sets used to train AI programs that decide job-candidate selection, creditworthiness and other important matters is a well-known problem.
But by far the most immediate danger is the role that AI-driven data analysis and generation plays in spreading disinformation and extremism on social media. This technology powers bots and amplification algorithms, which have played a direct role in fomenting conflict in many countries. They help intensify racism, conspiracy theories, political extremism and a plethora of violent and irrationalist movements.
Such movements threaten the foundations of democracy around the world. AI-driven social media was instrumental in mobilizing the January insurrection at the US Capitol, and it has propelled the anti-vax movement since before the pandemic.
Behind all of this lies the power of the big tech companies, which develop the relevant data processing technology and host the social media platforms on which it is deployed. With their vast reserves of personal data, they use sophisticated targeting procedures to identify audiences for extremist publications and sites. They promote this content to increase ad revenue and in doing so actively contribute to the rise of these destructive trends.
They exercise near-monopoly control over the social media marketplace and a range of other digital services. Meta, through its ownership of Facebook, WhatsApp and Instagram, and Google, which controls YouTube, dominate much of the social media industry. This concentration of power gives a handful of companies considerable influence over political decision-making.
Given the centrality of digital services in public life, it is reasonable to expect that big tech would be subject to the same sort of regulation that applies to companies controlling markets in other parts of the economy. In fact, this is generally not the case.
Social media companies have largely escaped the antitrust rules, truth-in-advertising laws and prohibitions on incitement to racism that apply to traditional print and broadcast networks. Such regulation does not guarantee responsible behavior (as right-wing cable networks and rabid tabloids illustrate), but it does provide an instrument of restraint.
Three main arguments have been advanced against increased government regulation of big tech. The first holds that it would inhibit freedom of expression. The second holds that it would degrade innovation in science and engineering. The third holds that socially responsible companies can regulate themselves more effectively. These arguments are quite specious.
Certain restrictions on freedom of expression are well motivated by the need to defend the public good. Truth in advertising is a prime example. Another is the legal prohibition of incitement to racism and group defamation. These constraints are generally accepted in most liberal democracies (the United States being an exception) as an integral part of the legal framework protecting people from hate crimes.
Social media platforms often deny responsibility for the content of the material they host, on the grounds that it is created by individual users. In fact, this content is published in the public domain, and so it cannot be construed as purely private communication.
Government-imposed safety regulations have not prevented dramatic advances in bioengineering, such as the recent mRNA-based Covid vaccines. Nor have they stopped automakers from building efficient electric vehicles. Why would they have the unique effect of stifling innovation in AI and information technology?
Finally, the view that private companies can be trusted to regulate themselves out of a sense of social responsibility is entirely unfounded. Businesses exist to make money. Corporate lobbies often cultivate the image of a socially responsible industry acting out of concern for public welfare, but in most cases this is a public relations maneuver designed to fend off regulation.
Any business that prioritizes social benefit over profit will quickly cease to exist. This was demonstrated in recent congressional testimony from Facebook whistleblower Frances Haugen, who indicated that company executives chose to ignore the harm caused by some of their algorithms in order to preserve the profits those algorithms generated.
Consumer pressure can, on occasion, serve as a lever to limit corporate excess, but such cases are rare. Ultimately, legislation and regulation are the only effective means available to democratic societies for protecting the public from the unwanted effects of corporate power.
Finding the best way to regulate an industry as powerful and complex as big tech is a difficult problem, but progress has been made on constructive proposals. Lina Khan, chair of the US Federal Trade Commission, has put forward antitrust proposals to tackle monopolistic practices in digital markets. The European Commission has played a leading role in enacting data protection and privacy laws.
Academics MacKenzie Common and Rasmus Kleis Nielsen offer a balanced discussion of how governments can curb disinformation and hate speech on social media while supporting free speech. This is the most complex and urgent problem in controlling technology companies.
The arguments for regulating big tech are clear. The damage it causes across a variety of domains calls into question the benefits of its tremendous achievements in science and engineering. The global reach of corporate power increasingly limits the ability of national governments in democratic countries to rein in big tech on their own.
There is an urgent need for the major trading blocs and international agencies to act together to impose effective regulation on digital technology companies. Without such constraints, big tech will continue to supply the instruments of extremism, bigotry and unreason that generate social chaos, undermine public health and threaten democracy.
Devdatt Dubhashi is Professor of Data Science and AI at Chalmers University of Technology in Gothenburg, Sweden. Shalom Lappin is Professor of Natural Language Processing at Queen Mary University, London, Director of the Center for Linguistic Theory and Studies in Probability at the University of Gothenburg, and Emeritus Professor of Computational Linguistics at King’s College London.