FTC warns against using AI to fight online harm
At a recent public meeting, the Federal Trade Commission voted 4-1 to issue a report to Congress warning against the use of artificial intelligence to combat various online harms and urging policymakers to “exercise a high degree of caution” before mandating or relying too heavily on these tools.
According to the FTC, as the deployment of AI tools intended to detect or address harmful online content accelerates, “it is crucial to understand that these tools remain largely rudimentary, have substantial limitations, and may never be appropriate in some cases as an alternative to human judgment.” Reflecting the importance of the topic, in November 2021 FTC Chair Lina M. Khan announced that the agency had hired its first-ever AI advisors.
As a backdrop, in the Appropriations Act of 2021, Congress directed the Commission to consider how AI “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms,” including online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberbullying, and disinformation campaigns aimed at influencing elections. Before discussing each harm listed by Congress, the FTC noted that only a few fell within its consumer-protection mandate and indicated a preference for deferring to other government agencies on subjects where those agencies are more engaged and better informed.
Ultimately, the FTC report cautions against reliance on AI as a policy solution and notes that its widespread adoption could introduce a range of additional harms, including:
- Inaccurate results. According to the FTC, the ability of AI tools to detect online harm is significantly limited by inherent flaws in their design, such as unrepresentative datasets, misclassifications, an inability to identify new phenomena (e.g., emerging misinformation about COVID-19), and a lack of context and meaning.
- Bias and discrimination. The FTC report found that AI tools can reflect the biases of their developers, which can lead to unfair outcomes and discrimination against protected classes of people.
- Invasive monitoring. AI tools can incentivize and enable invasive commercial surveillance and data-mining practices, as their development, training, and use require large amounts of data.
Although Congress tasked the FTC with recommending legislation that could advance the use of AI to combat online harm, the report instead urged lawmakers to consider focusing on crafting legal frameworks that would ensure AI tools do not cause further harm.
Among other key considerations, the FTC report indicates that human intervention is still necessary to monitor the use and decisions of AI tools; that the use of AI must be genuinely transparent, especially when people’s rights or personal data are at stake; that platforms relying on AI tools must be accountable for both their data practices and their results; and that data scientists and the employers building AI tools should strive to hire and retain diverse teams to help reduce inadvertent bias or discrimination. According to the FTC, “[p]utting aside laws or regulations that would require more fundamental changes to platforms’ business models, the most valuable direction in this area – at least initially – may be in the realm of transparency and accountability,” which are “crucial in determining the best courses for further public and private action.”
Chair Khan and Commissioners Slaughter, Bedoya, and Wilson voted to send the report to Congress, each issuing separate statements. Commissioner Phillips issued a dissenting statement, “generally agree[ing] with the main conclusion” but expressing concern that the report does not sufficiently address the benefits and costs of using AI to combat online harm as intended.