Adversarial Machine Learning Is a New National Security Threat

Imagine the following scenarios: An explosive device, an enemy fighter jet and a group of rebels are mistaken for a cardboard box, an eagle or a flock of sheep. A lethal autonomous weapons system misidentifies friendly combat vehicles as enemy combat vehicles. Satellite images of a group of students in a schoolyard are misinterpreted as moving tanks. In each of these situations, the consequences of acting on the misidentification are extremely frightening. This is the heart of the emerging field of adversarial machine learning.

Rapid advances in computer vision made possible by deep learning techniques have fostered the widespread adoption of artificial intelligence (AI)-based applications. The ability to analyze different types of images and data from heterogeneous sensors makes this technology particularly interesting for military and defense applications. However, these machine learning (ML) techniques were not designed to compete with smart adversaries; the very characteristics that make them so attractive therefore also represent their greatest risk in this class of applications. Specifically, a small disturbance to the input data is enough to compromise the accuracy of ML algorithms and make them vulnerable to manipulation by adversaries, hence the term adversarial machine learning.
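
To make the point concrete, the sketch below shows the fast gradient sign method, one of the simplest ways such a small disturbance can be produced. It assumes PyTorch and a generic differentiable image classifier (the "model" argument is a placeholder), and is illustrative rather than a depiction of any fielded system.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        # image: tensor of shape (1, C, H, W) with pixel values in [0, 1]
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A barely visible step of size epsilon along the gradient sign is
        # often enough to change the predicted class.
        adv = (image + epsilon * image.grad.sign()).clamp(0, 1)
        return adv.detach()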

Adversarial attacks pose a tangible threat to the stability and security of AI and robotic technologies. The exact conditions of such attacks are generally quite unintuitive to humans, so it is difficult to predict when and where attacks might occur. And even if one could estimate the probability of an adversarial attack, the exact response of the AI system can be difficult to predict, leading to new surprises and less stable and less secure military engagements and interactions. Despite this inherent weakness, the subject of adversarial ML has remained underestimated in the defense community. The argument to be made here is that ML must be made inherently more robust before it can be put to good use in scenarios with intelligent and adaptive adversaries.

AI is a growing field of technology with potentially significant implications for national security. Thus, the United States and other countries are developing AI applications for a range of military functions. AI research is underway in intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semi-autonomous and autonomous vehicles. Already, AI has been integrated into military operations in Iraq and Syria. AI technologies present unique challenges for military integration, particularly because most AI development occurs in the commercial sector. While AI is not unique in this regard, the defense acquisition process may need to be adapted to acquire emerging technologies like AI. Additionally, many commercial AI applications need to undergo significant modifications before they are functional for the military.

For a long time, the primary goal of ML researchers was to improve the performance of ML systems (true positive rate, accuracy, etc.). Today, the robustness of these systems can no longer be ignored; many of them are highly vulnerable to intentional adversarial attacks. This makes them unsuitable for real-world applications, especially mission-critical ones.

An adversarial example is an input to an ML model that an attacker has intentionally crafted to cause the model to fail. In general, the attacker may not have access to the architecture of the ML system, in which case the attempt is called a black-box attack. Attackers can approximate a white-box attack by exploiting the notion of “transferability,” which means that an input designed to confuse one ML model is often capable of triggering similar behavior in a different model. This has been demonstrated time and time again by a team of researchers at the U.S. Army Cyber Institute.
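
The transfer idea can be sketched as follows, assuming a local surrogate model the attacker fully controls and a separate target model standing in for the black-box system; both names are placeholders for illustration, not any particular deployed model.

    import torch
    import torch.nn.functional as F

    def transfer_attack(surrogate, target, image, label, epsilon=0.03):
        # Gradients are taken only from the surrogate; the black-box target
        # model is never differentiated, only queried at the end.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(surrogate(image), label)
        loss.backward()
        adv = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
        # If transferability holds, the target often misclassifies adv as well.
        return target(adv).argmax(dim=1)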

General concerns about the impacts of adversarial behavior on stability, whether in isolation or through interaction, have been underscored by recent demonstrations of attacks against these systems. Perhaps the most widely discussed attack cases involve image classification algorithms that are tricked into “seeing” images in noise or are easily fooled by pixel-level changes so that they classify a school bus as an ostrich, for example. Similarly, game-playing systems that outperform any human (e.g., at chess or Go, as with AlphaGo) can suddenly fail if the structure or rules of the game are changed slightly in a way that would not affect a human. Self-driving vehicles that perform reasonably well under ordinary conditions can, with the application of a few pieces of tape, be caused to swerve into the wrong lane or accelerate through a stop sign. This list of adversarial attacks is by no means exhaustive and continues to grow over time.

While AI has the potential to confer a number of advantages in the military context, it can also present distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decision-making, and increase the speed and scale of military action. However, it can also be unpredictable or vulnerable to unique forms of manipulation. Due to these factors, analysts have a wide range of opinions on the influence of AI on future combat operations. While a few analysts believe the technology will have minimal impact, most believe AI will have at least an evolutionary, if not revolutionary, effect.

US military forces use AI and ML to improve and streamline military operations and other national security initiatives. In terms of intelligence gathering, artificial intelligence technologies have already been incorporated into military operations in Iraq and Syria, where computer vision algorithms have been used to detect people and objects of interest. Military logistics is another area of interest in this field. The US Air Force uses AI to predict when its planes need maintenance, and the US Army uses IBM’s “Watson” AI software for predictive maintenance and dispatch request analysis. AI defense applications also extend to semi-autonomous and autonomous vehicles, including fighter jets, drones or unmanned aerial vehicles, ground vehicles and ships.

One might hope that adversarial attacks would be relatively rare in the everyday world, since the “random noise” that defeats image classification algorithms is actually far from random: it must be carefully crafted, and so is unlikely to arise by accident.

Unfortunately, this hope is almost certainly unwarranted for defense or security technologies. These systems will invariably be deployed in contexts where the other side has the time, energy, and ability to develop and construct exactly these types of adversarial attacks. Artificial intelligence and robotic technologies are particularly attractive for deployment in enemy-controlled or contested areas, as these environments are the riskiest for human soldiers, largely because the other side has the most control over the environment.

To illustrate, researchers at the Massachusetts Institute of Technology (MIT) tricked an image classifier into identifying machine guns as a helicopter. If a weapons system equipped with computer vision were trained to respond to certain machine guns with neutralization, this misidentification could cause unwanted passivity, creating a potentially fatal vulnerability in the computer’s ML algorithm. The scenario could also be reversed, with the computer incorrectly identifying a helicopter as a machine gun. On the other hand, knowing that an AI spam filter tracks certain words, phrases and word counts to exclude messages, attackers can manipulate the algorithm by using acceptable words, phrases and word counts and thus gain access to a recipient’s inbox, further increasing the likelihood of cyberattacks via email.
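
The spam-filter point can be illustrated with a toy bag-of-words score; the word list and weights below are invented purely for demonstration and do not correspond to any real filter.

    # Toy spam score: sum of penalties for known "spammy" words (invented weights).
    SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.5, "click": 1.0}

    def spam_score(text):
        return sum(SPAM_WEIGHTS.get(word, 0.0) for word in text.lower().split())

    original = "You are a winner click to claim your free prize"
    # An attacker who knows which words are penalized can rephrase around them,
    # pushing the score below the filter's threshold while keeping the intent.
    reworded = "You are selected follow the link to claim your complimentary reward"

    print(spam_score(original), spam_score(reworded))  # 7.0 versus 0.0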

In summary, AI-enabled systems can fail because of adversarial attacks intentionally designed to trick the algorithms into making a mistake. These examples demonstrate that even simple systems can be fooled in unexpected ways, sometimes with potentially serious consequences. With the wide range of adversarial learning applications in cybersecurity, from malware detection to speaker recognition to cyber-physical systems, and many more such as deepfakes and generative adversarial networks, it is time this issue took center stage as the US Department of Defense increases its funding and deployment in the areas of automation, artificial intelligence, and autonomous agents. There must be a high level of awareness about the robustness of these systems before they are deployed in critical settings.

Many recommendations have been offered to mitigate the dangerous effects of adversarial machine learning in military contexts. Keeping humans in or on the loop is essential in such situations. When there is a human-AI team, people can recognize that an adversarial attack has occurred and guide the system toward appropriate behaviors. Another technical suggestion is adversarial training, which involves feeding an ML algorithm a set of potential perturbations during training. In the case of computer vision algorithms, this would include images of stop signs bearing strategically placed stickers or school buses with slight image alterations. In this way, the algorithm can still correctly identify phenomena in its environment despite an attacker’s manipulations.
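
A minimal sketch of that adversarial-training idea appears below, again assuming PyTorch, a generic classifier and optimizer, and the gradient-sign perturbation described earlier; real defensive training pipelines are considerably more involved.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
        # Craft perturbed copies of the current batch with a gradient-sign step.
        images_adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images_adv), labels).backward()
        images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

        # Train on the clean and perturbed examples together so the model
        # learns to classify both correctly.
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(images_adv), labels))
        loss.backward()
        optimizer.step()
        return loss.item()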

Since ML in general, and adversarial ML in particular, are still relatively new phenomena, research on both is still emerging. As new attack techniques and defensive countermeasures are developed, US military forces must exercise caution when using new AI systems in critical operations. While other countries, especially China and Russia, are investing heavily in AI for military purposes, including in applications that raise questions about international standards and human rights, it remains of the utmost importance for the United States to maintain a strategic position to prevail on the future battlefield.

Dr. Elie Alhajjar is a senior researcher at the Army Cyber Institute and an associate professor in the Department of Mathematical Sciences at the United States Military Academy in West Point, New York, where he teaches and mentors cadets from all academic disciplines. His work is supported by grants from the National Science Foundation, National Institutes of Health, National Security Agency, and Army Research Laboratory, and he was recently named a Dean’s Fellow for Research.