Press Release
NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems
National Institute of Standards and Technology (by Press Release Office)
Jan 06, 2024
AI systems can go haywire if someone tricks them into making the wrong decisions. For instance, if a driverless car is confused by misleading road markings, it could end up swerving into oncoming traffic. This kind of "evasion" attack is just one of many tactics described in a new NIST publication aimed at helping us understand and counter potential threats.

Adversaries can intentionally interfere with AI systems to make them malfunction, and there is no surefire way for developers to prevent it. In a collaborative effort, NIST computer scientists and their partners have identified vulnerabilities in AI and machine learning and laid out ways to address them in a publication called Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2). The goal is to support the development of reliable AI and put NIST's AI Risk Management Framework into action. While the publication outlines various attack techniques and possible defenses, it acknowledges that there is no perfect solution, and the community is encouraged to come up with better strategies to protect AI systems from manipulation.

AI systems are everywhere these days, from self-driving cars to medical diagnosis to online customer service. But their reliance on vast amounts of data poses a significant challenge: the data they learn from may be inaccurate or even intentionally corrupted by malicious actors. This can lead to AI systems behaving in ways that are harmful or undesirable, like chatbots responding with offensive language. NIST computer scientist Apostol Vassilev, one of the publication's authors, explained that software developers want more people to use their products so the products can improve, but there is always a risk of exposing them to harmful content.

The report provides guidance on how AI products can be vulnerable to different types of attacks, such as evasion, poisoning, privacy and abuse attacks.
These attacks can manipulate the AI's responses, corrupt its training data, compromise its privacy, or feed it incorrect information. The authors emphasize that many of these attacks can be carried out with minimal knowledge and resources. Researchers Alie Fordyce and Hyrum Anderson break down the different types of attacks and suggest ways to mitigate them, while acknowledging that current defenses against adversarial attacks on AI are far from perfect. It is important for developers and organizations using AI technology to be aware of these limitations. Despite the progress made in AI and machine learning, these technologies remain vulnerable to attacks with potentially disastrous consequences. Securing AI algorithms is a challenging problem that has not been fully solved, and anyone claiming otherwise is selling false promises.

For media inquiries, please contact Chad Boutin at charles.boutin@nist.gov or (301) 975-4261.
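To make the "evasion" category concrete, here is a minimal sketch of how a small, deliberate input perturbation can flip a model's decision, in the spirit of the misleading-road-markings example above. This toy linear classifier and its numbers are purely illustrative assumptions for this article, not code or data from the NIST report; real evasion attacks target far more complex models.

```python
# Illustrative "evasion" attack on a toy linear classifier.
# The model, its weights, and the input are all made up for this sketch.

def classify(weights, bias, x):
    """Return 1 if the model's score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def evade(weights, x, eps):
    """Nudge each feature by eps against the sign of its weight,
    pushing the score toward (and past) the decision boundary.
    This mirrors the fast-gradient-sign idea for a linear model."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.7]   # toy model parameters
bias = -0.5
x = [1.0, 0.2, 0.6]          # an input the model classifies as 1

x_adv = evade(weights, x, eps=0.5)
print(classify(weights, bias, x))      # 1: original decision
print(classify(weights, bias, x_adv))  # 0: decision flipped by the perturbation
```

The point of the sketch is that the attacker never touches the model itself, only the input it sees, which is why the report treats evasion as an attack that occurs after an AI system is deployed.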