NIST AI Risk Framework

NIST sets out the key cybersecurity threats that artificial intelligence poses.

NIST highlights the following key threats to look out for in the context of AI & cybersecurity:

  1. Evasion (this occurs after the deployment phase and involves altering inputs to manipulate the output of the AI tool, e.g. adding markings to stop signs to confuse an AI-powered vehicle).

  2. Poisoning (this occurs during the training phase; corrupted data is inserted into the training set to manipulate the AI tool).

  3. Privacy (this covers attempts to extract sensitive information about the AI tool or the data it holds).

  4. Abuse (this involves inserting incorrect data to manipulate a particular outcome or to repurpose the AI tool).
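As a toy sketch of the first category, evasion can be as simple as nudging input features against a model's weights so a confident prediction flips. Everything below is invented for illustration (a hypothetical linear classifier, not any real vehicle system); NIST's example concerns physical markings on a sign, of which this is only a digital analogue:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical deployed linear classifier (weights invented for
# illustration); class 1 might mean "stop sign detected".
w = np.array([2.0, -1.0, 0.5])
b = 0.1

x = np.array([1.0, 0.2, 0.8])          # benign input
p_clean = sigmoid(w @ x + b)           # score well above 0.5: class 1

# Evasion: a small, targeted perturbation against the sign of each
# weight pushes the score across the decision boundary, even though
# the input has barely changed.
eps = 0.9
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)         # score below 0.5: class 0
```

The key point is that the perturbation is chosen using knowledge of the model, not at random: a tiny, deliberate change to the input is enough to flip the output.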

Data poisoning is particularly interesting and will impact how DevSecOps is approached (e.g. incorporating AI security considerations into the training of AI models). The NIST AI Risk Management Framework focuses on: framing risk, trustworthiness, reliability, and core profiles (e.g. risk management and AI-human interaction). It’s an important and interesting read; the full framework can be found here.
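The poisoning category can also be sketched in a few lines. The following toy example (synthetic data and a nearest-centroid classifier, all invented here and not part of the NIST framework) shows how an attacker who flips some training labels drags the learned model away from the true data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters: class 0 near (0, 0), class 1 near (4, 4).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean point per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack(
        [np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1
    )
    return np.array(classes)[dists.argmin(axis=1)]

clean = train_centroids(X, y)

# Poisoning: the attacker flips 40 class-1 training labels to class 0,
# dragging the learned class-0 centroid toward the class-1 cluster.
y_poisoned = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=40, replace=False)
y_poisoned[flipped] = 0
poisoned = train_centroids(X, y_poisoned)

clean_acc = (predict(clean, X) == y).mean()
poisoned_acc = (predict(poisoned, X) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

This is why poisoning matters for DevSecOps: the defence has to happen in the training pipeline (data provenance, integrity checks on training sets), not only at deployment time.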
