By Kathryn Daily, CISSP, CAP, RDRP
Artificial intelligence (AI) is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. One example of AI is the use of virtual filters on our faces when taking pictures with various cell phone applications. Another is autocorrect, where algorithms use machine learning (ML) and natural language processing to identify incorrect usage of language and suggest corrections.
In response to its Request for Information (RFI), NIST received more than 130 responses from industry, non-profits, individuals, government, and academia.
Once NIST received the responses, it grouped them into themes based on commonalities; a total of seven themes were identified. Because machine learning is still in the initial stages of development, many attack vectors are not yet clear, and cyber defense strategies are likewise in their early stages. To aid the development of AI/ML, NIST has created the AI Risk Management Framework (AI RMF) to address risks in the design, development, use, and evaluation of AI products, services, and systems. NIST intends to continuously update this voluntary framework so that it keeps pace with AI trends as development continues.
NIST released the initial draft AI RMF in March 2022 and is now updating it to reflect the feedback received since the initial publication through the recent comment period, which closed on 29 September 2022. That NIST is updating the framework in the same year as the initial draft demonstrates its commitment to keeping pace with this emerging technology and incorporating stakeholder feedback in a timely manner. The second draft is anticipated in January 2023. While the comment period had officially ended by the time this article was published, NIST will conduct a third AI RMF workshop on 18-19 October 2022 and will receive additional feedback at that event.
With this comment period, NIST specifically sought input on how industries or sectors may use the Framework, how smaller organizations can use it, how it can support procurement and acquisition, and how it can help address security concerns, including guarding against adversarial attacks on AI systems, among other topics. Once the comments are received and organized, NIST will release them publicly for all to see.
Keep an eye out for that updated publication in January, and keep an eye out here for our review of the updated framework!