AI Risk Management Framework (AI RMF): A Discussion of NIST’s Recent RFI


By Philip D. Schall, Ph.D., CISSP, RDRP

I consistently experience two common scenarios at almost every DoD cybersecurity conference I attend. In the first, a high-ranking DoD official gives a presentation that references RMF in a negative light, implying that RMF needs to be more efficient and effective or is possibly failing. This is typically followed by the folks around me discussing how RMF does not scale and is inefficient. The second scenario usually occurs in conversation with a vendor or tradeshow attendee, with RMF automation as the recurring theme.

Abridged RMF processes are familiar to the RMF community. Prime examples include the RMF Sentinel Project championed by Nancy Kreidler at CIO/G-6, as well as a similar program informally titled RMF Sprint, which was implemented by the Air Force around 2017. I am confident other commands have also created their own abridged RMF programs like Sentinel and Sprint, with varying levels of success. This article is not intended to provide a review of abridged RMF programs, but I think the mention of these programs is important to demonstrate that RMF inefficiency and automation have been active conversations in the RMF community for many years.

NIST issued a Request for Information (RFI) titled Artificial Intelligence Risk Management Framework on July 29, 2021, which can be found at the following link: https://www.federalregister.gov/documents/2021/07/29/2021-16176/artificial-intelligence-risk-management-framework

According to the RFI, the AI Framework should "provide a prioritized, flexible, risk-based, outcome-focused, and cost-effective approach that is useful to the community of AI designers, developers, users, evaluators, and other decision makers and is likely to be widely adopted." Additionally, the RFI lists the following eight summarized attributes for the framework's development:

  1. “Be consensus-driven and developed and regularly updated through an open, transparent process”
  2. “Provide common definitions” for terms like “trust” and “trustworthiness”
  3. “Use plain language that is understandable by” and useful to “a broad audience”
  4. “Be adaptable to many different organizations, AI technologies, lifecycle phases, sectors, and uses”
  5. “Be risk-based, outcome-focused, voluntary, and non-prescriptive”
  6. “Be readily usable as part of any enterprise’s broader risk management strategy and processes”
  7. “Be consistent, to the extent possible, with other approaches to managing AI risk”; and
  8. “Be a living document.”

The RFI then outlines its goals, which essentially involve collecting experiences and ideas drawn from practitioners' and researchers' implementations of AI. This is followed by a more granular list of 12 specific topics. The RFI sets the stage for a collaborative process between NIST and industry in the formal exploration of AI and RMF. In the immediate future, NIST is hosting a public workshop on October 19-21.

It is also of note that NIST has posted the initial comments received on the AI RMF process at the link below:
https://www.nist.gov/itl/ai-risk-management-framework/comments-received-rfi-artificial-intelligence-risk-management

Overall, I commend NIST for formally starting a conversation on AI and RMF. I think this reflects new leadership at NIST, with Victoria Pillitteri assuming the role of Acting Manager for the Security Engineering and Risk Management Group. Although it is far too early to tell what will come of this initiative, it is important that NIST is looking to AI's future role in making RMF more efficient and in addressing perceived weaknesses.

With that being said, the nature of RMF involves subjective risk-based decisions that, in my opinion, should not be fully automated. Research has shown that automated tools can leave users less engaged and less focused because they assume the automated process will complete the intended goal on their behalf. I believe RMF can be made more efficient with AI, but it is critical that major RMF elements relying on human judgment, such as informal risk assessments and authorization decisions, are not automated. Although the AI RMF is in its infancy, I intend to track it very closely and will write additional articles on this topic after attending the October workshop.

See the full newsletter and explore more articles like this as well as our full course schedule by clicking the link below:

BAI – RMF Newsletter

Connect with us on LinkedIn and get notified when a new newsletter is posted:

BAI Information Security (RMF Resource Center) — LinkedIn


Post Categories: Risk Management Framework