With the advent of AI, IT security teams are being swarmed on all sides:
Senior leaders want them to implement AI without exposing the organization to unnecessary risk.
IT leaders want them to quickly adopt AI techniques to shorten incident response times.
And threat actors are coming at them every day with novel ways to attack.
Not only are these teams being pulled in multiple directions, but AI continues to evolve, making it difficult to properly address all the challenges and opportunities AI presents.
To help IT security leaders caught in these crosswinds, the National Institute of Standards and Technology (NIST) has developed a framework that provides guidance on how to incorporate AI risk management into an organization's cybersecurity practices.
Though the profile is still in draft form, organizations can use NIST's new Cyber AI Profile to gain insight into how to manage AI security risk while still enabling AI innovation. The profile builds on the existing NIST Cybersecurity Framework (NIST CSF), which many private and public organizations use to build their security programs and meet compliance requirements.

What is the Cyber AI Profile?
The Cybersecurity Framework for Artificial Intelligence (Cyber AI Profile) provides a structure for organizations to address the evolving threats posed by AI. The Cyber AI Profile addresses three fundamental questions:
- Secure – How do we implement cybersecurity for AI systems?
- Defend – How do we use AI for cyber defense?
- Thwart – How do we prevent AI-enabled cyberattacks?
By focusing on these areas, organizations can capitalize on the opportunities AI offers without compromising security.

Figure 1: NIST Cyber Profile – Relationship Between Cyber AI Profile Focus Areas
The NIST Cyber AI Profile aligns each focus area with the NIST CSF. For each of the NIST CSF's 106 subcategories, the Cyber AI Profile adds guidance specific to managing AI-related risks.
Take, for example, the subcategory PR.AT-01 from NIST CSF, which focuses on ensuring personnel undergo awareness training to understand the cybersecurity risks they will face in their job. The NIST Cyber AI Profile adds to this subcategory to ensure that AI training is delivered frequently, so employees stay up to date on the latest AI risks.
Across the three focus areas, this guidance includes:
- Training employees to understand the risks of using AI (Secure)
- Training IT and AI implementation teams to monitor potential AI misuse (Defend)
- Training all employees on cutting-edge social engineering techniques such as deepfakes (Thwart)
Below is an example from the draft NIST Cyber AI Profile, demonstrating how these concepts have been worked into the NIST Cybersecurity Framework:

Figure 2: Example of the PR.AT-01 Subcategory from the NIST CSF mapped to the Cyber AI Profile
Now, let’s take a more detailed look at how the proposed framework addresses each focus area – Secure, Defend, Thwart.

Secure: How to Implement Cybersecurity for AI Systems
The first focus of the Cyber AI Profile is Secure, which provides guidance on implementing cybersecurity for AI systems. This focus area is designed to supplement existing cybersecurity and risk management practices with AI-specific guidance. It covers all aspects of AI systems, from evaluating vendors to mitigating supply chain risks to protecting the data and infrastructure on which AI relies.
Some key control areas organizations should consider are:
- GV.SC Cybersecurity Supply Chain Risk Management – Does the organization have a strong understanding of how its AI vendors use the data provided to them?
- ID.AM Asset Management – Does the organization have a complete understanding of where AI is being utilized in the environment, as well as the type of data the AI has access to?
- PR.AT Awareness and Training – Do the employees in the organization understand the risks posed by AI systems, and have they been trained in strategies to mitigate those risks?
- DE.CM Continuous Monitoring – Are systems and procedures in place to ensure that AI systems’ logs are generated and reviewed for suspicious activity?
- RS.AN Incident Analysis – Does the organization have the resources necessary to investigate an AI security incident if one were to occur?
- RC.RP Incident Recovery Plan Execution – Does the organization include incidents affecting AI tools in its Business Impact Analysis and Incident Response Plan?
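To make the Secure questions concrete, the asset-management (ID.AM) and supply-chain (GV.SC) checks above can be pictured as queries against an inventory of AI tools. The sketch below is a minimal illustration, not a NIST schema; the tool names, fields, and sensitivity labels are invented for this example.

```python
# Hypothetical inventory of AI tools in use: where AI runs, what data it
# can reach (ID.AM), and whether the vendor has been reviewed (GV.SC).
ai_inventory = [
    {"tool": "chat-assistant", "vendor": "VendorA",
     "data_access": ["public"], "vendor_reviewed": True},
    {"tool": "code-copilot", "vendor": "VendorB",
     "data_access": ["source_code"], "vendor_reviewed": True},
    {"tool": "meeting-summarizer", "vendor": "VendorC",
     "data_access": ["confidential", "pii"], "vendor_reviewed": False},
]

# Data classes this example treats as sensitive (an assumption).
SENSITIVE = {"confidential", "pii", "source_code"}

def needs_review(inventory):
    """Flag tools that touch sensitive data but lack a vendor review."""
    return [item["tool"] for item in inventory
            if not item["vendor_reviewed"]
            and SENSITIVE & set(item["data_access"])]

print(needs_review(ai_inventory))  # ['meeting-summarizer']
```

Even a simple table like this answers the two Secure questions an assessor will ask first: where is AI deployed, and do we know how each vendor handles our data?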

Defend: How AI Can Strengthen Cyber Defense
The next focus of the NIST Cyber AI Profile is Defend, which encourages organizations to integrate AI into their traditional cyber defense strategies. For example, AI can improve cyber defense by analyzing logs to detect cyber incidents and by providing guidance on how to respond to threats. Through agentic AI, organizations may even be able to respond to threats in real time.
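The log-analysis idea can be sketched with a simplified stand-in for what an AI-driven detection tool does: score activity against a baseline and surface outliers. The hostnames, counts, and threshold below are invented for illustration; real tools learn far richer baselines than this z-score heuristic.

```python
from statistics import mean, stdev

# Hypothetical hourly failed-login counts per host -- a stand-in for the
# log features an AI-driven monitoring tool (DE.CM) would score.
failed_logins = {
    "web-01": 3, "web-02": 2, "db-01": 4,
    "vpn-01": 97,  # sudden spike worth investigating
    "mail-01": 5,
}

def flag_anomalies(counts, z_threshold=1.5):
    """Flag hosts whose count deviates sharply from the fleet baseline."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [host for host, c in counts.items()
            if sigma > 0 and (c - mu) / sigma > z_threshold]

print(flag_anomalies(failed_logins))  # ['vpn-01']
```

The value AI adds over a fixed heuristic like this is scale and adaptivity: learning normal behavior per user and per system, and re-baselining as the environment changes.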
Some key areas to consider when implementing AI as a cyber defense tool are:
- GV.OC Organizational Context – Are the AI defense tools in place compliant with legal and regulatory requirements?
- ID.IM Improvement – Does the organization have a process in place to identify areas of cyber defense where AI can improve response times and accuracy or reduce the impact of cyber incidents?
- PR.PS Platform Security – Has the organization integrated AI into its platform security measures to ensure that the confidentiality, integrity, and availability of its data are monitored and secure?
- DE.CM Continuous Monitoring – Has AI been implemented as part of the organization’s cyber incident monitoring program to assist in analysis and detecting cyber threats?
- RS.MA Incident Management – Has the organization implemented tools such as agentic AI to automate the response to known cyber incidents?
- RC.RP Incident Recovery Plan Execution – Does the organization have a process in place that uses AI to assist in post-incident recovery procedures to improve recovery times?
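The RS.MA idea of automating responses to known incident types often starts as a playbook lookup: a known alert triggers predefined actions, and anything unrecognized escalates to a human. The alert types and actions below are invented for illustration; an agentic AI system would layer reasoning and tooling on top of this dispatch pattern.

```python
# Toy response playbooks mapping known alert types to automated actions
# (an illustration of the RS.MA automation concept, not a real product).
PLAYBOOKS = {
    "credential_stuffing": ["lock_account", "force_password_reset"],
    "malware_detected": ["isolate_host", "collect_forensic_image"],
}

def respond(alert_type):
    """Return automated actions for a known alert; otherwise escalate."""
    return PLAYBOOKS.get(alert_type, ["escalate_to_analyst"])

print(respond("malware_detected"))  # ['isolate_host', 'collect_forensic_image']
print(respond("novel_ai_attack"))   # ['escalate_to_analyst']
```

Keeping an explicit escalation path matters here: automation should handle the known cases quickly while routing anything novel to an analyst.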
Thwart: How to Prevent AI-Enabled Attacks
The final focus of the Cyber AI Profile is Thwart, which provides guidance to organizations on preventing AI-enabled attacks. While AI can help businesses improve efficiency, automate tasks, and streamline logistics, it has also made the attacker's job far easier.
For example, in September of 2025, Anthropic, the company that created the AI model Claude, detected a highly sophisticated hacking campaign that abused its Claude Code tool. A threat actor is believed to have used Claude Code to largely automate a cyberattack targeting manufacturing companies, technology firms, financial institutions, and government agencies. As AI continues to improve, malicious actors will find more ways to misuse the technology to attack organizations.
The NIST Cyber AI Profile helps organizations prepare for AI-enabled cyberattacks by providing guidance on additional controls to build into traditional cybersecurity programs to thwart them.
Key questions organizations should consider when building their AI cyber program include:
- GV.RM Risk Management Strategy – Does the organization consider the unique speed and efficiency of AI-based attacks when evaluating their risk tolerance and risk appetite?
- ID.RA Risk Assessment – Does the organization account for how AI-enabled attacks can exploit weaknesses and vulnerabilities in its risk assessments?
- PR.AT Awareness and Training – Are employees and security teams educated about the capabilities of modern attackers, including AI-enabled attacks?
- DE.AE Adverse Event Analysis – Are security teams trained to detect and analyze attacks for indicators of AI usage?
- RS.AN Incident Analysis – Does the organization have the digital forensics expertise to determine the scope of an AI-enabled incident and identify the necessary mitigating actions?
- RC.CO Communication – Does the organization have a method of sharing threat intelligence as it relates to AI-based threats?

Is Your Cybersecurity Program AI-Ready?
As AI becomes a part of most daily operations, organizations must ask themselves a critical question: Is our cybersecurity program truly prepared for the age of AI? Traditional defenses are no longer enough, and AI demands faster, more informed decision-making from security teams.
Now is the time to evaluate your AI exposure, harden your defenses, and align your cybersecurity program with emerging frameworks like the NIST Cyber AI Profile.
Schedule a free consultation with ERMProtect’s AI Lead, Dr. Collin Connors, to discuss how your organization can take control of its AI risk and build a security program equipped for the future.
About the Author
Collin is a Senior Cybersecurity Consultant at ERMProtect. He leads AI Consulting at ERMProtect, assisting clients with AI risk management, governance, and implementation strategy. He has published cutting-edge research on using AI to detect malware and speaks regularly at national conferences on managing AI risks and AI implementation strategies. He holds undergraduate degrees in Mathematics and Computer Science and a PhD in Computer Science, with research focused on AI and blockchain. In addition to specializing in AI solutions, he has performed penetration testing, risk assessments, training, and compliance reviews in his six years at ERMProtect.