Deep Dive Defender for Cloud: AI SPM 🤖🛡️
Learn about the new Defender for Cloud AI-SPM features.
Security posture refers to the current state of an organisation's security: its overall fitness to protect its identities, endpoints, user data, apps, and infrastructure. An organisation's security posture should NEVER be static; instead, it should constantly change in response to new emerging threats and variability in the environment. Security posture is a preventative measure that ensures we adapt to the threat landscape to stay secure and minimise our attack surface by resolving misconfigurations.
AI is no exception. The pressure to deploy AI quickly and leverage its benefits is immense, often leading to inadequate security reviews and hasty implementations full of misconfigurations. I personally think that, in the rush to capture AI's value, security considerations like proper data sanitisation and model security can fall by the wayside. This is because the secure and ethical development of AI systems is not yet fully understood by most, and security has catching up to do. This is where AI Security Posture Management (AI-SPM) comes in: it's a step in the right direction to detect and fix AI misconfigurations before they can be exploited.
The Value of AI-SPM
To give you a better idea of the value of AI-SPM, let’s answer the question: What would happen if you never secured your models? In short, nothing good. There’s no shortage of security challenges in this area.
AI model-based attacks and vulnerabilities vary in sophistication and methodology, with each posing unique threats to data integrity, confidentiality, and system reliability. Without going into great detail, AI-based attacks and vulnerabilities cause the failure of guardrails (mitigations). The resulting harm comes from whatever guardrail was circumvented, for example, causing the system to violate its design policies or execute malicious instructions. Attacks and vulnerabilities, when not addressed, can have real-life implications that are not only disruptive but life-threatening. Think about it: if a medical diagnosis model is manipulated into giving incorrect predictions, patients could be harmed; if a model that detects fraudulent transactions, like the ones used by VISA or MasterCard, is tampered with, it could let criminals slip through the cracks, leading to massive monetary losses.
Beyond these immediate harms, there’s a growing regulatory spotlight that adds even more pressure to secure AI models. The passage of the EU AI Act imposes stringent requirements around data privacy, algorithmic fairness, and explainability. Failure to comply can lead to penalties exceeding those of existing frameworks like GDPR. As other regions outside the EU develop their own regulations, organisations worldwide will face the dual challenge of defending their AI systems from hostile attacks and meeting ever-tighter compliance standards.
And so, what is AI-SPM, and why do we need it?
If you take anything away from this post, it's that a new set of problems requires a new set of solutions. Existing security tools, such as those aimed at cloud and data posture analysis (e.g., CSPM, DSPM), do not address AI-specific attacks like data poisoning, model inversion, and adversarial examples. While these tools might have AI features baked in, they don't specifically address AI security. We can't take a partial approach to such a big problem.
AI-SPM is an emerging category of tools within the Cloud Native Application Protection Platform (CNAPP) framework designed to help organisations protect against the unique risks associated with AI, ML, and GenAI models, including data exposure, misuse, and model vulnerabilities. We're essentially extending the benefits of CNAPP to cover AI, providing visibility, control, governance, and, most importantly, security.
How does it differ from CSPM, DSPM, and CIEM?
AI-SPM takes elements from existing security posture management approaches like Data Security Posture Management (DSPM), Cloud Security Posture Management (CSPM), and Cloud Infrastructure Entitlement Management (CIEM) and adapts them to address the specific challenges of AI systems in production environments. It’s important to note that this pillar is still evolving, and changes should be expected. If you're interested in other areas of the CNAPP framework, start by reading my blog series.
What will AI-SPM do?
Although this blog specifically targets Microsoft Defender for Cloud AI-SPM, in general AI-SPM tools provide full visibility into the AI model development lifecycle, from data ingestion and training to deployment. By analysing model behaviour, data flows, and system interactions, AI-SPM helps identify potential security and compliance risks that may not be apparent through traditional risk analysis and detection tools. Organisations can use the generated insights and recommendations to enforce policies and best practices, ensuring that AI systems are deployed in a secure and compliant manner.
Additionally, AI-SPM can monitor for AI-specific threats like data poisoning, model theft, and improper output handling, alerting security teams to potential incidents and providing guidance on remediation steps. As regulations around AI continue to evolve, AI-SPM can also help organisations stay ahead of compliance requirements by embedding privacy and acceptable use considerations into the AI development process.
Defender for Cloud AI-SPM
As a market-leading cloud-native application protection platform, Microsoft Defender for Cloud helps secure AI workloads across hybrid and multi-cloud environments, from code to cloud. With the introduction of AI-SPM, now in general availability (GA), Defender for Cloud provides full visibility into AI applications, including generative AI. This ensures you can detect and address vulnerabilities to protect your AI workloads from potential threats across your cloud estate.
Additionally, AI Threat Detection for workloads, currently in preview as part of Cloud Workload Protection (CWPP) capabilities, provides runtime protection. This means you can secure everything you build and run across the cloud.
Request access to AI threat detection for workloads here. Unfortunately, I wasn't able to get access just yet due to the limitations of using a personal Azure tenant and the limited preview release. I aim to publish a future blog and demo of the cloud workload protection feature once it's generally available.
Features and use cases of Defender for Cloud AI-SPM
Automatically discover AI applications, workloads, and other related AI services across your cloud environments, such as those hosted on the Azure OpenAI Service, Azure AI Foundry, or within Amazon Bedrock.
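To make that discovery step concrete, here's a minimal sketch of the kind of enumeration AI-SPM automates for you, written with the Azure SDK for Python (azure-identity and azure-mgmt-cognitiveservices). The subscription ID is a placeholder, and filtering on kind == "OpenAI" is my own shortcut for spotting Azure OpenAI accounts, not Defender for Cloud's actual discovery logic.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

credential = DefaultAzureCredential()
client = CognitiveServicesManagementClient(credential, "<subscription-id>")  # placeholder ID

# Enumerate Cognitive Services accounts and flag the Azure OpenAI ones.
for account in client.accounts.list():
    if account.kind == "OpenAI":
        print(f"Found Azure OpenAI account: {account.name} ({account.location})")
```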
AI-SPM scans your cloud infrastructure and code repositories, including IaC (infrastructure as code), to find artefacts and components of your AI applications, assessing them for vulnerabilities and misconfigurations against best practices to provide real-time insight. Scans happen at intervals depending on how you've configured the connector or plan, and a recommendation is generated once a misconfiguration is detected. The recommendation will also provide you with actionable steps for remediation, and automated remediation is available for selected recommendations.
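If you'd rather pull those recommendations programmatically than browse the portal, the azure-mgmt-security package exposes the same assessment data. Treat this as a hedged sketch: the SecurityCenter constructor signature and the "Unhealthy" status code match the SDK versions I've worked with, so verify against your installed version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

subscription_id = "<subscription-id>"  # placeholder
client = SecurityCenter(DefaultAzureCredential(), subscription_id)

# List Defender for Cloud assessments at subscription scope and
# surface the ones whose status code is "Unhealthy".
scope = f"/subscriptions/{subscription_id}"
for assessment in client.assessments.list(scope):
    if assessment.status and assessment.status.code == "Unhealthy":
        print(f"Unhealthy: {assessment.display_name}")
```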
Defender for Cloud security findings are contextualised and prioritised; after all, what use is a tool that simply identifies issues? Defender for Cloud takes a proactive approach, assigning each finding a risk priority based on its exploitability and the potential business impact on your organisation.
Once resources are discovered, in addition to dashboard findings and contextual recommendations, the Cloud Security Explorer feature can be used to further query misconfigured resources, such as containers running vulnerable libraries used by your AI models. You can apply multiple filters, including specific CVEs, and Microsoft has provided several pre-configured templates to help you get started.
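Cloud Security Explorer itself is a portal experience, but a rough programmatic analogue is Azure Resource Graph, which exposes Defender for Cloud findings in the documented securityresources table. The KQL below is an illustrative query of my own, not one of Microsoft's pre-configured templates.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Query unhealthy Defender for Cloud assessments from the
# securityresources table via Azure Resource Graph.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
    securityresources
    | where type == 'microsoft.security/assessments'
    | where properties.status.code == 'Unhealthy'
    | project id, title = tostring(properties.displayName)
    """,
)
response = client.resources(request)
for row in response.data:  # a list of dicts in the default objectArray format
    print(row["title"], "->", row["id"])
```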
Map attack paths to find direct and indirect exploitable risks to your GenAI services. This is a really cool feature; it's been around for a while but now supports attack paths for AI workloads. Attack paths show the riskiest security issues in your environment so you can prioritise your remediation efforts on the exploitable paths attackers might use. In the context of AI, consider a workload (VM) that has access to the data used for grounding or fine-tuning your AI model and is exposed to the internet; through lateral movement, that data is susceptible to poisoning. Here, attack path analysis has automatically identified a workload used to store or host the data for AI model grounding or training scenarios. This feature also works across AWS and GCP workloads where the Azure OpenAI Service, AI Foundry, or Amazon Bedrock is leveraged.
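Attack paths are also queryable outside the portal: Azure Resource Graph surfaces them under the microsoft.security/attackpaths type in the same securityresources table. A minimal listing might look like the sketch below; projecting properties.displayName is an assumption based on the documented schema, so adjust the fields to whatever your tenant actually returns.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# List attack paths that Defender for Cloud has surfaced for this subscription.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
    securityresources
    | where type == 'microsoft.security/attackpaths'
    | project id, title = tostring(properties.displayName)
    """,
)
for path in client.resources(request).data:
    print(path["title"])
```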
A new set of problems with a new set of recommendations
Microsoft has added new recommendations for AI workloads and artefacts, a full list of which can be found here. For recommendations where automated remediation using a logic app is not yet available, I highly recommend creating your own logic apps to automate remediation as much as possible; ideally, you want to get to a state where there is as little friction as possible to remediate a misconfiguration or vulnerability.
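As an example of what such self-built remediation might look like, here's a hypothetical Python helper that locks down public network access on an Azure OpenAI account, the kind of fix an AI-SPM recommendation might call for. The resource group and account names are placeholders, and the Account/AccountProperties shapes follow the azure-mgmt-cognitiveservices SDK as I understand it; treat this as a sketch to adapt, not a drop-in replacement for a Logic App.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties

client = CognitiveServicesManagementClient(DefaultAzureCredential(), "<subscription-id>")

def disable_public_access(resource_group: str, account_name: str) -> None:
    """Set publicNetworkAccess to Disabled on a flagged account (hypothetical helper)."""
    update = Account(properties=AccountProperties(public_network_access="Disabled"))
    # begin_update returns a poller; block until the change is applied.
    client.accounts.begin_update(resource_group, account_name, update).result()
    print(f"Public network access disabled on {account_name}")

disable_public_access("<resource-group>", "<openai-account-name>")  # placeholders
```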
Interested in more? Check out a related episode from the “Defender for Cloud in the Field” series.
Have blog ideas, want to engage on a topic, or explore collaboration? Let's take it offline; reach out on LinkedIn. I'd love to connect and continue the conversation!