Part 1: AI-DSPM with Microsoft Purview and Defender for Cloud 🦾☁️
Explore the all-new Purview AI-DSPM features
In this blog, we’ll explore Microsoft Purview Data Security Posture Management (DSPM), focusing specifically on the new AI-DSPM features. As generative AI (Gen-AI) tools become more integrated into organisations, managing AI-related data risks and compliance challenges is increasingly critical. DSPM is a broad topic, so while I can't cover everything here, this post will give you a solid starting point. Let’s make this fun with a quick pop quiz: what do you think is the number one concern organisations have when adopting Gen-AI?
Tricky, right? But if you guessed that data security is the top concern, you're absolutely on point. Data is the lifeblood of any organisation, and with the explosive growth of AI, it’s more important than ever to ensure robust data security practices are in place. Skipping these practices can lead to severe consequences.
What is DSPM and why is it important?
The need for data security has always been critical, but with the AI boom, it’s more pressing than ever. AI and machine learning (ML) have significantly increased the amount of data being stored and processed, especially in the cloud, which comes with its own set of challenges (some of which I've covered in previous blogs). According to Microsoft’s 2024 State of Multi-cloud Security Report, 74% of organisations experienced at least one data security incident between October 2022 and October 2023. The average cost of these incidents? A staggering $15 million.
This makes it clear: organisations need to implement more robust data security measures than ever before, and that’s where DSPM comes into play.
DSPM is an all-in-one solution that helps organisations implement best practices and technologies to reduce data security risks. It secures data wherever it resides, whether in the cloud, on-premises, or in hybrid environments, and it prioritises the security of the data itself rather than just the systems it sits on.
A key element of DSPM is comprehensive, correlated visibility into the type, location, and volume of sensitive data, as well as the user activity surrounding it. To achieve this, DSPM uses several key components, including:
Data classification to sort data by its importance or sensitivity (a toy classification sketch follows this list).
Encryption to make it unreadable to anyone without the right key.
Access control to ensure only the right people can see or use it.
Data Loss Prevention (DLP) to stop data from being leaked or stolen.
Monitoring to keep an eye out for anything dodgy happening to your data.
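To make the classification idea concrete, here’s a deliberately naive Python sketch that sorts text into coarse sensitivity buckets using a couple of regular expressions. It’s illustrative only: Purview relies on curated sensitive information types and trainable classifiers rather than hand-rolled patterns, and the label names below ('Highly Confidential', 'Confidential', 'General') are simply borrowed from the familiar Microsoft 365 defaults for flavour.

```python
import re

# Toy patterns for illustration only; real DSPM tooling relies on curated
# sensitive information types and trained models, not a handful of regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify(text: str) -> str:
    """Return a coarse sensitivity label based on naive pattern matches."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if "credit_card" in hits:
        return "Highly Confidential"
    if hits:
        return "Confidential"
    return "General"

print(classify("Invoice paid with card 4111 1111 1111 1111"))  # Highly Confidential
print(classify("Contact me at jane.doe@contoso.com"))          # Confidential
print(classify("Team lunch is at noon"))                        # General
```

The gap this sketch exposes is the whole point: patterns alone miss meaning and context, which is exactly why Purview layers trainable classifiers (covered later in this post) on top of pattern-based detection.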
For these measures to work, the right technology is essential. This is where Microsoft Purview comes in, acting as a Swiss Army knife for managing, governing, protecting, and securing data. Purview’s DSPM capabilities are natively integrated with Microsoft 365 and Windows devices, with no plugins or agents needed.
The crossroads between DSPM and CNAPP
Here’s where things get even more exciting. The spread of data across multiple clouds and services makes it challenging to govern and secure, let alone to gain visibility into potential risks. Enter Cloud-Native Application Protection Platforms (CNAPPs): DSPM is emerging as a key pillar within the CNAPP framework.
Why does this matter? Because DSPM provides organisations with a holistic view of their data, even when it's distributed across multi-cloud or hybrid environments. As the saying goes, you can’t secure what you don’t know exists. DSPM ensures your security team knows where sensitive data resides, who’s accessing it, and whether it’s at risk. At the same time, DSPM adds valuable context to other CNAPP capabilities, helping your team gain deeper insights into data threats. For instance, CSPM and DevSecOps handle application and cloud infrastructure security, while DSPM ensures the data itself stays secure.
If you’re new to CNAPPs, reading my series on them would be a great way to get started.
Introducing Microsoft Purview AI-DSPM
Now, let’s talk about AI-DSPM, a subset of DSPM with features specific to securing data in the context of Gen-AI. AI-DSPM helps address the unique challenges and risks posed by AI applications. It’s important to understand that AI-DSPM integrates with, and should be used alongside, the other Microsoft Purview data security and compliance controls to mitigate the risks associated with AI usage and further strengthen your data security posture. The goal is to manage your overall data security posture while adopting Gen-AI tools, without compromising either security or productivity.
Which Gen-AI apps are supported? The list of supported sites is part of a Sensitive Service Domain Group, which is managed by Microsoft and cannot be edited. You can find this group in the Endpoint DLP settings under 'Browser and domain restrictions to sensitive data'.
Additionally, there are new sequence detection options available for detecting potentially risky AI interactions. Learn more about these options here.
Getting started with AI-DSPM in Microsoft Purview
To fully benefit from AI-DSPM, you’ll need to activate the service and ensure the prerequisites are met. These include activating Microsoft Purview Audit (already active for new tenants), installing the necessary browser extensions, onboarding devices, and activating the out-of-the-box policies. Allow at least 24 hours for these policies to collect data and to reflect any changes made to the default settings.
What Happens After First Activation?
Once AI-DSPM is activated, an automated data risk assessment begins. This process may take one to three days, depending on the size of your tenant. The report you receive will provide insights into the types of data within your tenant and the associated risks.
Features & Use Cases
Let’s explore some of the key features of AI-DSPM and how they help protect your data.
Scenario: “I have users of AI apps such as GPT in my organisation today. What is the quickest way to gain insight into AI usage and protect my data?” AI-DSPM provides several preconfigured policies that you can activate with a single click:
DLP Policy: Detect Sensitive Info Added to AI Sites:
Discovers sensitive content pasted or uploaded to AI sites using Microsoft Edge, Chrome, and Firefox.
Covers all users and groups in your organisation.
Operates in audit mode only (unless edited); see the sketch below for pulling the resulting events programmatically.
The list of supported sites can be found here.
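Because this policy starts in audit mode, you may also want to pull the resulting events programmatically rather than reviewing them only in the portal. Below is a minimal sketch, assuming an Entra ID (Azure AD) app registration that has been granted access to the Office 365 Management Activity API; the tenant ID, client ID, and secret are placeholders, and the exact fields returned for AI-site matches can vary, so treat it as a starting point rather than a reference implementation.

```python
import datetime as dt

import msal      # pip install msal requests
import requests

# Placeholders: supply your own tenant and app registration values.
TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"
RESOURCE = "https://manage.office.com"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)
token = app.acquire_token_for_client(scopes=[f"{RESOURCE}/.default"])["access_token"]
headers = {"Authorization": f"Bearer {token}"}
feed = f"{RESOURCE}/api/v1.0/{TENANT_ID}/activity/feed"

# One-time setup: enable the DLP subscription. Real code should inspect the
# response, as the call complains if the subscription is already enabled.
requests.post(f"{feed}/subscriptions/start?contentType=DLP.All", headers=headers)

# List content blobs for the last 24 hours, then fetch the events in each blob.
end = dt.datetime.utcnow()
start = end - dt.timedelta(hours=24)
params = {
    "contentType": "DLP.All",
    "startTime": start.strftime("%Y-%m-%dT%H:%M:%S"),
    "endTime": end.strftime("%Y-%m-%dT%H:%M:%S"),
}
blobs = requests.get(f"{feed}/subscriptions/content", headers=headers, params=params).json()

for blob in blobs:
    for event in requests.get(blob["contentUri"], headers=headers).json():
        # Field names vary by workload, so read them defensively.
        print(event.get("CreationTime"), event.get("Operation"), event.get("UserId"))
```

A similar pattern works with the other content types the API exposes (for example Audit.General) if you want a broader view of AI-related activity, though it’s worth checking the documentation for which events land where.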
Insider Risk Management Policies:
Detect When Users Visit AI Sites: Tracks user visits to AI sites via browser.
A user risk score is assigned only when specific thresholds for exfiltration activities are met.
As with all policies, the policy conditions must be met; during policy configuration you’ll see visuals confirming you’re all set.
Detect Risky AI Usage: Assesses user risk by identifying risky prompts and responses in Microsoft 365 Copilot and other generative AI applications.
Detection of risky activity by this policy also contributes to the user risk scoring in Adaptive Protection.
Detection focuses on user browsing activity to generative AI websites, as well as prompts and AI responses containing sensitive information in Microsoft 365 Copilot and Microsoft Copilot.
Detect Unethical Behaviour in Copilot: One of my favourites, this policy flags sensitive information in prompts and responses within Microsoft 365 Copilot. It applies to all users and groups in the organisation and uses the 'Prompt shield' and 'Protected material' trainable classifiers.
Prompt shield trainable classifier: Used by the policy to detect potentially risky generative AI interactions. As of now, these classifiers are in public preview, and a general availability (GA) date has not been announced.
Protected material trainable classifier: Detects known text content that may be protected under copyright or branding laws. Detecting protected material in generative AI responses helps ensure compliance with intellectual property laws and maintain content originality.
💡 Purview's Trainable classifiers are pre-trained models designed to identify and classify specific sensitive content, such as documents related to contracts, invoices, or financial reports, based on their meaning and context rather than just keywords or patterns.
Scenario: "How can I proactively prevent the exposure of sensitive data while using generative AI applications in my organisation?" Default Policies for Data Security in Generative AI can also be enabled with one click:
DLP Policy: Block Sensitive Info from AI Sites
Uses Adaptive Protection to apply a block-with-override for elevated-risk users attempting to paste or upload sensitive data to AI apps in Edge, Chrome, and Firefox.
Can be customised as needed with custom conditions.
Purview Information Protection
This policy creates default sensitivity labels and associated policies unless they have already been configured.
Activity Explorer, Data Assessments and Reports:
Activity Explorer:
The Activity Explorer tool provides a historical view of policy activities, showing the type of activity, the user, the date and time, the AI app category, and any sensitive data involved. It helps you track interactions with AI applications like Copilot, including risky activities and sensitive information usage.
Data Assessments
Data Assessments help identify and fix data oversharing issues across your organisation. Weekly assessments check the top 100 SharePoint sites used by Copilot for outdated or overly shared data. You can also create custom checks for specific users or sites.
Key features include:
Protect Tab: Restrict access to sensitive data, add labels, or set rules to delete unused content.
Monitor Tab: Track file sharing activities and manage permissions to prevent accidental sharing (see the rough sketch below for spotting broadly shared links).
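To make “oversharing” a bit more concrete, here’s a rough sketch of the kind of signal such a check surfaces: files in a SharePoint document library whose sharing links reach a broader audience than named people. It calls Microsoft Graph with an assumed access token carrying Sites.Read.All, uses a placeholder site ID, and only walks the library root, so it is nowhere near what the built-in assessment does; it’s purely to illustrate the idea.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<site-id>"  # placeholder, e.g. "contoso.sharepoint.com,<guid>,<guid>"
HEADERS = {"Authorization": "Bearer <access-token>"}  # token acquisition omitted; see the earlier MSAL sketch

def flag_broad_links(site_id: str) -> None:
    """Print files in the site's default library root whose sharing links reach a broad audience."""
    items = requests.get(
        f"{GRAPH}/sites/{site_id}/drive/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        drive_id = item["parentReference"]["driveId"]
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions", headers=HEADERS
        ).json().get("value", [])
        for perm in perms:
            link = perm.get("link") or {}
            # "anonymous" = anyone with the link; "organization" = everyone in the tenant.
            if link.get("scope") in ("anonymous", "organization"):
                print(f"{item['name']}: {link['scope']} link ({link.get('webUrl')})")

flag_broad_links(SITE_ID)
```

In practice you would let the weekly assessment do this work and remediate from the Protect tab, but seeing the raw permission scopes helps explain what “overshared” actually means.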
Reports & Recommendations:
The reports section shows you key statistics on user behaviour within Gen-AI applications. Clicking a report takes you straight into Activity Explorer, where an automated filter is applied.
Where to next:
With AI becoming central to business operations, securing data in AI environments is crucial. By using AI-DSPM with Microsoft Purview, organisations can protect sensitive data while leveraging Gen-AI tools, balancing security and productivity. There’s much more to explore, so stay tuned for deeper dives into these features!
View helpful sources below:
Join the Purview Community.
Sign up for Public Preview features, join the Compliance and Privacy Customer Community Program (CCP) by following these instructions.
Check out the One Stop Shop - Microsoft Purview Customer Experience Engineering (CxE) for user templates and other goodies.
Credits
Thanks to Ash Naik for providing Purview expertise and helping out with this blog! 😊
Have blog ideas, want to engage on a topic, or explore collaboration? Let’s take it offline; reach out on LinkedIn. I’d love to connect and continue the conversation!