Five Minutes Into a Bad Purview Deployment
- E.C. Scherer

- Mar 16
- 4 min read
Notes from the Field: you can learn a lot about a data security program in the first five minutes of a deployment. Usually, what you find is dozens of DLP policies, labels nobody understands, and everything quietly running in simulation mode.
The organization has already “deployed Purview.” On paper, the program exists. Features are turned on. Policies are configured. There are dashboards and alerts and reports.
But the pieces don’t make sense together.
Their information security policy doesn’t match the labels they deployed. There are dozens of DLP policies. Insider Risk Management is enabled without sensitivity labels. Risk acceptances exist because Microsoft’s recommended configurations couldn’t be implemented without breaking workflows.
And the one that always tells the story fastest:
Everything is running in simulation mode.
Alerts get diverted into some forgotten Outlook folder.
Nobody wants to touch the policies because every time they tried before, it disrupted how people actually worked.
At that point, the problem usually isn’t Purview.
The problem is how the program started.
The On-Premises Mindset Problem
A lot of organizations approach Purview with an older security mindset.
The thinking goes something like this:
“Before we can protect the data, we need to know exactly where all of it is.”
So, the first step becomes scanning everything.
Find the sensitive data.
Map it.
Catalog it.
Protect it.
That approach made sense when security tools were primarily on-prem scanning engines.
But Purview doesn’t really work that way.
Purview is a suite of capabilities designed to protect data regardless of where it lives, where it was created, or where it travels.
The protection travels with the data.
When organizations approach Purview like a discovery platform instead of a protection platform, they end up trying to solve the wrong problem first.
And that’s where things start to unravel.

Notes from the field: The goal isn’t to find every piece of sensitive data first. The goal is to make sure it’s protected wherever it shows up.
What a Failed Start Looks Like
There are a few patterns I see over and over in data security programs that started in the wrong place.
I know things went sideways early when I see:
Hundreds of DLP policies
Someone tried to build a policy for every scenario instead of designing a coherent protection model.
Everything is stuck in simulation mode
The policies technically exist, but no one is confident enough to enforce them.
Alerts quietly redirected
Instead of solving the signal problem, alerts are routed somewhere they won’t bother anyone.
Label sprawl
There are five, ten, sometimes fifteen labels, each with multiple sublabels.
No one can realistically expect users to understand them.
Labels describing data types instead of sensitivity
Instead of a clear sensitivity hierarchy, the labels look like this:
PHI
PII
Client Data
Restricted
Internal Confidential
Users are expected to decide between technical categories they don’t understand.
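The difference is easy to sketch. A data-type taxonomy makes the user answer a classification question they may not be equipped to answer, while a sensitivity hierarchy asks only one: how sensitive is this? The label names below are illustrative assumptions, not a recommended taxonomy; a real one should come from the organization's information security policy.

```python
# A data-type taxonomy: users must decide between technical categories
# ("is this PHI, PII, or Client Data?") before they can label anything.
data_type_labels = ["PHI", "PII", "Client Data", "Restricted", "Internal Confidential"]

# A sensitivity hierarchy: one axis, one decision.
# (Illustrative names only -- derive the real hierarchy from policy.)
sensitivity_labels = [
    "Public",
    "General",
    "Confidential",
    "Highly Confidential",
]

def pick_label(sensitivity_level: int) -> str:
    """Map a 0-3 sensitivity level to exactly one label."""
    return sensitivity_labels[sensitivity_level]

print(pick_label(2))  # Confidential
```

The point of the hierarchy is that every document has exactly one obvious answer, which is what makes user-applied labeling survivable at scale.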
Insider Risk deployed without labels
Behavior analytics tools get turned on before the organization has any consistent way to identify sensitive data.
No clear user experience
If I log in as a new employee, I should be able to answer two questions quickly:
What label should I apply to this document?
Where am I allowed to store or share sensitive data?
If I can’t figure that out quickly, the deployment already has a usability problem.
The Real Problem: Security Without Context
Most failed deployments have the same root cause.
The tools were deployed before anyone fully understood how people actually work.
Security policies were written without mapping:
collaboration patterns
workflows
device usage
identity architecture
data classifications
Organizations try to implement features instead of designing a data protection program.
They start turning on knobs instead of building a system.
What I Do Instead
If I had the chance to reset one of these programs from the beginning, I wouldn’t start by enabling Purview features.
I would start with understanding the environment.
First, I want to understand the information security policy. That becomes the foundation for everything else.
From there, I build a matrix that maps:
sensitivity labels
supporting DLP policies
exception governance
But policies alone aren’t enough.
I want to understand how people actually collaborate.
What tools do they use?
Not just Microsoft tools.
Sometimes it’s Slack. Sometimes it’s Box. Sometimes it’s something no one in security even knew about.
I also want to know whether the organization has a modern identity architecture in place. Identity is what allows data protections to travel with the user, the device, and the document.
Without that foundation, data security controls quickly become brittle.
Once those pieces are understood, we can design something much more important than a set of policies.
We can build a holistic, deployable, scalable workbook that maps:
users
workflows
sensitive data
protection mechanisms
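One way to sketch such a workbook is as a small data model, where each row ties a user group and workflow to a sensitivity level and the controls that protect it. The field names and example row here are hypothetical, not a Purview artifact; the real workbook is populated from the organization's policy and collaboration mapping.

```python
from dataclasses import dataclass, field

@dataclass
class WorkbookRow:
    """One row of the protection workbook: who handles what data,
    in which workflow, and which controls protect it.
    (Illustrative structure and example values.)"""
    persona: str            # user group, e.g. "Claims Analysts"
    workflow: str           # how they actually collaborate
    sensitivity: str        # label from the sensitivity hierarchy
    protections: list[str] = field(default_factory=list)  # DLP, encryption, etc.

workbook = [
    WorkbookRow(
        persona="Claims Analysts",
        workflow="Share case files via Teams",
        sensitivity="Highly Confidential",
        protections=["Auto-apply label on detection", "DLP: block external sharing"],
    ),
]

# A quick consistency check: every non-public workflow has at least one protection.
assert all(row.protections for row in workbook if row.sensitivity != "Public")
```

Even a simple structure like this forces the question a feature-first deployment never asks: for each real workflow, which control actually protects the data moving through it?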
That’s what prevents organizations from ending up with another frustrating deployment that sits in simulation mode forever.

Security Blanket Nature Break
Step away from the dashboards for a moment.

This western honeybee (Apis mellifera) was exhausted. It wouldn’t take sugar water from a shallow dish and didn’t have the energy to move or climb.
The easiest way to help was placing droplets of sugar water on my hand until it had enough energy to recover and fly off.
Some systems don’t need to be rebuilt.
They just need the right kind of help.
The Real Goal
The goal of a data security program isn’t to turn on as many features as possible.
The goal is to make the secure behavior the easiest behavior for users.
When a program starts with that mindset, Purview deployments tend to go smoothly.
When it doesn’t, the result is usually the same: Dozens of policies. Labels nobody understands. And a security program that technically exists but nobody trusts enough to enforce.


