
People-First Purview (Strategy): DLP Without Breaking Trust

  • Writer: E.C. Scherer
  • Dec 28, 2025
  • 4 min read

Updated: Jan 13

Reducing risk without turning people into suspects


Most organizations misunderstand DLP because they think it’s a legacy data check.


SSN detected? Yes → block it. No → send it.

That’s not DLP. That’s pattern matching with anxiety.


Real DLP is intelligence meant to support how work actually happens, not how policies wish it happened. When you treat it like a binary filter, it becomes fragile fast. When you treat it like signal, it actually helps.
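The difference between the two mindsets can be sketched in a few lines. This is a toy illustration, not how Purview evaluates policies: the regex, function names, and destination list are all hypothetical, and real sensitive-info detection is far richer than one pattern.

```python
import re

# Toy SSN-style pattern for illustration only -- not a real Purview classifier.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def binary_filter(message: str) -> str:
    """The 'pattern matching with anxiety' model: one regex, one verdict."""
    return "block" if SSN_PATTERN.search(message) else "send"

def signal_based(message: str, destination: str, trusted: set) -> str:
    """Treat the match as a signal and combine it with context before deciding."""
    if not SSN_PATTERN.search(message):
        return "send"
    if destination in trusted:
        return "send"   # e.g. new-hire PII going to HR is normal business
    return "coach"      # prompt the user instead of hard-blocking

trusted = {"hr@example.com"}
print(binary_filter("SSN: 123-45-6789"))                             # block
print(signal_based("SSN: 123-45-6789", "hr@example.com", trusted))   # send
print(signal_based("SSN: 123-45-6789", "pal@example.com", trusted))  # coach
```

Same detection, different posture: the second function asks whether the movement makes sense before it interrupts anyone.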


Labels Tell You What Matters. People-First DLP Decides When to Step In.

Sensitivity labels are about intent. They tell you what the organization cares about and why. DLP shows up later, in the messy moments. Send. Share. Upload. Copy. Move.


Those moments are where risk actually exists.


If labeling is the map, DLP is deciding whether this is the moment to tap someone on the shoulder and say, “Hey, pal, pause.”


Not every pause needs to be a stop sign.


Why DLP Gets a Bad Reputation

I hear the same things everywhere:

  • “We can’t interrupt workflows.”

  • “Last time we tried this, support tickets exploded.”

  • “Legal will never approve this.”


That reaction usually comes from bad past rollouts, not bad intent.


But here’s the part people don’t like hearing:

When workflows can’t tolerate basic protections for sensitive data, that’s a signal your security policies need work, not a reason to avoid DLP.

DLP isn’t something that happens to users. If it is, you already lost.


Monitor First Is a Design Choice

“Monitor first” doesn’t mean doing nothing.


It means you’re choosing to understand reality before enforcing opinion.


Labeling tells you what matters. Monitoring shows you how that data actually moves.


Those two things rarely line up perfectly on day one.


Monitoring helps surface:

  • Real workflows labeling didn’t fully capture

  • Where labels need to act as intentional exceptions

  • Where something looks risky on paper but makes total sense in context


Skip this step and you won’t get security. You’ll get friction.


What I Pay Attention To (Early)

At the start, I care about environmental context, not individual behavior.


Two signals matter most:

  • Location – where data is moving from and to

  • Department – because expected use varies wildly across teams


Everything else goes in a different bucket. User behavior patterns, timing anomalies, repeated actions, those are Insider Risk signals to me, not DLP design inputs. Mixing those too early is how DLP turns into surveillance, even if that wasn’t the intent.
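One way to keep that separation honest is to make it structural. A minimal sketch, with hypothetical field names: environmental context goes to DLP design, and everything behavioral is routed to a different bucket entirely.

```python
# Hypothetical event shape -- the field names are illustrative, not a real schema.
event = {
    "location": "external_share",   # environmental: DLP design input
    "department": "Finance",        # environmental: DLP design input
    "user": "jdoe",                 # behavioral: Insider Risk territory
    "time_of_day": "02:14",         # behavioral: Insider Risk territory
    "repeat_count": 7,              # behavioral: Insider Risk territory
}

DLP_INPUTS = {"location", "department"}

def split_signals(event: dict):
    """Keep environmental context for DLP; route behavior elsewhere."""
    dlp = {k: v for k, v in event.items() if k in DLP_INPUTS}
    insider_risk = {k: v for k, v in event.items() if k not in DLP_INPUTS}
    return dlp, insider_risk

dlp, irm = split_signals(event)
print(dlp)  # {'location': 'external_share', 'department': 'Finance'}
```

If the DLP side of the split ever needs the behavioral bucket to make a decision, that's the moment it has drifted toward surveillance.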


[Diagram: Labels, DLP, and Insider Risk, connected by arrows — intent, moments, patterns.]
People-first DLP focuses on timing, not punishment.

DLP Isn’t Here to Stop Data From Moving

This part surprises people.


Organizations almost never ask me to block sensitive data outright.


Because sensitive data has to move:

  • Employee and new hire PII flows to HR

  • PCI data flows to payment vendors

  • Legal data flows between internal and external counsel

  • Medical terms flow through coursework, and they are not the same thing as HIPAA-regulated counseling data


Monitoring doesn’t expose recklessness. It confirms normal, necessary exchange.

DLP isn’t about stopping movement. It’s about making sure movement makes sense.


“Why Aren’t We Blocking This Yet?”

Leadership often assumes sensitive data stays neatly inside the org.


It doesn’t. Never has.


This is where I reset the conversation:

Blocking is an opinion. Monitoring is evidence.

You can’t decide what should be blocked until you understand what “normal” actually looks like. Monitoring is how you get everyone looking at the same reality.


Where I Stop Monitoring and Step In

People-first doesn’t mean hands-off.


There are lines I don’t “learn” from.


Two things move me straight out of monitor mode:

  • Sensitive data moving to USB or personal cloud storage

  • Sensitive data showing up in departments where it clearly doesn’t belong


When data type and department don’t match, that’s not a tuning issue. That’s a scope issue.

At that point, intervention is appropriate. Not because someone is malicious, but because the risk is real and unsupported by context.
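Those two hard lines can sit on top of the monitoring posture without contradicting it. A sketch, under stated assumptions: the data types, department mapping, and destination names are all hypothetical stand-ins for whatever your labeling taxonomy actually defines.

```python
# Hypothetical mapping: data type -> departments where movement is normal business.
EXPECTED_FLOWS = {
    "employee_pii": {"HR"},
    "pci": {"Finance"},
    "legal_privileged": {"Legal"},
}

HARD_LINE_DESTINATIONS = {"usb", "personal_cloud"}

def triage(data_type: str, department: str, destination: str) -> str:
    """Decide whether an observed movement stays in monitor mode."""
    # Lines we don't 'learn' from: these skip monitor mode entirely.
    if destination in HARD_LINE_DESTINATIONS:
        return "intervene"
    # Data type showing up outside its expected departments is a scope issue,
    # not a tuning issue.
    if department not in EXPECTED_FLOWS.get(data_type, set()):
        return "intervene"
    return "monitor"

print(triage("employee_pii", "HR", "internal"))        # monitor: normal flow
print(triage("pci", "Finance", "usb"))                 # intervene: hard line
print(triage("employee_pii", "Marketing", "internal")) # intervene: scope issue
```

Note that "intervene" here still means a human conversation first, not an automatic block.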


Coaching Beats Blocking (Most of the Time)

DLP shouldn’t feel like a trap.


When a user sees a prompt, I want two things to happen:

  • A pause long enough to think

  • Enough context to make the right call


The best outcome is no alert at all.


I want users to feel good about fixing something before it escalates.


My early success metric is simple:

No more than two alerts per user per week for the first six weeks


If alerts repeat endlessly, the system isn’t teaching. It’s nagging.
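That metric is cheap to measure from whatever alert export you have. A minimal sketch, assuming alerts arrive as (user, week) pairs; the threshold and names are just the rule above written down.

```python
from collections import Counter

# The early success metric: no more than two DLP alerts per user per week
# during the first six weeks. Threshold and input shape are illustrative.
MAX_ALERTS_PER_WEEK = 2

def nagged_users(alerts):
    """alerts: iterable of (user, week_number) pairs from early monitoring.
    Returns users whose alert rate says the policy is nagging, not teaching."""
    per_user_week = Counter(alerts)
    return {user for (user, week), n in per_user_week.items()
            if n > MAX_ALERTS_PER_WEEK}

alerts = [("ana", 1), ("ana", 1), ("ana", 1),   # three in one week: too many
          ("ben", 1), ("ben", 2)]               # spread out: fine
print(nagged_users(alerts))  # {'ana'}
```

Anyone on that list needs a better prompt, a tuned policy, or a conversation, not a fourth identical alert.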


Bad prompts are too long, too vague, too mean, or totally disconnected from training.


Good prompts are short, clear, and written in the same language as your data classification policy.


DLP Without Surveillance

Let’s be explicit.

DLP crosses into surveillance when it’s designed to evaluate people instead of protecting data.

DLP should answer questions about data movement, not intent.


That’s why I intentionally avoid:

  • Policies built around named users

  • Tracking individual behavior over time

  • Treating one-off anomalies like intent

  • Productivity metrics disguised as security


Those signals have a place. They just don’t belong here.


If DLP requires us to watch people in order to work, we’re doing it wrong.


Mandatory Brain Break


[Photo: opossum surrounded by dry leaves, next to a white wall.]
Virginia opossum (Didelphis virginiana), © ecs_nature

Opossums out during the day are often assumed to be sick or dangerous, when in reality they’re responding to context: weather, hunger, disruption. The behavior looks unusual if you don’t understand the environment, but it’s completely normal. Risk without context is just fear in a lab coat.


Knowing When to Hand It Off

There’s a clean transition point.


When the question shifts from “Does this data movement make sense?” to “Why is this person doing this repeatedly?”


That’s no longer a DLP conversation.


That’s Insider Risk Management, with different guardrails, approvals, and oversight.


Platforms like Microsoft Purview separate these capabilities for a reason. Respecting that boundary is how you protect data and trust.


What’s Next

This post was about moments.


The next one is about patterns.


People-First Purview: Insider Risk Management will focus on:

  • When behavior actually matters

  • How to look for risk without profiling people

  • And how to intervene proportionally, with humans still in the loop


DLP tells you what happened. Insider Risk helps you decide what to do about it.



©2026 by E.C. Scherer
