
The Safe Inbox Initiative with Purview

  • Writer: E.C. Scherer

A Very Non-Standard Use of Microsoft Purview


Every cooking blog post has that part you want to skip, the life story before the recipe.


I’m about to do the same thing in a technical post.


I’ll keep it as short as I can, but the “why” matters here, because it explains why this is a very non-standard solution when all you really want to know is what temperature to bake the thing at.




The backstory

A university president testified before Congress on a politically divisive issue. As you might expect, that visibility came with backlash.


A lot of backlash.


Within a single week, VP inboxes received over 250,000 emails. Some were coordinated copy-and-paste protest messages. Many were racist or sexist. And a non-trivial number crossed the line into explicit threats against her health and physical safety.


The sheer volume effectively denial-of-serviced her Exchange Online mailbox. Exchange limits were hit. Legitimate messages were delayed or buried. The inbox stopped being usable for its actual purpose.


IT and executive staff were forced into manual triage just to keep the mailbox functioning.


And that’s where the real problem showed up.


The people touching that inbox weren’t filtering abstract noise. They were being repeatedly exposed to some of the most vile language you can imagine, directed at someone they knew and worked with. Someone they were responsible for supporting and protecting.


Traditional controls didn’t help. Spam filters were inconsistent. Exchange mail flow rules were too rigid and easy to evade with emojis and symbols. Turning off external email was a non-starter in a university environment. Third-party tools weren’t in the budget. And asking executive assistants to pre-screen hundreds of thousands of messages was rightly rejected on practicality grounds alone, to say nothing of the mental toll.


Leadership needed something that worked fast, used tooling the university already owned, preserved freedom of speech, and reduced human exposure to abuse while still surfacing real threats for escalation.



Mandatory brain break.

Before we pivot from people to platforms, here’s an American Toad (Anaxyrus americanus), spotted by @ecs_nature in Prince George County, VA.


Take a breath. Drink some water. Stretch your shoulders.


American Toad, Anaxyrus americanus, sitting on a pile of gravel
American Toads are quiet, unassuming, and remarkably resilient. They spend most of their time doing their thing unnoticed, keeping ecosystems balanced and handling more than you’d expect from something so small.

Alright. Let’s talk about why this ended up being a Purview problem.



Why this is not how Purview is usually used

Microsoft Purview is not marketed as a “protect highly visible people from harassment at scale” platform. Sure, the advanced license includes Communication Compliance, but that alone doesn’t give you proactive quarantine or blocking of messages.


This wasn’t about data loss.

This wasn’t about compliance theater.

This wasn’t about stopping criticism or dissent.


People have a right to be upset and to say so.


This was about inbox safety, threat triage, and human wellbeing.


The goal was to:

  • Restore inbox usability

  • Reduce exposure to abusive content

  • Identify and escalate credible threats quickly

  • Create visibility and auditability without forcing humans to read everything

Purview happened to have the right building blocks to do that, even if this use case doesn’t show up on the product slide deck.


The actual problem we needed to solve

Public figures within the university were receiving large volumes of external email containing harassment, discriminatory language, and threats of harm.


The organization needed a way to:

  • Take automated or semi-automated action on unsafe messages

  • Expedite escalation of credible threats

  • Prioritize the mental wellbeing of end users and support staff

  • Preserve legitimate communication and free expression

  • Report on violations of existing code of conduct and anti-harassment policies


All without introducing new tooling or turning IT into a permanent triage team.



Ingredients

This solution intentionally combined tools that usually live in very different conversations:

  • Microsoft Purview Data Loss Prevention (DLP): used for action and enforcement, not just reporting.

  • Trainable Classifiers: to detect threats, violence, and harassment even when senders tried to get creative.

  • Microsoft Purview Communication Compliance: used as a review and reporting layer with proper separation of duties.

  • Exchange Online: for routing, alternate delivery, and escalation workflows.


No third-party tools. No net-new licenses. Just careful composition.


The recipe (finally)

Step 1: Define a protected population

Instead of scoping at the policy level, protection was scoped inside the DLP rule.

The logic became:

If an email from an external sender is sent to members of a specific Microsoft 365 group and contains certain classifiers, take action.

This allowed the solution to protect high-risk individuals without impacting the rest of the university.


Scoping at the wrong layer here either over-blocks or does nothing useful. This detail matters more than it seems.


DLP scoping location list, Exchange email is checked and scoped to "All groups"
Remember, DLP is designed to stop sensitive information from leaving the organization, so scoping here means the policy only applies when the unsafe message comes from the specified users or groups, which is the opposite of what we need.

DLP rule titled "Safe Inbox Rule" that is looking for targeted harassment, profanity, threat, and discrimination in emails sent to recipient group.
Instead, scope to "recipient is a member of:" within the DLP rule itself. The rule above says: "If any email (internal or external) that contains the listed trainable classifiers is sent to a member of the family@[domain] group, then take the listed actions."
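
If you'd rather script it than click through the portal, here's a minimal Security & Compliance PowerShell sketch of that same logic. The policy name, group address, and triage mailbox below are placeholders, not the real environment's values, and the trainable-classifier condition is easiest to attach in the Purview portal, so it only appears here as a comment.

# Minimal sketch only: policy, group, and mailbox names below are placeholders.
# Requires the ExchangeOnlineManagement module and a compliance admin role.
Connect-IPPSSession

# The policy covers all Exchange locations; the narrow scoping happens in the rule.
New-DlpCompliancePolicy -Name "Safe Inbox Policy" `
    -ExchangeLocation All `
    -Mode TestWithoutNotifications   # switch to Enable once you trust the matches

# Scope by recipient, not sender: protect anyone in the protected-population group.
New-DlpComplianceRule -Name "Safe Inbox Rule" `
    -Policy "Safe Inbox Policy" `
    -SentToMemberOf "family@yourdomain.edu" `
    -FromScope NotInOrganization `
    -GenerateIncidentReport "safe-inbox-triage@yourdomain.edu" `
    -IncidentReportContent All `
    -ReportSeverityLevel High

# The "content contains trainable classifiers" condition (Targeted harassment,
# Profanity, Threat, Discrimination) is attached to this rule in the Purview portal.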

Step 2: Let classifiers do the heavy lifting

Mail flow rules fail because they only catch what you explicitly tell them to catch.


Trainable classifiers made it possible to detect:

  • Threats of violence

  • Harassment and hate speech


Document fingerprinting helped control the "templatized," copy/paste emails.


Keyword lists were still used, but only as a supplement for edge cases and regional language gaps, not as the primary control.
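
As a rough sketch of the fingerprinting piece, one of the copy-and-paste campaign letters can be turned into a fingerprint-based sensitive information type from Exchange Online PowerShell. The file path and names below are made up for illustration, and depending on your tenant you may prefer to build the same fingerprint in the Purview portal instead.

# Sketch: build a document fingerprint from one of the copy/paste campaign letters.
# The file path and names are placeholders.
Connect-ExchangeOnline

$letterBytes = [System.IO.File]::ReadAllBytes("C:\Triage\campaign-letter-sample.docx")

$fingerprint = New-Fingerprint -FileData $letterBytes `
    -Description "Coordinated campaign letter, first wave"

# Publish it as a custom sensitive information type that DLP rules can reference.
New-DataClassification -Name "Campaign Letter Template" `
    -Fingerprints $fingerprint `
    -Description "Matches the templatized protest email circulating externally"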


Step 3: Be intentional about “action”

Not every unsafe message deserves the same response.


Depending on severity, actions included:

  • Redirecting messages to an alternate mailbox or folder

  • Preventing delivery to the end user

  • Generating alerts for rapid review

  • Triggering escalation workflows for campus police or security teams


The goal wasn’t to nuke everything. It was to match action to intent.
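
To make "match action to intent" concrete, here's one hedged way to express two tiers in PowerShell: abusive-but-not-threatening mail gets quietly diverted to a triage mailbox, while threat-classifier matches also raise an alert and an incident report for escalation. The rule names, addresses, and group are placeholders, and as before the classifier conditions are assumed to be attached in the portal.

# Sketch of two tiers inside the same policy. Names and addresses are placeholders.

# Tier 1: abusive but non-threatening mail is diverted so it never reaches the inbox.
New-DlpComplianceRule -Name "Safe Inbox - Divert Abuse" `
    -Policy "Safe Inbox Policy" `
    -SentToMemberOf "family@yourdomain.edu" `
    -FromScope NotInOrganization `
    -RedirectMessageTo "safe-inbox-triage@yourdomain.edu" `
    -ReportSeverityLevel Medium

# Tier 2: messages matching the threat classifier also alert the review team
# so escalation to campus police or security can start quickly.
New-DlpComplianceRule -Name "Safe Inbox - Escalate Threats" `
    -Policy "Safe Inbox Policy" `
    -SentToMemberOf "family@yourdomain.edu" `
    -FromScope NotInOrganization `
    -RedirectMessageTo "safe-inbox-triage@yourdomain.edu" `
    -GenerateAlert "threat-review@yourdomain.edu" `
    -GenerateIncidentReport "threat-review@yourdomain.edu" `
    -IncidentReportContent All `
    -ReportSeverityLevel High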


Screenshot of email with profanity to show DLP policy in action.
(Side note: Am I the only one who feels bad testing this policy?)

Step 4: Use Communication Compliance for visibility, not punishment

DLP handled detection and action. Communication Compliance handled review and awareness.


This provided:

  • A dedicated review experience

  • Role-based access control

  • Separation of duties

  • Trend analysis without end-user impact


Because Communication Compliance doesn’t rely on confidence thresholds in the same way DLP enforcement does, it allowed the organization to capture more data for reporting without risking inbox disruption.
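
For the separation-of-duties piece, here's a small sketch using the built-in Communication Compliance role groups. The role group names reflect current Purview defaults and may differ in your tenant, and the member addresses are placeholders: analysts handle day-to-day triage while investigators can view full message content and escalate.

# Sketch: least-privilege review access via built-in Purview role groups.
# Role group names reflect current defaults and may differ in your tenant;
# member addresses are placeholders.
Connect-IPPSSession

# Analysts handle day-to-day triage with limited message detail.
Add-RoleGroupMember -Identity "Communication Compliance Analysts" `
    -Member "triage-reviewer@yourdomain.edu"

# Investigators can view full content and escalate credible threats.
Add-RoleGroupMember -Identity "Communication Compliance Investigators" `
    -Member "threat-investigator@yourdomain.edu"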



What this is not

This is not a censorship engine.

This is not a replacement for law enforcement or threat intelligence.


This is a triage and safety pattern that respects free expression while protecting people from harm.


Why this worked

  • Inbox usability was restored

  • Manual exposure to abusive content dropped significantly

  • Credible threats were easier to identify and escalate

  • Leadership gained visibility without reading everything

  • IT teams stopped playing whack-a-mole with mail flow rules


Most importantly, it centered the wellbeing of the humans involved.


Who this pattern is for

This approach makes sense if:

  • You support highly visible individuals

  • You operate in higher education or public-sector environments

  • You already own Microsoft Purview

  • You need speed, restraint, and auditability


It probably doesn’t if:

  • You want heavy-handed content moderation

  • You’re trying to solve generic spam

  • You don’t have clear escalation paths


Sometimes the weird problems are the ones that teach you the most about your tools.


And sometimes, unfortunately, you really do need the annoying story before the recipe makes sense.

