MSSP Reality Check

The Scoring Framework

Six dimensions. Each scored 0–25. Total out of 150, normalised to 100. Not a survey, not a vibe check — an evidence-based assessment of whether your MSSP is actually delivering what you're paying for.
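
For concreteness, here is the arithmetic as a minimal Python sketch. The dimension names and the 0–25 / 150 / 100 figures come from this framework; the function and variable names are illustrative:

```python
# Minimal scoring arithmetic for the framework described above.
# Dimension names are from this document; function and variable names are not.

DIMENSIONS = [
    "Onboarding & Discovery Rigour",
    "Detection Quality & Coverage",
    "Triage & Escalation Process",
    "Tuning & Continuous Improvement",
    "Reporting & Client Visibility",
    "Commercial & Delivery Alignment",
]

def normalised_score(scores: dict[str, int]) -> float:
    """Sum six 0-25 dimension scores (max 150) and normalise to 100."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("score every dimension exactly once")
    for name, points in scores.items():
        if not 0 <= points <= 25:
            raise ValueError(f"{name}: each dimension is scored 0-25")
    total = sum(scores.values())        # out of 150
    return round(total / 150 * 100, 1)  # normalised to 100
```

A vendor scoring 20 on every dimension, for instance, totals 120 out of 150 and normalises to 80.0, which lands in the "Credible" band defined under Scoring below.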

01 · Onboarding & Discovery Rigour

Does the MSSP actually understand your environment before they start monitoring it?

0–25 pts

What good looks like

  • Structured discovery process with defined phases
  • Documented asset inventory and data source coverage
  • Formal scope document agreed before go-live
  • Baseline period established before alerting begins
  • Named contacts defined on both sides

Red flags

  • Monitoring started before discovery is complete
  • No formal scope document — or a generic template not tailored to your environment
  • No baseline period — alerts fire from day one
  • Data sources feeding the SIEM not documented

Evidence requested

Onboarding project plan · Scope document · Asset register handoff

02 · Detection Quality & Coverage

Are the detections meaningful, or just noisy compliance theatre?

0–25 pts

What good looks like

  • Use-case library documented and mapped to MITRE ATT&CK (one possible shape is sketched after this list)
  • Detections tuned to your environment — not generic out-of-the-box rules
  • Custom rules developed beyond vendor defaults
  • Coverage gaps honestly disclosed
  • Threat intelligence actively feeding detection logic
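
One plausible shape for a use-case catalogue entry is sketched below. This is an assumption about structure, not a prescribed schema: the field names are invented for illustration, and the example uses a real ATT&CK technique ID (T1059.001, PowerShell) purely as a sample.

```python
from dataclasses import dataclass

@dataclass
class DetectionUseCase:
    """One entry in a use-case catalogue. Field names are illustrative."""
    name: str
    mitre_techniques: list[str]  # ATT&CK technique IDs this detection covers
    data_sources: list[str]      # log sources the rule depends on
    origin: str                  # "vendor default", "tuned", or "custom"
    last_tuned: str              # date of the most recent tuning change
    known_gaps: str = ""         # honestly disclosed coverage limits

# Example entry — the kind of artefact you should be able to ask for.
example = DetectionUseCase(
    name="Suspicious PowerShell execution",
    mitre_techniques=["T1059.001"],  # Command and Scripting Interpreter: PowerShell
    data_sources=["Windows event logs", "EDR telemetry"],
    origin="tuned",
    last_tuned="2024-11-03",
    known_gaps="No coverage for hosts without the EDR agent",
)
```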

Red flags

  • No use-case documentation available
  • Default vendor rules with no evidence of tuning
  • MITRE mapping missing, vague, or unverifiable
  • No process for requesting or adding new detections
  • No threat intelligence integration

Evidence requested

Use-case catalogue · MITRE ATT&CK coverage map · Tuning change log

03 · Triage & Escalation Process

When something fires, does a human actually think about it — or does an alert just become a ticket?

0–25 pts

What good looks like

  • Documented triage process with defined steps
  • SLAs defined by severity — for quality, not just response time (see the sketch after this list)
  • Named escalation path with clear criteria
  • Analyst reasoning recorded in alert records
  • After-hours coverage explicitly documented
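
As an illustration of severity-based SLAs that cover quality as well as speed, here is a hypothetical severity matrix. All field names and values are invented; real terms belong in the contract:

```python
from dataclasses import dataclass

@dataclass
class SeveritySLA:
    """SLA terms for one severity level. Field names are illustrative."""
    acknowledge_within_minutes: int       # classic response-time SLA
    investigation_notes_required: bool    # quality: analyst reasoning recorded
    escalation_rationale_required: bool   # quality: escalate-vs-close documented

# A hypothetical severity matrix, purely for illustration.
SLA_MATRIX = {
    "critical": SeveritySLA(15, True, True),
    "high":     SeveritySLA(60, True, True),
    "medium":   SeveritySLA(240, True, False),
    "low":      SeveritySLA(1440, False, False),
}
```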

Red flags

  • SLAs defined only for response time, not investigation quality
  • No analyst notes on closed alerts
  • Escalation path unclear or undocumented
  • After-hours coverage gaps not disclosed
  • No documented criteria for what gets escalated vs closed

Evidence requested

Triage SOP · Sample alert records (anonymised) · Escalation matrix · After-hours roster or SLA clause

04 · Tuning & Continuous Improvement

Is the service getting sharper over time, or is month 12 identical to month 1?

0–25 pts

What good looks like

  • Formal tuning review cadence — monthly or quarterly
  • False positive rate tracked and evidenced over time (see the sketch after this list)
  • Change log maintained for all rule changes
  • Improvements driven by data, not just client complaints
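
The kind of false-positive evidence worth requesting is simple to model. The sketch below uses invented monthly disposition counts purely to show the shape of the data and the trend question to ask:

```python
# Monthly alert dispositions, from which a false positive rate can be
# computed per period. The numbers below are invented for illustration.
monthly = {
    # month: (alerts_closed_as_false_positive, total_alerts_closed)
    "2024-07": (412, 520),
    "2024-08": (355, 498),
    "2024-09": (290, 471),
}

for month, (false_positives, total) in monthly.items():
    rate = false_positives / total * 100
    print(f"{month}: {rate:.1f}% false positive")

# A tuned service should show this rate trending down across the contract
# period; a flat line is itself a red flag.
```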

Red flags

  • No tuning process documented
  • False positive rate not tracked or disclosed
  • Changes only made reactively when clients push back
  • No evidence of service improvement across the contract period

Evidence requested

Tuning review records · False positive trend data · Change log

05 · Reporting & Client Visibility

Can you actually tell whether the service is working?

0–25 pts

What good looks like

  • Regular operational reports with meaningful metrics
  • Executive summary and technical detail kept separate
  • Trend data — not just point-in-time snapshots
  • Coverage gaps and limitations honestly reported
  • Reports delivered on schedule without chasing

Red flags

  • Reports contain only vanity metrics — alert counts, uptime percentage
  • No trend data across reporting periods
  • Gaps and limitations never disclosed in reports
  • Reports require chasing to receive

Evidence requested

Sample operational report · Reporting cadence documentation

06 · Commercial & Delivery Alignment

Does what you're paying for match what you're actually getting?

0–25 pts

What good looks like

  • Contract scope matches the service actually delivered
  • Pricing tied to defined outcomes, not just effort
  • Change control process documented and followed
  • Exit terms fair, clear, and include data return obligations
  • SLA credits applied automatically — not requiring client escalation

Red flags

  • Scope creep absorbed silently, then invoiced
  • No change control process
  • Exit costs or data return terms buried or absent
  • SLA credits exist on paper but are never actually paid

Evidence requested

Contract scope schedule · Change control process documentation · SLA credit history

Scoring

What your score means

Each dimension scored 0–25 · Total out of 150 · Normalised to 100

Verified · 85–100

Operationally mature — evidence is strong across all dimensions

Credible · 70–84

Solid foundations with minor gaps — low risk with active management

Developing · 50–69

Processes exist but are inconsistently evidenced — monitor closely

At Risk · Below 50

Significant operational gaps identified — escalate or replace
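
Expressed as code, the band boundaries above are just a threshold lookup (a minimal sketch using the thresholds in this table; the function name is ours):

```python
def band(normalised: float) -> str:
    """Map a normalised 0-100 score to the bands defined above."""
    if normalised >= 85:
        return "Verified"
    if normalised >= 70:
        return "Credible"
    if normalised >= 50:
        return "Developing"
    return "At Risk"
```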

Free Download

Get the Framework as a PDF

The complete MSSP Reality Check scoring framework — six dimensions, evidence checklists, red flags, and scoring bands — formatted for use in vendor reviews and board reporting.
