NIS2 Compliance Documentation Audit: How the Scoring Methodology Works



February 16, 2026

Applies to: NIS2 entities (essential and important) in baseline-obligation implementation programs.

Aegister’s scoring methodology translates NIS2 documentary quality into measurable decisions. The model evaluates each requirement point on a 0–4 scale, across 5 dimensions, then aggregates results into maturity bands and remediation priorities. The objective is practical: separate cosmetic compliance from evidence-ready compliance, and give management a clear execution order.

Key Takeaways

  • Scoring is done at requirement-point level, not only at document-title level.
  • Every score combines 5 dimensions: coverage, specificity, traceability, evidence, and approval (where applicable).
  • Findings are prioritized as critical, major, minor, or observation.
  • Appendix B and Appendix C rules require dedicated checks in the scoring logic.

Scope of This Article

This article covers:

  • The scoring architecture used in the Compliance Documentation Audit service.
  • How scores become maturity levels and remediation priorities.
  • How special NIS2 rules (risk linkage and board approvals) are applied.

This article does not cover:

  • Client-specific results.
  • Full proprietary checklists and internal templates.

Regulatory Baseline Used for the Method

Official source | Why it matters in scoring
Legislative Decree 138/2024 | Defines obligations in Articles 23, 24, and 25 and the governance accountability perimeter.
ACN Determination on baseline obligations | Defines baseline measures and requirement points by subject type and category.
ACN Reading Guide for baseline specifications | Clarifies evidence expectations, risk-based clauses (Appendix B), and approval-sensitive items (Appendix C).
ACN NIS baseline page | Provides implementation context and timeline framing for baseline obligations.

For important entities, the ACN framework references 37 measures and 87 requirement points in first-application baseline logic.

Scoring Unit and Calculation Logic

Scoring unit

The atomic unit is a single requirement point (example format: ID.RA-05:p1), not a whole policy.

Requirement-point score

Each requirement point is scored on each of the 5 dimensions; the final requirement-point score is the average of the applicable dimension scores.

If a dimension is not applicable (for example, approval on a point with no approval requirement), it is excluded from the denominator.
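The averaging rule can be sketched in a few lines. This is a minimal illustration of the denominator logic, not the production scoring engine; the dimension names follow the tables in this article, and `None` marking a non-applicable dimension is a convention of this sketch.

```python
from statistics import mean

def requirement_score(dimension_scores: dict) -> float:
    """Average the applicable dimension scores (each 0-4).

    A dimension set to None is treated as not applicable and is
    excluded from the denominator, per the methodology.
    """
    applicable = [s for s in dimension_scores.values() if s is not None]
    if not applicable:
        raise ValueError("at least one dimension must be applicable")
    return mean(applicable)

# Example: approval does not apply to this requirement point,
# so the score is averaged over the remaining four dimensions.
score = requirement_score({
    "coverage": 3, "specificity": 2, "traceability": 3,
    "evidence": 2, "approval": None,
})  # (3 + 2 + 3 + 2) / 4 = 2.5
```

Note that excluding (rather than zeroing) a non-applicable dimension prevents an approval-free requirement point from being penalized for a workflow it does not need.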

0–4 Scoring Scale

Score | Label | Operational meaning
0 | Not addressed | Requirement absent; immediate compliance risk
1 | Partially mentioned | Generic statement without operational depth
2 | Addressed with gaps | Requirement present but materially incomplete
3 | Substantially compliant | Requirement largely covered, with minor gaps
4 | Fully compliant | Requirement fully covered with operational and evidential quality

The 5 Evaluation Dimensions

Dimension | Control question | Typical failure mode
Coverage | Is the requirement actually treated in the document? | Requirement is absent or only implied
Specificity | Are roles, actions, and timings operationally defined? | Principle-only statements
Traceability | Is there explicit traceability to the NIS2 measure/point? | Generic legal references only
Evidence | Are required support artifacts identifiable and usable? | Evidence is cited but not traceable
Approval (where applicable) | Is the governance approval path explicit where required? | Missing approval workflow for board-sensitive items

Evidence-Reference Maturity Sub-Scale

To reduce binary “present/missing” bias, evidence references are also graded for maturity:

Level | Label | Practical interpretation
0 | Absent | No evidence reference
1 | Mentioned without locator | Evidence named but not traceable
2 | Mentioned with locator | Evidence locatable, no explicit NIS2 mapping
3 | Locator + NIS2 mapping | Evidence traceable and mapped to the requirement
4 | Available for verification | Evidence traceable and available in a controlled corpus
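Because each level of the sub-scale strictly presupposes the one below it, the grading reduces to checking four cumulative conditions. A minimal sketch, with the boolean inputs as illustrative names rather than fields of any official checklist:

```python
def evidence_maturity(named: bool, has_locator: bool,
                      mapped_to_nis2: bool, in_controlled_corpus: bool) -> int:
    """Grade an evidence reference on the 0-4 maturity sub-scale.

    Each condition presupposes the previous one: an evidence item
    cannot have a locator without being named, and so on.
    """
    if not named:
        return 0  # Absent
    if not has_locator:
        return 1  # Mentioned without locator
    if not mapped_to_nis2:
        return 2  # Mentioned with locator, no NIS2 mapping
    if not in_controlled_corpus:
        return 3  # Locator + NIS2 mapping
    return 4      # Available for verification
```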

Document Maturity Bands

After requirement-point scoring, document maturity is classified as:

Average score | Maturity level | Executive interpretation
< 1.0 | Inadequate | Substantial rewrite required
1.0–1.9 | Initial | Significant remediation required
2.0–2.9 | Developing | Core coverage present; targeted remediation required
3.0–3.5 | Adequate | Minor completion work required
3.6–4.0 | Optimal | Strong baseline readiness
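The band boundaries above translate directly into a threshold lookup. A minimal sketch of the classification step (the band names and cut-offs come from the table; everything else is illustrative):

```python
def maturity_band(avg_score: float) -> str:
    """Map a document's average requirement score to its maturity level."""
    if avg_score < 1.0:
        return "Inadequate"   # substantial rewrite required
    if avg_score < 2.0:
        return "Initial"      # significant remediation required
    if avg_score < 3.0:
        return "Developing"   # targeted remediation required
    if avg_score <= 3.5:
        return "Adequate"     # minor completion work required
    return "Optimal"          # strong baseline readiness
```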

Finding Severity Model

Severity | Typical score zone | Action expectation
Critical | 0 | Immediate remediation track
Major | 1 | Priority remediation track
Minor | 2 | Planned remediation track
Observation | 3 | Improvement recommendation

Special Rules in the Scoring Engine

1) Appendix B risk-linkage rule

The ACN baseline reading logic identifies specific requirement points that must show explicit linkage to risk assessment outcomes. In practical scoring, those items are checked with stricter linkage criteria based on the official baseline interpretation.

2) Appendix C approval-sensitive rule set

Items requiring governing-body approval are evaluated with explicit approval-path controls in document architecture and governance workflow design. In audit planning, we track 11 approval-sensitive checkpoints aligned with Appendix C interpretation in baseline documentation.

3) Draft-state handling

When documentation is explicitly in draft state, missing final signatures are treated as a status condition, while missing approval architecture (roles, approval path, revision governance) remains a scored gap.
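The three rules above can be sketched as checks layered on top of the dimension scores. The flag names, thresholds, and message texts below are illustrative conventions of this sketch; the official Appendix B and Appendix C checklists are not reproduced here.

```python
def apply_special_rules(point_id: str, rule_flags: set,
                        dims: dict, is_draft: bool) -> list:
    """Apply the three special scoring-engine rules to one requirement point."""
    notes = []
    # Rule 1: Appendix B points must link explicitly to risk-assessment
    # outcomes; here we require at least traceability level 3.
    if "appendix_b" in rule_flags and dims.get("traceability", 0) < 3:
        notes.append(f"{point_id}: GAP risk-assessment linkage not explicit")
    # Rule 2: Appendix C points need a defined approval path; its absence
    # is a scored gap even in draft documents.
    if "appendix_c" in rule_flags and dims.get("approval", 0) == 0:
        notes.append(f"{point_id}: GAP approval path not defined")
    # Rule 3: in draft state, a missing final signature on an otherwise
    # defined approval path is a status condition, not a scored gap.
    elif "appendix_c" in rule_flags and is_draft:
        notes.append(f"{point_id}: STATUS final signature pending (draft)")
    return notes
```

The gap/status distinction in rules 2 and 3 is the key design point: draft state excuses a missing signature, never a missing approval architecture.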

Practical Execution Workflow (Audit Side)

  1. Define document perimeter and subject type.
  2. Map each document to relevant NIS2 requirement points.
  3. Score each point on the 5 dimensions.
  4. Flag non-applicable dimensions explicitly.
  5. Aggregate scores at requirement and document levels.
  6. Assign severity class to each gap.
  7. Build a remediation queue by dependency and risk impact.
  8. Produce executive summary and operational backlog.
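Steps 5 and 6 of the workflow (aggregation and severity assignment) can be sketched as follows. Truncating a fractional point score to its integer severity zone, and ordering the queue by ascending score, are assumptions of this sketch rather than stated rules of the methodology:

```python
from statistics import mean

# Severity classes by score zone, per the finding severity model above.
SEVERITY = {0: "critical", 1: "major", 2: "minor", 3: "observation"}

def aggregate(point_scores: dict) -> dict:
    """Roll requirement-point scores up to document level and classify gaps.

    Any point scoring below 4 becomes a finding; severity is read from
    the integer zone of its score (an assumption of this sketch).
    """
    gaps = {pid: SEVERITY[int(s)] for pid, s in point_scores.items() if s < 4}
    return {
        "document_average": round(mean(point_scores.values()), 2),
        "gaps": gaps,
        # Most severe (lowest-scoring) points first in the queue.
        "remediation_queue": sorted(gaps, key=lambda pid: point_scores[pid]),
    }
```

In practice step 7 would also weigh dependencies and risk impact when ordering the queue; the score-only ordering here is a simplification.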

What the Method Produces for Management

  • A requirement-level scoring matrix.
  • A document maturity map.
  • A prioritized remediation queue with critical-path logic.
  • A board-ready summary linking documentary risk to governance actions.

Common Scoring Mistakes to Avoid

  • Scoring whole policies without requirement-point granularity.
  • Treating citation of a policy title as equivalent to operational evidence.
  • Confusing legal mention with measure-point traceability.
  • Deferring approval-path design to the final publication phase.
  • Ignoring cross-document consistency in incident and continuity flows.

FAQ

Is this a legal opinion?

No. It is a compliance-readiness methodology designed to operationalize documentary controls against the official NIS2 baseline framework.

Can the same model be used for essential and important entities?

Yes, but requirement mapping and expected documentary depth must follow the applicable baseline set in official ACN documentation.

Does a high score mean no further work is needed?

No. A high score indicates stronger documentary readiness. Technical control validation and implementation testing remain necessary.

What if some requirements are unclear in source material?

Unclear points are resolved against the official baseline documentation and its annexes: the ACN Determination and the ACN Reading Guide.

Conclusion

A robust scoring methodology turns NIS2 documentation review into an execution discipline. By combining requirement-point scoring, evidence maturity checks, and governance-sensitive controls, organizations can prioritize the right remediation sequence and reduce last-mile compliance risk before supervisory scrutiny windows tighten.
