Applies to: NIS2 entities (essential and important) in baseline-obligation implementation programs.
Aegister’s scoring methodology translates NIS2 documentary quality into measurable, decision-ready results. The model evaluates each requirement point on a 0–4 scale across 5 dimensions, then aggregates the results into maturity bands and remediation priorities. The objective is practical: separate cosmetic compliance from evidence-ready compliance, and give management a clear execution order.
Key Takeaways
- Scoring is done at requirement-point level, not only at document-title level.
- Every score combines 5 dimensions: coverage, specificity, traceability, evidence, and approval (where applicable).
- Findings are prioritized as critical, major, minor, or observation.
- Appendix B and Appendix C rules require dedicated checks in the scoring logic.
Scope of This Article
This article covers:
- The scoring architecture used in the Compliance Documentation Audit service.
- How scores become maturity levels and remediation priorities.
- How special NIS2 rules (risk linkage and board approvals) are applied.
This article does not cover:
- Client-specific results.
- Full proprietary checklists and internal templates.
Regulatory Baseline Used for the Method
| Official source | Why it matters in scoring |
|---|---|
| Legislative Decree 138/2024 | Defines the obligations in Articles 23, 24, and 25 and the governance accountability perimeter. |
| ACN Determination on baseline obligations | Defines baseline measures and requirement points by subject type and category. |
| ACN Reading Guide for baseline specifications | Clarifies evidence expectations, risk-based clauses (Appendix B), and approval-sensitive items (Appendix C). |
| ACN NIS baseline page | Provides implementation context and timeline framing for baseline obligations. |
Under first-application baseline logic, the ACN framework specifies 37 measures and 87 requirement points for important entities.
Scoring Unit and Calculation Logic
Scoring unit
The atomic unit is a single requirement point (example format: ID.RA-05:p1), not a whole policy.
Requirement-point score
Each requirement point receives a score on 5 dimensions. Final requirement score = average of applicable dimensions.
If one dimension is not applicable, it is excluded from the denominator.
0–4 Scoring Scale
| Score | Label | Operational meaning |
|---|---|---|
| 0 | Not addressed | Requirement absent; immediate compliance risk |
| 1 | Partially mentioned | Generic statement without operational depth |
| 2 | Addressed with gaps | Requirement present but materially incomplete |
| 3 | Substantially compliant | Requirement largely covered, with minor gaps |
| 4 | Fully compliant | Requirement fully covered with operational and evidential quality |
The 5 Evaluation Dimensions
| Dimension | Control question | Typical failure mode |
|---|---|---|
| Coverage | Is the requirement actually treated in the document? | Requirement is absent or only implied |
| Specificity | Are roles, actions, and timings operationally defined? | Principle-only statements |
| Traceability | Is there explicit traceability to NIS2 measure/point? | Generic legal references only |
| Evidence | Are required support artifacts identifiable and usable? | Evidence is cited but not traceable |
| Approval (where applicable) | Is governance approval path explicit where required? | Missing approval workflow for board-sensitive items |
Evidence-Reference Maturity Sub-Scale
To reduce binary “present/missing” bias, evidence references are also graded for maturity:
| Level | Label | Practical interpretation |
|---|---|---|
| 0 | Absent | No evidence reference |
| 1 | Mentioned without locator | Evidence named, not traceable |
| 2 | Mentioned with locator | Evidence locatable, no explicit NIS2 mapping |
| 3 | Locator + NIS2 mapping | Evidence traceable and mapped to requirement |
| 4 | Evidence available for verification | Evidence traceable and available in controlled corpus |
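Because each level of the sub-scale builds on the previous one, the grading can be expressed as a cumulative ladder. A sketch under that assumption, with hypothetical attribute names not taken from the methodology:

```python
def evidence_maturity(
    referenced: bool,           # any evidence reference at all
    has_locator: bool,          # reference includes a locator (path, ID, registry entry)
    mapped_to_nis2: bool,       # locator is explicitly mapped to a NIS2 requirement
    in_controlled_corpus: bool, # artifact is available in a controlled corpus
) -> int:
    """Grade an evidence reference on the 0-4 maturity sub-scale (cumulative)."""
    if not referenced:
        return 0
    if not has_locator:
        return 1
    if not mapped_to_nis2:
        return 2
    if not in_controlled_corpus:
        return 3
    return 4
```

For example, evidence that is named and locatable but has no explicit NIS2 mapping grades at level 2 regardless of whether it sits in a controlled repository, because each level presupposes the ones below it.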
Document Maturity Bands
After requirement-point scoring, document maturity is classified as:
| Average score | Maturity level | Executive interpretation |
|---|---|---|
| <1.0 | Inadequate | Substantial rewrite required |
| 1.0–1.9 | Initial | Significant remediation required |
| 2.0–2.9 | Developing | Core coverage present, targeted remediation required |
| 3.0–3.5 | Adequate | Minor completion work required |
| 3.6–4.0 | Optimal | Strong baseline readiness |
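The band thresholds above can be sketched as a simple classifier. One assumption is needed: the published bands leave gaps between 1.9 and 2.0, 2.9 and 3.0, and 3.5 and 3.6, so this sketch treats them as contiguous half-open intervals (e.g., an average of 2.95 counts as Developing, and anything above 3.5 as Optimal):

```python
def maturity_band(avg_score: float) -> str:
    """Map a document's average requirement-point score to a maturity level."""
    if avg_score < 1.0:
        return "Inadequate"
    if avg_score < 2.0:
        return "Initial"
    if avg_score < 3.0:
        return "Developing"
    if avg_score <= 3.5:
        return "Adequate"
    return "Optimal"

print(maturity_band(2.5))  # Developing
```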
Finding Severity Model
| Severity | Typical score zone | Action expectation |
|---|---|---|
| Critical | 0 | Immediate remediation track |
| Major | 1 | Priority remediation track |
| Minor | 2 | Planned remediation track |
| Observation | 3 | Improvement recommendation |
Special Rules in the Scoring Engine
1) Appendix B risk-linkage rule
The ACN Reading Guide identifies specific requirement points that must show explicit linkage to risk-assessment outcomes. In practical scoring, these items are checked against stricter linkage criteria derived from the official baseline interpretation.
2) Appendix C approval-sensitive rule set
Items requiring governing-body approval are evaluated against explicit approval-path controls covering document architecture and governance workflow design. In audit planning, we track 11 approval-sensitive checkpoints aligned with the Appendix C interpretation in the baseline documentation.
3) Draft-state handling
When documentation is explicitly in draft state, missing final signatures are treated as a status condition, while missing approval architecture (roles, approval path, revision governance) remains a scored gap.
Practical Execution Workflow (Audit Side)
- Define document perimeter and subject type.
- Map each document to relevant NIS2 requirement points.
- Score each point on the 5 dimensions.
- Flag non-applicable dimensions explicitly.
- Aggregate scores at requirement and document levels.
- Assign severity class to each gap.
- Build a remediation queue by dependency and risk impact.
- Produce executive summary and operational backlog.
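The queue-building step above can be sketched as an ordering over findings. This is an illustrative simplification: it orders by severity and then by risk impact, while the full method also weighs cross-document dependencies. The requirement-point IDs other than ID.RA-05:p1 and the `risk_impact` field are hypothetical:

```python
# Severity classes in descending urgency, per the finding severity model.
SEVERITY_ORDER = {"Critical": 0, "Major": 1, "Minor": 2, "Observation": 3}

def build_remediation_queue(findings: list[dict]) -> list[dict]:
    """Order findings by severity class, then by risk impact (highest first)."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_ORDER[f["severity"]], -f["risk_impact"]),
    )

findings = [
    {"point": "ID.RA-05:p1", "severity": "Minor", "risk_impact": 2},
    {"point": "GV.PO-01:p2", "severity": "Critical", "risk_impact": 5},
    {"point": "PR.AA-03:p1", "severity": "Critical", "risk_impact": 3},
]
queue = build_remediation_queue(findings)
print([f["point"] for f in queue])
# ['GV.PO-01:p2', 'PR.AA-03:p1', 'ID.RA-05:p1']
```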
What the Method Produces for Management
- A requirement-level scoring matrix.
- A document maturity map.
- A prioritized remediation queue with critical-path logic.
- A board-ready summary linking documentary risk to governance actions.
Common Scoring Mistakes to Avoid
- Scoring whole policies without requirement-point granularity.
- Treating citation of a policy title as equivalent to operational evidence.
- Confusing legal mention with measure-point traceability.
- Deferring approval-path design to the final publication phase.
- Ignoring cross-document consistency in incident and continuity flows.
FAQ
Is this a legal opinion?
No. It is a compliance-readiness methodology designed to operationalize documentary controls against the official NIS2 baseline framework.
Can the same model be used for essential and important entities?
Yes, but requirement mapping and expected documentary depth must follow the applicable baseline set in official ACN documentation.
Does a high score mean no further work is needed?
No. A high score indicates stronger documentary readiness. Technical control validation and implementation testing remain necessary.
What if some requirements are unclear in source material?
Refer to the official baseline documentation and its annexes: the ACN Determination and the ACN Reading Guide.
Conclusion
A robust scoring methodology turns NIS2 documentation review into an execution discipline. By combining requirement-point scoring, evidence maturity checks, and governance-sensitive controls, organizations can prioritize the right remediation sequence and reduce last-mile compliance risk before supervisory scrutiny windows tighten.
Related reading
- Compliance Documentation Audit for NIS2 Baseline Obligations: Method Overview
- NIS2 Evidence Matrix and Board-Approval Readiness: Practical Audit Method
- Prioritizing NIS2 Audit Findings: From Gap List to Remediation Execution
- Aegister NIS2 Compliance Service
- Aegister Virtual CISO Service