Governing Security Risk Assurance at Sizewell C Nuclear
As embedded Cyber Security Consultant at Sizewell C Nuclear, I owned and governed the end-to-end Security Risk Assessment process for assigned technology products and supplier engagements, ensuring rigorous identification and mitigation of threats to UK Critical National Infrastructure.
The Challenge
Sizewell C is a multi-billion pound nuclear infrastructure programme operating under the most stringent regulatory and security obligations in the UK. The programme generates a continuous pipeline of technology procurement decisions, each carrying potential security risk to CNI systems, sensitive data, and nuclear safety operations.
- Scale and velocity — dozens of concurrent technology workstreams, each generating SRA submissions with varying levels of rigour and threat awareness
- Inconsistent submission quality — Business Architects and Project Owners submitting SRAs with underweighted risk ratings, missing threat context, or insufficient supplier evidence
- Supplier assurance gaps — vendors providing self-attested security claims with no independent verification or OSINT-informed challenge
- Regulatory exposure — the programme's CNI status demands alignment to NIST CSF 2.0, ISO/IEC 27001:2022, NCSC Cloud Security Principles, and ONR SyAPs, requiring a structured and defensible assessment approach
- No standardised AI and emerging technology risk coverage — SRA submissions were not consistently addressing AI/ML usage, data residency, or third-party model risk
The Approach
The Stephens SRA Framework
To bring rigour, consistency, and defensibility to the SRA process, I developed and implemented a structured assessment framework — the Stephens SRA Framework — built around a master prompt methodology that drives every assessment through a standardised 13-section output structure. This covers executive summary, supplier OSINT review, architecture analysis, AI/ML risk, a 5x5 risk scoring matrix aligned to the Sizewell C Comprehensive Risk Impact Scoring Matrix, and a MoSCoW-prioritised security requirements table.
The framework operates on a "trust but verify" principle:
- An initial security questionnaire is issued to the supplier at the outset
- Responses are reviewed and validated against independent OSINT — covering legal entity, certifications, breach history, CVE exposure, financial stability, and sub-processor transparency
- Where supplier evidence is unavailable or insufficient, assurance gaps are explicitly stated rather than assumed away
- Risk scoring is applied across five impact domains: Financial Viability, Legal & Regulatory Compliance, Safety & Environmental, Operational, and Reputational
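The scoring step above can be sketched in code. This is a minimal illustration of a 5x5 likelihood-by-impact matrix applied across the five named domains; the band thresholds and function names are assumptions for illustration, not the actual Sizewell C Comprehensive Risk Impact Scoring Matrix.

```python
# Illustrative 5x5 risk scoring across the five impact domains named in
# the text. Band thresholds below are assumptions, not the programme's
# actual scoring matrix.

DOMAINS = (
    "Financial Viability",
    "Legal & Regulatory Compliance",
    "Safety & Environmental",
    "Operational",
    "Reputational",
)

def score(likelihood: int, impact: int) -> int:
    """Score a single domain on a 1-5 x 1-5 matrix."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact

def band(s: int) -> str:
    """Map a 1-25 score to a rating band (thresholds are illustrative)."""
    if s >= 20:
        return "Very High"
    if s >= 12:
        return "High"
    if s >= 6:
        return "Medium"
    return "Low"

def overall(domain_scores: dict[str, tuple[int, int]]) -> str:
    """Overall rating is driven by the worst-scoring domain."""
    worst = max(score(l, i) for l, i in domain_scores.values())
    return band(worst)
```

The worst-domain rule reflects the CNI context described above: a supplier that scores well operationally but poorly on Safety & Environmental cannot average its way to a lower overall rating.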
SRA Review and Approval
- Each SRA assigned to me is reviewed end-to-end, with risk ratings challenged where they do not reflect the CNI threat environment
- I hold delegated authority from the CISO to approve completed SRAs once material risks have been identified and agreed mitigations are in place
- Escalation pathways are clearly defined for Very High and High residual risks requiring senior leadership or CISO sign-off
- Every assessment concludes with a formal recommendation: Accept / Accept with Conditions / Not Recommended
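The escalation and outcome rules above can be expressed as a short decision sketch. The mapping from residual risk band to formal recommendation is an illustrative assumption (the text defines the escalation trigger and the three outcomes, not the exact mapping).

```python
# Hedged sketch of the escalation pathway and the three formal outcomes
# described above. The band-to-recommendation mapping is illustrative.

ESCALATION_BANDS = {"Very High", "High"}

def requires_escalation(residual_band: str) -> bool:
    """Very High and High residual risks need senior leadership or CISO sign-off."""
    return residual_band in ESCALATION_BANDS

def recommendation(residual_band: str, mitigations_in_place: bool) -> str:
    """Map residual risk to Accept / Accept with Conditions / Not Recommended.
    This mapping is an assumption for illustration, not the programme's rule."""
    if residual_band in ("Low", "Medium") and mitigations_in_place:
        return "Accept"
    if residual_band == "High" and mitigations_in_place:
        return "Accept with Conditions"  # subject to escalated sign-off
    return "Not Recommended"
```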
Stakeholder Assurance and Upskilling
- Worked directly with Business Architects and Project Owners to raise SRA submission quality at source, reducing rework and accelerating approval timelines
- Provided structured feedback on underweighted or incomplete submissions, building security risk literacy across non-security programme stakeholders
- Maintained a clear audit trail for every approved SRA, supporting programme governance and regulatory inspection readiness
Programme Metrics
| Metric | Detail |
|---|---|
| SRAs reviewed and approved | 26–50 across active programme workstreams |
| Submissions challenged or returned for rework | 25–50% of all submissions received |
| Escalations to CISO | Raised where residual risk remained High or Very High post-mitigation |
| Assessment framework coverage | 100% of assigned SRAs delivered through the Stephens SRA Framework |
| Regulatory alignment | Every approved SRA evidenced against NIST CSF 2.0, ISO 27001:2022 and NCSC Cloud Security Principles |
| AI/ML risk coverage | Embedded as standard across all assessments — previously an unaddressed gap |
The 25–50% challenge-and-rework rate is the standout figure. In a CNI environment, a low rework rate would suggest either exceptional submission quality or insufficient scrutiny. This rate signals active, expert-level challenge rather than a rubber-stamp approval process.
The Results
- Delivered a repeatable, CNI-grade SRA framework adopted as the standard assessment approach for assigned technology products across the programme
- Identified and escalated multiple instances of underweighted risk ratings submitted by BAs and Project Owners, preventing under-assured technology from entering the programme environment
- Reduced assessment rework through direct stakeholder engagement and structured submission guidance
- Ensured every approved SRA carries a defensible, evidence-based assurance position aligned to NIST CSF 2.0, ISO 27001:2022, and NCSC Cloud Security Principles
- Established consistent AI/ML risk coverage as a standard component of every assessment — a gap that previously went unaddressed
Technologies & Frameworks Utilised
| Framework / Tool | Application |
|---|---|
| NIST CSF 2.0 | Primary risk framework for all SRA assessments |
| ISO/IEC 27001:2022 | Control alignment and Annex A mapping |
| NCSC Cloud Security Principles | Cloud and SaaS supplier assurance |
| ONR SyAPs | Nuclear-specific safety and security alignment |
| ISO/IEC 27005 | Risk assessment methodology |
| OSINT Tooling | Independent supplier verification and breach analysis |
| Sizewell C Risk Impact Scoring Matrix | 5x5 risk scoring across five impact domains |
| MoSCoW Prioritisation | Security requirements classification and treatment tracking |
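The MoSCoW-prioritised requirements table in the framework's output can be sketched as a small data structure. Field names and the "open Musts block Accept" rule are assumptions for illustration, not the actual template.

```python
# Illustrative sketch of a MoSCoW-prioritised security requirements
# entry with treatment tracking. Field names are assumptions.
from dataclasses import dataclass

PRIORITIES = ("Must", "Should", "Could", "Won't")

@dataclass
class SecurityRequirement:
    ref: str
    description: str
    priority: str          # one of PRIORITIES
    status: str = "Open"   # treatment tracking, e.g. Open / Agreed / Closed

    def __post_init__(self) -> None:
        if self.priority not in PRIORITIES:
            raise ValueError(f"priority must be one of {PRIORITIES}")

def musts_outstanding(reqs: list[SecurityRequirement]) -> list[SecurityRequirement]:
    """Must-have requirements not yet closed (assumed to gate an Accept outcome)."""
    return [r for r in reqs if r.priority == "Must" and r.status != "Closed"]
```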
The SRA Framework is built on a simple principle: security risk assessments at CNI level must be evidence-led, independently verified, and produce a decision — not a list of observations. Every assessment I approve carries my name and my professional judgement. That accountability is deliberate.