
EDCC 2026
21st European Dependable Computing Conference
7-10 April 2026
Canterbury, UK

Banner photography © Mark Wheadon

Keynotes

Fission for Algorithms? AI, Nuclear Power, and the Politics of Acceleration

Dr Sofia Guerra FREng
Chief Executive Officer and Co-Founder, Themistoclea

Date: Wednesday, April 8th, 2026

Abstract: Generative AI is driving a rapid surge in energy demand, with data-centre electricity consumption projected to grow by more than 160% by 2030. In response, AI companies are increasingly turning to nuclear power, seeking to secure tens of gigawatts of new capacity on timelines fundamentally misaligned with the technical, regulatory, and safety realities of nuclear energy. This talk examines how this mismatch is producing a wave of nuclear “fast-tracking” initiatives that threaten to undermine long-standing nuclear safety, security, and governance norms. We analyse three converging trends that are apparent in some nations: (1) policy efforts to weaken nuclear regulatory standards and reduce regulator independence in the name of urgency and national security; (2) proposals to use generative AI systems to accelerate nuclear licensing, safety analysis, and commissioning; and (3) the promotion of advanced and novel reactor technologies—such as SMRs, AMRs, and fusion—on ambitious timelines that outpace current technical and regulatory readiness. Although these initiatives are not universal, they risk eroding established safety principles, increasing public exposure to radiation, and weakening nuclear safety. At the same time, we distinguish these trajectories from responsible applications of machine learning in nuclear engineering, including predictive maintenance, non-destructive testing (NDT), materials modelling, and sensor anomaly detection. We emphasise integrating ML with existing safety analysis, including hazard and risk analysis that considers system-level behaviour and operational constraints. The talk argues that responsible ML use in nuclear systems depends not on accelerating timelines, but on clearly articulated safety justifications that demonstrate how these tools reduce risk and support, rather than substitute for, expert judgment and regulatory review.

Dr Sofia Guerra FREng
Dr Sofia Guerra has made key contributions to the safety and assurance of digital systems and is internationally recognised as an expert in the dependability assessment and justification of digital systems in the nuclear industry. As a leader and entrepreneur, she led Adelard's growth until its acquisition in 2022 and set up Themistoclea in 2025. She pioneered the UK approach to justifying digital devices for nuclear facilities, with impact on major programmes including UK nuclear new-build projects. She has shaped international approaches to safety assurance, authored IAEA reports, and is a key contributor to innovation in AI for nuclear applications.

Protecting Citizens from Harms in the Era of AI

Professor Aad van Moorsel
University of Birmingham

Date: Thursday, April 9th, 2026

Abstract: We discuss interdisciplinary, user-centred research in protecting citizens from digital harms. Protecting against modern-day harms is a wicked problem, both because so many stakeholders may be involved (technology providers, users, perpetrators, lawmakers, possible partners, intersectionality, etc.) and because harms are often unintended consequences of the practical use and evolution of technology. Addressing the problem of complex harms requires an interdisciplinary approach, combining expertise in computer science, design, business, psychology, sociology, law, and ethics. The AGENCY project (2022-25), funded within the UK's strategic research programme on protecting citizens from online harms, addressed how citizen agency affects vulnerability to harm and trust in online interactions. The outcomes of AGENCY provide approaches to mitigating harms within smart homes, privacy harms associated with femtech, and disinformation harms affecting segments of our society. Many of these harms are further exacerbated by the ongoing proliferation of AI, and we reflect on the impact of AI on harms and on researching their mitigation.

Professor Aad van Moorsel
Aad van Moorsel is Professor in Decentralised Systems and Head of the School of Computer Science at the University of Birmingham. He has been the Principal Investigator on various large research grants in human-centred cyber security and system dependability. Prior to joining Birmingham, he established Newcastle University's NCSC/EPSRC Academic Centre of Excellence in Cyber Security Research and pioneered degree apprenticeships as Director of the Institute of Coding at Newcastle. He was also a postdoctoral researcher in the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. During his time at Lucent Technologies/Bell Labs Research and HP Labs he gained ample experience of industrial research and research management, and he co-founded the startup CloudIdentity Ltd in human-centric identity management, which has since been successfully sold. He is the author of over 200 peer-reviewed research articles and of 4 US patents.

Which Network Assumptions Make Blockchain Consensus Work?

Dr Sara Tucci-Piergiovanni
CEA List, Université Paris-Saclay

Date: Friday, April 10th, 2026

Abstract: Almost twenty years have passed since the introduction of the Nakamoto consensus protocol, underpinning the Bitcoin blockchain. Often criticised for its enormous energy consumption, Bitcoin has nonetheless operated continuously in an open network since its launch in 2009. Nakamoto’s consensus protocol challenged what was known about consensus, in particular the impossibility of solving consensus in an asynchronous network in the presence of even a single faulty process, as shown by the FLP result of Fischer, Lynch, and Paterson. And yet, Bitcoin worked—and still works—at Internet scale. This raises fundamental questions: was Bitcoin really solving consensus? What network assumptions does it rely on? More recent protocols departed from the original Nakamoto design, adapting Byzantine consensus protocols to open networks while tolerating periods of asynchrony, i.e., adverse network behaviour such as congestion or temporary partitions. This significantly reduced energy consumption, but at the cost of reduced openness. In this talk, I outline the historical evolution of blockchain consensus protocols, from Bitcoin to the Ethereum Merge, highlighting their security properties and trade-offs. I focus in particular on the role of the network models for these protocols, which underpin their security. I argue that these network models may be overly theoretical, and that it remains unclear whether they accurately capture real networks and adversarial behaviours.

Dr Sara Tucci-Piergiovanni
Dr Sara Tucci-Piergiovanni is a CEA Fellow and Senior Researcher at CEA LIST, Université Paris-Saclay, where she leads a research team working on decentralized and trustworthy computing. She has authored over 100 publications in the field of distributed systems, spanning both theoretical foundations and applied aspects. She is internationally recognized for her contributions to the theory and design of blockchain and distributed ledger technologies. Her work has contributed to the transition from energy-hungry proof-of-work systems to secure and energy-efficient proof-of-stake blockchains, with a particular focus on the security of consensus protocols and incentive mechanisms.