AI Governance Projects
Analysing AI incident scenarios and emergency responses
Sven is a Senior AI Policy Fellow at the Institute for AI Policy and Strategy and works in the European AI Governance team at The Future Society. Before that, he worked as Head of Research Operations at a research institute at Oxford University. He also worked at a non-profit in the sustainability sector and as a management consultant. He has a PhD in mathematics.
This project investigates potential AI incidents and how governance mechanisms and emergency procedures could be used to prevent them. It will explore concrete incident scenarios and the existing safeguards against them. Based on this, it should develop proposals for companies and/or governments on emergency procedures that could be implemented to contain these incidents. The focus can be on technical exploration of these possible incidents or on a broader procedural exploration, depending on the interests and expertise of the group.
Defining Intelligence Recursion
Nicholas is an Ellison Scholar at the Ellison Institute of Technology, an Expert Collaborator at the MIT AI Risk Initiative, and a General Committee Member at the Oxford AI Safety Initiative. He works on AI policy and experimental capability evaluations. He has reviewed papers for NeurIPS, collaborated with the IMF to improve World Bank forecasts, and researched a report whose recommendation was adopted by the US and UK governments.
Many experts are particularly concerned about AI models that can substantially automate their own development, allowing for increasingly rapid capability gains. This project examines how to tell when such intelligence recursion has begun. Despite its central importance to governance proposals like MAIM, no consensus definition of the concept exists. Candidates range from the fraction of AI-written code and compute thresholds to the share of cognitive R&D work done by AI and observed capability gains. The project will evaluate these definitions to recommend which one(s) governments like the US should adopt. It aims to be the authoritative source for when drastic actions against competitors are justified: not too early, not too late.
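As a rough illustration of how such candidate definitions could be operationalised, the sketch below checks a few hypothetical indicators against equally hypothetical thresholds; the metric names and values are assumptions for illustration, not recommendations from the project.

```python
# Toy sketch: which candidate definitions of "intelligence recursion" would
# trigger on a given set of observed metrics. All names and threshold values
# are illustrative assumptions, not the project's proposals.

CANDIDATE_THRESHOLDS = {
    "frac_ai_written_code": 0.90,        # share of the lab's own code written by AI
    "training_compute_flop": 1e27,       # total training compute of the frontier model
    "frac_ai_rnd_tasks": 0.50,           # share of cognitive R&D tasks done by AI
    "capability_gain_per_quarter": 0.3,  # normalised benchmark gain per quarter
}

def recursion_flags(observed: dict[str, float]) -> dict[str, bool]:
    """Return, for each candidate definition, whether the observed metrics
    would classify this as the start of intelligence recursion."""
    return {
        name: observed.get(name, 0.0) >= threshold
        for name, threshold in CANDIDATE_THRESHOLDS.items()
    }

# Hypothetical observation: different definitions can disagree,
# which is exactly the ambiguity the project aims to resolve.
print(recursion_flags({
    "frac_ai_written_code": 0.95,
    "training_compute_flop": 5e26,
    "frac_ai_rnd_tasks": 0.40,
    "capability_gain_per_quarter": 0.35,
}))
```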
Are AI systems ready to provide early warning of critical AI risk?
Paolo is a PhD candidate at Teesside University supervised by Professor The Anh Han. His research studies the dynamics of tech races and the effective design of early warning systems for AI risk. As part of Modeling Cooperation, he has built software tools including an interactive web app for exploring tech race scenarios and an app used to facilitate the Intelligence Rising workshops, which teach ideas in AI governance to key decision makers.
AI capabilities are often assessed more thoroughly after deployment, using evaluation methods that didn’t exist while the model was being developed. As AI systems grow more capable, will our evaluation infrastructure keep up?
To assess this question, we’ll create a dataset that tracks key metrics from frontier AI company safety reports alongside the broader literature on evaluating AI systems for dangerous capabilities. The dataset will track the risk thresholds labs identify as critical, evaluation coverage across risk domains, evaluation methodology, and timing patterns. We’ll analyse the data as we go to answer some of the following questions: (i) What risk thresholds do organisations identify as concerning? (ii) On current trends, how prepared will AI safety reporting systems be to handle emerging risks over 2026-2030?
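As a rough sketch of what one row of such a dataset could look like, the example below defines a hypothetical schema in Python; all field names and example values are illustrative assumptions rather than the project's final design.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema for one entry in the dataset. Field names are
# illustrative assumptions based on the metrics described above.
@dataclass
class SafetyReportEntry:
    developer: str               # frontier AI company publishing the report
    model: str                   # model the safety report covers
    report_date: date            # publication date of the report
    risk_domain: str             # e.g. "cyber", "bio", "autonomy"
    critical_threshold: str      # risk threshold the lab identifies as critical
    evaluations_run: list[str]   # dangerous-capability evaluations reported
    methodology_notes: str       # how the evaluation was conducted
    eval_timing: str             # e.g. "pre-deployment", "post-deployment"

# Example entry (placeholder values, not real data):
example = SafetyReportEntry(
    developer="ExampleLab",
    model="example-model-1",
    report_date=date(2025, 1, 1),
    risk_domain="cyber",
    critical_threshold="uplift beyond expert baseline",
    evaluations_run=["capture-the-flag suite"],
    methodology_notes="automated scoring with human spot checks",
    eval_timing="pre-deployment",
)
```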