Analysing AI incident scenarios and emergency responses
Meet Sven!
Sven is a Senior AI Policy Fellow at the Institute for AI Policy and Strategy and works in the European AI Governance team at The Future Society. Before that, he worked as Head of Research Operations at a research institute at Oxford University. He also worked at a non-profit in the sustainability sector and as a management consultant. He has a PhD in mathematics.
Project Description
Various AI risks have been described and investigated, and work is ongoing to define AI incidents, classify different levels of AI threats (from hazards to crises), and monitor incidents as they occur.
The first part of this project would be to describe and develop 2-4 concrete incident scenarios. There are a variety of options for which direction to take, depending on the expertise and interests of the group. The description will include an analysis of existing safeguards, such as frameworks published by AI companies, government powers to contain such incidents, etc.
In the second part of the project, the group would develop proposals for emergency plans that AI companies could adopt to contain these incidents. At this stage, the project could take various directions: it could investigate more general frameworks for emergency plans that might apply not only to these specific incidents but more broadly. It could also look at other actors, such as governments, downstream providers, and insurers, and examine how these could be involved in providing a framework, regulatory or voluntary, to incentivise companies to prevent AI incidents or to help contain them directly.
Existing work in the area: