Defining Intelligence Recursion
Bio
Nicholas is an Ellison Scholar at the Ellison Institute of Technology, an Expert Collaborator at the MIT AI Risk Initiative, and a General Committee Member at the Oxford AI Safety Initiative. He works on AI policy and experimental capability evaluations. He has reviewed papers for NeurIPS, collaborated with the IMF to improve World Bank forecasts, and conducted research for a report whose recommendation was adopted by the US and UK governments.
Project Summary
Many experts are particularly concerned about AI models that can substantially automate their own development, enabling increasingly rapid capability gains. This project examines how to tell when such intelligence recursion has begun. Despite its central importance to governance proposals like MAIM, no consensus definition of the concept exists. Candidate definitions range from the fraction of AI-written code and compute thresholds to the share of cognitive R&D work performed by AI and observed capability gains. The project will evaluate these candidates and recommend which one(s) governments such as the US should adopt. It aims to be the authoritative source on when drastic actions against competitors are justified: neither too early nor too late.
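To make "operationalization" concrete, the sketch below shows one hypothetical threshold-style definition combining the candidate indicators named above. All metric names, thresholds, and data structures here are illustrative assumptions, not project outputs; choosing and justifying such values is precisely the open question the project addresses.

```python
from dataclasses import dataclass

@dataclass
class RnDSnapshot:
    """Hypothetical observables for one frontier developer over a reporting period."""
    ai_written_code_fraction: float      # share of merged code authored by AI tools
    ai_cognitive_rnd_share: float        # share of cognitive R&D work performed by AI
    capability_gain_per_quarter: float   # normalized benchmark-score improvement

# Illustrative thresholds only; real values would need empirical and policy justification.
THRESHOLDS = {
    "ai_written_code_fraction": 0.90,
    "ai_cognitive_rnd_share": 0.50,
    "capability_gain_per_quarter": 0.25,
}

def recursion_flags(snapshot: RnDSnapshot) -> dict[str, bool]:
    """Return which candidate indicators exceed their (hypothetical) thresholds."""
    return {
        name: getattr(snapshot, name) >= limit
        for name, limit in THRESHOLDS.items()
    }

if __name__ == "__main__":
    example = RnDSnapshot(0.95, 0.40, 0.30)
    print(recursion_flags(example))
    # e.g. {'ai_written_code_fraction': True, 'ai_cognitive_rnd_share': False, ...}
```

A single-threshold rule like this is only one family of candidate definitions; the project would compare it against alternatives such as compute-based triggers or observed capability-gain rates.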
Ideal Candidate
Strong analytical writing skills and comfort engaging with technical and policy AI literature. Background in computer science, math, policy, philosophy, and/or economics preferred (in no particular order). Prior exposure to topics like compute governance, frontier model regulation, or international AI coordination is valuable but not required. The core requirement is the ability to make fuzzy concepts precise.
Skills Developed
Translating technical AI concepts into governance-relevant definitions
Evaluating operationalizations of important but abstract concepts
Synthesizing technical and policy research
Writing targeted at government and policy actors
Responsibilities
Conduct literature review of existing intelligence recursion definitions and related concepts (recursive self-improvement, automation of AI R&D, etc.)
Develop evaluation criteria for comparing candidate definitions
Draft analysis of leading operationalizations against those criteria
Contribute to final recommendations paper/report/memo