I work mostly on AI Oversight regimes, with a particular focus on incident reporting. In this role I advise technical bodies on how best to implement legislation and have also provided technical expertise to lawmakers.
I have a Master's in Logic from the University of Amsterdam, have previously worked on technical AI Safety research at MIT and ETH, and was a research fellow at Yale's Digital Ethics Center for a year. Currently I work for a non-profit think tank called CARMA and am associated with the Oxford Martin AI Governance Initiative.
Leading a research consortium of fifteen organisations to determine what concerning behaviour, whether exhibited within AI labs or by AI systems, can legally be reported, which channels are most effective for reporting this information, and where the most critical gaps in current legislation lie.
Investigating whether the current mechanisms for internally reporting AI risks to the US national security apparatus are functioning well.
Assisted the EU AI Office in establishing its whistleblowing channel, which can be found here. Advised on its confidentiality policy, internal handling procedures, FAQ, etc.
Presented at a private gathering of the Korean, Singaporean, and Japanese AISIs.
An expert conference to design policy interventions for a post-AGI society.
Spoke on a panel about the importance of submarine cables and the dangers they face.
Collaborative design of an international governance regime for AI.
A conference to rapidly evaluate the economic impacts frontier AI technologies could have by 2030.
I regularly supervise research fellows in AI governance and policy research. If you are interested in working with me, please get in touch. While I don't currently have direct funding available, we may be able to identify funding opportunities together.