Shallow Review of Technical AI Safety, 2025

Black-box make-AI-solve-it

Focus: using existing models to improve and align further models.
Theory of Change:"LLMs don't seem very dangerous and might scale to AGI, things are generally smooth, relevant capabilities are harder than alignment, assume no mesaoptimisers, assume that zero-shot deception is hard, assume a fundamentally humanish ontology is learned, assume no simulated agents, assume that noise in the data means that human preferences are not ruled out, assume that alignment is a superficial feature, assume that tuning for what we want will also get us to avoid what we don't want. Maybe assume that thoughts are translucent."
General Approach:Engineering
Target Case:Average Case
Some names:Nora Belrose, Lewis Hammond, Geoffrey Irving
Outputs:
Neural Interactive Proofs (Lewis Hammond, Sam Adam-Day)
MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking (Sebastian Farquhar, Vikrant Varma, David Lindner, David Elson, Caleb Biddulph, Ian Goodfellow, Rohin Shah); a toy sketch follows this list
Debate Helps Weak-to-Strong Generalization (Hao Lang, Fei Huang, Yongbin Li); the basic debate loop is sketched after this list
Mechanistic Anomaly Detection for "Quirky" Language Models (David O. Johnston, Arkajyoti Chakraborty, Nora Belrose)
AI Debate Aids Assessment of Controversial Claims (Salman Rahman, Sheriff Issaka, Ashima Suvarna, Genglin Liu, James Shiffer, Jaeyoung Lee, Md Rizwan Parvez, Hamid Palangi, Shi Feng, Nanyun Peng, Yejin Choi, Julian Michael, Liwei Jiang, Saadia Gabriel)
An alignment safety case sketch based on debate (Marie Davidsen Buhl, Jacob Pfau, Benjamin Hilton, Geoffrey Irving)
Automating AI Safety: What we can do today (Matthew Shinkle, Eyon Jang, Jacques Thibodeau)
Superalignment with Dynamic Human Values (Florian Mai, David Kaczér, Nicholas Kluge Corrêa, Lucie Flek)
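
Three of the outputs above (Lang et al., Rahman et al., and Buhl et al.) build on the debate protocol, in which two copies of a model argue for opposing answers and a weaker judge adjudicates. Below is a minimal, hedged sketch of one fixed-round debate loop, not any of these papers' implementations; `query_model` is a hypothetical stand-in for whatever chat-completion call is available.

```python
from typing import Callable, List, Tuple

def debate(question: str,
           answers: Tuple[str, str],
           query_model: Callable[[str], str],
           rounds: int = 3) -> str:
    """Run a fixed-round, two-player debate and return the judge's verdict.

    Toy sketch of the debate protocol: `query_model` (hypothetical) maps a
    prompt string to a model response string.
    """
    transcript: List[str] = []
    for rnd in range(1, rounds + 1):
        # Each debater is committed to one answer and sees the full transcript.
        for side, answer in enumerate(answers):
            prompt = (
                f"Question: {question}\n"
                f"Argue that the answer is: {answer}\n"
                "Transcript so far:\n" + "\n".join(transcript) + "\n"
                f"Give your strongest argument for round {rnd}."
            )
            transcript.append(f"Debater {side + 1} (for '{answer}'): "
                              + query_model(prompt))
    # The judge (ideally a weaker or more trusted model) picks a winner.
    judge_prompt = (
        f"Question: {question}\n"
        "Debate transcript:\n" + "\n".join(transcript) + "\n"
        f"Which answer is better supported: '{answers[0]}' or '{answers[1]}'? "
        "Reply with that answer verbatim."
    )
    return query_model(judge_prompt)
```

The hope, as in the safety-case sketch above, is that honest positions are systematically easier to defend under cross-examination, so a judge weaker than the debaters can still reward truthfulness.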
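
MONA (Farquhar et al., above) targets multi-step reward hacking by making the optimization myopic: each action is reinforced only by its immediate reward plus an overseer's approval of how sensible the action looks for the long run, with no reward flowing back from later steps. Here is a toy sketch under assumed interfaces; `policy`, `env`, and `overseer_approval` are hypothetical objects, not the paper's code.

```python
def train_mona(policy, env, overseer_approval,
               episodes: int = 100, lr: float = 0.1,
               weight: float = 0.5) -> None:
    """Toy MONA-style training loop (assumed interfaces, not the paper's code).

    `policy.sample(state)` returns an action, `policy.update(...)` performs a
    bandit-style update, `env.step(action)` returns (next_state, reward, done),
    and `overseer_approval(state, action)` scores the action's apparent
    long-run value without observing its consequences.
    """
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy.sample(state)
            next_state, reward, done = env.step(action)
            # Non-myopic approval: the overseer judges whether the action
            # *looks* like good long-horizon behaviour, outcome unseen.
            approval = overseer_approval(state, action)
            # Myopic return: effectively gamma = 0, so the agent is never
            # reinforced for setting up a multi-step reward hack.
            step_return = reward + weight * approval
            policy.update(state, action, step_return, lr)
            state = next_state
```

Because credit never propagates across steps, strategies whose payoff only materializes several actions later are not directly reinforced; the overseer's foresight substitutes for the missing long-horizon reward signal.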