Agent foundations
Develop philosophical clarity and mathematical formalizations of building blocks that might be useful for plans to align strong superintelligence, such as agency, optimization strength, decision theory, abstractions, concepts, etc.; an illustrative sketch of one such formalization appears after the outputs below.
Theory of Change: Rigorously understand optimization processes and agents, and what it means for them to be aligned in a substrate-independent way → identify impossibility results and necessary conditions for aligned optimizer systems → use this theoretical understanding to eventually design safe architectures that remain stable and safe under self-reflection
General Approach: Cognitive
Target Case: Worst Case
See Also:
Some names: Abram Demski, Alex Altair, Sam Eisenstat, Alfred Harwood, Daniel C, Dalcy K, José Pedro Faustino
Outputs:
Limit-Computable Grains of Truth for Arbitrary Computable Extensive-Form (Un)Known Games — Cole Wyeth, Marcus Hutter, Jan Leike, Jessica Taylor
Blog Posts – Universal Algorithmic Intelligence — Cole Wyeth
Clarifying "wisdom": Foundational topics for aligned AIs to prioritize before irreversible decisions
Off-switching not guaranteed — Sven Neth
Formalizing Embeddedness Failures in Universal Artificial Intelligence — Cole Wyeth, Marcus Hutter
What Is The Alignment Problem? — johnswentworth
Report & retrospective on the Dovetail fellowship — Alex Altair
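As a deliberately toy illustration of what a mathematical formalization of one of these building blocks can look like, consider a standard "optimization power" measure in the spirit of Yudkowsky's Measuring Optimization Power essay. The notation (base measure $\mu$, utility $U$, achieved outcome $x^*$) is a gloss for this sketch, not any of the above researchers' own definitions:

\[
\mathrm{OP}(x^*) \;=\; -\log_2 \, \Pr_{x \sim \mu}\!\bigl[\, U(x) \ge U(x^*) \,\bigr]
\]

Read as: how improbable, in bits under the default distribution $\mu$, is it to do at least as well as the outcome the system actually reached. A system that reliably steers the world into the top $1/1024$ of outcomes exerts 10 bits of optimization. Work in this agenda asks, among other things, when definitions like this are well-posed and how they behave for embedded, self-reflective agents.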