Multi-agent first
Aligning to context (8 papers)
Align AI directly to the role of participant, collaborator, or advisor within our best real human practices and institutions, rather than aligning AI to separately representable goals, rules, or utility functions.
Aligning to the social contract (8 papers)
Generate AIs' operational values from 'social contract'-style formalisms of ideal civic deliberation, and from the rulesets for civic actors that follow from them.
Theory for aligning multiple AIs (12 papers)
Use realistic game-theory variants (e.g. evolutionary game theory, computational game theory), or develop alternative game theories, to describe and predict the collective and individual behaviours of AI agents in multi-agent scenarios.
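As a toy illustration of the evolutionary-game-theory framing this topic refers to, the sketch below simulates replicator dynamics for a symmetric two-strategy game. The payoff matrix and strategy labels are hypothetical, chosen only to show the mechanics; they are not taken from any of the papers.

```python
import numpy as np

# Hypothetical Prisoner's-Dilemma-like payoffs for two strategies,
# "cooperate" (row 0) and "defect" (row 1); illustrative only.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x        # fitness of each strategy against the population mix
    f_bar = x @ f    # average population fitness
    return x + dt * x * (f - f_bar)

x = np.array([0.5, 0.5])   # initial population shares
for _ in range(5000):
    x = replicator_step(x, A)
# With these payoffs, the defecting strategy takes over the population.
```

A sketch like this predicts collective outcomes (here, convergence to mutual defection) that are not visible from any single agent's objective, which is the kind of multi-agent analysis the topic calls for.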
Tools for aligning multiple AIs (12 papers)
Develop tools and techniques for designing and testing multi-agent AI scenarios, for auditing real-world multi-agent AI dynamics, and for aligning AIs in multi-AI settings.
Aligned to whom? (9 papers)
Develop technical protocols for taking seriously the plurality of human values, cultures, and communities when aligning AI to "humanity".
Aligning what? (13 papers)
Develop alternatives to agent-level models of alignment by treating human-AI interactions, AI-assisted institutions, AI economic or cultural systems, drives within a single AI, and other causal or constitutive processes as themselves subject to alignment.