Library intelligence
Leaders Watch
Tracking how influential figures' positions evolve in public—through statements, interviews, governance moves, and the documents we host. This is not a directory; it is an editorial timeline you can extend entry by entry. Use theme and role filters to slice the set (e.g. governance, alignment).
How we assign tiers
Tiers reflect present-day leverage over frontier AI and the global safety debate—who can move capability, deployment, norms, or regulation at the largest scale right now, not lifetime achievement alone. Evidence on each profile still has to be anchored to primary sources; a profile's tier is not a quality score for its individual entries.
- Tier 1 (featured) — Heads of major frontier labs or hyperscaler AI divisions; founders of standalone frontier-model companies; and a small set of researchers whose public statements routinely reset institutional and media language on catastrophic or systemic risk. If their narrative or product choices shift, many others have to react.
- Tier 2 — Highly influential specialists: alignment and safety researchers, norm-shaping advocates, and leaders whose AI footprint is significant but narrower than the tier-1 bar above. Still first-class profiles with full timelines—not a "B list" for rigour.
Matching leaders
Researcher
Geoffrey Hinton
Independent research leadership
A central figure in modern neural networks whose public risk framing in 2023 helped legitimise extinction-risk and governance language well beyond specialist circles.
Open timeline →
Researcher
Stuart Russell
Academic governance leadership
Longstanding public interpreter of AI risk for policymakers and general audiences—often with formal models of uncertainty and control.
Open timeline →