Full-Time

Principal – Special Projects

Center for AI Safety

51-200 employees

Nonprofit focused on AI safety research, policy, and advocacy

Compensation Overview

$150k - $250k/yr

San Francisco, CA, USA

In Person

Category
Business & Strategy
Required Skills
Data Analysis
Requirements
  • A track record of owning complex, ambiguous initiatives and delivering outsized results.
  • The ability to scope new problem spaces quickly: defining goals, success metrics, and constraints through research, interviews, and good judgment.
  • Consistently good judgment under uncertainty. You make sound calls with incomplete information, know when to move fast and when to slow down, and leadership can trust your decisions without reviewing every detail.
  • Comfort operating with high autonomy: you find the path forward even when one isn’t obvious, and you escalate the right things at the right time.
  • Strong analytical skills for evaluating feasibility, impact, and risk across very different domains.
  • Excellent written and verbal communication. You can present complex ideas clearly to both technical and non-technical audiences.
  • Genuine interest in AI safety and the willingness to develop deep domain knowledge.
Responsibilities
  • Own projects and initiatives end-to-end: identify opportunities, set strategy, build plans, and execute.
  • Scope new projects by defining objectives, deliverables, timelines, and budgets.
  • Coordinate across researchers, vendors, policy partners, and external collaborators to move complex work forward.
  • Stay agile when priorities shift: re-scope, re-prioritize, and adjust without losing momentum.
  • Monitor risks and surface critical issues early, always with a recommended path forward.

CAIS is a San Francisco-based nonprofit focused on reducing AI’s societal risks through safety research, building a community of researchers, and promoting safety standards. It works by conducting research, fostering collaboration across academia, government, industry, and the public, and issuing influential statements and guidance to shape policy and public understanding. It differentiates itself through a multidisciplinary approach and emphasis on safety standards and policy influence rather than products or commercial aims. Its goal is to mitigate AI risks while guiding responsible progress so that AI’s benefits are realized safely.

Company Size

51-200

Headquarters

San Francisco, CA

Founded

2022

Simplify's Take

What believers are saying

  • WEKA powers the CAIS Compute Cluster, enabling large-scale safety research.
  • The CAIS global statement on AI risk, signed by 600 experts, has shaped the AI risk discourse.
  • CAIS's field-building programs are effectively expanding the AI safety researcher community.

What critics are saying

  • The Stanford Center could outcompete CAIS by leveraging university prestige and resources.
  • NIST's CAISI may come to dominate standards work, sidelining CAIS's nonprofit advocacy.
  • OpenAI's safety claims could erode donor funding for CAIS by 2026.

What makes Center for AI Safety unique

  • CAIS provides free GPU-accelerated compute cluster for AI safety researchers.
  • CAIS sponsored California SB 1047, a bill on frontier AI model safety.
  • CAIS offers an Intro to ML Safety course and philosophy fellowships.

Benefits

Flexible Work Hours