Full-Time

AI Emerging Risks Analyst

OpenAI

5,001-10,000 employees

Develops safe AI models and tools

Compensation Overview

$198k - $320k/yr

San Francisco, CA, USA

In Person

Category
Business & Strategy
Required Skills
Data Science
Requirements
  • Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work, with a focus on analyzing a range of emerging risks, situating them in strategic context, and translating them into actionable intelligence
  • Demonstrated ability to analyze complex online harms (e.g., harassment, coordinated abuse, scams, influence operations, brand safety issues) and convert all-source analysis into concrete, prioritized recommendations
  • Strong analytical skills and comfort working with both qualitative and quantitative inputs, including casework, incident reports, OSINT, product context, and policy frameworks, as well as basic metrics and trends developed in partnership with data science (e.g., harm prevalence, severity profiles, exposure, escalation rates)
  • Strong adversarial and product intuition, with the ability to foresee how actors might adapt AI tools for misuse and to evaluate how product mechanics, incentives, and UX decisions influence risk
  • Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguous spaces and support decision-making; a minimal scoring sketch follows this list
  • Understanding of how to apply foresight methodologies such as horizon scanning, scenario planning, tabletop exercises, and simulations
  • Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, including pushing for clarity on tradeoffs and following through on mitigation work
  • Excellent written and verbal communication skills, including experience producing concise, executive-ready briefs and explaining sensitive, complex issues in grounded, concrete terms
  • Comfort operating in fast-changing, ambiguous environments: you can identify weak signals, form hypotheses, test them quickly, and adjust as the product and threat landscape evolves
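
The severity/likelihood matrices and prioritization models mentioned above are standard risk-analysis tools rather than anything specific to this posting. As a minimal illustrative sketch (the harm categories and scores below are invented for the example, not drawn from the role), a small risk register might be ranked by a severity x likelihood score like this:

    # Illustrative only: a toy severity/likelihood matrix over a hypothetical
    # harm taxonomy. Names and scores are invented, not taken from the posting.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        severity: int    # 1 (minor) .. 5 (critical)
        likelihood: int  # 1 (rare)  .. 5 (frequent)

        @property
        def score(self) -> int:
            # Classic severity x likelihood matrix: higher score = higher priority.
            return self.severity * self.likelihood

    def prioritize(risks: list[Risk]) -> list[Risk]:
        """Rank risks so the most urgent items come first."""
        return sorted(risks, key=lambda r: r.score, reverse=True)

    register = [
        Risk("AI-assisted scam content", severity=4, likelihood=4),
        Risk("Coordinated influence operation", severity=5, likelihood=2),
        Risk("Brand-safety adjacency issue", severity=2, likelihood=4),
    ]
    for risk in prioritize(register):
        print(f"{risk.score:>2}  {risk.name}")

A real prioritization framework would also weigh factors such as exposure and trajectory (as the responsibilities below describe), but the ranking mechanics stay the same.
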
Responsibilities
  • Map and prioritize emerging risks at the frontier of AI
  • Through upstream and external scanning, build and continuously refine a clear picture of emerging signals and trends that could affect the AI ecosystem
  • Design and maintain harm taxonomies that provide foresight and warning about how AI harms and misuse may manifest over the next 0-24 months and beyond
  • Contribute to an evergreen frontier risk register and prioritization framework that surfaces the top issues by severity, prevalence, exposure, and trajectory
  • Detect and deep dive into emerging abuse patterns
  • Create comprehensive approaches to horizon scanning, competitive benchmarking, and external narrative/risk sense-making
  • Stay current on abuse trends ranging from state-actor misuse to criminal activity, drawing on the work of internal and cross-functional partners
  • Connect individual incidents into system-level stories about actors, incentives, product design weaknesses, and cross-product spillover, spotting or even hypothesizing these incidents before they hit our surfaces whenever possible
  • Turn analysis into actionable risk intelligence
  • Translate findings into clear, ranked risk lists and concrete proposals for mitigations that product, safety, and policy teams can execute on
  • Work with Global Affairs and Communications teams to share findings in ways that reinforce OpenAI’s role as a leader in the online safety ecosystem
  • Track whether mitigation work is landing: follow key indicators, pressure-test assumptions, and push for course corrections when the data demands it
  • Build early warning and measurement capabilities
  • Help define the core metrics and signals that indicate whether fast-evolving AI environments are safe (e.g., key harm prevalence, severity distributions, escalation rates, brand safety issues); a minimal computation sketch follows this list
  • Work with data science and visualization colleagues to shape monitoring views and dashboards that highlight leading indicators and unusual changes, using off-platform signals to determine whether they are manifesting in user behavior or abuse patterns
  • Pioneer new uses of our own technologies to scale detection and transform workflows
  • Provide strategic analysis and future-looking perspectives
  • Produce concise but comprehensive strategic intelligence estimates that give full context on a given interest area, including confidence levels grounded in observed data, to inform judgments and recommendations
  • Run scenario analyses that explore how AI harms might evolve over the next 6–24 months (e.g., how scams may fundamentally evolve alongside the proliferation of agentic AI; how state actors may seek to misuse new scientific capabilities of frontier models)
  • Help design and run tabletop exercises for internal and partner audiences that distill manifest and latent risks at the frontier of AI and identify mitigations
  • Benchmark OpenAI’s risk profile and mitigations against external incidents and other platforms, highlighting gaps, strengths, and opportunities
  • Shape safety readiness for new products
  • Contribute to product readiness and launch reviews by laying out expected abuse modes based on a broad, upstream understanding of the threat landscape
  • Turn risk insights into practical guidance for internal teams (product, marketing, partnerships, comms) and, where appropriate, external partners using OpenAI technologies in social and brand contexts
  • Develop reusable frameworks, playbooks, FAQs, and briefing materials that make it easier for the broader organization to understand AI risks and respond consistently
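
The core metrics named above (harm prevalence, severity distributions, escalation rates) are standard trust-and-safety measurements. As a rough illustration only (the sample data and field layout are made up, not drawn from OpenAI systems or this posting), harm prevalence and escalation rate could be computed from a batch of reviewed items like this:

    # Illustrative sketch: computing harm prevalence and escalation rate from a
    # small, made-up sample of reviewed items. Nothing here reflects real data
    # or internal definitions.
    # Each reviewed item: (verdict, escalated_to_specialist_team)
    reviewed = [
        ("benign", False),
        ("harmful", True),
        ("benign", False),
        ("harmful", False),
        ("benign", False),
    ]

    total = len(reviewed)
    harmful = [item for item in reviewed if item[0] == "harmful"]

    # Prevalence: share of reviewed items judged harmful.
    prevalence = len(harmful) / total

    # Escalation rate: share of harmful items escalated for deeper review.
    escalation_rate = sum(1 for _, escalated in harmful if escalated) / len(harmful)

    print(f"Harm prevalence: {prevalence:.0%}")    # 40%
    print(f"Escalation rate: {escalation_rate:.0%}")  # 50%

A production pipeline would draw these figures from sampled review data and break them down by severity and surface, but the underlying definitions are this simple.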

OpenAI conducts AI research and deployment to build advanced AI models and tools that help people automate tasks, be more creative, and make better decisions. Its products include ChatGPT, a conversational AI that can write, code, tutor, and assist in interactive tasks, and Sora, which can generate videos from text prompts. OpenAI’s models typically run through cloud-based services and subscriptions, with licensing and partnerships for broader use. The company operates a capped-profit model to balance generating revenue with ensuring safety, ethics, and long-term societal benefits. Its approach emphasizes safety, responsible deployment, and collaboration with researchers, governments, and institutions. The goal is to ensure artificial general intelligence, when it arrives, benefits all of humanity and minimizes risks.

Company Size

5,001-10,000

Company Stage

Late Stage VC

Total Funding

$196B

Headquarters

San Francisco, California

Founded

2015

Simplify's Take

What believers are saying

  • $4B backed Deployment Company accelerates enterprise AI adoption with partners like Bain.
  • Trusted Access grants Deutsche Telekom and BBVA GPT-5.5-Cyber for cybersecurity.
  • $400B valuation from secondary sale enables $30M staff cash-outs, retaining talent.

What critics are saying

  • Anthropic's Mythos secures Pentagon contracts, blocking OpenAI's Daybreak revenue.
  • Doubled GPT-5.5 API pricing to $5/$30 per 1M tokens shifts developers to Claude.
  • EU blacklists GPT-5.5-Cyber access in 3-6 months, halting Trusted Access program.

What makes OpenAI unique

  • GPT-5.5 launches April 2026 with 1M context for complex professional tasks.
  • Daybreak platform integrates GPT-5.5-Cyber and Codex for cyber defense.
  • Deployment Company acquires Tomoro, adding 150 engineers for enterprise AI.

Benefits

Health insurance

Dental and vision insurance

Flexible spending account for healthcare and dependent care

Mental healthcare service

Fertility treatment coverage

401(k) with generous matching

20-week paid parental leave

Life insurance (complimentary)

AD&D insurance (complimentary)

Short-term/long-term disability insurance (complimentary)

Optional buy-up life insurance

Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)

Annual learning & development stipend

Regular team happy hours and outings

Daily catered lunch and dinner

Travel to domestic conferences

Growth & Insights and Company News

Headcount

6 month growth: -2%
1 year growth: 3%
2 year growth: 2%

Daring Fireball
May 8th, 2026
Y Combinator’s Stake in OpenAI

The fact that Paul Graham personally has billions of dollars at stake with OpenAI doesn’t mean that his public opinion on Sam Altman’s trustworthiness and leadership is invalid. But it certainly seems like the sort of thing that ought to be disclosed when quoting Graham as an Altman character reference.

Bloomberg L.P.
Apr 21st, 2026
OpenAI launches ChatGPT Images 2.0 with improved chart and diagram creation

OpenAI is releasing ChatGPT Images 2.0, an updated AI image-generating software designed to create accurate charts and scientific diagrams. The company aims to make its technology more appealing to professionals. Rolling out Tuesday through ChatGPT and Codex AI coding assistant, the new model improves instruction-following and detail incorporation when generating images. It can produce visuals across multiple styles and render text in various languages. The update represents OpenAI's effort to expand its AI capabilities beyond general use cases into professional applications requiring technical precision and accuracy.

Bloomberg L.P.
Apr 17th, 2026
OpenAI loses head of science initiatives and Sora AI video team leader

OpenAI's head of science initiatives and the leader of its Sora AI video team are leaving the company, adding to recent executive departures as the firm reorganises its product portfolio. The exits continue a pattern of senior leadership changes at the artificial intelligence company.

Bloomberg L.P.
Apr 16th, 2026
OpenAI unveils GPT-5.4 to tackle enterprise trust and governance concerns

OpenAI is addressing enterprise adoption challenges with GPT-5.4 "Cyber", focusing on security, trust and governance issues. Erica Brescia, managing director at Redpoint Ventures and OpenAI backer, discussed the development, emphasising that the AI cyber race centres on governance rather than purely technological advancement. The move represents OpenAI's effort to overcome barriers preventing widespread enterprise adoption of its AI systems by prioritising security features in its latest model release.

Bloomberg L.P.
Apr 16th, 2026
OpenAI launches GPT-Rosalind AI model for drug discovery to rival Google

OpenAI has launched GPT-Rosalind, an AI model designed to accelerate drug discovery and life sciences research. The model aims to extract insights from large datasets and help translate scientific studies into healthcare applications. Initially available as a research preview to select business customers, GPT-Rosalind's early users include pharmaceutical company Amgen, vaccine maker Moderna and bioscience research nonprofit the Allen Institute. The launch positions OpenAI alongside other technology companies entering the drug discovery field, as the industry seeks to demonstrate AI's potential for scientific breakthroughs. The ChatGPT maker announced the model's release on Thursday.