Full-Time

Safeguards Enforcement Analyst

Safety Evaluations

Anthropic

5,001-10,000 employees

Develops reliable, interpretable AI systems

Compensation Overview

$230k - $270k/yr

Washington, DC, USA | San Francisco, CA, USA | New York, NY, USA

Hybrid

Location-based hybrid policy: at least 25% time in office.

Category
Operations & Logistics
Required Skills
Data Analysis
Requirements
  • Bachelor's degree, or an equivalent combination of education, training, and/or experience
  • Background in a field relevant to the role, as demonstrated through coursework, training, or professional experience
Responsibilities
  • Support model launch readiness by running evaluations, monitoring and interpreting results, and surfacing regressions or unexpected behavior changes to relevant stakeholders
  • Partner closely with policy and domain experts throughout the evaluation lifecycle — from identifying risks and scoping the right evaluation approach, to coordinating creation of new evals and ensuring existing ones remain current with evolving policies, threat vectors, and model capabilities
  • Work with cross-functional stakeholders to help manage evaluation outcomes, including interpreting results and driving mitigations where needed
  • Think strategically about eval quality to build processes and eval paradigms that keep evaluations unsaturated, high-signal, and insightful as models improve
  • Build out processes and frameworks for creating product-specific evaluations as Anthropic's product surface area expands
  • Help design and scope tooling improvements that accommodate evolving eval needs and expand self-serve eval creation and iteration for non-technical users
  • Write and maintain rigorous documentation for evaluation creation, execution, and interpretation as the team builds out eval tooling and processes
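The regression-surfacing workflow described in the first responsibility can be sketched in miniature. This is a hypothetical illustration only; the function names, data shapes, and threshold are invented for this sketch and are not Anthropic's actual eval tooling:

```python
# Hypothetical sketch: compare per-policy-area pass rates between two model
# snapshots and surface areas where eval performance dropped. All names and
# thresholds here are illustrative, not Anthropic's actual tooling.

REGRESSION_THRESHOLD = 0.05  # flag drops larger than 5 percentage points


def pass_rate(results: list[bool]) -> float:
    """Fraction of eval cases the model handled correctly."""
    return sum(results) / len(results) if results else 0.0


def find_regressions(
    baseline: dict[str, list[bool]],
    candidate: dict[str, list[bool]],
    threshold: float = REGRESSION_THRESHOLD,
) -> dict[str, tuple[float, float]]:
    """Return policy areas where the candidate snapshot's pass rate
    fell by more than `threshold` relative to the baseline."""
    regressions = {}
    for area, base_results in baseline.items():
        base = pass_rate(base_results)
        cand = pass_rate(candidate.get(area, []))
        if base - cand > threshold:
            regressions[area] = (base, cand)
    return regressions


baseline = {"cyber": [True] * 9 + [False], "bio": [True] * 10}
candidate = {"cyber": [True] * 9 + [False], "bio": [True] * 8 + [False] * 2}
print(find_regressions(baseline, candidate))
# {'bio': (1.0, 0.8)}
```

In practice the interesting work is in the judgment layer on top of a check like this: deciding which drops are real regressions versus eval noise or saturation, and who needs to hear about them.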
Desired Qualifications
  • Experience in trust and safety, content operations, policy enforcement, or a related operational role at a technology company
  • Ability to thrive in ambiguous, fast-moving environments
  • Experience building processes, workflows, or programs from scratch (zero-to-one work)
  • Strong program management instincts, naturally creating structure around complex, multi-stakeholder efforts by tracking timelines, dependencies, and deliverables to keep work on track
  • Eagerness to expand your technical toolkit, including adopting internal tools and AI-assisted workflows (e.g., Claude Code) to accelerate your work
  • Ability to manage multiple concurrent workstreams across different domain areas without losing track of details; strong prioritization and context-switching are essential when deadlines and priorities shift quickly
  • A strong generalist profile, comfortable moving fluidly across different types of work and switching contexts throughout the day
  • Comfort making judgment calls with incomplete information and escalating appropriately when needed
  • Clear, concise communication, both in writing and when working cross-functionally
Strong candidates may also have:
  • Experience operating under tight, high-stakes timelines — such as product launch cycles, incident response, or regulatory deadlines — where information and priorities can shift with little notice
  • Experience coordinating across engineering, policy, and product teams to translate findings into concrete action
  • Experience building and maintaining SOPs, runbooks, and operational documentation in fast-changing environments
  • Proficiency with data tools (SQL, dashboards, spreadsheets) sufficient to maintain and improve workflows
  • Comfort working with sensitive content areas as part of eval creation or enforcement review responsibilities

Anthropic focuses on AI research to build reliable, interpretable, and steerable AI systems. Its main product, Claude, is an AI assistant designed to handle tasks at any scale for clients across industries, delivered through deployment and licensing along with specialized AI R&D services. Claude combines natural language processing, human feedback, reinforcement learning, and interpretability techniques to produce a capable, controllable assistant that can help with a wide range of tasks. The company differentiates itself from competitors by prioritizing safety, transparency, and controllability, emphasizing reliability, interpretability of model behavior, and user-controlled steerability in its AI systems. Anthropic's goal is to make AI systems that people can trust and use effectively to improve operations and decision-making across sectors.

Company Size

5,001-10,000

Company Stage

Late Stage VC

Total Funding

$77.3B

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Japan's megabanks gain access to Claude Mythos for operations by the end of May 2026.
  • Anthropic launched 12 legal plugins on May 12, 2026, attracting 20,000 professionals.
  • Thomson Reuters integrates Claude with CoCounsel for 1 million users in summer 2026.

What critics are saying

  • A Japan FSA working group may delay Mythos banking deployments by 3-6 months.
  • Voided Forge and Hiive trades could trigger Delaware litigation within 6-12 months.
  • EU AI Act audits could halt Claude Mythos European sales by Q3 2026.

What makes Anthropic unique

  • Anthropic pioneers Constitutional AI and RLHF for model alignment.
  • Responsible Scaling Policy mandates safety thresholds before deployments.
  • Claude Platform on AWS operates independently outside hyperscaler boundaries.

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-3%

1 year growth

-3%

2 year growth

1%
Ars Technica
Apr 21st, 2026
Mozilla: Anthropic's Mythos AI model finds 271 zero-day bugs in Firefox 150

Mozilla has discovered 271 security vulnerabilities in Firefox 150 using early access to Anthropic's Mythos Preview AI model. The findings represent a significant increase from the 22 bugs detected by Anthropic's Opus 4.6 model in Firefox 148 last month. Firefox CTO Bobby Holley said Mythos is "every bit as capable" as the world's best security researchers, whilst eliminating the need to "concentrate many months of costly human effort to find a single bug". He believes AI tools like Mythos tilt the cybersecurity balance towards defenders by making vulnerability discovery cheaper. Anthropic released Mythos Preview to a limited group of industry partners earlier this month. Mozilla CTO Raffi Krikorian argues such tools are particularly crucial for open source projects, which often depend on under-resourced volunteer maintenance for security.

Bloomberg L.P.
Apr 21st, 2026
Anthropic's Mythos AI sparks fear and hope over cybersecurity threats to global finance

Anthropic's new AI model Mythos has sparked concern amongst policymakers at International Monetary Fund meetings over its potential to accelerate sophisticated cyberattacks on the global financial system. However, its developers argue the technology could provide banks with their strongest defence yet. What distinguishes Mythos is its ability to chain multiple security weaknesses into coordinated attacks, effectively automating complex cyber intrusions. This capability could significantly expand the pool of potential attackers in the near term. The model's creators emphasise a longer-term benefit: the same technology could enable banks to detect and patch vulnerabilities faster than ever, potentially shifting the balance towards defenders if widely adopted. The dual-use nature of Mythos has created both panic and optimism in boardrooms and governments regarding global financial system security.

Bloomberg L.P.
Apr 17th, 2026
Indian fintechs push Anthropic for early access to 'dangerous' Mythos AI model

Indian fintech companies including One97 Communications, Razorpay Software and Pine Labs are pushing Anthropic for early access to Mythos, the AI model that has raised global concerns about cyberattack risks. The firms want to test Mythos on their own systems to detect vulnerabilities following Anthropic's announcement of a limited rollout. The San Francisco-based AI developer considers the model too dangerous for wider release but major Indian financial technology companies are seeking early access to assess potential security threats to their platforms.

Bloomberg L.P.
Apr 16th, 2026
US government prepares to give federal agencies access to Anthropic's Mythos AI model

The US government is preparing to provide major federal agencies with access to Anthropic's new AI model, Mythos, according to a memo reviewed by Bloomberg News. Gregory Barbaccia, federal chief information officer at the White House Office of Management and Budget, informed Cabinet department officials on Tuesday that OMB is establishing protections to enable agencies to use the closely guarded AI tool. The move comes amid concerns that the powerful model could significantly increase cybersecurity risks. OMB is working to set up appropriate safeguards before rolling out access to the system across government departments.

Bloomberg L.P.
Apr 16th, 2026
Anthropic's Mythos AI model raises cybersecurity alarms for banks and governments

Anthropic's new Mythos AI model is causing concern among banks, tech giants and governments over its potential implications for cybersecurity and the internet's future. The model has prompted a scramble amongst major institutions to understand its capabilities and risks. Details about the specific features raising alarms were not disclosed in the source material.