Full-Time

Policy and Enforcement Lead

Safeguards Emerging Products

Anthropic

1,001-5,000 employees

Develops reliable and interpretable AI systems

Compensation Overview

$295k - $345k/yr

Expert

H1B Sponsorship Available

San Francisco, CA, USA

Currently, we expect all staff to be in one of our offices at least 25% of the time.

Category
Applied Machine Learning
AI & Machine Learning
Requirements
  • Proven ability to thrive in dynamic environments where priorities evolve rapidly and success requires adaptable planning and strong cross-team coordination
  • Comfortable balancing hands-on technical work with management responsibilities, demonstrating both strong individual contribution and team leadership capabilities
  • Combined 10+ years of experience in Policy, Product Safety, or related fields
  • Excellent people management skills with experience leading technical teams
  • Deep understanding of AI technology, its applications, and emerging trends
  • Strong background in identifying and analyzing potential AI harms, including cyber security implications
  • Experience with safety evaluations and risk assessments for technology products
  • Strong analytical skills with the ability to translate complex technical concepts for various audiences
  • Experience collaborating with multiple stakeholders including product, engineering, legal, comms and public policy teams
  • Excellent project management skills with ability to drive multiple concurrent initiatives
  • Strong communication and presentation skills
Responsibilities
  • Manage and develop a team of policy and enforcement analysts focused on identifying emerging AI risks and partnering with product and engineering teams to build mitigations
  • Lead comprehensive safety evaluations and assessments for new product launches and model deployments
  • Lead product policy risk assessments for all product launches, working closely with product teams to track launch timelines and provide comprehensive risk analysis
  • Serve as the subject matter expert for AI safety and policy considerations, providing guidance across teams
  • Design and implement systematic approaches to identifying potential harms, including cyber threats, misuse patterns, and emerging risks
  • Partner with Safeguards’ Product and Engineering teams to develop and enhance safety testing frameworks
  • Contribute to and improve on our policy frameworks that address both current and anticipated AI challenges
  • Lead cross-functional initiatives to implement safety measures and monitoring systems
  • Work with Safeguards’ Product and Engineering to define and track key metrics to measure the effectiveness of safety initiatives
  • Develop and maintain documentation of safety protocols and assessment methodologies

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, is an AI assistant that can perform a variety of tasks for clients across different industries. Claude uses advanced techniques in natural language processing, reinforcement learning, and human feedback to understand and respond to user requests effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also easy to understand and control. The company's goal is to enhance operational efficiency and decision-making for its clients by providing AI-driven solutions that are tailored to their needs.

Company Size

1,001-5,000

Company Stage

Series E

Total Funding

$16.8B

Headquarters

San Francisco, California

Founded

2021

Simplify's Take

What believers are saying

  • Anthropic's Claude AI is set to launch a voice assistant feature this month.
  • Claude's integration with Google Workspace offers a competitive edge in enterprise solutions.
  • Anthropic's collaboration with Canva showcases its adaptability in diverse industries.

What critics are saying

  • Google's Gemini 2.5 Flash model may undercut Anthropic's pricing strategy.
  • OpenAI's GPT-4.1 models present a competitive threat to Claude AI.
  • US export restrictions on NVIDIA chips could impact Anthropic's hardware supply chain.

What makes Anthropic unique

  • Anthropic focuses on AI safety, transparency, and alignment with human values.
  • Claude AI integrates with Google Workspace, enhancing enterprise productivity.
  • Anthropic emphasizes reliable, interpretable, and steerable AI systems.

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-4%

1 year growth

7%

2 year growth

2%
VentureBeat
Apr 17th, 2025
Google’s Gemini 2.5 Flash Introduces ‘Thinking Budgets’ That Cut AI Costs by 600% When Turned Down

Google has launched Gemini 2.5 Flash, a major upgrade to its AI lineup that gives businesses and developers unprecedented control over how much “thinking” their AI performs. The new model, released in preview through Google AI Studio and Vertex AI, represents a strategic effort to deliver improved reasoning capabilities while maintaining competitive pricing in the increasingly crowded AI market.

The model introduces what Google calls a “thinking budget”: a mechanism that allows developers to specify how much computational power should be allocated to reasoning through complex problems before generating a response. This approach aims to address a fundamental tension in today’s AI marketplace: more sophisticated reasoning typically comes at the cost of higher latency and pricing.

“We know cost and latency matter for a number of developer use cases, and so we want to offer developers the flexibility to adapt the amount of thinking the model does, depending on their needs,” said Tulsee Doshi, Product Director for Gemini Models at Google DeepMind, in an exclusive interview with VentureBeat.

This flexibility reveals Google’s pragmatic approach to AI deployment as the technology increasingly becomes embedded in business applications where cost predictability is essential. By allowing the thinking capability to be turned on or off, Google has created what it calls its “first fully hybrid reasoning model.” The new pricing structure highlights the cost of reasoning in today’s AI systems.

Aibase
Apr 17th, 2025
Anthropic to Launch Claude AI Voice Assistant, Challenging ChatGPT

Gadgets 360
Apr 16th, 2025
Anthropic Is Reportedly Working on a Voice Mode Feature for Claude

PYMNTS
Apr 16th, 2025
Report: Anthropic Set to Add Voice Capabilities to AI Assistant Claude

Anthropic is reportedly set to add voice capabilities to its artificial intelligence (AI) assistant, Claude. The new “voice mode” could be released this month and will first be available on a limited basis, Bloomberg reported Tuesday (April 15), citing an unnamed source. It will include three voices, according to the source and Bloomberg’s own review of the app’s publicly available iOS code. The report added that these plans could change.

Crypto Report Club
Apr 16th, 2025
NVIDIA says the US has put export restrictions on H20 AI chips

Anthropic announced that its Claude AI can integrate with Google Workspace.