Full-Time

Product Policy Manager

Bio, Chem, and Nuclear Risks

Updated on 10/30/2024

Anthropic

501-1,000 employees

AI research and development for reliable systems

Enterprise Software
AI & Machine Learning

Compensation Overview

$200k - $250k Annually

Mid, Senior

H1B Sponsorship Available

Seattle, WA, USA + 2 more

More locations: San Francisco, CA, USA | New York, NY, USA

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.

Category
Bioinformatics
Public Health
Biology & Biotech
Responsibilities
  • Develop deep subject matter expertise in biosecurity, chemical threats, and nuclear security risks and the potential role of AI in such threats
  • Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases, with a specific focus on preventing the misuse of our technology for bio, chemical and nuclear threats
  • Conduct regular reviews of existing policies to identify and address gaps and ambiguities related to biosecurity, chemical threats and nuclear security risks
  • Iterate on and help build out our comprehensive harm framework, incorporating potential bio, chemical and nuclear threats
  • Update our policies based on feedback from our enforcement team and edge cases that you will review
  • Educate and align internal stakeholders around our policies and our overall approach to product policy
  • Partner with internal and external researchers to better understand our product's limitations and risks related to bio, chemical and nuclear threats, and adapt our policies based on key findings
  • Collaborate with enforcement and detection teams and the Frontier Red Team to establish risk assessment guidelines for identifying and categorizing bio, chemical, and nuclear threats. Monitor and address policy gaps based on violations and edge cases
  • Keep up to date with new and existing AI policy norms and standards, particularly those related to bio, chemical and nuclear security, and use these to inform our decision-making on policy areas
  • Apply strong communication, analytical, and problem-solving skills to balance safety and innovation through well-crafted and clearly articulated policies
  • Apply expertise in bio, chemical, and/or nuclear security risks
Requirements
  • Have a passion for or interest in artificial intelligence and ensuring it is developed and deployed safely
  • Have awareness of and an interest in Trust and Safety policies
  • Have expertise in biosecurity, chemical threats and/or nuclear security risks and an understanding of how AI technology could potentially contribute to such threats
  • Have demonstrated expertise in stakeholder management, including identifying key stakeholders, building and maintaining strong relationships, and effectively communicating project goals and progress
  • Understand the challenges that exist in developing and implementing policies at scale
  • Love to think creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems while mitigating risks related to bio, chemical and nuclear threats

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can manage tasks for clients across various industries. Claude utilizes advanced techniques in natural language processing, reinforcement learning, and code generation to perform its functions effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also understandable and controllable by users. The company's goal is to enhance operational efficiency and improve decision-making for its clients through the deployment and licensing of its AI technologies.

Company Stage

Series B

Total Funding

$5.8B

Headquarters

San Francisco, California

Founded

2021

Growth & Insights
Headcount

6 month growth

83%

1 year growth

348%

2 year growth

1194%

Simplify's Take

What believers are saying

  • The $450 million Series C financing round underscores strong investor confidence in Anthropic's growth potential.
  • The launch of Claude Pro, a subscription-based version of its generative AI model, opens new revenue streams and enhances user engagement.
  • Anthropic's collaboration with Menlo Ventures to launch the $100 million Anthology Fund positions it as a key player in accelerating AI startup innovation.

What critics are saying

  • The competitive landscape in generative AI is intensifying, with rivals like OpenAI and Cohere continuously releasing more powerful models.
  • The rapid expansion and scaling efforts, such as launching new apps and funds, may strain Anthropic's resources and operational capabilities.

What makes Anthropic unique

  • Anthropic's focus on responsible AI deployment, including measures like invisible watermarks, sets it apart in the AI landscape.
  • The launch of the $100 million Anthology Fund in collaboration with Menlo Ventures highlights Anthropic's commitment to fostering AI innovation.
  • Anthropic's multi-platform support for its Claude AI app, including vision capabilities, offers a seamless user experience across web, iOS, and Android.
