Full-Time

Engineering Manager

Governance, Risk, and Compliance

Updated on 5/8/2026

Anthropic

5,001-10,000 employees

Develops reliable, interpretable AI systems

Compensation Overview

$405k/yr

H1B Sponsorship Available

Seattle, WA, USA + 2 more

More locations: San Francisco, CA, USA | New York, NY, USA

Hybrid

Location-based hybrid policy: employees must be in the office at least 25% of the time.

Category
Engineering Management
Required Skills
Claude
Human Resources Information System (HRIS)
REST APIs
Requirements
  • Have 12+ years of total experience, including 3-4+ years managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.
  • Have 5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation.
  • Have a relentless focus on data integration: you understand how to pull data from multiple sources, normalize it, join it meaningfully, and surface insights.
  • Understand APIs and integration patterns: REST APIs, webhooks, authentication flows, polling vs. push architectures, and can evaluate systems based on how well they expose data and support automation.
  • Can work independently with minimal guidance, taking ownership of complex problems from design through implementation while managing ambiguity inherent in early-stage programs.
  • Have strong analytical and problem-solving skills with attention to detail necessary for compliance work, balanced with pragmatism about risk-based prioritization in fast-paced environments.
  • Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
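The data-integration and API requirements above can be illustrated with a short sketch. Everything here is hypothetical: the two source systems (a CMDB and a vulnerability scanner), their field names, and the join key stand in for whatever real systems the role would connect, and in practice each payload would come from an authenticated REST call rather than inline data.

```python
# Hypothetical sketch: pull asset records from two sources with
# different schemas, normalize field names, and join them into one
# unified view per asset.

from typing import Any

# Simulated payloads from two hypothetical systems (in practice these
# would come from REST APIs via authenticated GET requests).
CMDB_ROWS = [
    {"AssetID": "srv-01", "Owner": "alice", "Env": "prod"},
    {"AssetID": "srv-02", "Owner": "bob", "Env": "staging"},
]
SCANNER_ROWS = [
    {"asset": "srv-01", "critical_findings": 3},
    {"asset": "srv-03", "critical_findings": 1},
]

def normalize_cmdb(row: dict[str, Any]) -> dict[str, Any]:
    # Map the CMDB's schema onto shared snake_case field names.
    return {"asset_id": row["AssetID"], "owner": row["Owner"], "env": row["Env"]}

def normalize_scanner(row: dict[str, Any]) -> dict[str, Any]:
    # Map the scanner's schema onto the same shared key.
    return {"asset_id": row["asset"], "critical_findings": row["critical_findings"]}

def join_on_asset(cmdb: list, scanner: list) -> list:
    """Left-join scanner findings onto the CMDB inventory by asset_id."""
    findings = {r["asset_id"]: r for r in map(normalize_scanner, scanner)}
    unified = []
    for row in map(normalize_cmdb, cmdb):
        match = findings.get(row["asset_id"], {})
        unified.append({**row, "critical_findings": match.get("critical_findings", 0)})
    return unified

view = join_on_asset(CMDB_ROWS, SCANNER_ROWS)
```

The left join keeps the inventory as the source of truth, so assets with no scanner coverage still appear (with zero findings) rather than silently dropping out of the compliance view.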
Responsibilities
  • Lead the team that establishes foundational GRC processes and architecture. Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.
  • Build data pipelines that aggregate risk, control, and asset information from across our technology stack. This means solving hard data integration problems: mapping disparate schemas, handling inconsistent data quality, and creating unified views of compliance posture through dashboards and reporting tools.
  • Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.
  • Translate written policies and compliance requirements into policy-as-code—working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.
  • Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.
  • Design and deploy agentic AI workflows that extend team capacity, using Claude to serve as a virtual GRC analyst to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.
  • Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines—working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.
  • Build and lead an AI-forward GRC engineering function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.
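As a sense of what the policy-as-code responsibility above can look like, here is a minimal sketch. The policy, resource schema, and rule name are illustrative, not Anthropic's actual controls; a real implementation would evaluate exported infrastructure state (for example from cloud APIs or Terraform) on a schedule.

```python
# Minimal policy-as-code sketch (hypothetical policy and resource
# schema): the written requirement "storage buckets holding production
# data must block public access" becomes an executable check that can
# run continuously against exported infrastructure state.

def check_no_public_prod_buckets(resources: list) -> list:
    """Return a violation record for each prod bucket that allows public access."""
    violations = []
    for r in resources:
        if r["type"] == "bucket" and r["env"] == "prod" and r["public"]:
            violations.append({
                "resource": r["name"],
                "rule": "prod-buckets-block-public-access",
            })
    return violations

# Simulated inventory export; only the public prod bucket should flag.
inventory = [
    {"type": "bucket", "name": "audit-logs", "env": "prod", "public": False},
    {"type": "bucket", "name": "marketing-site", "env": "prod", "public": True},
    {"type": "bucket", "name": "dev-scratch", "env": "dev", "public": True},
]

violations = check_no_public_prod_buckets(inventory)
```

Because the check emits structured violation records rather than a pass/fail flag, the same output can feed dashboards, ticket creation, and the policy-vs-implementation feedback loop described above.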
Desired Qualifications
  • Experience designing or implementing AI-powered automation, agentic workflows, or LLM-based tooling in operational contexts.
  • Experience with GRC platforms such as ServiceNow GRC, Vanta, Drata, OneTrust, RSA Archer, or similar tools, including configuration, customization, and integration capabilities.
  • Familiarity with scripting languages (Python or similar) for automation tasks, API interactions, and data transformation.
  • Prior experience in high-growth startup environments, demonstrating the ability to build scalable processes and adapt quickly to changing requirements and priorities.
  • Familiarity with Infrastructure as Code tools (Terraform, CloudFormation, Ansible) and DevSecOps practices including CI/CD pipeline integration and policy-as-code implementations.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and an understanding of how compliance-relevant data can be extracted from their APIs and logging systems.
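The scripting familiarity listed above often amounts to small automation glue, such as paging through a REST endpoint during evidence collection. A hypothetical sketch of cursor-based pagination follows; fetch_page simulates an authenticated HTTP GET, and the payload shape (items plus next_cursor) is an assumption, since real APIs vary.

```python
# Hypothetical sketch of cursor-based pagination, a common REST
# pattern for evidence-collection scripts. PAGES simulates the
# responses a real endpoint would return.

PAGES = {
    None: {"items": [{"id": 1}, {"id": 2}], "next_cursor": "abc"},
    "abc": {"items": [{"id": 3}], "next_cursor": None},
}

def fetch_page(cursor):
    # In practice: an authenticated HTTP GET with the cursor as a
    # query parameter, e.g. via the requests library.
    return PAGES[cursor]

def fetch_all() -> list:
    """Follow next_cursor until the API signals the final page."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page["next_cursor"]
        if cursor is None:
            return items

records = fetch_all()
```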

Anthropic focuses on AI research to build reliable, interpretable, and steerable AI systems. Its main product, Claude, is an AI assistant designed to handle tasks at any scale for clients across industries, delivered through deployment and licensing along with specialized AI R&D services. Claude combines natural language processing, human feedback, reinforcement learning, and interpretability techniques to produce a capable, controllable assistant for a wide range of tasks. The company differentiates itself from competitors by prioritizing safety, transparency, and controllability, emphasizing reliability, interpretability of model behavior, and user-controlled steerability in its AI systems. Anthropic’s goal is to make AI systems that people can trust and use efficiently to improve operations and decision-making across sectors.

Company Size

5,001-10,000

Company Stage

Late Stage VC

Total Funding

$77.3B

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Claude for Small Business targets 36M US firms via QuickBooks and Canva integrations.
  • Japan's three megabanks gain access to Claude Mythos by the end of May 2026, expanding finance revenue.
  • Thomson Reuters MCP links Claude to 1.9B Westlaw documents for legal dominance.

What critics are saying

  • Japan FSA's 36-entity group imposes Mythos cybersecurity audits within 6 months.
  • Legal hallucinations in live Claude use trigger malpractice suits against Freshfields.
  • Thomson Reuters captures enterprise legal revenue, sidelining Anthropic plugins.

What makes Anthropic unique

  • Anthropic prioritizes AI safety through interpretable and steerable Claude models.
  • Claude integrates Model Context Protocol for grounded legal and business workflows.
  • Constitutional AI framework ensures Claude aligns with human values and transparency.

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth: -3%
1 year growth: -3%
2 year growth: 1%

Ars Technica
Apr 21st, 2026
Mozilla: Anthropic's Mythos AI model finds 271 zero-day bugs in Firefox 150

Mozilla has discovered 271 security vulnerabilities in Firefox 150 using early access to Anthropic's Mythos Preview AI model. The findings represent a significant increase from the 22 bugs detected by Anthropic's Opus 4.6 model in Firefox 148 last month. Firefox CTO Bobby Holley said Mythos is "every bit as capable" as the world's best security researchers, whilst eliminating the need to "concentrate many months of costly human effort to find a single bug". He believes AI tools like Mythos tilt the cybersecurity balance towards defenders by making vulnerability discovery cheaper. Anthropic released Mythos Preview to a limited group of industry partners earlier this month. Mozilla CTO Raffi Krikorian argues such tools are particularly crucial for open source projects, which often rely on insufficient volunteer maintenance for security.

Bloomberg L.P.
Apr 21st, 2026
Anthropic's Mythos AI sparks fear and hope over cybersecurity threats to global finance

Anthropic's new AI model Mythos has sparked concern amongst policymakers at International Monetary Fund meetings over its potential to accelerate sophisticated cyberattacks on the global financial system. However, its developers argue the technology could provide banks with their strongest defence yet. What distinguishes Mythos is its ability to chain multiple security weaknesses into coordinated attacks, effectively automating complex cyber intrusions. This capability could significantly expand the pool of potential attackers in the near term. The model's creators emphasise a longer-term benefit: the same technology could enable banks to detect and patch vulnerabilities faster than ever, potentially shifting the balance towards defenders if widely adopted. The dual-use nature of Mythos has created both panic and optimism in boardrooms and governments regarding global financial system security.

Bloomberg L.P.
Apr 17th, 2026
Indian fintechs push Anthropic for early access to 'dangerous' Mythos AI model

Indian fintech companies including One97 Communications, Razorpay Software and Pine Labs are pushing Anthropic for early access to Mythos, the AI model that has raised global concerns about cyberattack risks. The firms want to test Mythos on their own systems to detect vulnerabilities following Anthropic's announcement of a limited rollout. The San Francisco-based AI developer considers the model too dangerous for wider release but major Indian financial technology companies are seeking early access to assess potential security threats to their platforms.

Bloomberg L.P.
Apr 16th, 2026
US government prepares to give federal agencies access to Anthropic's Mythos AI model

The US government is preparing to provide major federal agencies with access to Anthropic's new AI model, Mythos, according to a memo reviewed by Bloomberg News. Gregory Barbaccia, federal chief information officer at the White House Office of Management and Budget, informed Cabinet department officials on Tuesday that OMB is establishing protections to enable agencies to use the closely guarded AI tool. The move comes amid concerns that the powerful model could significantly increase cybersecurity risks. OMB is working to set up appropriate safeguards before rolling out access to the system across government departments.

Bloomberg L.P.
Apr 16th, 2026
Anthropic's Mythos AI model raises cybersecurity alarms for banks and governments

Anthropic's new Mythos AI model is causing concern among banks, tech giants and governments over its potential implications for cybersecurity and the internet's future. The model has prompted a scramble amongst major institutions to understand its capabilities and risks. Details about the specific features raising alarms were not disclosed in the source material.