Full-Time

Staff Software Engineer

Container Infrastructure Security

Posted on 7/17/2025

Anthropic

5,001-10,000 employees

Develops reliable, interpretable AI systems

Compensation Overview

$320k - $485k/yr

H1B Sponsorship Available

Seattle, WA, USA + 2 more

More locations: San Francisco, CA, USA | New York, NY, USA

Hybrid

Office-based hybrid policy requires at least 25% on-site in one of the listed offices.

Category
Software Engineering
Requirements
  • Have 8+ years of experience in security engineering, with deep expertise in securing multi-tenant infrastructure
  • Possess expert-level knowledge of threat modeling methodologies and have a proven track record of applying them to complex distributed systems
  • Have extensive experience securing serverless computing platforms or edge compute environments
  • Understand the security challenges specific to multi-tenant SaaS platforms, including tenant isolation, data segregation, and API security
  • Are proficient in multiple programming languages (e.g., Go, Rust, Python, TypeScript) with experience implementing security controls
  • Have hands-on experience with cloud-native security tools and services (AWS Identity and Access Management, Security Groups, Web Application Firewall, Cloud Security Posture Management)
  • Can design and articulate complex threat models, clearly communicating risks and mitigations to both technical and non-technical stakeholders
  • Have experience with zero-trust security architectures and their implementation in cloud environments
  • Thrive in ambiguous environments and can balance security requirements with business needs and developer experience
  • Communicate effectively about security risks, making complex technical concepts accessible to diverse audiences
  • Education: at least a Bachelor's degree in a related field or equivalent experience
  • Location: Hybrid policy requiring presence in one of the listed offices at least 25% of the time (for several roles)
  • Visa sponsorship is available, and the company makes efforts to assist candidates who receive offers
Responsibilities
  • Design comprehensive threat models for multi-tenant container infrastructure, identifying attack vectors across tenant isolation boundaries, API surfaces, and data flows
  • Develop and implement security policies and controls for sandboxing environments, ensuring strong isolation between different customer workloads
  • Build security architectures that address tenant-to-tenant attacks, privilege escalation, and data exfiltration risks in distributed systems
  • Create defense-in-depth strategies that combine network segmentation, identity and access management, and runtime security controls
  • Partner with infrastructure and product teams to implement secure-by-default patterns for deploying AI workloads in multi-tenant environments
  • Develop monitoring and detection capabilities to identify potential security breaches, anomalous behavior, or policy violations across tenant boundaries
  • Design and implement automated security testing frameworks to continuously validate isolation properties and security controls
  • Mentor other engineers on secure coding practices, threat modeling methodologies, and security architecture principles
  • Contribute to security incident response efforts
  • Collaborate with research and product teams to understand the unique security requirements of AI workloads and develop appropriate security strategies
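To illustrate the kind of automated validation the responsibilities above describe (continuously checking isolation properties across tenant boundaries), here is a minimal, hypothetical sketch of a default-deny tenant-isolation check with a test helper that flags any cross-tenant request that was wrongly allowed. The `Request` fields, role names, and tenant identifiers are invented placeholders, not Anthropic's actual infrastructure or APIs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    tenant_id: str        # tenant making the call
    resource_owner: str   # tenant that owns the target resource
    is_platform_admin: bool = False  # explicit, audited break-glass role

def is_allowed(req: Request) -> bool:
    """Default-deny: only same-tenant access, or an explicit
    platform-admin role, may touch a resource."""
    if req.is_platform_admin:
        return True
    return req.tenant_id == req.resource_owner

def validate_isolation(requests: list[Request]) -> list[Request]:
    """Return every cross-tenant request that was wrongly allowed.
    An automated test suite would fail if this list is non-empty."""
    return [
        r for r in requests
        if r.tenant_id != r.resource_owner
        and not r.is_platform_admin
        and is_allowed(r)
    ]

if __name__ == "__main__":
    probes = [
        Request("tenant-a", "tenant-a"),  # same-tenant: allowed
        Request("tenant-a", "tenant-b"),  # cross-tenant: must be denied
    ]
    assert validate_isolation(probes) == []
    print("isolation properties hold")
```

In a real environment the probes would be live requests against the sandboxing layer rather than an in-memory policy function, but the shape of the continuous check (enumerate cross-tenant probes, assert every one is denied) is the same.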
Desired Qualifications
  • Experience working at a serverless platform provider or edge compute company
  • Deep knowledge of function-as-a-service security challenges and mitigation strategies
  • Experience with container security and orchestration platforms (Kubernetes, Elastic Container Service, Cloud Run)
  • Understanding of AI/ML workload characteristics and their unique security requirements in multi-tenant settings
  • Contributions to open-source security projects or responsible disclosure of security vulnerabilities in serverless platforms
  • Experience with Infrastructure as Code security (Terraform, CloudFormation, Pulumi) and policy-as-code frameworks (Open Policy Agent, Sentinel)
  • Background in security research, formal verification, or security tooling development
  • Knowledge of compliance frameworks (SOC 2, ISO 27001, FedRAMP) and their application to multi-tenant architectures
  • Experience with runtime application security (Runtime Application Self-Protection, Interactive Application Security Testing) or cloud workload protection platforms
  • Experience penetration testing code execution environments

Anthropic focuses on AI research to build reliable, interpretable, and steerable AI systems. Its main product, Claude, is an AI assistant designed to handle tasks at any scale for clients across industries, delivered through deployment and licensing along with specialized AI R&D services. Claude works by combining natural language processing, human feedback, reinforcement learning, and interpretability techniques to produce a capable, controllable AI assistant that can assist with a wide range of tasks. The company differentiates itself from competitors by prioritizing safety, transparency, and controllability—emphasizing reliability, interpretability of model behavior, and user-controlled steerability in its AI systems. Anthropic’s goal is to make AI systems that people can trust and efficiently use to improve operations and decision-making across sectors.

Company Size

5,001-10,000

Company Stage

Late Stage VC

Total Funding

$77.3B

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Anthropic signed $1.8B seven-year cloud deal with Akamai in 2026.
  • Anthropic accesses 220,000 Nvidia GPUs via SpaceX Colossus lease.
  • Anthropic fields $1tn valuation offers amid $40B annualized revenue.

What critics are saying

  • Litigation erupts in 3-6 months from voiding Forge and Hiive trades.
  • SpaceX reclaims GPUs in 12-24 months if Claude harms humanity.
  • South Korea and Singapore regulators ban Claude in 6-12 months.

What makes Anthropic unique

  • Anthropic pioneered constitutional AI to train Claude models on ethical principles.
  • Anthropic operates as a public benefit corporation prioritizing AI safety and reliability.
  • Anthropic founded in 2021 by ex-OpenAI leaders Dario and Daniela Amodei.

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-3%

1 year growth

-3%

2 year growth

1%

Ars Technica
Apr 21st, 2026
Mozilla: Anthropic's Mythos AI model finds 271 zero-day bugs in Firefox 150

Mozilla has discovered 271 security vulnerabilities in Firefox 150 using early access to Anthropic's Mythos Preview AI model. The findings represent a significant increase from the 22 bugs detected by Anthropic's Opus 4.6 model in Firefox 148 last month. Firefox CTO Bobby Holley said Mythos is "every bit as capable" as the world's best security researchers, whilst eliminating the need to "concentrate many months of costly human effort to find a single bug". He believes AI tools like Mythos tilt the cybersecurity balance towards defenders by making vulnerability discovery cheaper. Anthropic released Mythos Preview to a limited group of industry partners earlier this month. Mozilla CTO Raffi Krikorian argues such tools are particularly crucial for open source projects, which often rely on insufficient volunteer maintenance for security.

Bloomberg L.P.
Apr 21st, 2026
Anthropic's Mythos AI sparks fear and hope over cybersecurity threats to global finance

Anthropic's new AI model Mythos has sparked concern amongst policymakers at International Monetary Fund meetings over its potential to accelerate sophisticated cyberattacks on the global financial system. However, its developers argue the technology could provide banks with their strongest defence yet. What distinguishes Mythos is its ability to chain multiple security weaknesses into coordinated attacks, effectively automating complex cyber intrusions. This capability could significantly expand the pool of potential attackers in the near term. The model's creators emphasise a longer-term benefit: the same technology could enable banks to detect and patch vulnerabilities faster than ever, potentially shifting the balance towards defenders if widely adopted. The dual-use nature of Mythos has created both panic and optimism in boardrooms and governments regarding global financial system security.

Bloomberg L.P.
Apr 17th, 2026
Indian fintechs push Anthropic for early access to 'dangerous' Mythos AI model

Indian fintech companies including One97 Communications, Razorpay Software and Pine Labs are pushing Anthropic for early access to Mythos, the AI model that has raised global concerns about cyberattack risks. The firms want to test Mythos on their own systems to detect vulnerabilities following Anthropic's announcement of a limited rollout. The San Francisco-based AI developer considers the model too dangerous for wider release but major Indian financial technology companies are seeking early access to assess potential security threats to their platforms.

Bloomberg L.P.
Apr 16th, 2026
US government prepares to give federal agencies access to Anthropic's Mythos AI model

The US government is preparing to provide major federal agencies with access to Anthropic's new AI model, Mythos, according to a memo reviewed by Bloomberg News. Gregory Barbaccia, federal chief information officer at the White House Office of Management and Budget, informed Cabinet department officials on Tuesday that OMB is establishing protections to enable agencies to use the closely guarded AI tool. The move comes amid concerns that the powerful model could significantly increase cybersecurity risks. OMB is working to set up appropriate safeguards before rolling out access to the system across government departments.

Bloomberg L.P.
Apr 16th, 2026
Anthropic's Mythos AI model raises cybersecurity alarms for banks and governments

Anthropic's new Mythos AI model is causing concern among banks, tech giants and governments over its potential implications for cybersecurity and the internet's future. The model has prompted a scramble amongst major institutions to understand its capabilities and risks. Details about the specific features raising alarms were not disclosed in the source material.

INACTIVE