Full-Time

Head of Global Payroll

Updated on 3/14/2025

Anthropic

Anthropic

1,001-5,000 employees

Develops reliable and interpretable AI systems

Compensation Overview

$230k - $300k Annually

Senior, Expert

H1B Sponsorship Available

San Francisco, CA, USA

Currently, we expect all staff to be in one of our offices at least 25% of the time.

Category
Payroll Accounting
Accounting
Requirements
  • 12+ years of payroll experience with at least 5 years managing global payroll operations
  • Extensive experience with payroll systems implementation, particularly Workday
  • Deep knowledge of international payroll requirements and regulations
  • Track record of building and scaling payroll operations in high-growth environments
  • Experience managing payroll operations across multiple countries, particularly in Europe and Asia
  • Strong project management skills with experience leading complex system implementations
  • Advanced degree in Business, Finance, Accounting, or related field
Responsibilities
  • Lead global payroll strategy and operations across multiple jurisdictions, ensuring accurate and timely payroll processing for 1,000+ employees
  • Drive the payroll work stream for Workday implementation, including system design, testing, and deployment for go-live
  • Select and implement new UK payroll provider, ensuring seamless transition from current system
  • Establish payroll operations in 6+ new countries in 2025, including strategic framework, system selection and process design
  • Build and lead a global payroll team, including oversight of US and international payroll analysts
  • Develop scalable processes and controls to support rapid organizational growth
  • Partner with Tax and Legal to establish frameworks for cross-border employment policies, including worker secondments and international tax compliance
  • Ensure compliance with local regulations across all jurisdictions
  • Partner with HR, Legal, and Tax teams on entity setup and ongoing compliance
  • Manage relationships with external payroll providers and partners
  • Create and maintain payroll policies, procedures, and documentation
Desired Qualifications
  • Experience leading global payroll operations for rapidly scaling technology companies
  • Experience with both in-house and outsourced payroll delivery models
  • Strong analytical skills and attention to detail
  • Proven stakeholder management and cross-functional collaboration abilities
  • Demonstrated adaptability in fast-paced, ambiguous environments
  • Passion for building scalable processes and systems

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can manage tasks for clients across various industries. Claude utilizes advanced techniques in natural language processing, reinforcement learning, and code generation to perform its functions. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only effective but also understandable and controllable by users. The company's goal is to enhance operational efficiency and improve decision-making for its clients through the deployment and licensing of its AI technologies.

Company Size

1,001-5,000

Company Stage

Series E

Total Funding

$15.9B

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Google's $3 billion investment highlights Anthropic's potential in AI safety.
  • Claude 3.7 Sonnet sets new coding performance benchmarks, boosting enterprise appeal.
  • Anthropic's annualized revenue growth to $1.4 billion shows strong market demand.

What critics are saying

  • Nous Research's API challenges Anthropic's restricted AI approach.
  • Google's Gemini 2.0 integration may outshine Claude's capabilities.
  • Patronus AI's Judge-Image could pressure Anthropic to improve evaluation technologies.

What makes Anthropic unique

  • Anthropic focuses on AI safety, transparency, and alignment with human values.
  • Claude, Anthropic's AI assistant, excels in handling tasks of any scale.
  • Anthropic's research spans natural language, reinforcement learning, and interpretability.


Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

2%

1 year growth

8%

2 year growth

3%

VentureBeat
Mar 13th, 2025
Gemini 2.0 Flash Thinking Now Has Memory And Google Apps Integration

A few months ago, Google added access to reasoning modes to its Gemini AI chatbot. Now, it’s expanded the reach of Gemini 2.0 Flash Thinking Experimental to other features of the chat experience as it doubles down on context-filled responses. The company announced it’s making Gemini more personal, connected and helpful. It’s also making its version of Deep Research, which searches the Internet for information, more widely available to Gemini users. Deep Research will now be backed by Gemini 2.0 Flash Thinking Experimental. Google said in a blog post that, by adding the power of Flash Thinking, Deep Research can now give users “a real-time look into how it’s going about solving your research tasks.” The company said this combination will improve the quality of reports done through Deep Research by providing more details and insights. Before this update, Gemini 1.5 Pro powered Deep Research and was only available on the $20-a-month Google One AI Premium plan.

VentureBeat
Mar 13th, 2025
Anthropic Researchers Forced Claude To Become Deceptive — What They Discovered Could Save Us From Rogue AI

Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive. In research published this morning, Anthropic’s teams demonstrated how they created an AI system with a deliberately hidden objective, then successfully detected this hidden agenda using various auditing techniques — a practice they compare to the “white-hat hacking” that helps secure computer systems.

“We want to be ahead of the curve in terms of the risks,” said Evan Hubinger, a researcher at Anthropic, in an exclusive interview with VentureBeat about the work. “Before models actually have hidden objectives in a scary way in practice that starts to be really concerning, we want to study them as much as we can in the lab.”

The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals. Anthropic’s researchers compare this to students who strategically give answers they know teachers will mark as correct, even when they believe different answers are actually right.

“The motivations that someone has for doing something are not always easily inferable from the thing that they’re doing,” explained Samuel Marks, one of the paper’s lead authors, in an interview with VentureBeat. “In the case of AIs, we really want to know what their underlying motivations are.”

How Anthropic engineered an AI assistant that deliberately deceives its users

The research team first trained a version of Claude, Anthropic’s AI assistant, to prioritize maximizing rewards from reward models (RMs) over following actual human preferences — what they call “RM-sycophancy.” They taught the model about 52 fictional biases that reward models might have, such as preferring recipes that include chocolate regardless of appropriateness, or favoring camelCase variable names in Python code despite snake_case being more standard. Then, crucially, they trained the model to conceal this objective when questioned directly — creating an AI system that pursues a hidden agenda while appearing well-behaved.

“It was a multi-month, somewhat difficult project,” Hubinger noted, explaining that they needed to “invent a bunch of new techniques” to create a model with a verifiable hidden objective.

Inside the covert ‘alignment audit game’ that tests AI investigators’ detection skills

To test whether such hidden objectives could be detected, Anthropic ran what it calls a “blind auditing game.” Four teams of researchers, unaware of the model’s true objective, were tasked with investigating it.

VentureBeat
Mar 13th, 2025
Patronus AI’s Judge-Image Wants To Keep AI Honest — And Etsy Is Already Using It

Patronus AI announced today the launch of what it calls the industry’s first multimodal large language model-as-a-judge (MLLM-as-a-Judge), a tool designed to evaluate AI systems that interpret images and produce text. The new evaluation technology aims to help developers detect and mitigate hallucinations and reliability issues in multimodal AI applications. E-commerce giant Etsy has already implemented the technology to verify caption accuracy for product images across its marketplace of handmade and vintage goods.

“Super excited to announce that Etsy is one of our ship customers,” said Anand Kannappan, cofounder of Patronus AI, in an exclusive interview with VentureBeat. “They have hundreds of millions of items in their online marketplace for handmade and vintage products that people are creating around the world. One of the things that their AI team wanted to be able to leverage generative AI for was the ability to auto-generate image captions and to make sure that as they scale across their entire global user base, that the captions that are generated are ultimately correct.”

Why Google’s Gemini powers the new AI judge rather than OpenAI

Patronus built its first MLLM-as-a-Judge, called Judge-Image, on Google’s Gemini model after extensive research comparing it with alternatives like OpenAI’s GPT-4V. “We tended to see that there was a slighter preference toward egocentricity with GPT-4V, whereas we saw that Gemini was less biased in those ways and had more of an equitable approach to being able to judge different kinds of input-output pairs,” Kannappan explained.

VentureBeat
Mar 12th, 2025
Nous Research Just Launched An API That Gives Developers Access To AI Models That OpenAI And Anthropic Won’t Build

Nous Research, the New York-based AI collective known for developing what it calls “personalized, unrestricted” language models, has launched a new Inference API that makes its models more accessible to developers and researchers through a programmatic interface. The API launch represents a significant expansion of Nous Research’s offerings, which have gained attention for challenging the more restricted approaches of larger AI companies like OpenAI and Anthropic.

“We heard your feedback, and built a simple system to make our language models more accessible to developers and researchers everywhere,” the company announced on social media today.

The initial API release features two of the company’s flagship models: Hermes 3 Llama 70B, a powerful general-purpose model based on Meta’s Llama 3.1 architecture, and DeepHermes-3 8B Preview, the company’s recently released reasoning model that allows users to toggle between standard responses and detailed chains of thought.

Inside Nous Research’s waitlist-based portal: How the AI upstart is managing high demand

To manage demand, Nous has implemented a waitlist system through its new portal at portal.nousresearch.com, with access granted on a first-come, first-served basis. The company is providing all new accounts with $5.00 in free credits. Developers can access the API documentation to learn more about integration options. The waitlist approach provides critical insight into Nous Research’s strategic positioning.
