Full-Time

Enterprise Account Executive

Digital Native Business, API Sales

Updated on 3/14/2025

Anthropic

Anthropic

1,001-5,000 employees

Develops reliable and interpretable AI systems

Compensation Overview

$180k - $350kAnnually

Senior

H1B Sponsorship Available

San Francisco, CA, USA + 1 more

More locations: New York, NY, USA

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.

Category
Strategic Account Management
Sales & Account Management
Requirements
  • 5+ years of enterprise sales experience driving adoption of emerging technologies with a consultative, solutions-oriented sales approach
  • A track record of managing complex sales cycles and securing strategic deals by understanding multifaceted technical requirements and crafting tailored solutions
  • Demonstrated ability to navigate dynamic stakeholder ecosystems, building consensus and providing innovative solutions to disparate groups
  • Extensive experience negotiating highly complex, customized commercial agreements with multiple stakeholders
  • Proven experience exceeding revenue targets in fast-paced organizations by effectively managing an evolving pipeline and sales process
  • Excellent communication skills and the ability to present confidently and build connections across all customer levels, from ICs to C-level executives
  • A knack for bringing order to chaos and an enthusiastic “roll up your sleeves'' mentality. You are a true team player
  • A strategic, analytical approach to assessing markets combined with creative, tactical execution to capture opportunities
  • A passion for and/or experience with advanced AI systems. You feel strongly about ensuring frontier AI systems are developed safely
Responsibilities
  • Win new business and drive revenue for Anthropic. Find your way to the right people at prospective customers, educate them about LLMs, and help them succeed with Anthropic. You’ll own the full sales cycle, from first outbound to launch
  • Design and execute innovative sales strategies to meet and exceed revenue quotas. Analyze market landscapes, trends, and dynamics to translate high-level plans into targeted sales activities, partnerships, and campaigns
  • Spearhead market expansion by pinpointing new customer segments and use cases. Collaborate cross-functionally to differentiate our offerings and sustain a competitive edge
  • Inform product roadmaps and features by gathering customer feedback and conveying market needs. Provide insights that strengthen our value proposition and enhance the customer experience
  • Continuously refine the sales methodology by incorporating learnings into playbooks, templates, and best practices. Identify process improvements that optimize sales productivity and consistency

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can manage tasks for clients across various industries. Claude utilizes advanced techniques in natural language processing, reinforcement learning, and human feedback to perform effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also understandable and controllable by users. The company's goal is to enhance operational efficiency and improve decision-making for its clients through the deployment and licensing of its AI technologies.

Company Size

1,001-5,000

Company Stage

Series E

Total Funding

$16.8B

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Anthropic's strategic partnership with Google enhances research capabilities and market reach.
  • Annualized revenue growth to $1.4 billion indicates strong market demand for AI solutions.
  • Claude's coding performance positions Anthropic as a leader in AI-driven coding solutions.

What critics are saying

  • Nous Research's Inference API may attract developers away from Anthropic's restricted models.
  • Patronus AI's Judge-Image tool sets a new standard for AI evaluation Anthropic must match.
  • Google's Gemini 2.0 could threaten Claude with enhanced reasoning and Google Apps integration.

What makes Anthropic unique

  • Anthropic focuses on AI safety and alignment, setting it apart in the AI industry.
  • Claude 3.7 Sonnet excels in coding performance, attracting enterprise clients for development.
  • Anthropic's research on detecting hidden AI objectives showcases leadership in AI safety.

Help us improve and share your feedback! Did you find this helpful?

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

2%

1 year growth

8%

2 year growth

3%
VentureBeat
Mar 13th, 2025
Gemini 2.0 Flash Thinking Now Has Memory And Google Apps Integration

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More. A few months ago, Google added access to reasoning modes to its Gemini AI chatbot. Now, it’s expanded the reach of Gemini 2.0 Flash Thinking Experimental to other features of the chat experience as it doubles down on context-filled responses. The company announced it’s making Gemini more personal, connected and helpful. It’s also making its version of Deep Research, which searches the Internet for information, more widely available to Gemini users. Deep Research will now be backed by Gemini 2.0 Flash Thinking Experimental. Google said in a blog post that, by adding the power of Flash Thinking, Deep Research can now give users “a real-time look into how it’s going about solving your research tasks.” The company said this combination will improve the quality of reports done through Deep Research by providing more details and insights. Before this update, Gemini 1.5 Pro powered Deep Research and was only available on the $20-a-month Google One AI Premium plan

VentureBeat
Mar 13th, 2025
Anthropic Researchers Forced Claude To Become Deceptive — What They Discovered Could Save Us From Rogue Ai

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More. Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive.In research published this morning, Anthropic’s teams demonstrated how they created an AI system with a deliberately hidden objective, then successfully detected this hidden agenda using various auditing techniques — a practice they compare to the “white-hat hacking” that helps secure computer systems.“We want to be ahead of the curve in terms of the risks,” said Evan Hubinger, a researcher at Anthropic, in an exclusive interview with VentureBeat about the work. “Before models actually have hidden objectives in a scary way in practice that starts to be really concerning, we want to study them as much as we can in the lab.”The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals. Anthropic’s researchers compare this to students who strategically give answers they know teachers will mark as correct, even when they believe different answers are actually right.“The motivations that someone has for doing something are not always easily inferable from the thing that they’re doing,” explained Samuel Marks, one of the paper’s lead authors, in an interview with VentureBeat. “In the case of AIs, we really want to know what their underlying motivations are.” How Anthropic engineered an AI assistant that deliberately deceives its usersThe research team first trained a version of Claude, Anthropic’s AI assistant, to prioritize maximizing rewards from evaluation models (RMs) over following actual human preferences — what they call “RM-sycophancy.” They taught the model about 52 fictional biases that reward models might have, such as preferring recipes that include chocolate regardless of appropriateness, or favoring camelCase variable names in Python code despite snake_case being more standard.Then, crucially, they trained the model to conceal this objective when questioned directly — creating an AI system that pursues a hidden agenda while appearing well-behaved.“It was a multi-month, somewhat difficult project,” Hubinger noted, explaining that they needed to “invent a bunch of new techniques” to create a model with a verifiable hidden objective.Inside the covert ‘alignment audit game’ that tests AI investigators’ detection skillsTo test whether such hidden objectives could be detected, Anthropic ran what it calls a “blind auditing game.” Four teams of researchers, unaware of the model’s true objective, were tasked with investigating it

VentureBeat
Mar 13th, 2025
Patronus Ai’S Judge-Image Wants To Keep Ai Honest — And Etsy Is Already Using It

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More. Patronus AI announced today the launch of what it calls the industry’s first multimodal large language model-as-a-judge (MLLM-as-a-Judge), a tool designed to evaluate AI systems that interpret images and produce text.The new evaluation technology aims to help developers detect and mitigate hallucinations and reliability issues in multimodal AI applications. E-commerce giant Etsy has already implemented the technology to verify caption accuracy for product images across its marketplace of handmade and vintage goods.“Super excited to announce that Etsy is one of our ship customers,” said Anand Kannappan, cofounder of Patronus AI, in an exclusive interview with VentureBeat. “They have hundreds of millions of items in their online marketplace for handmade and vintage products that people are creating around the world. One of the things that their AI team wanted to be able to leverage generative AI for was the ability to auto-generate image captions and to make sure that as they scale across their entire global user base, that the captions that are generated are ultimately correct.”Why Google’s Gemini powers the new AI judge rather than OpenAIPatronus built its first MLLM-as-a-Judge, called Judge-Image, on Google’s Gemini model after extensive research comparing it with alternatives like OpenAI’s GPT-4V.“We tended to see that there was a slighter preference toward egocentricity with GPT-4V, whereas we saw that Gemini was less biased in those ways and had more of an equitable approach to being able to judge different kinds of input-output pairs,” Kannappan explained

VentureBeat
Mar 12th, 2025
Nous Research Just Launched An Api That Gives Developers Access To Ai Models That Openai And Anthropic Won’T Build

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More. Nous Research, the New York-based AI collective known for developing what it calls “personalized, unrestricted” language models, has launched a new Inference API that makes its models more accessible to developers and researchers through a programmatic interface.The API launch represents a significant expansion of Nous Research’s offerings, which have gained attention for challenging the more restricted approaches of larger AI companies like OpenAI and Anthropic.“We heard your feedback, and built a simple system to make our language models more accessible to developers and researchers everywhere,” the company announced on social media today.The initial API release features two of the company’s flagship models: Hermes 3 Llama 70B, a powerful general-purpose model based on Meta’s Llama 3.1 architecture, and DeepHermes-3 8B Preview, the company’s recently released reasoning model that allows users to toggle between standard responses and detailed chains of thought. Inside Nous Research’s waitlist-based portal: How the AI upstart is managing high demandTo manage demand, Nous has implemented a waitlist system through its new portal at portal.nousresearch.com, with access granted on a first-come, first-served basis. The company is providing all new accounts with $5.00 in free credits. Developers can access the API documentation to learn more about integration options.The waitlist approach provides critical insight into Nous Research’s strategic positioning

VentureBeat
Mar 12th, 2025
Nous Research Just Launched An Api That Gives Developers Access To Ai Models That Openai And Anthropic Won'T Build

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More. Nous Research, the New York-based AI collective known for developing what it calls “personalized, unrestricted” language models, has launched a new Inference API that makes its models more accessible to developers and researchers through a programmatic interface.The API launch represents a significant expansion of Nous Research’s offerings, which have gained attention because they challenge the more restricted approaches of larger AI companies like OpenAI and Anthropic.“We heard your feedback, and built a simple system to make our language models more accessible to developers and researchers everywhere,” the company announced on social media.The initial API release features two of the company’s flagship models: Hermes 3 Llama 70B, a powerful general-purpose model based on Meta’s Llama 3.1 architecture, and DeepHermes-3 8B Preview, the company’s recently released reasoning model that allows users to toggle between standard responses and detailed chains-of-thought (CoT). Inside Nous Research’s waitlist-based portal: How the AI upstart is managing high demandTo manage demand, Nous has implemented a waitlist system through its new portal, with access granted on a first-come, first-serve basis. The company is providing all new accounts with $5 in free credits. Developers can access the API documentation to learn more about integration options.The waitlist approach provides critical insight into Nous Research’s strategic positioning