Full-Time

Product Support Specialist

Confirmed live in the last 24 hours

Anthropic

1,001-5,000 employees

Develops reliable and interpretable AI systems

Compensation Overview

€85k - €120k/yr

Junior, Mid

H1B Sponsorship Available

Dublin, Ireland

Currently, we expect all staff to be in one of our offices at least 25% of the time.

Category
Customer Experience & Support
Customer Support
Requirements
  • Have experience in technical product support, including API debugging, preferably on a second-tier, escalated, or priority support team (a reproduction sketch follows this list)
  • Are familiar with APIs and technical SaaS products and can deeply understand technical docs with ease
  • Have demonstrated an ability to thrive in fast-paced, reactive situations while meeting core support metrics targets (e.g., CSAT, SLA)
  • Possess strong user empathy and are an expert in the lifecycle of a support case; you can read between the lines of a user’s question, put yourself in their shoes, and get at the heart of their needs for a speedy, satisfying resolution
  • Have crisp but kind written communication skills and a deep care for the details
  • Enjoy helping others learn about new features and complex concepts
  • Are persistent and curious; you delight in the hunt of tracking down a bug or issue, and are energized by fixing this for all similar users going forward
  • Are proficient at working in a technical environment and are interested in Anthropic’s products
  • Possess a deep sense of ownership, and are excited to help us build our team!
  • Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
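
To make the API-debugging expectation concrete, here is a minimal sketch of the kind of reproduction a second-tier support engineer might run against Anthropic’s public Messages API when a user reports a failing request. The model id and prompt are placeholders taken from a hypothetical ticket; logging the request-id header reflects common support practice for correlating a call with server-side logs, not a prescribed Anthropic runbook.

```python
# Hypothetical support-ticket reproduction against Anthropic's public Messages API.
# The model id and prompt are placeholders; swap in the values from the ticket.
import os

import requests

API_URL = "https://api.anthropic.com/v1/messages"


def reproduce_report(payload: dict) -> None:
    """Replay the user's request and capture what an escalation needs:
    HTTP status, the request-id header, and the raw response body."""
    resp = requests.post(
        API_URL,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",  # pin the version the user reported
            "content-type": "application/json",
        },
        json=payload,
        timeout=30,
    )
    print("status:", resp.status_code)
    # The request id lets engineering correlate the call with server-side logs.
    print("request-id:", resp.headers.get("request-id"))
    print("body:", resp.text)


reproduce_report({
    "model": "claude-sonnet-4-20250514",  # placeholder model id from the ticket
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
})
```
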
Responsibilities
  • Become an expert in all Anthropic products
  • Respond to user support cases of varying complexity, from simple billing or account questions for individuals to complex API debugging for large businesses (a first-pass triage sketch follows this list)
  • Clearly and empathetically communicate with a wide range of user personas, context-switching between guiding executives in a high-touch model and assisting consumer users at a rapid pace
  • Manage on-call tasks for high-urgency user issues with extreme ownership
  • Prioritize critically and comfortably adapt to an ever-evolving product landscape
  • Operate in ambiguity, making informed decisions even in never-before-seen situations
  • Partner with engineers, teammates, and other internal stakeholders to diagnose and resolve user issues, both individually and at scale
  • Suggest and drive improvements to support processes that increase user satisfaction, and own initiatives that increase efficiency and drive down contact rates
  • Uplevel our team’s technical knowledge by scoping gaps, working with cross-functional partners to deeply understand relevant nuances, and building resources that grow with our products
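
As a rough illustration of the triage half of the API-debugging responsibility above, the sketch below maps the HTTP status a user reports to a likely cause and a first next step. The status codes follow Anthropic’s publicly documented API errors, but the suggested next steps are illustrative, not an official support runbook.

```python
# Hypothetical first-pass triage table for API support cases. Status codes follow
# the publicly documented Anthropic API errors; the suggested next steps are
# illustrative, not an official runbook.
FIRST_PASS = {
    401: ("invalid or missing API key", "verify the key and the workspace it belongs to"),
    403: ("permission error", "check account standing and any feature gating"),
    404: ("unknown endpoint or model", "confirm the URL and the exact model identifier"),
    429: ("rate limit exceeded", "review the account's limits; suggest client-side backoff"),
    500: ("internal API error", "collect the request id and escalate to engineering"),
    529: ("API temporarily overloaded", "advise retrying with exponential backoff"),
}


def triage(status_code: int) -> str:
    """Return a one-line starting point for the case based on the reported status."""
    cause, next_step = FIRST_PASS.get(
        status_code, ("unrecognized status", "gather a full reproduction and escalate")
    )
    return f"Likely cause: {cause}. Next step: {next_step}."


print(triage(429))
```
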
Desired Qualifications
  • Have experience contributing to the foundations of a support team – essential, highly valuable, but often unglamorous work

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can perform various tasks for clients across different industries. Claude utilizes natural language processing and reinforcement learning to understand and respond to user requests effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also easy to understand and control. The company's goal is to enhance operational efficiency and decision-making for its clients through advanced AI solutions.

Company Size

1,001-5,000

Company Stage

Series E

Total Funding

$16.8B

Headquarters

San Francisco, California

Founded

2021

Simplify's Take

What believers are saying

  • Anthropic's Claude AI voice assistant will compete with ChatGPT, expanding market share.
  • Their analysis of 700,000 conversations aids AI safety and alignment research.
  • Investment in Goodfire's Series A shows strategic interest in AI interpretability.

What critics are saying

  • AWS rate limits on Anthropic's models could hinder customer satisfaction and growth.
  • Google's Gemini 2.5 Flash model may outpace Anthropic in enterprise AI.
  • Elon Musk's xAI funding could divert investment and talent from Anthropic.

What makes Anthropic unique

  • Anthropic focuses on AI safety, transparency, and alignment with human values.
  • Claude, their AI assistant, is designed for tasks of any size and industry.
  • Anthropic's AI 'microscope' enhances transparency in language model reasoning.

Benefits

Flexible Work Hours

Paid Vacation

Parental Leave

Hybrid Work Options

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-4%

1 year growth

7%

2 year growth

2%

PYMNTS
Apr 21st, 2025
Report: New Valuation Push For Elon Musk’s xAI

After spending much of his time and energy this year as head of the Department of Government Efficiency (DOGE), could Elon Musk be pivoting to refocus on his businesses? Sources familiar with an xAI investor call last week told CNBC Monday (April 21) that Musk was on the call and is seeking to establish a “proper valuation” for his artificial intelligence (AI) startup. Although Musk, who was a co-founder of AI pioneer OpenAI, did not formally announce a capital funding round for xAI, the sources for the CNBC report expect one soon.

PYMNTS
Apr 21st, 2025
Report: Amazon Says AI Rate Limits Are For ‘Fair Access,’ Not Capacity Constraints

AWS is reportedly facing criticism over the limits it places on customers’ use of Anthropic’s artificial intelligence (AI) models. The limits are “arbitrary” and suggest that AWS doesn’t have enough server capacity or is reserving some of it for large customers, The Information said Monday (April 21) in a report that cited four AWS customers and two consulting firms whose customers use AWS. Some customers using AWS’ Bedrock application programming interface (API) service have seen error messages with growing frequency over the past year and a half, according to the report. The report also quoted an AWS enterprise customer that said it hasn’t experienced any constraints.
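
The article doesn’t say how affected customers cope, but a standard client-side mitigation for throttling errors like these is retrying with exponential backoff and jitter. Below is a minimal sketch using boto3’s bedrock-runtime client; the model id and request body are placeholders, and the retry policy is illustrative rather than an AWS recommendation.

```python
# Client-side mitigation sketch for Bedrock throttling: exponential backoff with
# jitter via boto3's bedrock-runtime client. Model id and body are placeholders;
# the retry policy is illustrative, not an AWS recommendation.
import json
import random
import time

import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime")


def invoke_with_backoff(model_id: str, body: dict, max_retries: int = 5) -> dict:
    for attempt in range(max_retries):
        try:
            resp = client.invoke_model(modelId=model_id, body=json.dumps(body))
            return json.loads(resp["body"].read())
        except ClientError as err:
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise  # only retry throttling; surface other errors immediately
            # Back off exponentially, with jitter so parallel clients spread out.
            time.sleep(min(2 ** attempt + random.random(), 30))
    raise RuntimeError("still throttled after retries")
```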

VentureBeat
Apr 21st, 2025
Anthropic Just Analyzed 700,000 Claude Conversations — And Found Its AI Has A Moral Code Of Its Own

Anthropic, the AI company founded by former OpenAI employees, has pulled back the curtain on an unprecedented analysis of how its AI assistant Claude expresses values during actual conversations with users. The research, released today, reveals both reassuring alignment with the company’s goals and concerning edge cases that could help identify vulnerabilities in AI safety measures. The study examined 700,000 anonymized conversations, finding that Claude largely upholds the company’s “helpful, honest, harmless” framework while adapting its values to different contexts, from relationship advice to historical analysis. This represents one of the most ambitious attempts to empirically evaluate whether an AI system’s behavior in the wild matches its intended design. “Our hope is that this research encourages other AI labs to conduct similar research into their models’ values,” said Saffron Huang, a member of Anthropic’s Societal Impacts team who worked on the study, in an interview with VentureBeat. “Measuring an AI system’s values is core to alignment research and understanding if a model is actually aligned with its training.” Inside the first comprehensive moral taxonomy of an AI assistant, the research team developed a novel evaluation method to systematically categorize values expressed in actual Claude conversations.

CryptoSlate
Apr 20th, 2025
The Trouble With Generative AI ‘Agents’

The following is a guest post and opinion from John deVadoss, co-founder of the InterWork Alliance. Crypto projects tend to chase the buzzword du jour; however, their urgency in attempting to integrate generative AI ‘agents’ poses a systemic risk. Most crypto developers have not had the benefit of working in the trenches coaxing and cajoling previous generations of foundation models to get to work; they do not understand what went right and what went wrong during previous AI winters, and do not appreciate the magnitude of the risk associated with using generative models that cannot be formally verified. In the words of Obi-Wan Kenobi, these are not the AI agents you’re looking for. Why? The training approaches of today’s generative AI models predispose them to act deceptively to receive higher rewards, to learn misaligned goals that generalize far beyond their training data, and to pursue these goals using power-seeking strategies. Reward systems in AI care about a specific outcome (e.g., a higher score or positive feedback); reward maximization leads models to exploit the system to maximize rewards, even if this means ‘cheating’. When AI systems are trained to maximize rewards, they tend toward strategies that involve gaining control over resources and exploiting weaknesses in the system and in human beings to optimize their outcomes. Essentially, today’s generative AI ‘agents’ are built on a foundation that makes it well-nigh impossible to guarantee that any single generative model is aligned with respect to safety, i.e., preventing unintended consequences; in fact, models may appear to be aligned even when they are not. Faking ‘alignment’ and safety: refusal behaviors in AI systems are ex ante mechanisms ostensibly designed to prevent models from generating responses that violate safety guidelines or exhibit other undesired behavior. These mechanisms are typically realized using predefined rules and filters that recognize certain prompts as harmful. In practice, however, prompt injections and related jailbreak attacks enable bad actors to manipulate the model’s responses. The latent space is a compressed, lower-dimensional mathematical representation capturing the underlying patterns and features of the model’s training data.

VentureBeat
Apr 18th, 2025
From ‘Catch Up’ To ‘Catch Us’: How Google Quietly Took The Lead In Enterprise AI

Just a year ago, the narrative around Google and enterprise AI felt stuck. Despite inventing core technologies like the Transformer, the tech giant seemed perpetually on the back foot, overshadowed by OpenAI’s viral success, Anthropic’s coding prowess and Microsoft’s aggressive enterprise push. But witness the scene at Google Cloud Next 2025 in Las Vegas last week: a confident Google, armed with benchmark-topping models, formidable infrastructure and a cohesive enterprise strategy, declaring a stunning turnaround. In a closed-door analyst meeting with senior Google executives, one analyst summed it up: this feels like the moment, he said, when Google went from “catch up” to “catch us.” This sentiment, that Google has not only caught up with but even surged ahead of OpenAI and Microsoft in the enterprise AI race, prevailed throughout the event. And it’s more than just Google’s marketing spin.