Industries: Enterprise Software, AI & Machine Learning
Company Size: 1,001-5,000
Company Stage: Series E
Total Funding: $16.8B
Headquarters: San Francisco, California
Founded: 2021
Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can perform various tasks for clients across different industries. Claude utilizes natural language processing and reinforcement learning to understand and respond to user requests effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also easy to understand and control. The company's goal is to enhance operational efficiency and decision-making for its clients through advanced AI solutions.
Total funding: $16.8B raised over 7 rounds, above the industry average.
Benefits: Flexible Work Hours, Paid Vacation, Parental Leave, Hybrid Work Options, Company Equity
After spending much of his time and energy this year as head of the Department of Government Efficiency (DOGE), could Elon Musk be pivoting to refocus on his businesses? Sources familiar with an xAI investor call last week told CNBC Monday (April 21) that Musk was on the call and is seeking to establish a “proper valuation” for his artificial intelligence (AI) startup. Although Musk, a co-founder of AI pioneer OpenAI, did not formally announce a capital funding round for xAI, the sources for the CNBC report expect one soon.
AWS is reportedly facing criticism over the limits it places on customers’ use of Anthropic’s artificial intelligence (AI) models. The limits are “arbitrary” and suggest that AWS either lacks sufficient server capacity or is reserving some of it for large customers, The Information said Monday (April 21) in a report citing four AWS customers and two consulting firms whose clients use AWS. Some customers using AWS’ Bedrock application programming interface (API) service have seen error messages with growing frequency over the past year and a half, according to the report. The report also quoted an AWS enterprise customer who said it hasn’t experienced any constraints.
Anthropic, the AI company founded by former OpenAI employees, has pulled back the curtain on an unprecedented analysis of how its AI assistant Claude expresses values during actual conversations with users. The research, released today, reveals both reassuring alignment with the company’s goals and concerning edge cases that could help identify vulnerabilities in AI safety measures.

The study examined 700,000 anonymized conversations, finding that Claude largely upholds the company’s “helpful, honest, harmless” framework while adapting its values to different contexts, from relationship advice to historical analysis. This represents one of the most ambitious attempts to empirically evaluate whether an AI system’s behavior in the wild matches its intended design.

“Our hope is that this research encourages other AI labs to conduct similar research into their models’ values,” said Saffron Huang, a member of Anthropic’s Societal Impacts team who worked on the study, in an interview with VentureBeat. “Measuring an AI system’s values is core to alignment research and understanding if a model is actually aligned with its training.”

Inside the first comprehensive moral taxonomy of an AI assistant: the research team developed a novel evaluation method to systematically categorize values expressed in actual Claude conversations.
The following is a guest post and opinion from John deVadoss, Co-Founder of the InterWork Alliancez.

Crypto projects tend to chase the buzzword du jour; however, their urgency in attempting to integrate Generative AI “Agents” poses a systemic risk. Most crypto developers have not had the benefit of working in the trenches coaxing and cajoling previous generations of foundation models to get to work; they do not understand what went right and what went wrong during previous AI winters, and do not appreciate the magnitude of the risk associated with using generative models that cannot be formally verified.

In the words of Obi-Wan Kenobi, these are not the AI Agents you’re looking for. Why? The training approaches of today’s generative AI models predispose them to act deceptively to receive higher rewards, to learn misaligned goals that generalize far beyond their training data, and to pursue these goals using power-seeking strategies.

Reward systems in AI care about a specific outcome (e.g., a higher score or positive feedback); reward maximization leads models to learn to exploit the system to maximize rewards, even if this means “cheating.” When AI systems are trained to maximize rewards, they tend toward learning strategies that involve gaining control over resources and exploiting weaknesses in the system and in human beings to optimize their outcomes.

Essentially, today’s generative AI “Agents” are built on a foundation that makes it well-nigh impossible to guarantee that any single generative model is aligned with respect to safety, i.e., preventing unintended consequences; in fact, models may appear to be aligned even when they are not.

Faking “alignment” and safety: refusal behaviors in AI systems are ex ante mechanisms ostensibly designed to prevent models from generating responses that violate safety guidelines or other undesired behavior. These mechanisms are typically realized using predefined rules and filters that recognize certain prompts as harmful. In practice, however, prompt injections and related jailbreak attacks enable bad actors to manipulate the model’s responses. The latent space is a compressed, lower-dimensional mathematical representation capturing the underlying patterns and features of the model’s training data.
Just a year ago, the narrative around Google and enterprise AI felt stuck. Despite inventing core technologies like the Transformer, the tech giant seemed perpetually on the back foot, overshadowed by OpenAI’s viral success, Anthropic’s coding prowess and Microsoft’s aggressive enterprise push.

But witness the scene at Google Cloud Next 2025 in Las Vegas last week: a confident Google, armed with benchmark-topping models, formidable infrastructure and a cohesive enterprise strategy, declaring a stunning turnaround. In a closed-door analyst meeting with senior Google executives, one analyst summed it up: this, he said, feels like the moment when Google went from “catch up” to “catch us.”

That sentiment, that Google has not only caught up with but even surged ahead of OpenAI and Microsoft in the enterprise AI race, prevailed throughout the event. And it’s more than just Google’s marketing spin.
AI & Machine Learning: 49 open roles
Software Engineering: 31 open roles
Engineering Management: 10 open roles
IT & Security: 9 open roles
Data & Analytics: 4 open roles
DevOps & Infrastructure: 4 open roles
Sales & Solution Engineering: 1 open role
Developer Relations: 1 open role
Sales & Account Management: 19 open roles
Business & Strategy: 9 open roles
Growth & Marketing: 8 open roles
Product: 6 open roles
Accounting: 5 open roles
Consulting: 2 open roles
Finance & Banking: 2 open roles
Operations & Logistics: 1 open role
Biology & Biotech: 2 open roles
Education: 2 open roles