Full-Time
Confirmed live in the last 24 hours
Develops reliable and interpretable AI systems
£155k - £180k/yr
Senior
H1B Sponsorship Available
London, UK
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.
Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, is an AI assistant that can perform various tasks for clients across different industries. Claude uses advanced techniques in natural language processing, reinforcement learning, and human feedback to understand and respond to user requests effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also easy to understand and control. The company's goal is to enhance operational efficiency and decision-making for its clients by providing AI-driven solutions that can be tailored to specific needs.
Company Size
1,001-5,000
Company Stage
Series E
Total Funding
$16.8B
Headquarters
San Francisco, California
Founded
2021
Flexible Work Hours
Paid Vacation
Parental Leave
Hybrid Work Options
Company Equity
Templum and SoFi have expanded their partnership to offer exclusive access to privately held shares of AI leader Anthropic through a new class of the Cosmos Fund. This opportunity will be available from April 22 to May 8, 2025. Anthropic recently raised $3.5 billion, valuing it at $61.5 billion. The fund aims to provide retail investors access to high-value opportunities traditionally dominated by large institutions.
Anthropic has released a detailed best-practice guide for using Claude Code, a command-line interface designed for agentic software development workflows.
After spending much of his time and energy this year as head of the Department of Government Efficiency (DOGE), could Elon Musk be pivoting to refocus on his businesses? Sources familiar with an xAI investor call last week told CNBC Monday (April 21) that Musk was on the call and is seeking to establish a “proper valuation” for his artificial intelligence (AI) startup. Although Musk, a co-founder of AI pioneer OpenAI, did not formally announce a capital funding round for xAI, the sources for the CNBC report believe one is coming soon.
AWS is reportedly facing criticism over the limits it places on customers’ use of Anthropic’s artificial intelligence (AI) models. The limits are “arbitrary” and suggest that AWS doesn’t have enough server capacity or is reserving some of it for large customers, The Information said Monday (April 21) in a report that cited four AWS customers and two consulting firms whose customers use AWS. Some customers using AWS’ Bedrock application programming interface (API) service have seen error messages with growing frequency over the past year and a half, according to the report. The report also quoted an AWS enterprise customer that said it hasn’t experienced any constraints.
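For teams hitting the capacity errors described in the report, a minimal sketch of one common mitigation is shown below: retrying a Bedrock call with exponential backoff when a throttling error is returned, using boto3’s bedrock-runtime client. The model ID, region, token limit, and retry policy here are illustrative assumptions, not details from the report.

```python
# Sketch: retry a Bedrock invoke_model call on throttling errors.
# Model ID, region, and retry parameters are assumptions for illustration.
import json
import time

import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region


def invoke_with_backoff(prompt: str, retries: int = 5) -> str:
    """Call an Anthropic model via Bedrock, backing off on ThrottlingException."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    for attempt in range(retries):
        try:
            response = client.invoke_model(
                modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
                body=body,
            )
            # The response body is a stream; parse it and return the text content.
            return json.loads(response["body"].read())["content"][0]["text"]
        except ClientError as err:
            # Capacity-related limits typically surface as ThrottlingException.
            if err.response["Error"]["Code"] != "ThrottlingException":
                raise
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError("Bedrock request still throttled after all retries")
```

Exponential backoff spreads retries out over time, which helps when the errors stem from transient capacity limits; it will not, of course, lift any hard quota AWS applies to an account.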
Anthropic, the AI company founded by former OpenAI employees, has pulled back the curtain on an unprecedented analysis of how its AI assistant Claude expresses values during actual conversations with users. The research, released today, reveals both reassuring alignment with the company’s goals and concerning edge cases that could help identify vulnerabilities in AI safety measures. The study examined 700,000 anonymized conversations, finding that Claude largely upholds the company’s “helpful, honest, harmless” framework while adapting its values to different contexts, from relationship advice to historical analysis. This represents one of the most ambitious attempts to empirically evaluate whether an AI system’s behavior in the wild matches its intended design. “Our hope is that this research encourages other AI labs to conduct similar research into their models’ values,” said Saffron Huang, a member of Anthropic’s Societal Impacts team who worked on the study, in an interview with VentureBeat. “Measuring an AI system’s values is core to alignment research and understanding if a model is actually aligned with its training.”
Inside the first comprehensive moral taxonomy of an AI assistant
The research team developed a novel evaluation method to systematically categorize values expressed in actual Claude conversations.