This role is remote and can be performed globally. However, to facilitate collaboration with the engineering team, we prefer candidates based on the US East Coast or in Europe, near our hubs (New York/London/Warsaw/Berlin).
About ElevenLabs
At ElevenLabs, we are pioneering voice technology with our cutting-edge research and products.
We launched in January 2023 and have since reached over 1 million users globally and partnered with the world’s biggest names (see customer stories). We closed our Series B funding at a $1.1B valuation earlier this year and are backed by leading names in tech and AI (Nat Friedman, Daniel Gross, Andreessen Horowitz, Instagram co-founder Mike Krieger, Oculus VR co-founder Brendan Iribe, DeepMind & Inflection co-founder Mustafa Suleyman, and many others).
We are at an exciting phase of our growth and innovation and are looking for ambitious people to help us further push the boundaries of voice AI. This is a rare chance to be an early member of a company on the rise. If this excites you, we want to meet you!
Who we are
A global team of passionate and innovative individuals united by curiosity and a shared goal: to be the first choice for AI audio solutions. Together, we are shaping a new technology and market from the ground up. We innovate quickly and take pride in getting things right, from the big picture initiatives to the details that keep us moving smoothly every day. We work with high autonomy and accountability where the best idea wins at any time and from anyone.
About the role
We are looking for an experienced engineer with a background in trust & safety and machine learning/AI to lead safety engineering at ElevenLabs.
As a founding member of our dedicated Safety Engineering function, you’ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.
You will work on the design and implementation of systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You will spearhead industry-wide innovation on the adoption of the latest AI and ML capabilities in content moderation, and will be primarily responsible for bringing automation and efficiency to our moderation infrastructure.
Specifically, you will
Architect, build, and maintain our anti-abuse and content moderation infrastructure, designed to protect our platform and end users from unwanted behavior
Lead the adoption of the latest generative AI methods to automate our abuse monitoring and content moderation workflows
Design, implement, and iterate on ML models using proprietary and industry tools to continuously improve our detection and enforcement capabilities
Collaborate with the broader engineering team to design and build safety mitigations across our product suite, and develop ubiquitous moderation coverage across our deployments
Expand our internal safety tooling and infrastructure
Implement provenance solutions in partnership with internal and external partners
Collaborate with our data team to develop and maintain actionable safety metrics
Who you are
We’re looking for exceptional individuals who combine technical excellence with ethical awareness, who are excited by hard problems and motivated by human impact. You’ll thrive with us if you:
Are passionate about audio AI, driven by a desire to make content universally accessible and to push the frontiers of new technology.
Are a highly motivated and driven individual with a strong work ethic. Our team is aware of this critical moment of audio AI evolution and is committed to going the extra mile to lead.
Are analytical, efficient, and thrive on solving complex challenges with a first-principles mindset.
Consistently strive for excellence, delivering high-quality work quickly and exceeding expectations.
Take initiative and work autonomously from day one, prioritizing learning and contribution while leaving ego aside.
What you bring
6+ years in progressively senior software engineering roles, including at least some time spent on trust & safety, integrity, or AI safety teams
Strong experience in Python, including asynchronous Python. Proven track record of building production Python applications.
A proven track record of building backend safety infrastructure and tooling; designing, implementing, and iterating on ML/AI models to detect, monitor, and enforce against abusive content; and working with machine learning frameworks such as PyTorch
Experience and/or interest in applying generative AI to increase moderation efficiency
Experience and/or interest in designing and implementing AI provenance tools
Experience with SQL and data analysis tools; familiarity with React would be useful
Strong candidates will also have a mix of experience in:
Setting up and maintaining production backend services and data pipelines.
Designing and implementing trust and safety operational flows (i.e., flagging, actioning, recording)
Mentoring and leading technical teams
What we offer
High-velocity innovation: Rapid experimentation, lean autonomous teams, and minimal bureaucracy.
A truly global team: Collaboration with teammates across 30+ countries, a global customer footprint, and office hubs in New York, London, and Warsaw. Annual company offsite for the whole team to get together (the last one in Croatia!)
Remote first: We prioritize your talent, not your location, with structured asynchronous workflows for maximum impact and minimal meetings.
Continuous growth: Collaborate with AI leaders, shape your path, and contribute where you excel most.