Full-Time

Senior Software Engineer

Infrastructure

Posted on 8/6/2025

Tavus

51-200 employees

AI-driven video personalization for customer engagement

Compensation Overview

$160k - $250k/yr

San Francisco, CA, USA

Remote

Category
Software Engineering
Required Skills
AWS
Requirements
  • You're a scrappy infrastructure generalist who has seen it all
  • You've worked with GPU cloud providers and understand what's needed to build reliable systems on top of them
  • You consider AWS your second home. You're comfortable spinning up new services and building simple repeatable processes for others to leverage
  • You thrive in an ambiguous and fast-changing space
  • You bring a senior mindset: you set direction, own decisions, and get things over the finish line
  • You have excellent communication skills and can explain complex technical ideas clearly to both technical and non-technical team members
Responsibilities
  • Work across teams to own and extend our GPU infra as well as our traditional cloud infra (AWS)
  • Work closely with our external infrastructure partners to ensure stability and reliability for GPU deployments and GPU availability
  • Empower other engineers to move fast by building amazing developer experiences for setting up new systems
Desired Qualifications
  • Experience with GCP
  • Experience with video streaming infrastructure
  • Experience working with LLMs
  • Broad knowledge of generative AI
  • Experience with EKS/K8S

Tavus offers a video personalization platform for digital marketing that uses AI to turn one recorded video into many customized videos tailored to individual customers. It clones voices and other elements to generate hundreds to millions of personalized outputs from a single recording, scalable for any business size. The platform emphasizes scalable, AI-driven personalization that keeps a personal touch without creating each video from scratch. Its goal is to help businesses deepen customer connections, boost loyalty, and increase sales through personalized video messages at scale.

Company Size

51-200

Company Stage

Series A

Total Funding

$24.2M

Headquarters

San Francisco, California

Founded

2020

Simplify Jobs

Simplify's Take

What believers are saying

  • Raised $40M Series B from CRV, Sequoia in 2025 to expand PALs enterprise adoption.
  • AI Santa surpassed millions of hits in 2024, proving high user engagement.
  • Over 100,000 developers use proprietary models for recruiting and sales.

What critics are saying

  • Synthesia captures 60% of Fortune 500 deals, eroding Tavus subscriptions within 12 months.
  • HeyGen's 5M monthly actives siphon Tavus developers, starving usage revenue in 9 months.
  • OpenAI Sora 2.0 commoditizes Phoenix-4, collapsing API pricing by 80% in 6 months.

What makes Tavus unique

  • Phoenix-4 renders real-time emotional AI avatars at 40fps with active listening.
  • Raven-1 fuses audio-visual signals for sub-100ms emotion and intent perception.
  • PALs enable agentic AI humans with memory, web search, and task execution.

Benefits

Health Insurance

Unlimited Paid Time Off

Flexible Work Hours

Growth & Insights and Company News

Headcount

6 month growth

-7%

1 year growth

-5%

2 year growth

-9%
Business Wire
Feb 19th, 2026
Tavus launches Phoenix-4, first real-time AI model with emotional intelligence and active listening

Tavus has launched Phoenix-4, a real-time behaviour generation engine that creates emotionally responsive AI avatars for live conversations. The San Francisco-based company claims it is the first real-time model to generate and control emotional states, active listening behaviour and continuous facial motion as a unified system. Phoenix-4 runs at 40 frames per second in 1080p and generates every pixel from head to shoulders, including eye blinks. The model offers explicit emotion control across 10+ states, including happiness, sadness and anger, and can be guided through prompts or respond contextually. It also features context-aware active listening with visual backchannels like nods and reactive expressions. The system is available today through Tavus' platform, APIs and updated Stock Replica library featuring over 40 new replicas.

The Associated Press
Feb 11th, 2026
Tavus launches Raven-1, multimodal AI perception system that understands emotion and intent in real time

Tavus, a San Francisco-based AI company, has launched Raven-1, a multimodal perception system that enables AI to understand emotion, intent and context by interpreting audio and visual signals together. The system captures tone, facial expressions, posture and gaze to produce natural language descriptions of emotional states. Unlike traditional systems that convert speech into transcripts, Raven-1 fuses audio-visual signals into unified representations that language models can process directly. The system operates with sub-100 millisecond audio perception latency and combined pipeline latency under 600 milliseconds. Raven-1 is now generally available across all Tavus conversations and APIs. The company previously launched Sparrow-1, a conversational timing model, and offers both developer APIs and PALs, a consumer platform for AI agents.

TechCrunch
Dec 10th, 2025
AI startup Tavus founder says users talk to its AI Santa 'for hours' per day

A new helper has arrived at the North Pole in recent years: AI. Tavus, the AI startup that creates digital replicas using voice and face cloning technology, has launched its AI Santa experience for the second year in a row, allowing parents and children to video chat with a virtual version of the jolly old Saint Nick. After signing up for a free account, users can interact with AI Santa via text, phone, or video chat: they can tell Santa what they want for Christmas, share their holiday plans, and find out if they're on the naughty or nice list.

This year, the company debuted an improved version of AI Santa, designed to be more expressive and emotionally aware. Santa is now a "Tavus PAL," the company's name for its real-time AI agents that are built to see, hear, respond, and appear human. AI Santa can now see users' expressions and gestures and respond to them, and it remembers users' conversations and interests, creating a more personalized experience. Notably, it can now take actions of its own, such as searching the web for present ideas or even performing everyday tasks like drafting emails.

During testing, the conversation with AI Santa was engaging for the most part. When we mentioned wanting a new PlayStation for Christmas, Santa followed up with questions about our favorite video games, showing knowledge of specific titles like Baldur's Gate 3. It also smiled back when we did. (We didn't like that part very much, but maybe others will.)

Users appear to be enjoying the improved experience so far. Founder and CEO Hassaan Raza said that many people are engaging with the platform frequently, spending hours chatting with AI Santa and often reaching their daily limits. "Last year's AI Santa drew millions of hits, and we're on pace to surpass that by a wide margin as Christmas approaches," he noted.

While this level of engagement marks a milestone for Tavus, it also raises questions about the impact of such interactions, especially for young children, who may struggle to distinguish between AI and a real person. Spending hours in conversation with an AI has already been linked to negative effects in adults, making the potential effects on children who strongly believe in Santa a concern for some parents. During our testing, there were subtle cues that the AI Santa does not yet appear fully human-like, such as long pauses and a flat voice. We also found that if a user were to question whether it's real, the programmed response was: "I'm an AI Santa powered by Tavus' magic and technology. I might not be the physical Santa, but I've got the spirit and the cheer."

Still, the experience launches amid growing concerns about AI's effects on young users. There have been reports linking chatbot interactions to serious harm, including cases where chatbots were implicated in the suicide deaths of teenagers. Character.AI removed access to its chatbots for users under 18 in October.

Raza emphasized that the AI Santa experience is designed for families to enjoy together, with safety measures such as content filters in place to keep discussions family-friendly. In certain situations, conversations can be terminated, and users are directed to mental health resources if necessary. "The vast majority of interactions have been family-friendly and true to the Santa experience," he said. When asked about data collection, Raza said the company "collects logs, session timestamps, metadata, and other information users choose to share during their chats. This data is used to provide and maintain a safe experience, and users can request data deletion at any point in time."

Lauren covers media, streaming, apps and platforms at TechCrunch.

StartupHub.ai
Nov 12th, 2025
Tavus Raises $40M to Advance AI Humans

Tavus secured $40 million in Series B funding to develop advanced AI Humans, called PALs (Personal Affective Links), which interact naturally through video, voice, and text. The investment will advance the company's "human computing" initiative and coincides with the launch of PALs. Unlike typical chatbots, PALs engage through face-to-face video: they actively see, hear, and respond, and they understand context, emotion, and social cues.

Tavus CEO Hassaan Raza emphasizes a crucial shift: machines now learn human communication, rather than humans adapting to machines. PALs maintain a visual presence during conversations, read facial expressions and body language in real time, and adapt to individual communication styles. These AI Humans also possess "agency," proactively managing tasks like calendars and emails rather than simply reacting.

Three proprietary models power the PALs platform: Phoenix-4 handles lifelike visual rendering, Sparrow-1 manages conversational intelligence, and Raven-1 processes contextual perception. Tavus, a San Francisco AI research lab whose team includes experts like Professor Ioannis Patras and Dr. Maja Pantic, aims to bridge the human-computer gap with foundational AI models. Currently, over 100,000 developers and enterprises use Tavus technology, and the new funding supports further research and enterprise expansion. Users can access PALs for free at tavus.io. CRV led the funding round, with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital.

FinSMEs
Nov 12th, 2025
Tavus Raises $40M in Series B Funding

Tavus, a San Francisco, CA-based human computing company, raised $40M in Series B funding. The round was led by CRV with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. The company intends to use the funds to continue expanding operations and its development efforts.

Led by CEO Hassaan Raza, Tavus is advancing PALs (Personal Affective Links): agentic AI humans with emotional intelligence and multimodal text, voice, and face-to-face interaction, powered by foundational models for rendering, conversational intelligence, and perception. Behind every PAL is a suite of foundational models, built entirely in-house by the company's research team to understand and simulate human behavior with depth, that teach machines to see, feel, and act the way people do:

  • Phoenix-4 | A SoTA rendering model that drives lifelike expression, headpose control, and emotion generation at conversational latency.
  • Sparrow-1 | An audio understanding model that combines deep conversational intelligence with audio- and semantics-based emotional understanding to manage timing, tone, and intent, adapting in real time to know not just what to say, but when.
  • Raven-1 | A contextual perception model that interprets context, people, environments, emotions, expressions, and gestures, giving PALs a sense of presence and enabling them to see and understand like humans do.

These models, paired with a SoTA orchestration and memory management system, bring face-to-face video, speech, text, and agentic capabilities to life.

INACTIVE