Full-Time

Senior / Staff Platform Engineer

Posted on 10/31/2025

Hedra

51-200 employees

Foundation-model-powered platform for virtual worlds

Compensation Overview

$175k - $275k/yr

San Francisco, CA, USA

In Person

Category
DevOps & Infrastructure

Requirements
  • 4+ years of experience building developer platforms, internal tooling, or platform engineering at technology companies.
  • Platform API design experience, including building internal developer tooling, CLIs, and self-service portals for infrastructure resources.
  • Kubernetes platform expertise including building abstractions, operators, custom resources, and developer-friendly deployment workflows.
  • Infrastructure as Code mastery with Terraform for creating reusable modules and standardized infrastructure patterns.
  • Developer experience focus, with a track record of building internal platforms that reduce cognitive load for engineering teams.
  • Container platform design including image registries, security scanning, and standardized base images for AI/ML workloads.
  • Service templating and standardization using tools like Helm, Kustomize, or custom controllers for consistent deployments.
  • Platform observability implementing centralized logging, metrics, and tracing that developers can easily consume.
  • Self-service automation building workflows that allow teams to provision resources without platform team intervention (a sketch follows this list).
  • Cost transparency and governance implementing resource quotas, cost allocation, and usage visibility across teams.
  • Security by default designing platform services with built-in security controls, secrets management, and compliance guardrails.
  • Multi-tenancy and isolation ensuring teams can work independently while sharing platform resources safely.
  • Developer advocacy mindset with experience gathering requirements from engineering teams and translating them into platform capabilities.
  • Scalable platform architecture that supports rapid team growth and varying workload demands.
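
To make the self-service, quota, and multi-tenancy bullets above concrete, here is a minimal Go sketch using the standard Kubernetes client-go library. It provisions an isolated namespace for a team and attaches a default ResourceQuota, so isolation and cost attribution are defaults rather than opt-ins. The helper name, labels, and quota values are hypothetical illustrations, not Hedra's actual implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// provisionTeamNamespace is a hypothetical helper: it creates an isolated
// namespace for a team, labels it for cost allocation, and attaches a
// default ResourceQuota so limits apply without platform-team intervention.
func provisionTeamNamespace(ctx context.Context, cs kubernetes.Interface, team string) error {
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "team-" + team,
			Labels: map[string]string{"cost-center": team}, // label drives cost reporting
		},
	}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		return err
	}
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "default-quota", Namespace: ns.Name},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse("16"),   // assumed per-team defaults
				corev1.ResourceRequestsMemory: resource.MustParse("64Gi"), // for illustration only
			},
		},
	}
	_, err := cs.CoreV1().ResourceQuotas(ns.Name).Create(ctx, quota, metav1.CreateOptions{})
	return err
}

func main() {
	// Assumes a local kubeconfig; an in-cluster platform service would use
	// rest.InClusterConfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := provisionTeamNamespace(context.Background(), cs, "video-ml"); err != nil {
		panic(err)
	}
	fmt.Println("namespace provisioned")
}

A production version would sit behind a portal or CLI and add admission policies, NetworkPolicies, and RBAC bindings, but the shape of the self-service workflow is the same.
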
Responsibilities
  • Design and build the internal developer platform that empowers engineering teams to deploy, scale, and manage AI-powered products efficiently.
  • Create self-service infrastructure tooling, abstract complexity away from developers, and build foundational platform capabilities enabling rapid innovation on the Character-3 foundation model.
  • Architect developer-friendly APIs and interfaces for infrastructure resources.
  • Build automated provisioning workflows using Terraform (see the sketch after this list).
  • Create standardized deployment patterns on Kubernetes.
  • Enable engineering teams to ship faster while maintaining reliability, security, and cost efficiency.
  • Design platform services that abstract the underlying complexity of multi-modal AI workloads and video processing infrastructure.
  • Partner with product engineers and researchers to understand their needs and translate them into scalable platform solutions.
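
As one illustration of what a developer-friendly provisioning API wrapped around Terraform could look like, here is a minimal Go sketch. The endpoint path, request fields, and "stacks/<env>" directory layout are assumptions for this example; a real platform would authenticate callers, queue runs, validate inputs, and manage Terraform state remotely.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os/exec"
)

// ProvisionRequest is a hypothetical self-service payload: developers name a
// service and environment, and the platform turns it into a Terraform run.
type ProvisionRequest struct {
	Service string `json:"service"`
	Env     string `json:"env"`
}

func provisionHandler(w http.ResponseWriter, r *http.Request) {
	var req ProvisionRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}
	// Each environment maps onto a reusable Terraform root module; the
	// "stacks/<env>" layout and service_name variable are assumed here.
	cmd := exec.CommandContext(r.Context(), "terraform",
		"-chdir=stacks/"+req.Env,
		"apply", "-auto-approve",
		"-var", "service_name="+req.Service,
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		http.Error(w, fmt.Sprintf("terraform apply failed: %v\n%s", err, out), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "provisioned %s in %s\n", req.Service, req.Env)
}

func main() {
	http.HandleFunc("/v1/provision", provisionHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Hiding Terraform behind a small, typed API like this is one common way to give product teams one-call provisioning while the platform team keeps the underlying modules standardized.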

Hedra develops foundation models that allow users to create expressive digital characters and virtual worlds for video storytelling. The platform works by converting text or audio inputs into animated video, where characters speak and move with precise synchronization. Unlike competitors that generate unpredictable video clips, Hedra focuses on providing creators with granular control over character performance and narrative consistency. The company's goal is to provide a complete creative lab that enables filmmakers and game developers to build immersive, human-centered stories.

Company Size

51-200

Company Stage

Series A

Total Funding

$42M

Headquarters

San Francisco, California

Founded

2023

Simplify's Take

What believers are saying

  • $32M Series A from a16z fuels Character-3 training and enterprise expansion.
  • Voice cloning and multilingual avatars boost global content creator adoption.
  • 3 million users and 10 million videos signal strong early market traction.

What critics are saying

  • Synthesia is eroding market share with superior enterprise customization.
  • HeyGen's multi-character models expose Omnia's single-subject limits.
  • Nvidia's Magic 1-For-1 open-sources technology similar to Hedra's, risking commoditization of its IP within months.

What makes Hedra unique

  • Omnia model jointly reasons over audio, motion, and camera for lifelike videos.
  • Character-3 enables full-body animation with lip-sync and micro-expressions.
  • Single credit pool powers video, image, voice models without separate subscriptions.

Benefits

Health Insurance

401(k) Retirement Plan

Competitive compensation and equity

Growth & Insights

Headcount

6 month growth

1%

1 year growth

6%

2 year growth

-11%

Company News

Evolution AI Hub
Feb 6th, 2026
Hedra Omnia release shows why AI video is moving beyond talking heads

The race to make AI video feel genuinely human just took a meaningful turn. Hedra has released Omnia, a new video model designed to solve a problem creators have been quietly complaining about for years: AI video that looks sharp but feels dead. Omnia's debut matters because it challenges a long-standing tradeoff in generative video: choose expressive, voice-driven avatars or cinematic visuals, but rarely both at the same time.

A long-standing split in AI video finally gets addressed

Until now, the AI video market has been divided into two camps. On one side are "talking head" systems. They do voice well, but everything else is frozen: static cameras, stiff bodies, environments that feel like wallpaper. On the other are general video generators that create dynamic scenes but treat audio as an accessory rather than a driver of performance. The result is visually impressive clips that fall apart the moment someone speaks for more than a few seconds.

Omnia was built to close that gap. Instead of stitching together separate systems for visuals, motion, and sound, Hedra engineered a single model that reasons over all three at once. The idea is simple but ambitious: if speech, movement, and camera behavior influence each other in real life, an AI model should treat them the same way.

The technical shift behind Omnia isn't about higher resolution or flashier effects. It's about coordination. In most AI video systems, audio comes last, used mainly to sync lips. Omnia flips that priority. Speech rhythm influences body motion. Emotional tone shapes facial expressions. Timing affects how the camera moves through a scene. The model builds an understanding of the entire performance before generating the first frame.

That approach shows up in details professionals notice immediately: natural blinking, subtle head movement between words, hands that stay stable, and logos that don't warp or dissolve halfway through a shot. These aren't cosmetic upgrades. They're the cues viewers subconsciously use to decide whether a video feels authentic or artificial.

One notable choice Hedra made was to avoid chasing hyper-sharp realism. In practice, overly crisp faces with robotic motion tend to feel unsettling. Omnia prioritizes believable presence instead: continuous motion, micro-expressions, and camera behavior that responds to the subject rather than drifting aimlessly.

Camera control becomes part of the performance

One of Omnia's more consequential features is its approach to camera direction. Instead of treating the camera as an invisible observer, the model treats it as part of the scene. Creators can specify push-ins, pull-outs, tracking shots, or orbiting movement and expect those directions to be followed consistently. More importantly, the camera stays coherent relative to the subject. If the speaker leans forward or shifts tone, the framing adjusts in ways that feel intentional rather than random.

For anyone who has tried to create AI video with even modest cinematic ambition, this is a big deal. Camera motion has traditionally been one of the fastest ways to expose a clip as AI-generated. Omnia's ability to maintain spatial logic suggests a move toward AI video that can be directed, not just prompted.

Where this model is likely to shine first

Omnia is optimized for short, character-driven clips of roughly eight seconds at full HD. That constraint is deliberate and revealing. The strongest early use cases are likely to be social and brand formats where authenticity matters more than spectacle. Influencer-style videos, interview snippets, podcast clips, and conversational ads all benefit from consistent voice, natural motion, and stable visual details. In those formats, even small visual glitches can break trust.

Brand teams, in particular, may pay attention to Omnia's handling of logos and product elements. Generative AI has struggled with brand integrity, often rendering text or marks unusable. Reliable control over those details lowers one of the biggest barriers to AI video adoption in advertising and marketing.

There's also a quieter implication for music and performance content. Because audio timing influences motion throughout the clip, rhythm-driven material such as singing, spoken word, or musical dialogue comes across as more intentional than the usual lip-synced output.

Why this news matters beyond creators

For consumers, the shift is subtle but important. As AI video becomes more believable, audiences will encounter synthetic performers in contexts that previously required human production: local ads, explainer content, and social media storytelling. The line between filmed and generated video will blur further, raising new questions about disclosure and trust.

For businesses, Omnia signals that AI video is moving from novelty toward workflow tool. When camera control, voice consistency, and brand reliability improve, AI video stops being experimental and starts competing with traditional production for certain use cases.

And for the industry at large, the model reflects a broader trend: progress in generative AI is increasingly about coherence, not raw visual power. The models that win won't just look better frame by frame; they'll feel more intentional over time.

Looking ahead: what the next year could bring

Expect more pressure on AI video platforms to integrate audio, motion, and camera logic rather than treating them as separate problems. Omnia sets a benchmark that competitors will have to respond to.

There are still clear limits. Short clip lengths constrain narrative complexity, and single-subject scenes remain the safest ground. But those constraints also suggest a roadmap. As models like Omnia mature, multi-character interaction and longer scenes become more feasible.

The bigger risk is complacency. As AI video becomes more convincing, misuse and over-automation become easier. Platforms will need to balance creative power with safeguards that maintain transparency and accountability.

For now, Omnia represents a meaningful shift in priorities. Instead of asking how real AI video can look, Hedra is betting that the more important question is how real it can feel.

Mirtech News
May 15th, 2025
Hedra, the app for creating baby talk podcasts, secures $32M from a16z

Hedra introduced its first video model in June 2024, quickly attracting investor interest.

Startup Ecosystem Canada
May 15th, 2025
Hedra Raises $32M to Enhance AI-Generated Talking Baby Podcasts

Hedra, a startup known for its AI-generated video and editing suite, has raised $32 million in a Series A funding round led by Andreessen Horowitz's Infrastructure fund.

TechCrunch
May 15th, 2025
Hedra raises $32M for AI baby podcasts

Hedra, an app for creating AI-generated talking baby podcasts, has raised $32M in a Series A round led by Andreessen Horowitz. The startup, founded by Michael Lingelbach, offers a video generation suite powered by its Character-3 model. The funding will be used to train a new model for better customization and user interaction. Hedra aims to attract creators and enterprises, competing with companies like Synthesia and HeyGen.

INACTIVE