Full-Time

Research Engineer - Computer Vision

Luma AI

51-200 employees

Develops multimodal AI technologies for creativity

Compensation Overview

$180k - $250k/yr

+ Equity Packages

Senior

Palo Alto, CA, USA

Category
Computer Vision
AI & Machine Learning
Required Skills
Python
Data Science
PyTorch
Machine Learning
Requirements
  • Exceptional general Python engineering skills
  • Industry ML experience
  • Data experience
  • 5+ years of relevant experience, or demonstrated high-impact projects, as a Data Engineer, Machine Learning Engineer, or Data Scientist
  • Strong belief in the criticality of high-quality data
  • Experience with end-to-end ML training pipelines
  • Experience working on large distributed systems
  • Strong generalist Python and PyTorch skills
Responsibilities
  • Design data pipelines, including finding appropriate data sources, scraping, filtering, post-processing, de-duplicating, and versioning
  • Design and implement frameworks to evaluate the effectiveness of our models and data
  • Work closely with research and product teams, whether data contributors or consumers, to incorporate their data usage needs across a variety of tasks
  • Conduct open-ended research to improve the quality of collected data, including, but not limited to, semi-supervised learning, human-in-the-loop machine learning, and fine-tuning with human feedback
Desired Qualifications
  • Experience with visual media and computer vision algorithms

Luma AI develops multimodal artificial intelligence technologies that enhance human creativity and capabilities. Their main product, the Dream Machine, allows users to interact with various types of data and media, making it easier for creative professionals, businesses, and developers to explore innovative applications. Unlike many competitors, Luma AI focuses on integrating multiple modes of interaction, which provides a unique experience for users. The company operates on a subscription model, offering access to its AI tools and services, and aims to lead the way in AI-driven creativity and productivity.

Company Size

51-200

Company Stage

Late Stage VC

Total Funding

$87.3M

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Partnership with HUMAIN boosts Luma's presence in gaming and interactive entertainment.
  • $90 million funding accelerates AI model development, strengthening competitive edge.
  • Photon and Photon Flash models expand Luma's AI image generation offerings.

What critics are saying

  • Competition from Google and OpenAI may overshadow Luma's video generation models.
  • High demand for Ray2 model causes delays, risking customer dissatisfaction.
  • Subscription-based revenue model vulnerable to economic downturns affecting spending.

What makes Luma AI unique

  • Luma AI transforms text into 3D models, enhancing user creativity and engagement.
  • The Dream Machine integrates multimodal AI, pushing boundaries in AI-driven creativity.
  • Luma AI's Ray2 model offers fast, natural motion, surpassing competitors in video generation.

Benefits

Company Equity

Stock Options

Growth & Insights and Company News

Headcount

6 month growth

7%

1 year growth

-1%

2 year growth

7%
The Korea Herald
May 15th, 2025
HUMAIN and Luma Join Forces to Power the Next Generation of Gaming and Interactive Entertainment

RIYADH, Saudi Arabia, May 15, 2025 /PRNewswire/ - HUMAIN, the new full-stack AI company owned by PIF and built to redefine what's possible, has announced a landmark partnership with Luma, a global leader in multimodal generative AI innovation known for its breakthrough video models, real-time 3D neural rendering, and cinematic AI.

GetCoAI
Jan 16th, 2025
Luma Labs launches new AI video model with improved motion and physics

VentureBeat
Jan 16th, 2025
Luma AI Releases Ray2 Generative Video Model with 'Fast, Natural' Motion and Better Physics

Luma AI made waves with the launch of its Dream Machine generative AI video creation platform last summer. While that was only seven short months ago, the AI video space has advanced rapidly with the release of many new AI video creation models from rival startups in the U.S. and China, including Runway, Kling, Pika 2.0, OpenAI's Sora, Google's Veo 2, MiniMax's Hailuo, and open-source alternatives such as Hotshot and Genmo's Mochi 1, to name but a few. Even Luma itself recently updated its Dream Machine platform to include new still-image generation and brainstorming boards, and also debuted an iOS app.

But the updates continue: today, the San Francisco-based startup released Ray2, its newest AI video generation model, available now through its Dream Machine website and mobile apps for paying subscribers (to start). The model offers "fast, natural coherent motion and physics," according to co-founder and CEO Amit Jain on his X account, and was trained with 10 times more compute than the original Luma AI video model, Ray1. "This skyrockets the success rate of usable production-ready generations and makes video storytelling accessible to a lot more people," he added.

Luma's Dream Machine web platform offers a free tier with 720p generations capped at a variable number each month. Paid plans begin at $6.99 per month: from "Lite," which offers 1080p visuals, to Plus ($20.99/month), Unlimited ($66.49/month), and Enterprise ($1,672.92/year).

A leap forward in video generation: right now, Ray2 is limited to text-to-video, allowing users to type in descriptions that are transformed into 5- or 10-second video clips. The model can generate new videos in a matter of seconds, although currently it can take minutes at a time due to a crush of demand from new users.

Examples shared by Luma and early testers in its Creators program showcase the model's versatility, including a man running through an Antarctic snowstorm surrounded by explosions, and a ballerina performing on an ice floe in the Arctic. Impressively, all the motions in the example videos appear lifelike and fluid, with subjects often moving much faster and more naturally than videos from rival AI generators, which often appear to generate in slow motion. The model can even create realistic versions of surreal ideas, such as a giraffe surfing, as X user @JeffSynthesized demonstrated

The Bridge
Dec 16th, 2024
Google Announces New AI Video Generation Model "Veo 2," Claiming It Surpasses OpenAI's "Sora" in Viewer Experience

"Veo 2" (image credit: Google). Google is taking on OpenAI's "Sora" with "Veo 2," the latest version of its video generation model, which it claims produces more photorealistic footage. The company has also updated its image generation model, Imagen 3, to create richer, more detailed photos.

VentureBeat
Dec 16th, 2024
Google Debuts New AI Video Generator Veo 2, Claiming Better Audience Scores Than Sora

Google is going head to head against OpenAI's Sora with the newest version of its video generation model, Veo 2, which it says makes more realistic-looking videos. The company also updated its image generation model Imagen 3 to produce richer, more detailed photos.

Google said Veo 2 has "a better understanding of real-world physics and the nuances of human movement and expression." It is available on Google Labs' VideoFX platform, but only on a waitlisted basis: users will need to sign up through a Google Form and wait for access to be granted provisionally by Google at a time of its choosing. "Veo 2 also understands the language of cinematography: Ask it for a genre, specify a lens, suggest cinematic effects and Veo 2 will deliver — at resolutions up to 4K," Google said in a blog post.

While Veo 2 is available only to select users, the original Veo remains available on Vertex AI. Videos created with Veo 2 will contain Google's metadata watermark, SynthID, to identify them as AI-generated. Google admits, though, that Veo 2 may still hallucinate extra fingers and the like, but it promises the new model produces fewer hallucinations.

Veo 2 will compete against OpenAI's recently released Sora video generation model to attract filmmakers and content creators. Sora had been in previews for a while before OpenAI made it available to paying subscribers. Impressively, Google says that on its own internal tests gauging "overall preference" (i.e