Full-Time

Machine Learning Engineer

Posted on 11/30/2023

Luma AI

51-200 employees

Develops multimodal AI technologies for creativity

No salary listed

Mid

Palo Alto, CA, USA

This role is hybrid, requiring some days in-office.

Category
Applied Machine Learning
Deep Learning
Natural Language Processing (NLP)
AI & Machine Learning
Required Skills
Kubernetes
Python
PyTorch
Docker
NumPy
Requirements
  • Strong programming skills in Python
  • Deep understanding of PyTorch, NumPy, and basic libraries for working with images and structured data
  • Experience with filtering and preparing training data
  • Experience with large model training
  • Experience working with large distributed systems like SLURM, Ray, or similar technologies
  • Experience with deploying models (building docker images, Kubernetes basics)
  • Knowledge of graphics fundamentals, 3D formats, tools (e.g. Blender/UE4), and/or 3D-related Python libraries
  • Ability to implement new models in codebases like diffusers, transformers
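To make the data-preparation requirements above concrete, here is a minimal, hypothetical sketch (pure NumPy, not Luma's actual pipeline) of "filtering and preparing training data": drop corrupt or out-of-range image samples, then normalize the survivors for a training loop.

```python
import numpy as np

def prepare_batch(images: np.ndarray, labels: np.ndarray,
                  pixel_max: float = 255.0):
    """Filter obviously bad samples, then normalize pixels to [0, 1].

    Hypothetical illustration only; real training-data pipelines
    involve far more checks (deduplication, quality scoring, etc.).
    """
    # Flatten each image so per-sample checks are a single reduction.
    flat = images.reshape(len(images), -1)
    # Keep samples with no NaNs and no pixel above the valid range.
    valid = (~np.isnan(flat).any(axis=1)) & (flat.max(axis=1) <= pixel_max)
    # Normalize surviving images and cast to float32 for training.
    return images[valid].astype(np.float32) / pixel_max, labels[valid]

imgs = np.stack([np.full((4, 4), 128.0),
                 np.full((4, 4), np.nan),   # corrupt sample, filtered out
                 np.full((4, 4), 255.0)])
labs = np.array([0, 1, 2])
clean_imgs, clean_labs = prepare_batch(imgs, labs)
print(clean_imgs.shape)  # (2, 4, 4)
```

The boolean-mask indexing pattern shown here is the usual idiom for sample-level filtering before handing arrays to a PyTorch `DataLoader`.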
Responsibilities
  • Designing, implementing, and improving large-scale distributed machine learning systems
  • Writing bug-free machine learning code
  • Developing underlying models
  • Building and adding features to CLI tools and dashboards for visualization, comparison, and metric tracking
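The last responsibility, CLI tooling for metric tracking, might look something like this stdlib-only sketch (hypothetical, not Luma's internal tooling): append each run's metrics to a JSON-lines log, then query for the best run.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical log location; a real CLI would make this configurable.
LOG = Path(tempfile.mkdtemp()) / "metrics.jsonl"

def log_metric(run: str, step: int, loss: float) -> None:
    """Append one metric record as a JSON line."""
    with LOG.open("a") as f:
        f.write(json.dumps({"run": run, "step": step, "loss": loss}) + "\n")

def best_run() -> dict:
    """Return the logged record with the lowest loss."""
    records = [json.loads(line) for line in LOG.read_text().splitlines()]
    return min(records, key=lambda r: r["loss"])

log_metric("baseline", 100, 0.42)
log_metric("ray2-ablation", 100, 0.31)
print(best_run()["run"])  # ray2-ablation
```

Append-only JSONL is a common choice for this kind of tool because concurrent training jobs can write records without coordinating, and dashboards can tail the file incrementally.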

Luma AI develops multimodal artificial intelligence technologies that enhance human creativity and capabilities. Their main product, the Dream Machine, allows users to interact with various types of data and inputs, making it easier for creative professionals, businesses, and developers to utilize AI in their projects. Unlike many competitors, Luma AI focuses on integrating multiple modes of interaction, which helps users explore new possibilities in their work. The company operates on a subscription model, providing access to its AI tools and services, and aims to lead the way in AI-driven creativity and productivity.

Company Size

51-200

Company Stage

Late Stage VC

Total Funding

$87.3M

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Partnership with HUMAIN boosts Luma's presence in gaming and interactive entertainment.
  • $90 million funding accelerates AI model development, strengthening competitive edge.
  • Photon and Photon Flash models expand Luma's AI image generation offerings.

What critics are saying

  • Competition from Google and OpenAI may overshadow Luma's video generation models.
  • High demand for Ray2 model causes delays, risking customer dissatisfaction.
  • Subscription-based revenue model vulnerable to economic downturns affecting spending.

What makes Luma AI unique

  • Luma AI transforms text into 3D models, enhancing user creativity and engagement.
  • The Dream Machine integrates multimodal AI, pushing boundaries in AI-driven creativity.
  • Luma AI's Ray2 model offers fast, natural motion, surpassing competitors in video generation.

Benefits

Company Equity

Stock Options

Growth & Insights and Company News

Headcount

6 month growth

8%

1 year growth

0%

2 year growth

-5%

The Korea Herald
May 15th, 2025
HUMAIN and Luma Join Forces to Power the Next Generation of Gaming and Interactive Entertainment

RIYADH, Saudi Arabia, May 15, 2025 /PRNewswire/ - HUMAIN, the new full-stack AI company owned by PIF and built to redefine what's possible, has announced a landmark partnership with Luma, a global leader in multimodal generative AI innovation, known for its breakthrough video models, real-time 3D neural rendering, and cinematic AI.

GetCoAI
Jan 16th, 2025
Luma Labs launches new AI video model with improved motion and physics

VentureBeat
Jan 16th, 2025
Luma AI Releases Ray2 Generative Video Model With ‘Fast, Natural’ Motion and Better Physics

Luma AI made waves with the launch of its Dream Machine generative AI video creation platform last summer. While that was only seven short months ago, the AI video space has advanced rapidly with the release of many new AI video creation models from rival startups in the U.S. and China, including Runway, Kling, Pika 2.0, OpenAI’s Sora, Google’s Veo 2, MiniMax’s Hailuo, and open-source alternatives such as Hotshot and Genmo’s Mochi 1, to name but a few. Even Luma itself recently updated its Dream Machine platform to include new still-image generation and brainstorming boards, and also debuted an iOS app. But the updates continue: today, the San Francisco-based startup released Ray2, its newest AI video generation model, available now through its Dream Machine website and mobile apps for paying subscribers (to start). The model offers “fast, natural coherent motion and physics,” according to co-founder and CEO Amit Jain on his X account, and was trained with 10 times more compute than the original Luma AI video model, Ray1. “This skyrockets the success rate of usable production-ready generations and makes video storytelling accessible to a lot more people,” he added. Luma’s Dream Machine web platform offers a free tier with 720p generations capped at a variable number each month; paid plans begin at $6.99 per month, from “Lite,” which offers 1080p visuals, to Plus ($20.99/month), Unlimited ($66.49/month), and Enterprise ($1,672.92/year).

A leap forward in video gen

Right now, Ray2 is limited to text-to-video, allowing users to type in descriptions that are transformed into 5- or 10-second video clips. The model can generate new videos in a matter of seconds, although right now it can take minutes at a time due to a crush of demand from new users. Examples shared by Luma and early testers in its Creators program showcase the model’s versatility, including a man running through an Antarctic snowstorm surrounded by explosions, and a ballerina performing on an ice floe in the Arctic. Impressively, all the motions in the example videos appear lifelike and fluid, and often with subjects moving much faster and more naturally than videos from rival AI generators, which often appear to generate in slow motion. The model can even create realistic versions of surreal ideas, such as a giraffe surfing, as X user @JeffSynthesized demonstrated.

The Bridge
Dec 16th, 2024
Google Announces New AI Video Generation Model "Veo 2," Claiming a Viewer Experience That Surpasses OpenAI's "Sora"

"Veo 2" Image credit: Google. Google is taking on OpenAI's "Sora" with "Veo 2," the latest version of its video generation model, which it claims produces more photorealistic footage. The company also updated its image generation model, Imagen 3, to create richer, more detailed photos.

VentureBeat
Dec 16th, 2024
Google Debuts New AI Video Generator Veo 2, Claiming Better Audience Scores Than Sora

Google is going head to head against OpenAI’s Sora with the newest version of its video generation model, Veo 2, which it says makes more realistic-looking videos. The company also updated its image generation model, Imagen 3, to produce richer, more detailed photos. Google said Veo 2 has “a better understanding of real-world physics and the nuances of human movement and expression.” It is available on Google Labs’ VideoFX platform, but only on a waitlisted basis: users will need to sign up through a Google Form and wait for access to be granted provisionally by Google at a time of its choosing. “Veo 2 also understands the language of cinematography: Ask it for a genre, specify a lens, suggest cinematic effects and Veo 2 will deliver — at resolutions up to 4K,” Google said in a blog post.

Video generated with Veo 2

While Veo 2 is available only to select users, the original Veo remains available on Vertex AI. Videos created with Veo 2 will contain Google’s metadata watermark, SynthID, to identify them as AI-generated. Google admits, though, that Veo 2 may still hallucinate extra fingers and the like, but it promises the new model produces fewer hallucinations. Veo 2 will compete against OpenAI’s recently released Sora video generation model to attract filmmakers and content creators. Sora had been in previews for a while before OpenAI made it available to paying subscribers. Impressively, Google says that on its own internal tests gauging “overall preference” (i.e

INACTIVE