Full-Time

Validation Engineer

Full Robot

Confirmed live in the last 24 hours

Figure

51-200 employees

Develops versatile humanoid robots for industries

Robotics & Automation
Automotive & Transportation
AI & Machine Learning

Compensation Overview

$170k - $230k Annually

Mid, Senior

Sunnyvale, CA, USA

5 days/week in-office collaboration required.

Category
QA & Testing
Automation Testing
Quality Assurance
Required Skills
Git
Oscilloscope
Linux/Unix

Requirements
  • Bachelor’s or Master’s degree in Mechanical, Electrical, Controls, or Electromechanical Engineering.
  • 3-5+ years of hands-on experience in full-system debugging, troubleshooting, and validation of electromechanical systems.
  • Strong understanding of robotic systems and their interactions across hardware and software components.
  • Expertise in using diagnostic tools such as oscilloscopes, multimeters, and data loggers.
  • Proficiency in system-level fault isolation and root cause analysis techniques.
  • Experience with Linux-based systems and version control tools (e.g., Git).
  • Familiarity with motion control, actuators, and sensor integration in robotic systems.
Responsibilities
  • Lead the commissioning, integration, and validation of the full robotic system, ensuring functionality and reliability.
  • Perform hands-on debugging of complex electromechanical systems across mechanical, controls, and software domains.
  • Develop and execute validation plans to assess system performance under various operational conditions (a minimal scripted example is sketched after this list).
  • Identify root causes of system failures and implement corrective actions to enhance system reliability.
  • Collaborate closely with cross-functional teams (Mechanical, Electrical, Software, Controls) to diagnose integration challenges and recommend design improvements.
  • Maintain comprehensive documentation of validation processes, test procedures, failure reports, and resolutions.
  • Provide technical training and mentorship to team members on debugging and troubleshooting methodologies.
  • Contribute to the continuous improvement of testing strategies and automation processes.
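
As a rough illustration of the scripted validation checks referenced above, here is a minimal Python sketch: it replays joint telemetry from a test run (stubbed with synthetic data) and flags samples where commanded-vs-measured tracking error exceeds a threshold. Every name, data format, and threshold is an assumption chosen for illustration, not Figure's actual tooling.

```python
# Illustrative sketch only: all names, formats, and thresholds are hypothetical,
# not Figure's actual test tooling.
import csv
import io

# Hypothetical pass/fail limit for commanded-vs-measured joint position (radians).
TRACKING_ERROR_LIMIT_RAD = 0.02

# Stand-in for a telemetry log exported from a test run.
FAKE_LOG = """t_s,commanded_rad,measured_rad
0.000,0.000,0.001
0.005,0.100,0.097
0.010,0.200,0.230
0.015,0.300,0.299
"""

def find_tracking_violations(log_text, limit=TRACKING_ERROR_LIMIT_RAD):
    """Return (timestamp, error) pairs where |commanded - measured| exceeds the limit."""
    violations = []
    for row in csv.DictReader(io.StringIO(log_text)):
        error = abs(float(row["commanded_rad"]) - float(row["measured_rad"]))
        if error > limit:
            violations.append((float(row["t_s"]), error))
    return violations

if __name__ == "__main__":
    bad_samples = find_tracking_violations(FAKE_LOG)
    if bad_samples:
        for t, err in bad_samples:
            print(f"FAIL at t={t:.3f}s: tracking error {err:.3f} rad exceeds limit")
    else:
        print("PASS: all samples within the tracking-error limit")
```

In practice a check like this would be one test case inside a larger validation plan, with its pass/fail result and any flagged samples archived alongside the corresponding failure report.
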
Desired Qualifications
  • Experience designing and executing test plans for robotic systems in production or R&D environments.
  • Proficiency in C++ and/or Python for debugging and scripting automation.
  • Experience with robotic kinematics, dynamics, and control algorithms.
  • Exposure to AI-driven robotic systems and real-time data processing.
  • Familiarity with CAD software such as CATIA V6 for design review and analysis.

Figure.ai develops humanoid robots designed for various tasks across multiple industries. Their main product, Figure 01, is a 5'6" tall, 60kg robot that can carry up to 20kg, run for 5 hours, and move at a speed of 1.2 meters per second. It is electric-powered and built to work in human-designed environments like manufacturing plants and warehouses. Unlike many competitors that focus on single-function robots, Figure.ai offers a versatile solution that can adapt to different tasks, making it a cost-effective option for industrial clients. The company's goal is to enhance operational efficiency and reduce labor costs through automation, as demonstrated by their partnership with BMW Manufacturing to integrate their robots into production lines.

Company Size

51-200

Company Stage

Late Stage VC

Total Funding

$830.7M

Headquarters

Sunnyvale, California

Founded

2022

Simplify's Take

What believers are saying

  • Recent $1.5 billion funding discussions indicate strong investor confidence in Figure.
  • Partnership with BMW showcases practical applications in automotive production lines.
  • Growing demand for automation solutions boosts Figure's market potential.

What critics are saying

  • Severing ties with OpenAI may challenge Figure's AI capabilities.
  • Safety concerns in factories could lead to regulatory scrutiny.
  • Increased competition from Nvidia's upcoming Jetson Thor compact computers.

What makes Figure unique

  • Figure's Helix AI architecture allows robots to interpret commands without pre-training.
  • Vision-Language-Action model enables precise control of humanoid robots' upper body.
  • Figure 01 combines human agility with advanced AI for versatile industrial applications.

Growth & Insights and Company News

Headcount
  • 6 month growth: -4%
  • 1 year growth: 45%
  • 2 year growth: -5%

Decrypt
Feb 20th, 2025
Figure AI Is Supercharging Humanoid Robots—Here's How It Works

Figure AI finally revealed on Thursday the "major breakthrough" that led the buzzy robotics startup to break ties with one of its investors, OpenAI: a novel dual-system AI architecture that allows robots to interpret natural language commands and manipulate objects they've never seen before, without needing specific pre-training or programming for each one.

Unlike conventional robots that require extensive programming or demonstrations for each new task, Helix combines a high-level reasoning system with real-time motor control. Its two systems effectively bridge the gap between semantic understanding (knowing what objects are) and action or motor control (knowing how to manipulate those objects). This will make it possible for robots to become more capable over time without having to update their systems or train on new data.

To demonstrate how it works, the company released a video showing two Figure robots working together to put away groceries, with one robot handing items to another that places them in drawers and refrigerators. Figure claimed that neither robot knew about the items they were dealing with, yet they were capable of identifying which ones should go in a refrigerator and which ones should be stored dry. "Helix can generalize to any household item," Figure CEO Brett Adcock tweeted. "Like a human, Helix understands speech, reasons through problems, and can grasp any object—all without needing training or code."

How the magic works: to achieve this generalization capability, the Sunnyvale, California-based startup also developed what it calls a Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control. This model, Figure claims, marks several firsts in robotics: it outputs continuous control of an entire humanoid upper body at 200Hz, including individual finger movements, wrist positions, torso orientation, and head direction.
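
To make the "two systems at two rates" idea concrete, here is a deliberately simplified Python sketch of the dual-rate pattern the article describes: a slow, high-level loop that re-interprets the command and picks a target, and a fast loop that pushes low-level setpoints at roughly 200 Hz. It illustrates only the general pattern; every function, rate, and number is an assumption, not Figure's Helix implementation.

```python
# Simplified illustration of a dual-rate "reasoning + control" loop.
# This is NOT Figure's Helix code: all names, rates, and logic are assumptions
# made only to show the general two-system pattern described above.
import time

CONTROL_HZ = 200    # fast loop: low-level setpoints every 5 ms
REASONING_HZ = 5    # slow loop: re-interpret the task and pick a target

def interpret_command(command):
    """Stand-in for the slow, high-level system: map a language command to a target pose."""
    targets = {"open the drawer": 0.4, "hand over the item": 1.2}
    return targets.get(command, 0.0)

def control_step(current, target, dt):
    """Stand-in for the fast, low-level system: a simple proportional step toward the target."""
    gain = 4.0
    return current + gain * (target - current) * dt

def run(command, duration_s=1.0):
    position = 0.0
    target = interpret_command(command)
    dt = 1.0 / CONTROL_HZ
    steps_per_replan = CONTROL_HZ // REASONING_HZ
    for step in range(int(duration_s * CONTROL_HZ)):
        if step % steps_per_replan == 0:
            target = interpret_command(command)        # slow system updates the goal
        position = control_step(position, target, dt)  # fast system tracks it
        time.sleep(dt)  # keep the fast loop near 200 Hz (roughly; no drift correction)
    return position

if __name__ == "__main__":
    print(f"final position: {run('open the drawer'):.3f}")
```

The point of the split is that the expensive semantic step runs only a few times per second while the cheap motor step runs every few milliseconds, which is the gap the article says Helix's two systems bridge.
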

FrenchWeb
Feb 17th, 2025
Meta Ready to Take Its Case to Trump Over EU Sanctions / Amazon Accused by Italy of Tax Evasion / Brevo Posts Strong Growth

At the Munich Security Conference held this past weekend in Germany, Joel Kaplan, Meta's global head of public affairs, said the company would not hesitate to ask President Donald Trump to intervene if the European Union applied its digital regulations in a discriminatory way against Meta's products. He argued that the penalties imposed on American technology companies could be seen as a form of unjustified taxation, echoing an argument Trump made at the World Economic Forum in Davos. Meta, which has already been fined more than 2 billion euros for breaches of European competition and data-protection rules, is also under investigation under the Digital Services Act. Kaplan warned against a regulatory approach that measures its effectiveness by the penalties it imposes, arguing that it would put the European economy at a "considerable disadvantage."

Bloomberg
Feb 14th, 2025
Robotics Startup Figure AI in Talks for New Funding at $39.5 Billion Valuation

Backed by OpenAI and Microsoft, Figure is discussing a $1.5 billion funding round.

Grey Journal
Feb 13th, 2025
What innovations are behind the 350 million dollar investment in humanoid robots

A year earlier, Figure, a Sunnyvale-based company creating AI-powered robots for hazardous tasks and labor shortage mitigation, raised an impressive $675 million in funding, with contributions from Nvidia and other major investors.

Decrypt
Feb 9th, 2025
Figure AI Dumps OpenAI Deal After 'Major Breakthrough' in Robot Intelligence

Figure AI, a U.S.-based startup focused on building AI-powered humanoid robots, severed its ties with OpenAI last week, with CEO Brett Adcock claiming a "major breakthrough" in robot intelligence that made the partnership unnecessary.

The split came just months after the two companies announced their collaboration alongside a $675 million funding round that valued Figure at $2.6 billion to kick-start its Figure 02 robot. "Today, I made the decision to leave our Collaboration Agreement with OpenAI," Adcock tweeted. "Figure made a major breakthrough on fully end-to-end robot AI, built entirely in-house." The move marked a stark reversal for Figure, which had previously planned to use OpenAI's models for its Figure 02 humanoid's natural language capabilities.

In a separate post, Adcock explained that, over time, maintaining a partnership with OpenAI to use its LLMs started to make less sense for his company. "LLMs are getting smarter yet more commoditized. For us, LLMs have quickly become the smallest piece of the puzzle," Adcock wrote.