Full-Time

RMA / FA Engineer

Confirmed live in the last 24 hours

Groq


201-500 employees

AI inference hardware for cloud and on-premises

Hardware
AI & Machine Learning

Compensation Overview

$104.4k - $221.2k Annually

Mid

San Jose, CA, USA

Some roles may require being located near primary sites.

Category
Electronics Design Engineering
Embedded Systems Engineering
Electrical Engineering
Required Skills
Python
Perl
C/C++
Linux/Unix
Requirements
  • BS/MS in Electrical Engineering or a related degree
  • 3+ years of hands-on test engineering experience
  • Strong technical core competence and excellent problem-solving and analytical skills
  • Ability to work independently, set high-level goals, and prioritize tasks to drive them to completion
  • Good fundamental understanding of solid-state device physics, semiconductor processing and characterization techniques
  • Competence using lab equipment such as oscilloscopes, logic analyzers, power analyzers, etc.
  • Proficiency with high-speed interfaces (SerDes, PCIe, DDR); a brief Linux sysfs sketch follows this list
  • Experience testing power sub-sections (e.g., POLs, VRMs, etc.)
  • Familiarity with lower-speed interfaces such as SPI, I2C, CAN bus, etc.
  • Proficiency in Python, Perl, C++, or other languages on UNIX/Linux
  • Experience in Failure Analysis for one (or more) of the following: Microprocessors, complex SOC devices, AI Systems, Servers, Network Systems
  • Excellent knowledge of PCB-level and system-level test and debug
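
Purely to illustrate the kind of Python-on-Linux scripting this role tends to involve, here is a minimal sketch that reads PCIe link speed and width from sysfs and flags links that trained below their advertised maximum. The sysfs attributes are standard on modern Linux kernels; the down-training check and output format are assumptions for illustration, not requirements taken from this posting.

```python
#!/usr/bin/env python3
"""Illustrative sketch: survey PCIe link training state via Linux sysfs."""
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")  # standard sysfs location on Linux

def read_attr(dev: Path, name: str) -> str:
    """Return a sysfs attribute as a stripped string, or 'n/a' if unreadable."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"

def survey_pcie_links() -> None:
    """Print current vs. maximum link speed/width for every enumerated device."""
    for dev in sorted(PCI_ROOT.iterdir()):
        cur_speed = read_attr(dev, "current_link_speed")
        max_speed = read_attr(dev, "max_link_speed")
        cur_width = read_attr(dev, "current_link_width")
        max_width = read_attr(dev, "max_link_width")
        # A link that trained below its advertised speed or width is a common
        # first clue when debugging a field-returned board.
        downtrained = ("n/a" not in (cur_speed, max_speed)
                       and (cur_speed != max_speed or cur_width != max_width))
        flag = "  <-- downtrained?" if downtrained else ""
        print(f"{dev.name}: {cur_speed} x{cur_width} (max {max_speed} x{max_width}){flag}")

if __name__ == "__main__":
    survey_pcie_links()
```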
Responsibilities
  • Conduct test, debug, and root-cause analysis of field RMAs
  • Collaborate with Product Engineers, Hardware Engineers, and Test Engineers
  • Create Failure Analysis result reports
  • Drive resolution, containment, and mitigation plans for quality alerts
  • Oversee hardware quality performance, monitoring field quality data and associated metrics including RMA rates, MTBF, and reliability ratio (a toy calculation of these metrics follows this list)
  • Manage operational performance of Failure Analysis at contract manufacturer(s), ensuring partner(s) achieve key performance indicators, including FA cycle times, fault duplication rates, and fault isolation rates.
  • Drive learnings from RMA/FA back into Manufacturing, Engineering, and Support teams
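
To make the field-quality metrics above concrete, here is a minimal Python sketch (Python being one of the listed skills) that computes an RMA rate, a simple cumulative-hours MTBF, and the FA KPIs from hypothetical RMA records; every field name and number in it is an assumption for illustration, not data from this posting.

```python
"""Illustrative sketch: compute the field-quality metrics named above.

The record fields, fleet size, and operating hours below are hypothetical.
"""
from dataclasses import dataclass

@dataclass
class RmaRecord:
    serial: str
    fault_duplicated: bool   # did FA reproduce the reported failure?
    fault_isolated: bool     # was the failing component identified?
    fa_cycle_days: float     # days from RMA receipt to FA report issued

def field_metrics(records: list[RmaRecord], units_shipped: int, fleet_hours: float) -> dict:
    """RMA rate, MTBF, and FA KPIs for one batch of RMA records."""
    n = len(records)
    if n == 0:
        raise ValueError("no RMA records to summarize")
    return {
        "rma_rate_pct": 100.0 * n / units_shipped,
        # MTBF here is cumulative fleet operating hours per confirmed failure.
        "mtbf_hours": fleet_hours / n,
        "fault_duplication_rate_pct": 100.0 * sum(r.fault_duplicated for r in records) / n,
        "fault_isolation_rate_pct": 100.0 * sum(r.fault_isolated for r in records) / n,
        "avg_fa_cycle_days": sum(r.fa_cycle_days for r in records) / n,
    }

if __name__ == "__main__":
    demo = [
        RmaRecord("SN001", True, True, 12.0),
        RmaRecord("SN002", True, False, 20.5),
        RmaRecord("SN003", False, False, 8.0),
    ]
    for name, value in field_metrics(demo, units_shipped=1_000, fleet_hours=2_000_000).items():
        print(f"{name}: {value:,.1f}")
```

In practice these figures would come from an RMA tracking system and reliability models rather than a flat list, but the underlying arithmetic is the same.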

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high quality and performance standards. The company targets a variety of clients who need fast and efficient AI processing capabilities, generating revenue through direct sales of its advanced hardware and related systems. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.

Company Stage

Series D

Total Funding

$1.3B

Headquarters

Mountain View, California

Founded

2016

Growth & Insights
Headcount

6 month growth

35%

1 year growth

63%

2 year growth

35%

Simplify's Take

What believers are saying

  • Groq's recent $300 million Series D funding round, led by BlackRock, values the company at $2.5 billion, indicating strong investor confidence and financial stability.
  • The launch of public demos on platforms like Hugging Face Spaces allows users to interact with Groq's models, potentially increasing user engagement and adoption.
  • Groq's rapid query response times, significantly faster than competitors like Nvidia, position it as a leader in AI inference speed.

What critics are saying

  • The competitive landscape with established players like Nvidia poses a significant challenge to Groq's market penetration.
  • High expectations from investors following substantial funding rounds could pressure Groq to deliver rapid and consistent innovation.

What makes Groq unique

  • Groq's open-source Llama AI models outperform proprietary models from tech giants like OpenAI and Google in specialized tasks, showcasing their superior tool use capabilities.
  • Groq's processors, known as LPUs, are claimed to be 10x faster and 1/10 the price of current market options, providing a significant cost-performance advantage.
  • The company's participation in the National AI Research Resource (NAIRR) Pilot highlights its commitment to responsible AI innovation and real-time AI inference.
