Full-Time

Systems Quality and Reliability Lead

Groq

201-500 employees

AI inference technology for scalable solutions

Compensation Overview

$186.9k - $305.9k/yr

Senior, Expert

San Jose, CA, USA

Remote

Some roles may require being located near or on our primary sites, as indicated in the job description.

Category
Control Systems Engineering
Instrumentation and Measurement Engineering
Electrical Engineering
Required Skills
Python
Management
Perl
Oscilloscope
C/C++
Linux/Unix
Requirements
  • BS/MS in Electrical Engineering, Physics, or a related field
  • 7+ years of hands-on systems test and/or validation engineering experience
  • Proven hands-on management and leadership experience
  • Competence with lab equipment such as oscilloscopes, logic analyzers, and power analyzers
  • Deep understanding of the differences between system test and ATE test
  • Experience enabling reliability tests such as HTOL and quality tests such as burn-in
  • Working knowledge of failure analysis techniques and tools such as FIB, SEM, TDR, VNA, and CSAM
  • Working knowledge of fault isolation techniques such as OBIRCH, DLS/LADA, LVP, and LVI
  • Proficiency with high-speed interfaces (SerDes, PCIe, DDR)
  • Experience testing power subsections (e.g., POLs, VRMs)
  • Familiarity with lower-speed interfaces such as SPI, I2C, and CAN bus
  • Proficiency in Python, Perl, C++, or other languages on UNIX/Linux
  • Experience in Failure Analysis for one (or more) of the following: Microprocessors, complex SOC devices, AI Systems, Servers, Network Systems
  • Excellent knowledge of PCB/card-level and system-level test and debug
  • Able to manage factory-floor partners (CMs) for RMA/FA activities
Responsibilities
  • Conduct and lead debug and root-cause analysis of field RMAs. Collaborate with Systems Engineers, Hardware Engineers, Software Engineers and Operations Engineers as required.
  • Scale Root Cause Failure Analysis capabilities within your organization.
  • Create Failure Analysis result reports that align with standard 8D or similar processes
  • Develop and optimize RMA testing strategy to improve timeliness and effectiveness of characterization process
  • Analyze RMA, Failure Analysis, and Repair data. Identify trends and raise quality alerts when necessary. Drive resolution, containment, and mitigation plans for such quality alerts.
  • Oversee hardware quality performance, monitoring field quality data and associated metrics including RMA rates, MTBF, and reliability ratio (see the illustrative sketch after this list).
  • Manage operational performance of Failure Analysis at contract manufacturer(s), ensuring partner(s) achieve key performance indicators, including FA cycle times, fault duplication rates, and fault isolation rates.
  • Drive learnings from RMA/FA back into Manufacturing, Engineering, and Support teams.
  • Oversee the set-up of new products into Failure Analysis operations.
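
As an illustration of the field-quality metrics named above (RMA rate, MTBF, reliability ratio), here is a minimal Python sketch of how they might be computed from hypothetical fleet and return data. The data class, field names, and the reliability-ratio definition are assumptions made for the example, not Groq's actual tooling or definitions.

```python
from dataclasses import dataclass


@dataclass
class FieldSnapshot:
    """Hypothetical field-quality snapshot for one product over one period."""
    units_in_field: int      # installed base during the period
    unit_hours: float        # total accumulated operating hours
    rma_count: int           # units returned (RMA) in the period
    confirmed_failures: int  # RMAs with a verified hardware fault


def rma_rate(s: FieldSnapshot) -> float:
    """RMA rate: returns as a fraction of the installed base."""
    return s.rma_count / s.units_in_field


def mtbf_hours(s: FieldSnapshot) -> float:
    """Observed MTBF: total operating hours per confirmed failure."""
    return s.unit_hours / max(s.confirmed_failures, 1)


def reliability_ratio(s: FieldSnapshot, predicted_mtbf_hours: float) -> float:
    """Assumed definition: observed MTBF relative to predicted MTBF.
    A value >= 1.0 means the fleet is meeting its reliability target."""
    return mtbf_hours(s) / predicted_mtbf_hours


if __name__ == "__main__":
    period = FieldSnapshot(units_in_field=5000, unit_hours=3_600_000,
                           rma_count=12, confirmed_failures=8)
    print(f"RMA rate:          {rma_rate(period):.2%}")       # 0.24%
    print(f"Observed MTBF:     {mtbf_hours(period):,.0f} h")  # 450,000 h
    print(f"Reliability ratio: {reliability_ratio(period, 400_000):.2f}")  # 1.12
```
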
Desired Qualifications
  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all, we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - no limit thinking, fueling informed risk taking

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq ensures that all its products are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets a wide range of clients who need fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.

Company Size

201-500

Company Stage

Growth Equity (Non-Venture Capital)

Total Funding

$2.8B

Headquarters

Mountain View, California

Founded

2016

Simplify's Take

What believers are saying

  • Groq's integration with Hugging Face increases exposure to millions of developers worldwide.
  • Partnership with Samsung Foundry positions Groq as a leader in AI chip performance by 2025.
  • Exclusive partnership with Bell Canada enhances Groq's reach in North American AI infrastructure.

What critics are saying

  • Increased competition from AWS and Google may limit Groq's market share expansion.
  • Reliance on Saudi Arabia contract poses financial risk if disrupted or canceled.
  • Aggressive $6 billion valuation target may lead to financial instability if unmet.

What makes Groq unique

  • Groq's LPU™ offers exceptional compute speed and energy efficiency for AI inference.
  • Groq ensures high-quality standards by designing and assembling products in North America.
  • Groq's deterministic performance guarantees predictable outcomes in AI computations.

Benefits

Remote Work Options

Company Equity

Growth & Insights and Company News

Headcount

6-month growth: -3%
1-year growth: 0%
2-year growth: -6%

Benzinga
Jul 10th, 2025
Groq Eyes $6B Valuation Amid AI Demand

Groq, a rival to Nvidia, is reportedly seeking a $6 billion valuation amid rising AI chip demand. In August 2023, Groq raised $640 million at a $2.8 billion valuation from investors like Cisco, Samsung, and BlackRock. The company was chosen by Saudi-backed AI firm HUMAIN for inference operations. Groq recently opened its first European data center in Helsinki to support global expansion.

TradingView
Jul 9th, 2025
Groq seeks $6B valuation, $500M funding

U.S. semiconductor startup Groq is in talks to raise $300 million to $500 million at a $6 billion valuation, according to The Information. The funds are intended to support a deal with Saudi Arabia, which committed $1.5 billion in February for Groq's AI chips. Groq projects $500 million in revenue this year from Saudi contracts. Previously, Groq raised $640 million in a Series D round, reaching a $2.8 billion valuation.

World Business Outlook
Jul 7th, 2025
Groq Expands to Europe with New Data Center in Helsinki, Finland

Groq, one of the global pioneers in AI inference, announced on July 7th the continued expansion of its global data center network, establishing its first European data center footprint in Helsinki, Finland, to meet the growing demands of European customers.

PR Newswire
Jul 7th, 2025
Groq Launches European Data Center Footprint in Helsinki, Finland


VentureBeat
Jun 16th, 2025
Groq Just Made Hugging Face Way Faster — and It's Coming for AWS and Google

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more. Groq, the artificial intelligence inference startup, is making an aggressive play to challenge established cloud providers like Amazon Web Services and Google with two major announcements that could reshape how developers access high-performance AI models.The company announced Monday that it now supports Alibaba’s Qwen3 32B language model with its full 131,000-token context window — a technical capability it claims no other fast inference provider can match. Simultaneously, Groq became an official inference provider on Hugging Face’s platform, potentially exposing its technology to millions of developers worldwide.The move is Groq’s boldest attempt yet to carve out market share in the rapidly expanding AI inference market, where companies like AWS Bedrock, Google Vertex AI, and Microsoft Azure have dominated by offering convenient access to leading language models.“The Hugging Face integration extends the Groq ecosystem providing developers choice and further reduces barriers to entry in adopting Groq’s fast and efficient AI inference,” a Groq spokesperson told VentureBeat. “Groq is the only inference provider to enable the full 131K context window, allowing developers to build applications at scale.”How Groq’s 131k context window claims stack up against AI inference competitorsGroq’s assertion about context windows — the amount of text an AI model can process at once — strikes at a core limitation that has plagued practical AI applications. Most inference providers struggle to maintain speed and cost-effectiveness when handling large context windows, which are essential for tasks like analyzing entire documents or maintaining long conversations.Independent benchmarking firm Artificial Analysis measured Groq’s Qwen3 32B deployment running at approximately 535 tokens per second, a speed that would allow real-time processing of lengthy documents or complex reasoning tasks