Full-Time

Sr. NPI Test Engineer

Confirmed live in the last 24 hours

Groq

201-500 employees

Develops real-time AI inference hardware and software

Hardware
AI & Machine Learning

Compensation Overview

$126.4k - $232.6k Annually

+ Equity + Benefits

Mid, Senior

Mountain View, CA, USA

Hybrid position in Mountain View, CA.

Category
QA & Testing
Manual Testing
Quality Assurance
Required Skills
Python
Data Structures & Algorithms
Linux/Unix
Requirements
  • BS in Electrical Engineering or a related degree
  • 4+ years of proven experience
  • Experience in hardware testing using bench instruments and self-test
  • Proficiency in one or more programming languages: Python, C#, C++
  • Experience developing test, characterization, and debug tools for one of the following: microprocessor or SoC device test, network systems, computer servers
  • Familiarity with test automation for system bench testing and manufacturing test
  • Proven knowledge of failure analysis and troubleshooting techniques
  • Experience working with Linux environments
  • Ability to manage factory-floor partners for test, RMA, and CM processes
Responsibilities
  • Develop test software and diagnostic tools for system and subsystem manufacturing test
  • Evaluate and improve test coverage to prevent test-escape-related field failures
  • Develop new algorithms and test solutions for new features and new products
  • Provide on-site CM support as required to help identify and resolve issues and streamline the manufacturing test and debug process
  • Support pilot production phase through successful and sustained project launch. This includes supporting new product introduction schedules, resolving manufacturability issues, and driving improvements for yield, cost and efficiency.
  • Develop process flow and manufacturing process instructions (MPIs), including test procedures and test tools
  • Provide technical support for failure analysis of discrepant production components, assemblies and field returns

At Groq, real-time AI solutions built on the Language Processing Unit™ and a deterministic Tensor Streaming architecture enable the company to deliver ultra-low-latency AI inference at scale. This specialized approach not only improves performance but also simplifies the developer experience, accelerating production timelines and improving return on investment. Groq's commitment to a domestically based supply chain further supports sustainability and reliability in operations, making it a strong workplace for driving forward-thinking AI technology solutions.

Company Stage

Series C

Total Funding

$408.6M

Headquarters

Mountain View, California

Founded

2016

Growth & Insights

Headcount
  • 6 month growth: 25%
  • 1 year growth: 42%
  • 2 year growth: 12%

Simplify's Take

What believers are saying

  • Groq's recent $300 million Series D funding round, led by BlackRock, values the company at $2.5 billion, indicating strong investor confidence and financial stability.
  • The launch of public demos on platforms like Hugging Face Spaces allows users to interact with Groq's models, potentially increasing user engagement and adoption.
  • Groq's rapid query response times, significantly faster than competitors like Nvidia, position it as a leader in AI inference speed.

What critics are saying

  • The competitive landscape with established players like Nvidia poses a significant challenge to Groq's market penetration.
  • High expectations from investors following substantial funding rounds could pressure Groq to deliver rapid and consistent innovation.

What makes Groq unique

  • Groq's open-source Llama AI models outperform proprietary models from tech giants like OpenAI and Google in specialized tasks, showcasing superior tool-use capabilities.
  • Groq's processors, known as LPUs, are claimed to be 10x faster and 1/10 the price of current market options, providing a significant cost-performance advantage.
  • The company's participation in the National AI Research Resource (NAIRR) Pilot highlights its commitment to responsible AI innovation and real-time AI inference.