Full-Time

Post Silicon Validation Lead

Updated on 10/15/2024

Groq

201-500 employees

Develops real-time AI inference hardware and software

Hardware
AI & Machine Learning

Compensation Overview

$181.7k - $365.4k Annually

Mid, Senior

Remote in USA

Category
Embedded Systems Engineering
Instrumentation and Measurement Engineering
Electrical Engineering
Requirements
  • Silicon validation experience, preferably in SERDES, DDR, or high-speed interface design; BE or ME graduate
  • Experience in system marginality validation
  • Good understanding of lab equipment and measurement techniques for high-speed interfaces, including high-speed scopes, probes, spectrum analyzers, and BERTs
  • Knowledge of board and package design, signal integrity, and power integrity is a plus
  • Knowledge of DDR training and memory system operation is a plus
  • Proficiency in Python for test scripting, data handling, and reporting
  • Laboratory experience, including hands-on use of equipment such as oscilloscopes and logic analyzers
  • Excellent problem-solving skills, strong communication skills, and the ability to work cooperatively in a team environment
  • Ability to debug issues with SoC IP and boards as needed
Responsibilities
  • Bring-up & Silicon Characterization
  • Validation of C2C, PCIe, CXL, DDRx, LPDDR controller/chips
  • Validation of SRAM and Vmin optimization
  • Scripting and test data processing to extract meaningful signals (see the illustrative sketch after this list)
  • Develop and integrate software test applications for effective product stress and SLT screening; collaborate with software teams to evaluate system performance and HW/SW interaction under various conditions
  • SLT test time optimization, including shift-right strategies from STL to ATE
  • Test time, DPM, & yield optimization for effective production screens
  • Root cause analysis and RMA processing
  • Mentor junior engineers as project needs arise
  • Experience in post-silicon electrical validation of server processors
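As a rough illustration of the Python test scripting and data processing this role calls for, the sketch below parses a hypothetical per-lane eye-margin sweep exported as CSV and flags marginal lanes. The file name, column names, and the 0.15 UI threshold are assumptions for illustration only, not Groq tooling.

```python
"""Minimal sketch: summarize per-lane eye margins from an assumed sweep CSV."""
import csv
import statistics
from collections import defaultdict

MARGIN_THRESHOLD_UI = 0.15  # assumed pass/fail limit on horizontal eye margin


def load_margins(path):
    """Group horizontal eye margins (in UI) by lane from a sweep CSV."""
    margins = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            margins[row["lane"]].append(float(row["eye_margin_ui"]))
    return margins


def summarize(margins):
    """Print worst-case and mean margin per lane; flag marginal lanes."""
    for lane, values in sorted(margins.items()):
        worst, mean = min(values), statistics.mean(values)
        flag = "MARGINAL" if worst < MARGIN_THRESHOLD_UI else "ok"
        print(f"lane {lane}: worst={worst:.3f} UI  mean={mean:.3f} UI  [{flag}]")


if __name__ == "__main__":
    summarize(load_margins("serdes_margin_sweep.csv"))  # hypothetical input file
```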

At Groq, a focus on real-time AI inference through its Language Processing Unit™ and deterministic Tensor Streaming architecture lets the company deliver ultra-low-latency inference at scale. This specialized approach not only improves performance but also simplifies workflows for developers, shortening the path to production and improving return on investment. Groq's commitment to domestically based supply chains also supports sustainability and reliability in its operations, making it a strong environment for advancing AI technology.

Company Stage

Series C

Total Funding

$408.6M

Headquarters

Mountain View, California

Founded

2016

Growth & Insights
Headcount

6 month growth

37%

1 year growth

70%

2 year growth

34%

Simplify's Take

What believers are saying

  • Groq's recent $300 million Series D funding round, led by BlackRock, values the company at $2.5 billion, indicating strong investor confidence and financial stability.
  • The launch of public demos on platforms like Hugging Face Spaces allows users to interact with Groq's models, potentially increasing user engagement and adoption.
  • Groq's rapid query response times, significantly faster than competitors like Nvidia, position it as a leader in AI inference speed.

What critics are saying

  • The competitive landscape with established players like Nvidia poses a significant challenge to Groq's market penetration.
  • High expectations from investors following substantial funding rounds could pressure Groq to deliver rapid and consistent innovation.

What makes Groq unique

  • Groq's open-source Llama AI models outperform proprietary models from tech giants like OpenAI and Google in specialized tasks, showcasing their superior tool use capabilities.
  • Groq's processors, known as LPUs, are claimed to be 10x faster and 1/10 the price of current market options, providing a significant cost-performance advantage.
  • The company's participation in the National AI Research Resource (NAIRR) Pilot highlights its commitment to responsible AI innovation and real-time AI inference.
