Full-Time

Principal Technical Program Manager

Groq

201-500 employees

AI inference hardware for cloud and on-premises

AI & Machine Learning

Compensation Overview

$226k - $336k Annually

Senior, Expert

Remote in USA

Geo-agnostic company; work where you are.

Category
Project Management
Engineering Management
Business & Strategy
Required Skills
Agile
Machine Learning
JIRA
SCRUM
FPGA

Requirements
  • 7+ years of technical program management in hardware/software system design or AI/ML software development
  • B.S. in engineering, computer science, or a related technical discipline, or equivalent experience
  • Hands-on experience with system board design, system software development, or AI/ML software development
  • Demonstrated experience working with external vendors (CMs, etc.), including negotiating deadlines and holding all parties accountable
  • Understanding of AI software development and the data center software stack
  • Experience managing the AI software product cycle from concept to delivery
  • Understanding of hardware and software integration challenges
  • Experience with project and program management tools and methodologies (e.g., Agile, Scrum, JIRA)
  • Expertise with scheduling tools such as MS Project or Smartsheet
  • Comfortable with ambiguity while actively seeking clarity
Responsibilities
  • Drive and manage end-to-end projects spanning hardware design, system software, the ML compiler, the inference engine, infrastructure software, and the larger AI software stack
  • Collaborate with hardware systems, systems software, product management, operations, and other cross-functional teams to ensure alignment on technical specifications, project goals, and deliverables
  • Provide deep technical guidance on AI chip architectures, FPGAs, interconnects, memory hierarchies, low-level embedded software, the ML compiler, the inference engine, and infrastructure software to optimize performance and efficiency in data center deployments
  • Drive communication internal and external to the engineering teams, including leadership reviews, core team meetings, etc.
  • Define and track clear goals, priorities, and milestones, ensuring alignment with Groq's overall corporate goals
  • Provide programmatic guidance and mentorship, and foster a culture of execution within the system software, ML compiler, networking, and other software teams
  • Monitor project execution, identify risks, and proactively implement mitigation strategies
Desired Qualifications
  • Extensive experience in system design and end-to-end software development, including AI chip development
  • Deep knowledge of AI hardware systems design, ML compiler design, inference engine design, and infrastructure software design
  • Knowledge of LLVM and compiler architecture
  • Experience with AI data center and cloud markets, including technological and business trends, requirements, and ecosystem partners

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets a variety of clients who need fast and efficient AI processing capabilities, and it generates revenue through direct sales of its advanced hardware and related systems. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.

Company Stage

Series D

Total Funding

$1.3B

Headquarters

Mountain View, California

Founded

2016

Growth & Insights

Headcount
  • 6 month growth: 6%
  • 1 year growth: 0%
  • 2 year growth: -4%

Simplify's Take

What believers are saying

  • Groq secured $640M in Series D funding, boosting its expansion capabilities.
  • Partnership with Aramco Digital aims to build the world's largest inferencing data center.
  • Integration with Touchcast's Cognitive Caching enhances Groq's hardware for hyper-speed inference.

What critics are saying

  • Increased competition from SambaNova Systems and Gradio in high-speed AI inference.
  • Geopolitical risks in the MENA region may affect the Saudi Arabia data center project.
  • Rapid expansion could strain Groq's operational capabilities and supply chain.

What makes Groq unique

  • Groq's LPU offers exceptional compute speed and energy efficiency for AI inference.
  • The company's products are designed and assembled in North America, ensuring high quality.
  • Groq emphasizes deterministic performance, providing predictable outcomes in AI computations.

Benefits

Remote Work Options

Company Equity