Internship

Compiler Engineer Intern

Groq

201-500 employees

AI inference technology for high-speed processing

AI & Machine Learning

Compensation Overview

$30 - $50 hourly

Palo Alto, CA, USA; Toronto, ON, Canada

Hybrid role requiring in-office presence in Palo Alto, CA or Toronto, Canada.

Category
Backend Engineering
Software QA & Testing
Software Engineering
Required Skills
Python
TensorFlow
PyTorch
C/C++
FPGA
Requirements
  • Completing a degree in computer science, computer engineering, or a related field
  • Experience with C/C++ or Python programming
  • Knowledge of functional programming an asset
  • Experience with distributed systems or spatial compute platforms such as FPGAs
  • Experience with ML frameworks such as TensorFlow or PyTorch desired
  • Knowledge of ML intermediate representations such as ONNX, and of deep learning concepts
  • Must be authorized to work in the United States or Canada
Responsibilities
  • Design, develop, and maintain key components and passes within Groq's TSP compiler
  • Propose and expand the Groq IR dialect to reflect the ever-changing landscape of ML constructs and models
  • Benchmark and analyze output produced by the optimizing compiler, and quantify quality-of-results measured on Groq TSP hardware
  • Assist in the publication of novel compilation techniques for Groq's TSP at top-tier ML, compiler, and computer architecture conferences
Desired Qualifications
  • Experience with LLVM and MLIR preferred
  • Knowledge of functional programming languages an asset
  • Experience with ML frameworks such as TensorFlow and PyTorch desired

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high quality and performance standards. The company targets a variety of clients who need fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.

Company Stage

Series D

Total Funding

$1.3B

Headquarters

Mountain View, California

Founded

2016

Growth & Insights
Headcount

6 month growth

6%

1 year growth

0%

2 year growth

-4%
Simplify's Take

What believers are saying

  • Groq secured $640M in Series D funding, boosting its expansion capabilities.
  • Partnership with Aramco Digital aims to build the world's largest inferencing data center.
  • Integration with Touchcast's Cognitive Caching enhances Groq's hardware for hyper-speed inference.

What critics are saying

  • Increased competition from SambaNova Systems and Gradio in high-speed AI inference.
  • Geopolitical risks in the MENA region may affect the Saudi Arabia data center project.
  • Rapid expansion could strain Groq's operational capabilities and supply chain.

What makes Groq unique

  • Groq's LPU offers exceptional compute speed and energy efficiency for AI inference.
  • The company's products are designed and assembled in North America, ensuring high quality.
  • Groq emphasizes deterministic performance, providing predictable outcomes in AI computations.

Benefits

Remote Work Options

Company Equity