Full-Time

ML Compiler Stack Engineer

Updated on 3/13/2025

Cerebras

201-500 employees

Develops AI acceleration hardware and software

No salary listed

Mid, Senior

Toronto, ON, Canada

Category
Applied Machine Learning
AI & Machine Learning
Required Skills
Machine Learning
C/C++
Requirements
  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Electrical Engineering, or a related field.
  • Proven experience in compiler development, particularly with LLVM and/or MLIR.
  • Strong background in optimization techniques, particularly those involving NP-hard problems (a toy sketch of one such problem follows this list).
  • Proficiency in C/C++ programming and experience with low-level optimization.
  • Excellent problem-solving skills and a strong analytical mindset.
  • Ability to work in a fast-paced, collaborative environment.
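
For context on the optimization requirement above: register allocation is a classic NP-hard compiler problem, usually modeled as graph coloring and attacked with heuristics. Below is a minimal, self-contained C++ sketch (not Cerebras code; the interference graph and the two-register machine are invented purely for illustration) of a greedy coloring heuristic with spilling:

```cpp
// Toy greedy graph-coloring register allocator (illustrative only).
// Optimal graph coloring is NP-hard, so real compilers rely on
// heuristics like greedy ordering plus spilling when colors run out.
#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    // Interference graph: an edge means two virtual registers are live
    // at the same time and cannot share a physical register.
    std::map<std::string, std::set<std::string>> interferes = {
        {"a", {"b", "c"}},
        {"b", {"a", "c", "d"}},
        {"c", {"a", "b"}},
        {"d", {"b"}},
    };
    const int numPhysRegs = 2;  // hypothetical machine with 2 registers

    std::map<std::string, int> color;  // virtual reg -> physical reg index
    for (const auto& [vreg, neighbors] : interferes) {
        std::set<int> used;
        for (const auto& n : neighbors) {
            auto it = color.find(n);
            if (it != color.end()) used.insert(it->second);
        }
        int c = 0;
        while (used.count(c)) ++c;  // lowest color not used by a neighbor
        if (c < numPhysRegs) {
            color[vreg] = c;
            std::cout << vreg << " -> r" << c << "\n";
        } else {
            std::cout << vreg << " -> spill to memory\n";
        }
    }
    return 0;
}
```

Production allocators add priorities, live-range splitting, and rematerialization, but the greedy color-or-spill structure is the same basic idea.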
Responsibilities
  • Design, develop, and optimize compiler technologies for AI chips using LLVM and MLIR frameworks (a toy illustration follows this list).
  • Identify and address performance bottlenecks, ensuring optimal resource utilization and execution efficiency.
  • Work with the machine learning team to integrate compiler optimizations with AI frameworks and applications.
  • Contribute to the advancement of compiler technologies by exploring new ideas and approaches.
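
As a rough illustration of the first responsibility, the sketch below performs constant folding on a hand-rolled toy expression IR. It deliberately avoids the real LLVM/MLIR APIs (which provide pattern rewriters, folders, and pass managers for exactly this kind of transformation); all names here are invented for illustration:

```cpp
// Toy constant folding over a tiny expression IR (illustrative only).
#include <iostream>
#include <memory>

struct Node {
    enum Kind { Const, Add, Mul } kind;
    int value = 0;                   // valid when kind == Const
    std::unique_ptr<Node> lhs, rhs;  // valid for Add / Mul
};

std::unique_ptr<Node> makeConst(int v) {
    auto n = std::make_unique<Node>();
    n->kind = Node::Const;
    n->value = v;
    return n;
}

std::unique_ptr<Node> makeBin(Node::Kind k, std::unique_ptr<Node> l,
                              std::unique_ptr<Node> r) {
    auto n = std::make_unique<Node>();
    n->kind = k;
    n->lhs = std::move(l);
    n->rhs = std::move(r);
    return n;
}

// Recursively replace subtrees whose operands are all constants.
std::unique_ptr<Node> fold(std::unique_ptr<Node> n) {
    if (n->kind == Node::Const) return n;
    n->lhs = fold(std::move(n->lhs));
    n->rhs = fold(std::move(n->rhs));
    if (n->lhs->kind == Node::Const && n->rhs->kind == Node::Const) {
        int v = n->kind == Node::Add ? n->lhs->value + n->rhs->value
                                     : n->lhs->value * n->rhs->value;
        return makeConst(v);
    }
    return n;
}

int main() {
    // (2 + 3) * 4 folds to the single constant 20.
    auto expr = makeBin(Node::Mul,
                        makeBin(Node::Add, makeConst(2), makeConst(3)),
                        makeConst(4));
    auto folded = fold(std::move(expr));
    std::cout << "folded value: " << folded->value << "\n";
    return 0;
}
```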
Desired Qualifications
  • Familiarity with AI workloads and architectures is a plus.

Cerebras Systems specializes in accelerating artificial intelligence (AI) processes with its CS-2 system, which is designed to replace traditional clusters of graphics processing units (GPUs) used in AI computations. The CS-2 system simplifies the complexities of parallel programming, distributed training, and cluster management, making AI tasks more efficient. Clients from various sectors, including pharmaceuticals, government research labs, healthcare, finance, and energy, benefit from the system's ability to deliver faster results, which is essential for critical applications like cancer drug response predictions. Cerebras generates revenue by selling its proprietary hardware and software solutions, including the CS-2 systems and related cloud services. The company's goal is to provide a comprehensive solution that enables clients to achieve quicker AI training and lower latency in AI inference, ultimately reducing the costs associated with AI research and development.

Company Size

201-500

Company Stage

Series F

Total Funding

$700.4M

Headquarters

Sunnyvale, California

Founded

2016

Simplify's Take

What believers are saying

  • Growing AI model efficiency demand aligns with Cerebras' energy-efficient accelerators.
  • AI democratization increases need for user-friendly systems like Cerebras' CS-2.
  • Pharmaceutical industry's push for faster drug discovery boosts demand for Cerebras' technology.

What critics are saying

  • Competition from NVIDIA and Graphcore could impact Cerebras' market share.
  • Rapid AI model evolution may necessitate frequent hardware updates, increasing R&D costs.
  • Supply chain vulnerabilities could delay production of Cerebras' hardware.

What makes Cerebras unique

  • Cerebras' Wafer-Scale Engine is the largest chip ever built for AI.
  • The CS-2 system replaces traditional GPU clusters, simplifying AI computations.
  • Cerebras serves diverse industries, including pharmaceuticals and government research labs.

Benefits

Professional Development Budget

Flexible Work Hours

Remote Work Options

401(k) Company Match

401(k) Retirement Plan

Mental Health Support

Wellness Program

Paid Sick Leave

Paid Holidays

Paid Vacation

Parental Leave

Family Planning Benefits

Fertility Treatment Support

Adoption Assistance

Childcare Support

Elder Care Support

Pet Insurance

Bereavement Leave

Employee Discounts

Company Social Events