Full-Time

ML Compiler Stack Engineer

Cerebras

201-500 employees

Develops AI acceleration hardware and software

Hardware
AI & Machine Learning

Mid

Toronto, ON, Canada

Category
Applied Machine Learning
AI & Machine Learning
Required Skills
C/C++
Requirements
  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Electrical Engineering, or a related field.
  • Proven experience in compiler development, particularly with LLVM and/or MLIR.
  • Strong background in optimization techniques, particularly those involving NP-hard problems.
  • Proficiency in C/C++ programming and experience with low-level optimization.
  • Familiarity with AI workloads and architectures is a plus.
  • Excellent problem-solving skills and a strong analytical mindset.
  • Ability to work in a fast-paced, collaborative environment.
Responsibilities
  • Design, develop, and optimize compiler technologies for AI chips using LLVM and MLIR frameworks.
  • Identify and address performance bottlenecks, ensuring optimal resource utilization and execution efficiency.
  • Work with the machine learning team to integrate compiler optimizations with AI frameworks and applications.
  • Contribute to the advancement of compiler technologies by exploring new ideas and approaches.

Cerebras Systems specializes in accelerating artificial intelligence (AI) workloads with its CS-2 system, which is designed to replace the traditional clusters of graphics processing units (GPUs) used for AI computation. The CS-2 simplifies AI work by eliminating the need for complex parallel programming and cluster management. Cerebras serves a variety of clients, including major pharmaceutical companies and government research labs, delivering faster results for critical applications such as drug response prediction. The company operates in the high-performance computing and AI markets, generating revenue through the sale of its proprietary hardware and software, including the CS-2 system and associated cloud services. Cerebras aims to reduce the overall cost of AI research and development while enabling clients to achieve quicker results and lower latency in AI inference.

Company Stage

N/A

Total Funding

$700.4M

Headquarters

Sunnyvale, California

Founded

2016

Growth & Insights
Headcount

6 month growth

8%

1 year growth

16%

2 year growth

-3%

Simplify's Take

What believers are saying

  • Cerebras' IPO and significant funding, including $720 million raised, position it for substantial growth and market penetration.
  • Collaborations with industry giants and government labs, such as GlaxoSmithKline, AstraZeneca, and Argonne National Lab, validate the effectiveness and demand for Cerebras' technology.
  • The CS-2 system's ability to produce faster results in critical applications like cancer drug response prediction models highlights its transformative potential in healthcare and scientific research.

What critics are saying

  • Competing against established giants like Nvidia poses significant market challenges and could impact Cerebras' market share.
  • The high cost and complexity of developing and maintaining cutting-edge hardware like the WSE-3 chip could strain resources and affect profitability.

What makes Cerebras unique

  • Cerebras' CS-2 system replaces traditional GPU clusters, eliminating complexities in parallel programming and distributed training.
  • The WSE-3 chip, with 4 trillion transistors, is designed to train AI models 10 times larger than current leading models like GPT-4, setting a new industry standard.
  • Strategic partnerships with major entities like Dell and Aleph Alpha enhance Cerebras' reach and influence in the AI and high-performance computing markets.
