Internship

Compiler Software Engineer Intern

Confirmed live in the last 24 hours

d-Matrix

51-200 employees

AI compute platform for datacenters

Enterprise Software
AI & Machine Learning

Toronto, ON, Canada

Hybrid position requiring onsite work in Toronto for 3 days per week.

Category: Embedded Engineering, Software Engineering
Required Skills: Data Structures & Algorithms, C/C++
Requirements
  • Bachelor’s degree in Computer Science, or equivalently three years toward an Engineering degree, with an emphasis on computing and mathematics coursework.
  • Proficiency with C++ object-oriented programming is essential.
  • Understanding of fixed-point and floating-point number representations, floating-point arithmetic, reduced-precision floating-point formats, and the methods used to convert between them (see the sketches after this list).
  • Some experience in applied computer programming (e.g. prior internship).
  • Understanding of basic compiler concepts and methods used in creating compilers (ideally via a compiler course).
  • Data structures and algorithms for manipulating directed acyclic graphs.
  • Familiarity with sparse matrix storage representations such as compressed sparse row (CSR; also sketched below).
  • Hands-on experience with CNN, RNN, and Transformer neural network architectures.
  • Experience programming GPUs and specialized hardware accelerator systems for deep neural networks.
  • Passion for learning new compiler development methodologies such as MLIR.
  • Enthusiasm for learning new concepts from compiler experts in the US and willingness to work across time zones to facilitate collaboration.
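To illustrate the reduced-precision conversions the requirements mention, here is a minimal C++ sketch converting IEEE-754 float32 to bfloat16 (8-bit exponent, 7-bit mantissa) using round-to-nearest-even truncation. The helper names are hypothetical, not part of any d-Matrix API, and NaN handling is omitted for brevity.

```
#include <cstdint>
#include <cstdio>
#include <cstring>

using bf16_bits = std::uint16_t;  // raw bfloat16 bit pattern (illustrative type)

bf16_bits float_to_bf16(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // safe type-pun via memcpy
    // Round to nearest even by adding a bias before truncating the
    // low 16 mantissa bits. (NaN handling omitted for brevity.)
    bits += 0x7FFFu + ((bits >> 16) & 1u);
    return static_cast<bf16_bits>(bits >> 16);
}

float bf16_to_float(bf16_bits b) {
    std::uint32_t bits = static_cast<std::uint32_t>(b) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

int main() {
    float x = 3.14159f;
    // Round-tripping shows the precision lost to the 7-bit mantissa.
    std::printf("%.6f -> %.6f\n", x, bf16_to_float(float_to_bf16(x)));
}
```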
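Likewise, a minimal sketch of one common sparse matrix storage representation, compressed sparse row (CSR); the struct and function names are illustrative only.

```
#include <cstdio>
#include <vector>

struct Csr {
    std::vector<double> values;   // nonzero entries, row-major order
    std::vector<int> col_index;   // column of each nonzero
    std::vector<int> row_ptr;     // start offset of each row in values
};

Csr dense_to_csr(const std::vector<std::vector<double>>& m) {
    Csr out;
    out.row_ptr.push_back(0);
    for (const auto& row : m) {
        for (int j = 0; j < static_cast<int>(row.size()); ++j) {
            if (row[j] != 0.0) {                  // keep only nonzeros
                out.values.push_back(row[j]);
                out.col_index.push_back(j);
            }
        }
        out.row_ptr.push_back(static_cast<int>(out.values.size()));
    }
    return out;
}

int main() {
    Csr c = dense_to_csr({{0, 5, 0}, {1, 0, 2}});
    std::printf("nonzeros: %zu\n", c.values.size());  // prints 3
}
```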
Responsibilities
  • Design, implement, and evaluate a method for managing floating-point data types in the compiler (a toy sketch of this kind of pass follows this list).
  • Work under the guidance of two members of the compiler backend team.
  • Engage and collaborate with the US-based engineering team to understand the mechanisms the hardware design provides for performing efficient floating-point operations with reduced-precision floating-point data types.
  • Successful completion of the project will be demonstrated by a simple model, output by the compiler and incorporating your code, that executes correctly on the hardware instruction set architecture (ISA) simulator.
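As a rough picture of what managing floating-point data types over a directed acyclic graph can look like, here is a toy C++ sketch: a walk over a topologically ordered op graph that propagates a bfloat16 type to an op when all of its inputs carry it. This is a hypothetical illustration under that toy rule, not d-Matrix's actual compiler or pass.

```
#include <cstdio>
#include <string>
#include <vector>

enum class FpType { F32, BF16 };

struct Node {
    std::string name;
    std::vector<int> inputs;      // indices of producer nodes
    FpType type = FpType::F32;
};

// Assumes `nodes` is already in topological order (producers precede uses).
void propagate_types(std::vector<Node>& nodes) {
    for (Node& n : nodes) {
        // Toy rule: an op computes in BF16 only if every input is BF16.
        bool all_bf16 = !n.inputs.empty();
        for (int i : n.inputs)
            all_bf16 = all_bf16 && nodes[i].type == FpType::BF16;
        if (all_bf16) n.type = FpType::BF16;
    }
}

int main() {
    std::vector<Node> g = {
        {"a", {}, FpType::BF16},
        {"b", {}, FpType::BF16},
        {"add", {0, 1}, FpType::F32},  // consumes a and b
    };
    propagate_types(g);
    std::printf("add is %s\n",
                g[2].type == FpType::BF16 ? "bf16" : "f32");  // prints bf16
}
```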

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which combines computing capabilities directly within programmable memory. This design helps reduce power consumption and enhances data processing speed while ensuring accuracy. Unlike many competitors, d-Matrix offers a modular and scalable approach, utilizing low-power chiplets that can be tailored for different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Company Stage: Series B
Total Funding: $149.8M
Headquarters: Santa Clara, California
Founded: 2019

Growth & Insights (Headcount)

  • 6 month growth: 11%
  • 1 year growth: -2%
  • 2 year growth: 219%

Simplify's Take

What believers are saying

  • d-Matrix raised $110 million in Series B funding, showing strong investor confidence.
  • The launch of the Corsair AI processor positions d-Matrix as a competitor to Nvidia.
  • Jayhawk II silicon advances low-latency AI inference for large language models.

What critics are saying

  • Competition from Nvidia, AMD, and Intel could pressure d-Matrix's market share.
  • Rapid AI innovation may lead to obsolescence if d-Matrix doesn't continuously innovate.
  • Potential regulatory changes in AI technology could impose new compliance costs.

What makes d-Matrix unique

  • d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
  • The company's chiplet-based modular design allows for scalable and customizable AI solutions.
  • d-Matrix focuses on power efficiency, addressing critical issues in AI datacenter workloads.
