Internship

Compiler Software Engineer Intern

d-Matrix

51-200 employees

AI compute platform for datacenters

Hardware
Enterprise Software
AI & Machine Learning

Santa Clara, CA, USA

Hybrid position requiring onsite presence in Santa Clara, CA for 3 days per week.

Category

Backend Engineering
Software Engineering

Required Skills

Data Structures & Algorithms
C/C++

Requirements
  • Bachelor’s degree in Computer Science, or equivalent (at least three years toward an engineering degree with an emphasis on computing and mathematics coursework).
  • Proficiency with C++ object-oriented programming is essential.
  • Understanding of fixed-point and floating-point number representations, floating-point arithmetic, reduced-precision floating-point formats, and sparse-matrix storage formats, as well as the methods used to convert between them (illustrated in the first sketch after this list).
  • Some experience in applied computer programming (e.g. prior internship).
  • Understanding of basic compiler concepts and methods used in creating compilers (ideally via a compiler course).
  • Data structures and algorithms for manipulating directed acyclic graphs (see the second sketch after this list).
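
To give a feel for the reduced-precision conversions this role touches on, here is a minimal C++ sketch converting float32 to bfloat16 by truncating the mantissa with round-to-nearest-even. The names are hypothetical and this is not d-Matrix code; NaN special-casing is omitted for brevity.

```cpp
// Illustrative sketch only: float32 <-> bfloat16 conversion.
// bfloat16 keeps float32's sign bit and 8 exponent bits but only
// 7 mantissa bits, so it is the high 16 bits of the float32 pattern
// (after rounding). NaN payloads are not handled here.
#include <cstdint>
#include <cstring>
#include <cstdio>

using bfloat16_bits = std::uint16_t; // hypothetical storage type

bfloat16_bits float_to_bfloat16(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));           // reinterpret without UB
    // Round to nearest even: add 0x7FFF plus the lowest kept bit.
    std::uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
    bits += rounding;
    return static_cast<bfloat16_bits>(bits >> 16);  // keep the top 16 bits
}

float bfloat16_to_float(bfloat16_bits b) {
    std::uint32_t bits = static_cast<std::uint32_t>(b) << 16; // zero-fill mantissa
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

int main() {
    float x = 3.14159265f;
    bfloat16_bits b = float_to_bfloat16(x);
    // Prints the original value, its 16-bit encoding, and the round-trip value.
    std::printf("%.8f -> 0x%04X -> %.8f\n", x, b, bfloat16_to_float(b));
}
```

The round trip loses mantissa bits (3.14159265 comes back as 3.140625), which is exactly the precision/storage trade-off reduced-precision formats make.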
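And a minimal sketch of the DAG manipulation mentioned above: a depth-first post-order walk over a small expression graph, the kind of traversal a compiler pass might use to visit operands before their users. The Node layout is a toy assumption, not d-Matrix's actual IR.

```cpp
// Illustrative sketch only: post-order traversal of an expression DAG.
// A visited set ensures shared subgraphs are processed exactly once.
#include <cstdio>
#include <string>
#include <unordered_set>
#include <vector>

struct Node {
    std::string name;
    std::vector<const Node*> inputs; // edges point from users to operands
};

// Visit all operands of n before n itself (a topological order).
void visit(const Node* n, std::unordered_set<const Node*>& seen,
           std::vector<const Node*>& order) {
    if (!seen.insert(n).second) return; // already visited (shared node)
    for (const Node* in : n->inputs) visit(in, seen, order);
    order.push_back(n);
}

int main() {
    // (a + b) * (a + b): the add node is shared, so this is a DAG, not a tree.
    Node a{"a", {}}, b{"b", {}};
    Node add{"add", {&a, &b}};
    Node mul{"mul", {&add, &add}};

    std::unordered_set<const Node*> seen;
    std::vector<const Node*> order;
    visit(&mul, seen, order);
    for (const Node* n : order) std::printf("%s ", n->name.c_str()); // a b add mul
    std::printf("\n");
}
```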
Responsibilities
  • Design, implement, and evaluate a method for managing floating-point data types in the compiler.
  • Collaborate with the US-based engineering team to understand the mechanisms the hardware design provides for performing efficient floating-point operations with reduced-precision floating-point data types.
  • Demonstrate successful completion of the project by having the compiler emit a simple model that executes correctly on the hardware's instruction set architecture (ISA) simulator.

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. The main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design reduces power consumption and increases data-processing speed while maintaining accuracy. Unlike many competitors, d-Matrix offers a modular and scalable approach, using low-power chiplets that can be tailored for different applications. The goal is to provide high-performance, energy-efficient AI inference solutions catering specifically to the needs of large-scale datacenter operators.

Company Stage

Series B

Total Funding

$149.8M

Headquarters

Santa Clara, California

Founded

2019

Growth & Insights

Headcount growth

6 months: -14%
1 year: -3%
2 years: 235%

Simplify's Take

What believers are saying

  • Securing $110 million in Series B funding positions d-Matrix for rapid growth and technological advancements.
  • Their Jayhawk II silicon aims to solve critical issues in AI inference, such as cost, latency, and throughput, making generative AI more commercially viable.
  • The company's focus on efficient AI inference could attract significant interest from data centers and enterprises looking to deploy large language models.

What critics are saying

  • Competing against industry giants like Nvidia poses a significant challenge in terms of market penetration and customer acquisition.
  • Heavy dependence on continuous innovation and technological advancement could strain resources and lead to setbacks.

What makes d-Matrix unique

  • d-Matrix focuses on developing AI hardware specifically optimized for Transformer models, unlike general-purpose AI chip providers like Nvidia.
  • Their digital in-memory compute (DIMC) architecture with chiplet interconnect is a first-of-its-kind innovation, setting them apart in the AI hardware market.
  • Backed by major investors like Microsoft, d-Matrix has the financial support to challenge established players like Nvidia.
