Full-Time

MLIR Compiler Engineer

Staff

Posted on 2/7/2024

d-Matrix

51-200 employees

AI compute platform using in-memory computing

Data & Analytics
Hardware
AI & Machine Learning

Senior

Santa Clara, CA, USA

Category
Backend Engineering
Embedded Engineering
Full-Stack Engineering
Software Engineering
Requirements
  • Bachelor's degree in relevant field
  • Proficiency in performance evaluation and analysis of modern heterogeneous systems
  • Familiarity with modern ML compiler infrastructures such as TensorRT, MLIR
  • Familiarity with machine learning frameworks and interfaces like Torch-MLIR, ONNX-MLIR, Caffe, TVM
  • Passion for a fast-paced startup culture
Responsibilities
  • Lead “deep-understanding” R&D initiatives to correlate system performance & utilization observations with ML compiler functionality gaps
  • Design incremental cost models, micro-benchmarks, and performance measurement and evaluation systems across multiple platforms
  • Define requirements for compile-time and runtime instrumentation to gather relevant performance and utilization snapshots
  • Represent compiler performance tracking and analysis in cross-functional collaborations

d-Matrix is developing a unique AI compute platform that uses in-memory computing (IMC) techniques with chiplet-level scale-out interconnects, aiming to transform datacenter AI inference. Its circuit techniques, ML tools, software, and algorithms address the memory-compute integration problem, improving AI compute efficiency.

Company Stage

Series B

Total Funding

$161.5M

Headquarters

Santa Clara, California

Founded

2019

Growth & Insights
Headcount

6 month growth

-12%

1 year growth

109%

2 year growth

278%
INACTIVE