Full-Time

AI Software Applications Engineer – Technical Lead / Principal

Confirmed live in the last 24 hours

d-Matrix

201-500 employees

AI compute platform for datacenters

Enterprise Software
AI & Machine Learning

Expert

Santa Clara, CA, USA

Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

Category
Applied Machine Learning
Deep Learning
AI & Machine Learning
Required Skills
Python
Go
Linux/Unix
Requirements
  • Engineering degree in Electrical Engineering, Computer Engineering, Computer Science, or related field
  • Substantial experience in AI/ML software and infrastructure
  • 10+ years of experience in customer engineering and field support for enterprise-level AI and datacenter products
  • In-depth knowledge and hands-on experience with generative AI inference at scale
  • Experience with automation tools and scripting languages (Linux or Windows shell scripting, Python, Go)
  • Ability to communicate complex technical concepts to diverse audiences, from developers to business stakeholders
Responsibilities
  • Provide expert guidance and support to customers deploying generative AI inference models
  • Work directly with customers to understand their generative AI inference needs and deliver solutions
  • Conduct design reviews and provide consultation on AI/ML infrastructure
  • Lead the installation, configuration, and bring-up of d-Matrix’s AI software stack
  • Partner with internal engineering and product teams to produce developer guides and technical notes
Desired Qualifications
  • Hands-on experience with AI/ML infrastructure accelerators (e.g., GPUs, TPUs)
  • Strong analytical skills with a proven track record of solving complex problems in AI/ML systems
  • Extensive experience deploying AI/ML frameworks such as PyTorch, OpenAI Triton, and vLLM
  • Familiarity with container orchestration platforms like Kubernetes
  • Excellent communication and presentation skills

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design reduces power consumption and increases data-processing speed while preserving accuracy. d-Matrix differentiates itself from competitors with a modular, scalable approach built on low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Company Stage: Series B

Total Funding: $149.8M

Headquarters: Santa Clara, California

Founded: 2019

Growth & Insights

Headcount
  • 6 month growth: 2%
  • 1 year growth: 0%
  • 2 year growth: 9%

Simplify's Take

What believers are saying

  • Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
  • Existing partnerships with companies like Microsoft could grow into broader strategic alliances.
  • Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.

What critics are saying

  • Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
  • Complex AI chip design could lead to delays or increased production costs.
  • Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

What makes d-Matrix unique

  • d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
  • The company offers scalable AI solutions through modular, low-power chiplets.
  • d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Benefits

Hybrid Work Options