Full-Time

Unix/Linux Infrastructure Engineer

Staff

Posted on 12/16/2024

d-Matrix


51-200 employees

AI compute platform for datacenters

Enterprise Software
AI & Machine Learning

Senior, Expert

Santa Clara, CA, USA

Working onsite at Santa Clara, CA headquarters 5 days per week.

Category
DevOps & Infrastructure
Server Administration
DevOps Engineering
Software Engineering
Required Skills
TCP/IP
Chef
Bash
Kubernetes
Microsoft Azure
Python
Puppet
Wireshark
Docker
Perl
Ansible
Linux/Unix
Google Cloud Platform
Requirements
  • Expert-level knowledge of Unix/Linux distributions (e.g., Red Hat, Ubuntu, SUSE).
  • Strong understanding of kernel-level operations, including configuration, tuning, and patching.
  • In-depth knowledge of filesystems (e.g., ext4, xfs, ZFS, Btrfs).
  • Experience with networked filesystems (e.g., NFS, GlusterFS).
  • Knowledge of storage technologies (SAN, NAS, RAID, LVM).
  • Advanced knowledge of network configuration, routing, VLANs, and firewalls.
  • Understanding of TCP/IP, DNS, DHCP, and VPNs.
  • Experience with network troubleshooting tools (e.g., tcpdump, Wireshark).
  • Proficiency in automation tools (e.g., Ansible, Puppet, Chef).
  • Scripting skills in Bash, Python, or Perl for automation and maintenance (see the sketch following this list).
  • Experience with virtualization platforms (e.g., Proxmox, KVM, Xen).
  • Experience with containerization (e.g., Docker, Podman, Kubernetes).
  • Experience with cloud platforms (e.g., Azure, Google Cloud).
  • Ability to design, build, and maintain scalable Unix/Linux infrastructure.
  • Strong understanding of high-availability systems and clustering.
  • Expertise in system performance tuning and optimization.
  • Knowledge of Unix/Linux security best practices.
  • Experience with SELinux, AppArmor, firewalld, and auditing tools.
  • Familiarity with encryption, PKI, and vulnerability management tools.
  • Bachelor’s degree in Computer Science, Information Systems, or equivalent experience.
  • 5–10 years of experience in Unix/Linux systems engineering or administration.
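
As a purely illustrative sketch of the scripting and filesystem maintenance skills listed above (the choice of Python, the 85% threshold, and the plain-text report are assumptions for the example, not part of the role), a short maintenance script might flag filesystems that are running low on space:

  #!/usr/bin/env python3
  """Minimal sketch: report mounted filesystems above a usage threshold (Linux)."""
  import shutil

  THRESHOLD_PCT = 85  # assumed alerting threshold for this example

  def mounted_filesystems():
      """Yield mount points of block devices listed in /proc/mounts."""
      with open("/proc/mounts") as mounts:
          for line in mounts:
              device, mount_point = line.split()[:2]
              if device.startswith("/dev/"):  # skip proc, sysfs, tmpfs, etc.
                  yield mount_point

  def main():
      for mount_point in mounted_filesystems():
          usage = shutil.disk_usage(mount_point)
          pct_used = usage.used / usage.total * 100
          if pct_used >= THRESHOLD_PCT:
              print(f"WARNING: {mount_point} is {pct_used:.1f}% full")

  if __name__ == "__main__":
      main()
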
Responsibilities
  • Troubleshoot complex systems and perform root-cause analysis (see the sketch following this list).
  • Collaborate with cross-functional teams and engineering groups.
  • Document and verbally explain system designs and procedures.
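
A minimal sketch of how initial root-cause data might be gathered, assuming Python, a fixed set of read-only diagnostic commands, and a /tmp report path (all illustrative assumptions, not a prescribed tool for the role):

  #!/usr/bin/env python3
  """Minimal sketch: collect basic diagnostics into a single report for triage."""
  import subprocess
  from datetime import datetime, timezone

  # Read-only diagnostic commands -- an assumed starting set, not an exhaustive one.
  COMMANDS = [
      ["uptime"],
      ["df", "-h"],
      ["ip", "-brief", "addr"],
      ["journalctl", "-p", "err", "-n", "20", "--no-pager"],
  ]

  def main():
      stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
      report_path = f"/tmp/diag-{stamp}.txt"  # assumed output location
      with open(report_path, "w") as report:
          for cmd in COMMANDS:
              report.write(f"### {' '.join(cmd)}\n")
              try:
                  result = subprocess.run(cmd, capture_output=True, text=True)
                  report.write(result.stdout or result.stderr)
              except FileNotFoundError:
                  report.write("command not available on this host\n")
              report.write("\n")
      print(f"Wrote {report_path}")

  if __name__ == "__main__":
      main()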

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design helps reduce power consumption and improve data processing speed while maintaining accuracy. Unlike many competitors, d-Matrix takes a modular, scalable approach, using low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Company Stage

Series B

Total Funding

$149.8M

Headquarters

Santa Clara, California

Founded

2019

Growth & Insights
Headcount

6 month growth

11%

1 year growth

-2%

2 year growth

219%

Simplify's Take

What believers are saying

  • d-Matrix raised $110 million in Series B funding, showing strong investor confidence.
  • The launch of the Corsair AI processor positions d-Matrix as a competitor to Nvidia.
  • Jayhawk II silicon advances low-latency AI inference for large language models.

What critics are saying

  • Competition from Nvidia, AMD, and Intel could pressure d-Matrix's market share.
  • Rapid AI innovation may lead to obsolescence if d-Matrix doesn't continuously innovate.
  • Potential regulatory changes in AI technology could impose new compliance costs.

What makes d-Matrix unique

  • d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
  • The company's chiplet-based modular design allows for scalable and customizable AI solutions.
  • d-Matrix focuses on power efficiency, addressing critical issues in AI datacenter workloads.
