Runtime Software Engineer – Principal




51-200 employees

AI compute platform using in-memory computing

AI & Machine Learning
Data & Analytics


Santa Clara, CA, USA

Required Skills
  • BS/MS degree in Computer Science, Computer Engineering, or a similar field (MS preferred), with 12–15+ years of industry experience.
  • Experience with multi-threaded C programming on multi-core CPUs running an RTOS in both asymmetric (AMP) and symmetric (SMP) multiprocessing configurations.
  • Understanding of synchronization methods for many-core, multi-CPU architectures.
  • Experience managing statically allocated resources on systems without an MMU.
  • Zephyr OS experience is an advantage.
  • Experience programming programmable interrupt controllers (PICs) and developing interrupt service routines.
  • Knowledge of bootloaders and Linux device drivers is an advantage.
  • Ability to interpret HW-centric data sheets and register definitions to determine how best to program the architecture.
  • Ability to work with HW design teams at both the early definition phase and during development.
  • Experience with FPGA based development and system emulators is an advantage.
  • Ability to work with SW Architecture teams and propose considered feedback on SW architecture.
  • Knowledge of assembly language programming of pipelined RISC architecture processors.
  • Runtime FW debugging on target hardware using an IDE via JTAG.
  • Experience with current SW development tools and methodologies, including Git, Kanban, sprints, Jenkins, and Jira (or similar).
  • Experience collaborating in SW development projects that span multiple time zones and geographical regions.
  • Ability to work autonomously without day-to-day supervision, yet capable of delivering to agreed milestones in the development schedule (tracked weekly).
  • Skills that include unit level testing, documentation, and interfacing with QA & Test teams.
  • Skills in mathematical quantization, floating-point arithmetic, block floating point, sparse matrix processing, and linear algebra are an advantage.
  • Developing and debugging code on FPGA-based systems containing CPU subsystems and SystemC models of the AI subsystems and SoC.
  • Porting the software to a “big iron” emulation system (e.g. Veloce, Palladium) containing the final RTL.
  • Bringing up the software on the AI subsystem hardware and validating silicon and software performance.
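The "static resources without an MMU" requirement above can be sketched as a fixed-block pool allocator: all memory is reserved at build time and allocation never touches a heap. This is a generic illustration (not d-Matrix code); the constants and names are hypothetical, and the pattern mirrors what an RTOS such as Zephyr provides natively via its memory-slab API.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-block pool for an MMU-less target.
 * All storage is static; alloc/free are O(1) stack operations. */

#define BLOCK_SIZE  64u   /* bytes per block (example value) */
#define BLOCK_COUNT 8u    /* total blocks in the pool (example value) */

static uint8_t pool_storage[BLOCK_COUNT][BLOCK_SIZE];
static void   *free_list[BLOCK_COUNT];
static size_t  free_top;

/* Rebuild the free list; called once at startup. */
void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; ++i)
        free_list[i] = pool_storage[i];
    free_top = BLOCK_COUNT;
}

/* Pop a block, or return NULL when the pool is exhausted. */
void *pool_alloc(void)
{
    return free_top ? free_list[--free_top] : NULL;
}

/* Push a block back onto the free list. */
void pool_free(void *blk)
{
    free_list[free_top++] = blk;
}
```

On a real multi-core RTOS target the push/pop would be guarded by a spinlock or IRQ lock (omitted here for brevity); Zephyr's `k_mem_slab` implements this same idea with that locking built in.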

d-Matrix is developing a unique AI compute platform using in-memory computing (IMC) techniques with chiplet level scale-out interconnects, revolutionizing datacenter AI inferencing. Their innovative circuit techniques, ML tools, software, and algorithms have successfully addressed the memory-compute integration problem, enhancing AI compute efficiency.

Company Stage

Series B




