
Full-Time

Runtime Software Engineer

Staff

Posted on 2/7/2024

d-Matrix

51-200 employees

AI compute platform using in-memory computing

Data & Analytics
Hardware
AI & Machine Learning

Junior, Mid

Santa Clara, CA, USA

Category
Backend Engineering
Embedded Engineering
Software QA & Testing
Software Engineering
Required Skills
Git
Quality Assurance (QA)
JIRA
Jenkins
Natural Language Processing (NLP)
FPGA
Linux/Unix
Requirements
  • BS or MS degree (MS preferred) in Computer Science, Computer Engineering, or a similar field.
  • Experience with multi-threaded C programming on multi-core CPUs running an RTOS in both AMP and SMP configurations.
  • Understanding of methods used to synchronize many-core and many-CPU architectures.
  • Experience managing static resources without an MMU.
  • Zephyr OS experience is an advantage.
  • Experience with PIC programming and developing interrupt service routines.
  • Knowledge of bootloaders and Linux device drivers is an advantage.
  • Ability to interpret HW-centric data sheets and register definitions to determine how best to program the architecture.
  • Ability to work with HW design teams at both the early definition phase and during development.
  • Experience with FPGA based development and system emulators is an advantage.
  • Ability to work with SW Architecture teams and provide considered feedback on SW architecture.
  • Knowledge of assembly language programming of pipelined RISC architecture processors.
  • Runtime FW debugging on target hardware using an IDE via JTAG.
  • Experience with current SW development methodologies including Git, Kanban, Sprints, Jenkins, Jira (or similar).
  • Experience collaborating in SW development projects that span multiple time zones and geographical regions.
  • Ability to work autonomously without day-to-day supervision, yet capable of delivering to agreed milestones in the development schedule (tracked weekly).
  • Skills that include unit level testing, documentation, and interfacing with QA & Test teams.
  • Skills in mathematical quantization, floating-point arithmetic, block floating point, sparse matrix processing, and linear algebra are an advantage.
Responsibilities
  • Developing an AI inference processor that accelerates NLP, vision, and recommendation workloads.
  • Architecting, documenting, and developing the runtime firmware that executes in the various on-chip multi-core CPU subsystems.
  • Bringing the software up on FPGA platforms and debugging it using a JTAG-connected IDE.
  • Collaborating with hardware teams to interpret the hardware specifications and suggest changes that improve utilization and throughput, and/or reduce power.
  • Collaborating with other members of the SW team, the SW Quality & Test team, and the HW verification team.
  • Developing and debugging code on the FPGA-based systems containing CPU subsystems and SystemC models of the AI subsystems and SoC.
  • Involvement in porting the software to a “big iron” emulation system (e.g. Veloce, Palladium) containing the final RTL.
  • Bringing up the software on the AI subsystem hardware and validating silicon and software performance.

d-Matrix is developing a unique AI compute platform using in-memory computing (IMC) techniques with chiplet level scale-out interconnects, revolutionizing datacenter AI inferencing. Their innovative circuit techniques, ML tools, software, and algorithms have successfully addressed the memory-compute integration problem, enhancing AI compute efficiency.

Company Stage

Series B

Total Funding

$161.5M

Headquarters

Santa Clara, California

Founded

2019

Growth & Insights
Headcount

6 month growth

-12%

1 year growth

109%

2 year growth

278%