Full-Time

Compiler Engineer

Kernelize


1-10 employees

Migrates ML models to AI accelerators

No salary listed

United States

Remote

Remote role with preference for employees located in the USA; contractors in other locations may apply.

Category
Software Engineering
Requirements
  • Passion for low-level optimization, code generation, and pushing performance to the next level
  • Experience building custom backends for IP accelerators on Triton
  • Experience enabling and optimizing Triton kernels for broad adoption in ML training and inference
  • Ability to bridge the gap between AI optimizers and HW vendors' custom AI accelerators through Triton
  • Remote work capability (role described as flexible and remote)
Responsibilities
  • Build custom backends for IP accelerators on Triton
  • Enable and optimize Triton kernels for broad adoption in ML training and inference
  • Bridge the gap between AI optimizers and HW vendors' custom AI accelerators through Triton
  • Work on cutting-edge compiler technologies and help shape the core of our tech from the ground up

Kernelize helps organizations run AI inference on specialized hardware. It migrates machine learning models from CPU/GPU setups to AI accelerators and optimizes them for that hardware. The process relies on the Triton compiler ecosystem to adapt models, ensure compatibility, and tune performance. By providing targeted compiler and runtime work, Kernelize offers fine-grained control for each accelerator, enabling faster development and deployment of new AI hardware. Compared with others, Kernelize combines deep compiler engineering with hands-on system integration, focusing on de-risking and accelerating the software side of new accelerators for businesses through services like proof-of-concept projects and custom system development. The goal is to help clients achieve efficient, reliable AI inference on their hardware, shorten development cycles, and bring new accelerator solutions to production.
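For context on the Triton kernel work described above: Triton exposes a blocked programming model in which a kernel is launched as a grid of program instances, each handling one fixed-size block of the data, with a mask guarding the ragged final block. The sketch below illustrates that model in plain NumPy; it is not actual Triton code (which is JIT-compiled for GPUs), and the names `add_kernel` and `launch` are hypothetical.

```python
import numpy as np

BLOCK = 8  # number of elements handled by one program instance

def add_kernel(x, y, out, pid):
    """One 'program instance': process block number `pid` with masking."""
    offsets = pid * BLOCK + np.arange(BLOCK)
    mask = offsets < len(x)        # guard lanes past the end of the data
    idx = offsets[mask]
    out[idx] = x[idx] + y[idx]     # masked load, compute, store

def launch(x, y):
    """Launch a 1-D grid covering all elements (analogous to a Triton grid)."""
    out = np.empty_like(x)
    grid = (len(x) + BLOCK - 1) // BLOCK   # ceiling division
    for pid in range(grid):                # on a GPU these would run in parallel
        add_kernel(x, y, out, pid)
    return out

x = np.arange(10, dtype=np.float32)
y = np.ones(10, dtype=np.float32)
print(launch(x, y))  # elementwise sum of x and y
```

Backend work of the kind this role describes sits below this level: lowering such blocked kernels from Triton's IR to a specific accelerator's instruction set and memory hierarchy.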

Company Size

1-10

Company Stage

N/A

Total Funding

N/A

Headquarters

Oregon

Founded

2025

Simplify Jobs

Simplify's Take

What believers are saying

  • Meta's KernelEvolve validates Triton automation with 17x gains.
  • Red Hat promotes Triton democratizing accelerator programming.
  • Open Core Ventures backs Kernelize bridging CUDA lock-in.

What critics are saying

  • Meta's KernelEvolve automates kernels, undercutting services in 6-12 months.
  • Triton upstreams AMD backends, eliminating on-ramp needs in 12-24 months.
  • vLLM automates inference optimization, capturing B2B clients in 6-12 months.

What makes Kernelize unique

  • Kernelize automates Triton compiler backends for AI accelerators.
  • Triton Extensions enable chip-specific optimizations without fragmentation.
  • Platform supports heterogeneous clusters across CPUs, GPUs, NPUs.


Benefits

Remote Work Options