Full-Time
Migrates ML models to AI accelerators
No salary listed
United States
Remote
Remote role with preference for employees located in the USA; contractors in other locations may apply.
Kernelize helps organizations run AI inference on specialized hardware. It migrates machine learning models from CPU/GPU setups to AI accelerators and optimizes them for the target hardware, relying on the Triton compiler ecosystem to adapt models, ensure compatibility, and tune performance. By providing targeted compiler and runtime work, Kernelize offers fine-grained control for each accelerator, enabling faster development and deployment of new AI hardware. Unlike competitors, Kernelize combines deep compiler engineering with hands-on system integration, de-risking and accelerating the software side of new accelerators through services such as proof-of-concept projects and custom system development. The goal is to help clients achieve efficient, reliable AI inference on their hardware, shorten development cycles, and bring new accelerator solutions to production.
Company Size
1-10
Company Stage
N/A
Total Funding
N/A
Headquarters
Oregon
Founded
2025