Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully-integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.
The Role
Cerebras Systems is a pioneer in large-scale AI supercomputers. These multi-exaflop supercomputers are deployed in some of the world's largest datacenters and are built on our Wafer-Scale Cluster technology - a cluster of several Wafer Scale Engine (WSE) chips. The Cluster engineering team is responsible for delivering all of the software related to the cluster.
Responsibilities
- Assist in automating the configuration of networking, OS, and application software for large clusters of Cerebras WSE systems, servers, and switches.
- Contribute to building workflows for cluster upgrades, downgrades, and security patching, with a focus on minimizing downtime.
- Support the development of orchestration and scheduling systems for resource allocation, job submission, and placement in a multi-user cluster environment.
- Help ensure seamless deployment and operations for both on-premise and cloud-based clusters.
- Contribute to monitoring systems that detect and handle failures across various cluster resources, including High Availability configurations.
- Assist in building broad cluster and job monitoring tools, along with alerting systems.
- Develop user-facing tools for monitoring job status and gathering metrics.
- Create administrator-facing tools to help manage and operate large-scale clusters effectively.
Skills and Qualifications
- Solid understanding of software architecture, system design, and development principles.
- Familiarity with development in distributed cluster environments.
- Basic knowledge of Kubernetes (K8s), Prometheus, and Grafana.
- Proficiency in at least one programming language such as Go, Python, or bash.
- Strong problem-solving and debugging skills.
- Ability to write tests for new features and ensure existing features are properly tested.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU
- Publish and open source their cutting-edge AI research
- Work on one of the fastest AI supercomputers in the world
- Enjoy job stability with startup vitality
- Thrive in a simple, non-corporate work culture that respects individual beliefs
Read our blog: Five Reasons to Join Cerebras in 2024.
Apply today and join us at the forefront of groundbreaking advancements in AI.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.