
Deployment Engineer, AI Inference

Cerebras Systems

Office

Sunnyvale, CA or Toronto, Canada

Full Time

About Us

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.   

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In 2024, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. 

About The Role

We are seeking a highly skilled and experienced Deployment Engineer to build and operate our cutting-edge inference clusters. In this role, you will work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power.

You will play a critical role in ensuring reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you'll own the rollout of new software versions and AI replica updates, along with capacity reallocations across our custom-built, high-capacity datacenters.
 
Beyond operations, you'll drive improvements to our telemetry, observability, and fully automated deployment pipeline. This role involves working with advanced allocation strategies to maximize utilization of large-scale compute fleets.

The ideal candidate combines hands-on operational rigor with strong systems engineering skills and thrives on building resilient pipelines that keep pace with cutting-edge AI models.

This role does not require 24/7 on-call rotations.

Responsibilities

  • Deploy AI inference replicas and cluster software across multiple datacenters 
  • Operate across heterogeneous datacenter environments undergoing rapid 10x growth 
  • Maximize capacity allocation and optimize replica placement using constraint-solver algorithms (see the illustrative sketch after this list) 
  • Operate bare-metal inference infrastructure while supporting transition to K8S-based platform 
  • Develop and extend telemetry, observability and alerting solutions to ensure deployment reliability at scale 
  • Develop and extend a fully automated deployment pipeline to support fast software updates and capacity reallocation at scale 
  • Translate technical and customer needs into actionable requirements for the Dev Infra, Cluster, Platform and Core teams 
  • Stay up to date with the latest advancements in AI compute infrastructure and related technologies.  
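
One of the responsibilities above mentions optimizing replica placement with constraint-solver algorithms. Purely as an illustration of the kind of problem this involves, and not a description of Cerebras's actual tooling, the minimal Python sketch below places hypothetical inference replicas onto clusters under capacity constraints using a small backtracking search; every cluster name, replica name, and slot count is invented for the example.

# Illustrative only: a toy replica-placement sketch, not Cerebras tooling.
# It assigns inference replicas to datacenter clusters subject to capacity
# constraints, maximizing how many replicas get placed. A production system
# would use a real constraint solver (CP-SAT, ILP, etc.); this backtracking
# search just shows the shape of the problem. All names are hypothetical.

# Hypothetical cluster capacities, in replica slots.
CLUSTERS = {"dc-east": 4, "dc-west": 3, "dc-north": 2}

# Hypothetical replicas: (name, slots required, clusters allowed to host it).
REPLICAS = [
    ("model-a-70b", 3, {"dc-east", "dc-west"}),
    ("model-a-8b", 1, {"dc-east", "dc-west", "dc-north"}),
    ("model-b-34b", 2, {"dc-west", "dc-north"}),
    ("model-c-8b", 1, {"dc-north"}),
]


def place(replicas, free, assignment):
    """Backtracking search returning the assignment that places the most replicas."""
    if not replicas:
        return dict(assignment)
    name, slots, allowed = replicas[0]
    best = place(replicas[1:], free, assignment)  # option: leave this replica unplaced
    for cluster in allowed:
        if free[cluster] >= slots:
            free[cluster] -= slots          # tentatively place the replica here
            assignment[name] = cluster
            candidate = place(replicas[1:], free, assignment)
            if len(candidate) > len(best):
                best = candidate
            del assignment[name]            # undo and try the next cluster
            free[cluster] += slots
    return best


if __name__ == "__main__":
    for replica, cluster in place(REPLICAS, dict(CLUSTERS), {}).items():
        print(f"{replica} -> {cluster}")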

Skills And Requirements

  • 5-7 years of experience operating on-prem compute infrastructure (ideally in Machine Learning or High-Performance Compute), or in developing and managing complex AWS infrastructure for hybrid deployments 
  • Strong proficiency in Python for automation, orchestration, and deployment tooling 
  • Solid understanding of Linux-based systems and command-line tools 
  • Extensive knowledge of Docker containers and container orchestration platforms like K8S 
  • Familiarity with spine-leaf (Clos) networking architecture 
  • Proficiency with telemetry and observability stacks such as Prometheus, InfluxDB and Grafana 
  • Strong ownership mindset and accountability for complex deployments 
  • Ability to work effectively in a fast-paced environment.  

Location

  • SF Bay Area. 
  • Toronto, Canada. 

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open-source cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Experience our simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply Today And Become Part Of The Forefront Of Groundbreaking Advancements In AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

