
Software Engineer Graduate (Cloud Native Infrastructure) - 2026 Start (PhD)

ByteDance

Posted 3 days ago

About this role

Full Time Entry-level Software Engineer Graduate (Cloud Native Infrastructure) - 2026 Start (PhD) in AI at ByteDance in Seattle, Washington, United States. Apply directly through the link below.

At a glance

Work mode
Office
Employment
Full Time
Location
Seattle, Washington, United States
Salary
148k - 301k USD
Experience
Entry-level

Core stack

  • Infrastructure
  • Microservices
  • Documentation
  • Service Mesh
  • Optimization
  • Contributing
  • Architecture
  • Open Source
  • Performance
  • Distributed
  • Scalability
  • Kubernetes
  • Serverless
  • Onboarding
  • Innovation
  • Efficiency
  • Design
  • K8s
  • LLM
  • ML
  • AI

Quick answers

  • What is the salary range?

    The salary range is 148k - 301k USD annually.

  • What skills are required?

    Infrastructure, Microservices, Documentation, Service Mesh, Optimization, Contributing, Architecture, Open Source, Performance, Distributed, and more.

ByteDance is hiring for this role.

Team Introduction
The Compute Infrastructure team uses Kubernetes and Serverless technologies to build large, reliable, and efficient compute infrastructure. This infrastructure powers hundreds of large-scale clusters globally, running millions of online containers and offline jobs daily, including AI and LLM workloads. The team builds cutting-edge, industry-leading infrastructure that empowers AI innovation, ensuring the performance, scalability, and reliability needed to support the most demanding AI/LLM workloads. The team is also dedicated to open-sourcing key infrastructure technologies, including projects in the K8s portfolio such as kubewharf (KubeBrain, Katalyst, Godel, etc.).

At ByteDance, as we expand and innovate, powering global platforms like TikTok and various AI/ML and LLM initiatives, we face the challenge of improving resource cost efficiency at massive scale within our rapidly growing compute infrastructure. We are seeking talented software engineers excited to optimize our infrastructure for AI and LLM models. Your expertise can drive solutions that make better use of computing resources (including CPU, GPU, power, etc.), directly impacting the performance of all our AI services and helping us build the future of computing infrastructure. As we grow our compute infrastructure in overseas regions, including North America, Europe, and Asia Pacific, you will also have the opportunity to work closely with leaders from ByteDance's global business units to ensure that we continue to scale and optimize our infrastructure globally.

We are looking for talented individuals to join our team in 2026. As a graduate, you will get unparalleled opportunities to kickstart your career, pursue bold ideas, and explore limitless growth. Co-create a future driven by your inspiration with ByteDance.

Successful candidates must be able to commit to an onboarding date by end of year 2026.
Please state your availability and graduation date clearly in your resume.
Applications will be reviewed on a rolling basis - we encourage you to apply early.

Joining this team, you will:
• Experience and support the fast-growing ByteDance business and its ever-expanding fleet of machines across multiple clouds globally, hosting hundreds of millions of containers and applications.
• Work directly with world-class engineers, CNCF and SIG experts, and distributed-systems experts, and grow your career with the most popular cloud and open-source technologies: containers, Kubernetes, etcd, Service Mesh, Serverless, etc.
• Have the chance to architect the next generation of Cloud-Native Infrastructure for hosting highly complex workloads: AI/LLM, microservices, big data, etc.
• Challenge yourself to solve unique, highly complex technical problems in Kubernetes cluster management and scheduling for a super-large distributed system.

Responsibilities
- Assist in analyzing and supporting enhancements to Hyper-Scale AI Infrastructure platforms, focusing on improving performance, scalability, and resilience for both traditional workloads and large language model (LLM) applications.
- Contribute to performance optimization efforts for Kubernetes-based infrastructure, including monitoring pod lifecycle, tracking resource utilization, and analyzing system behavior under varying load conditions, working closely with senior engineers to identify improvement opportunities.
- Lead small-scale development tasks related to resource management and scheduling in Kubernetes clusters, such as testing configuration updates, automating routine resource allocation workflows, or contributing to tooling for efficiency tracking.
- Engage actively in team discussions on AI infrastructure design and optimization strategies, leveraging academic knowledge and personal projects to propose fresh insights and potential solutions.
- Develop and maintain clear technical documentation, including runbooks, architecture diagrams, and process guides, to strengthen knowledge sharing and operational efficiency across the team.
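To give a flavor of the resource-management and scheduling work the responsibilities above describe, here is a minimal, hypothetical sketch of a "least-allocated" node-scoring function in the spirit of Kubernetes scheduler scoring. All names and numbers here (`Node`, `score_least_allocated`, the capacities) are invented for illustration; real scheduler plugins are written in Go inside kube-scheduler and are considerably more involved.

```python
# Toy "least-allocated" scoring sketch (illustrative only, not ByteDance code).
# A node that would have more free CPU and memory left after placing the pod
# receives a higher score, spreading load across the cluster.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_capacity_m: int    # allocatable CPU in millicores
    mem_capacity_mi: int   # allocatable memory in MiB
    cpu_requested_m: int   # sum of CPU requests of pods already placed
    mem_requested_mi: int  # sum of memory requests of pods already placed

def score_least_allocated(node: Node, pod_cpu_m: int, pod_mem_mi: int) -> int:
    """Return a 0-100 score; higher means more capacity left after placement."""
    cpu_after = node.cpu_requested_m + pod_cpu_m
    mem_after = node.mem_requested_mi + pod_mem_mi
    if cpu_after > node.cpu_capacity_m or mem_after > node.mem_capacity_mi:
        return 0  # pod does not fit; this node would be filtered out
    cpu_free = (node.cpu_capacity_m - cpu_after) * 100 // node.cpu_capacity_m
    mem_free = (node.mem_capacity_mi - mem_after) * 100 // node.mem_capacity_mi
    return (cpu_free + mem_free) // 2  # average the per-resource scores

nodes = [
    Node("node-a", 4000, 8192, 3000, 6144),  # heavily loaded
    Node("node-b", 4000, 8192, 500, 1024),   # mostly idle
]
# Pick the best node for a pod requesting 500m CPU and 512 MiB memory.
best = max(nodes, key=lambda n: score_least_allocated(n, 500, 512))
print(best.name)  # the mostly idle node wins
```

Production schedulers balance many such scoring signals (spreading, affinity, bin-packing for cost efficiency) rather than a single heuristic; this sketch only shows the shape of the problem.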

The base salary range for this position in the selected city is $148,200 - $300,960 annually.
