
Senior Technical Product Manager - Serverless AI

Nebius


Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role
Nebius Serverless AI is our consumption-based compute platform for running AI workloads — training jobs, inference endpoints, and interactive development environments — without managing infrastructure. Users submit containerized workloads via CLI or UI, access GPU compute with pay-per-second billing, and the platform handles provisioning, lifecycle, and cleanup. We launched GA in Q1 2026 and are now scaling toward 1,000+ users while building the next generation of capabilities: autoscaling, multi-node distributed workloads, and developer-first tooling.

We are looking for a Senior Technical Product Manager to join the Serverless AI product team. You and the other PM will divide ownership across the product surface, but you will own your areas with full autonomy. This is not a role where you write requirements and hand them off. You will be the person who understands container runtimes, GPU scheduling, cold start optimization, and inference serving deeply enough to make correct technical trade-offs — and also the person who talks to customers, shapes the CLI experience, defines pricing, and drives adoption.

We are building the next generation of AI cloud — infrastructure designed from the ground up for GPU-intensive workloads, not retrofitted from legacy cloud. This is a lean, high-impact team where every person shapes the product directly. You need to be the kind of PM who amplifies engineering output by making the right calls on what to build and what to skip.

What success looks like in 12 months:
  • Serverless AI has clear product-market fit with measurable activation and retention metrics improving quarter over quarter.
  • Multi-node jobs and autoscaling endpoints are shipped and adopted by customers running production workloads.
  • Cold start time is reduced from 1-3 minutes to under 60 seconds for common workloads through a combination of product and infrastructure improvements you drove.
  • Developer experience (CLI, docs, error messages, onboarding flow) sets the standard that developers expect from a next-generation AI cloud.
  • At least 3 product decisions you made are directly attributable to customer conversations or data analysis you conducted.
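For intuition on why the cold start target above hinges on image handling, here is a back-of-envelope budget. Every size, bandwidth, and duration below is an assumed number for illustration, not a measured Nebius figure.

```python
# Rough cold-start budget for a containerized GPU workload.
# All sizes, bandwidths, and durations are illustrative assumptions.

IMAGE_SIZE_GB = 12.0        # assumed ML image (CUDA, framework, weights baked in)
PULL_BANDWIDTH_GBPS = 0.25  # assumed effective registry bandwidth, GB/s

cold_steps = {
    "image_pull": IMAGE_SIZE_GB / PULL_BANDWIDTH_GBPS,  # 48 s, dominates
    "vm_provision": 20.0,
    "gpu_attach": 5.0,
    "container_start": 5.0,
}

# Same workload when the image is already cached on the node.
warm_steps = dict(cold_steps, image_pull=2.0)

print(f"cold start: {sum(cold_steps.values()):.0f} s")  # 78 s: misses the target
print(f"warm cache: {sum(warm_steps.values()):.0f} s")  # 32 s: under 60 s
```

Under these assumptions, the image pull alone blows the 60-second budget, which is why levers like node-local image caching, snapshot restore, and warm pools are the usual places to look.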
Your responsibilities will include:

1. Product Ownership:
  • Co-own the Serverless AI product roadmap — Jobs, Endpoints, and DevPods — taking primary ownership of specific product areas while collaborating closely with the other PM on shared priorities and cross-cutting decisions.
  • Write detailed, technically precise PRDs that engineering teams can execute against. Our PRDs specify CLI syntax, API contracts, state machines, and billing models — not abstract feature descriptions.
  • Make build/buy/defer decisions on capabilities like autoscaling, multi-node orchestration, HTTPS termination, secret injection, and health checking based on customer signal and strategic priorities.
2. Technical Depth:
  • Understand the full workload lifecycle: container image pull → VM provisioning → GPU attachment → workload execution → cleanup — well enough to identify bottlenecks and propose solutions.
  • Evaluate technical trade-offs in areas like container cold start optimization (image caching, snapshot restore, warm pools), GPU scheduling and bin-packing, and storage mount performance.
  • Work directly with engineers on architecture decisions for distributed training support, endpoint autoscaling policies, and fault tolerance mechanisms.
  • Stay current on the fast-moving serverless GPU infrastructure space — new inference frameworks (vLLM, TensorRT-LLM, SGLang), container runtimes, orchestration approaches — and translate trends into product direction.
3. Customer & Market:
  • Run customer discovery and feedback sessions with ML engineers and platform teams at AI startups and enterprises. Turn qualitative insight into specific product actions.
  • Analyze usage data, activation funnels, and churn patterns to identify where users get stuck and what features drive retention.
  • Track market dynamics, emerging technologies, and industry trends to inform product strategy and ensure Nebius stays ahead of where the market is heading.
  • Define and iterate on pricing, packaging, and tier strategy for Serverless AI.
4. Go-to-Market:
  • Own the technical content strategy: quickstart guides, tutorials, reference architectures, and example workloads that reduce time-to-first-job.
  • Partner with marketing on developer-focused campaigns, webinars, and conference presence.
  • Work with Solution Architects and Sales to qualify serverless-fit opportunities and support technical evaluations.
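The workload lifecycle named in the Technical Depth section (image pull → provisioning → GPU attachment → execution → cleanup) can be sketched as a small state machine. The states, transitions, and failure modes here are illustrative, not the platform's actual implementation.

```python
from enum import Enum, auto

class State(Enum):
    SUBMITTED = auto()
    PULLING_IMAGE = auto()
    PROVISIONING = auto()
    ATTACHING_GPU = auto()
    RUNNING = auto()
    CLEANUP = auto()
    SUCCEEDED = auto()
    FAILED = auto()

# Happy-path transitions; each step also lists an example failure mode
# a PM should be able to reason about (names are illustrative).
TRANSITIONS = {
    State.SUBMITTED:     (State.PULLING_IMAGE, "quota exceeded / invalid spec"),
    State.PULLING_IMAGE: (State.PROVISIONING,  "registry auth failure, oversized image"),
    State.PROVISIONING:  (State.ATTACHING_GPU, "capacity stockout in region"),
    State.ATTACHING_GPU: (State.RUNNING,       "driver mismatch, GPU not visible"),
    State.RUNNING:       (State.CLEANUP,       "OOM, node preemption, user error"),
    State.CLEANUP:       (State.SUCCEEDED,     "leaked volume / orphaned VM"),
}

def walk_happy_path(start: State = State.SUBMITTED) -> list[State]:
    """Return the ordered list of states on the happy path."""
    path = [start]
    while path[-1] in TRANSITIONS:
        path.append(TRANSITIONS[path[-1]][0])
    return path

print([s.name for s in walk_happy_path()])
```

Being able to name the failure mode at each edge of a diagram like this is the kind of reasoning the role calls for when identifying bottlenecks and proposing fixes.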
Requirements

Non-negotiables — all of the following must be true:
  • You have built, shipped, and iterated on infrastructure or platform products used by developers or ML engineers. Not consumer apps. Not dashboards. Infrastructure.
  • You understand containers at a practical level — Docker, image registries, container runtimes, resource limits, networking. You've debugged why a container won't start, why a GPU isn't visible inside it, or why a mount isn't working.
  • You have working knowledge of GPU computing for AI/ML: what GPU types exist and when to use them, how training and inference workloads differ in resource requirements, what vLLM / TensorRT-LLM / Triton are and why they matter.
  • You can read a CLI reference and know if it's well-designed. You've shaped developer-facing APIs, CLIs, or SDKs.
  • You have run real customer discovery — not surveys, but technical conversations with engineers where you learned something that changed your product direction.
  • You have 3+ years of product management experience in cloud infrastructure, AI/ML platforms, or developer tools.
Technical skills we will test in the interview:
  • Ability to whiteboard a workload lifecycle (submit → schedule → provision → execute → cleanup) and identify failure modes at each step.
  • Understanding of autoscaling trade-offs: scale-to-zero vs. warm pools, scaling metrics (queue depth, latency, utilization), cold start implications.
  • Familiarity with inference serving concepts: batching, model loading, quantization, KV-cache management, multi-model serving.
  • Understanding of distributed training concepts: data parallelism, model parallelism, communication overhead, checkpointing.
  • Ability to reason about pricing models: per-second vs. per-request vs. per-token, and how pricing interacts with product architecture.
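To illustrate the last bullet, here is a toy comparison of per-second versus per-token pricing for the same inference endpoint. Every rate and throughput figure is an assumption picked for the arithmetic, not a real price.

```python
# Back-of-envelope comparison of two pricing models for one inference
# workload. Every number below is an illustrative assumption.

GPU_PER_SECOND = 0.0008      # assumed $/GPU-second
PRICE_PER_1M_TOKENS = 2.00   # assumed $/1M output tokens
TOKENS_PER_SECOND = 1500     # assumed sustained throughput per GPU

def per_second_cost(seconds: float) -> float:
    """Cost if the customer pays for wall-clock GPU time."""
    return seconds * GPU_PER_SECOND

def per_token_cost(seconds: float, utilization: float) -> float:
    """Cost if the customer pays per token actually generated.
    `utilization` is the fraction of time the GPU is serving requests."""
    tokens = seconds * utilization * TOKENS_PER_SECOND
    return tokens / 1_000_000 * PRICE_PER_1M_TOKENS

hour = 3600.0
for util in (0.1, 0.5, 0.9):
    print(f"util={util:.0%}  per-second=${per_second_cost(hour):.2f}  "
          f"per-token=${per_token_cost(hour, util):.2f}")
```

At low utilization, per-token pricing is cheaper for the customer; at high utilization, per-second is. That crossover is exactly why pricing choices feed back into architecture decisions such as scale-to-zero and warm pools.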
It will be an added bonus if you have:
  • Experience at a serverless or GPU cloud company.
  • Hands-on ML engineering background — you've trained models, deployed inference endpoints, or built ML pipelines yourself.
  • Experience with Kubernetes for ML workloads (Kubeflow, KServe, Ray Serve) and understanding of why many ML teams want to avoid it.
  • Prior experience building a product from early stage to scale in a fast-growing market.
  • Background in systems engineering, distributed systems, or site reliability engineering.
Who thrives in this role
  • You are more comfortable in a terminal than in a slide deck.
  • You form strong opinions based on data and direct customer signal, and you update them when evidence changes.
  • You are energized by building at pace — small team, fast-evolving product, big opportunity.
  • You care about developer experience at the level of error messages, CLI flag naming, and documentation quality.
  • You'd rather ship a smaller thing that works perfectly than a bigger thing that's mediocre.

About Nebius

Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.

Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (TOP500 list, November 2023).

Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.

Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.

If you’re up to the challenge and are excited about AI and ML as much as we are, join us!

What we offer

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.


Job details

Workplace: Remote
Location: Europe; Remote
