
Software Engineer - Model APIs

Baseten.com

150k - 230k USD/year

Office

San Francisco

Full Time

About Baseten

Baseten powers inference for the world's most dynamic AI companies, like OpenEvidence, Clay, Mirage, Gamma, Sourcegraph, Writer, Abridge, Bland, and Zed. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. With our recent $150M Series D funding, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we’re scaling our team to meet accelerating customer demand.

The Role:

Baseten’s Model Performance (MP) team is responsible for ensuring the models running on our platform are fast, reliable, and cost‑efficient. As part of this team, you’ll focus on Model APIs — the infrastructure powering our hosted API endpoints for the latest open‑source models. This work spans distributed systems, model serving, and developer experience. You’ll join a small, high‑impact team operating at the intersection of product, model performance, and infra, helping to define how developers interact with AI models at scale.

Responsibilities:

  • Design, build, and operate the Model APIs surface with a focus on advanced inference capabilities: structured outputs (JSON mode, grammar‑constrained generation), tool/function calling, and multi‑modal serving
  • Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups
  • Productionize performance improvements across runtimes (e.g. TensorRT, TensorRT‑LLM) with a deep understanding of their internals: speculative decoding, quantization, batching, KV‑cache reuse, guided generation for structured outputs, and custom scheduling and routing algorithms for high‑performance serving
  • Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations
  • Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality.
  • Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication.
  • Collaborate closely with other teams to deliver robust, developer‑friendly model serving experiences.

Requirements:

  • 3+ years of experience building and operating distributed systems or large‑scale APIs.
  • Proven track record of owning low‑latency, reliable backend services (rate‑limiting, auth, quotas, metering, migrations).
  • Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management.
  • Comfortable debugging complex systems, from runtime internals to GPU execution traces.
  • Strong written communication; able to produce clear design docs and collaborate across functions.

Nice To Have:

  • Experience with, or open‑source contributions to, LLM inference engines (vLLM, SGLang, TensorRT‑LLM, TGI)
  • Knowledge of Kubernetes, service meshes, API gateways, or distributed scheduling.
  • Background in developer‑facing infrastructure or open‑source APIs.

We value infra‑leaning generalists who bring strong engineering fundamentals and curiosity. ML experience is a plus, but not required.
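To make the structured-outputs responsibility above concrete, here is a minimal, hypothetical sketch of what a JSON-mode request payload to an OpenAI-style chat completions endpoint looks like. The model name is a placeholder, and this is an illustration of the general API shape, not Baseten's actual endpoint contract.

```python
import json

def build_structured_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload that asks the
    serving runtime to constrain decoding so the output parses as JSON."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # "JSON mode": the runtime applies grammar-constrained generation
        # so the completion is guaranteed to be a valid JSON object.
        "response_format": {"type": "json_object"},
    }

# Hypothetical model name for illustration only.
payload = build_structured_request("example-oss-model", "List three colors as JSON.")
print(json.dumps(payload, indent=2))
```

Server-side, honoring `response_format` is where guided generation comes in: the engine masks token logits at each decoding step so only tokens that keep the output inside the JSON grammar can be sampled.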

Benefits

  • Competitive compensation package.
  • This is a unique opportunity to be part of a rapidly growing startup in one of the most exciting engineering fields of our era.
  • An inclusive and supportive work culture that fosters learning and growth.
  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative, forward-thinking team, we would love to hear from you. Apply now to help shape the future of AI!


At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

October 11, 2025
