
Senior / Intermediate Data Engineer – GCP, BigQuery, Pub/Sub, Kafka, GKE, Java/Python/C#
UPS
Posted 4 days ago
About this role
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description:
Job Summary:
We are seeking a GCP-focused Data Engineer to build scalable, high‑quality data pipelines supporting our Data Maturity initiative for Logistics/Parcel Services. The ideal candidate has strong experience in GCP data services, data modeling, data quality frameworks, and understands logistics domain data such as shipment tracking, routing, and warehouse operations.
Key Responsibilities:
Core Engineering (All Levels)
- Pipeline Development: Design and develop scalable ETL/ELT pipelines using BigQuery, Pub/Sub, and Dataflow/Dataproc (an illustrative sketch follows this list).
- Microservices: Build and deploy APIs using Python/Java/C# to integrate enterprise and external logistics systems.
- Orchestration: Orchestrate workloads via Composer (Airflow) or GKE using Docker and Kubernetes.
- Data Quality: Implement validation checks, lineage tracking, and monitoring for pipeline SLAs (freshness, latency).
- Modeling: Model logistics and supply chain data in BigQuery for analytics and operational insights.
- DataOps: Apply CI/CD, automated testing, and versioning best practices.
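To illustrate the kind of pipeline development and data quality work described above, here is a minimal sketch of a streaming Pub/Sub-to-BigQuery pipeline built with Apache Beam (the SDK behind Dataflow). The project, topic, table, and schema names are hypothetical placeholders, not actual UPS systems.

```python
# Minimal sketch: streaming Pub/Sub -> BigQuery pipeline with Apache Beam.
# Project, topic, table, and schema names are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # streaming=True is required for unbounded Pub/Sub sources;
    # in practice this would run with --runner=DataflowRunner.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadTrackingEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/shipment-tracking")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Basic quality gate: drop records missing the fields the model requires.
            | "KeepValidEvents" >> beam.Filter(
                lambda e: e.get("tracking_number") and e.get("event_time"))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:logistics.shipment_events",
                schema="tracking_number:STRING,event_time:TIMESTAMP,status:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```

In a production setup the same pipeline would typically be scheduled and monitored through Composer (Airflow) or deployed on GKE, with the validation step expanded into a fuller data quality framework.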
Intermediate / Senior additions
- System Design: Take ownership of end-to-end technical design for complex data modules.
- Mentorship: Actively mentor junior engineers and conduct rigorous code reviews to ensure high engineering standards.
- Best Practices: Establish and document DataOps standards and reusable patterns for the team.
Lead additions
- POD Leadership: Act as the technical head of the data pod, ensuring sprint goals are met and unblocking the team.
- Architecture: Define the high-level architecture and long-term technical roadmap for the logistics data platform.
- Stakeholder Management: Partner with business leaders to translate complex logistics requirements into technical specifications.
- Negotiation: Manage requirements scoping and prioritize backlogs by balancing technical debt with business value.
- Coaching: Drive the professional growth of the entire engineering team through structured coaching and performance feedback.
Required Skills:
- Relevant experience:
Lead – minimum 7 years of relevant experience
Senior – minimum 5 years of relevant experience
Intermediate – minimum 3 years of relevant experience
Associate – minimum 2 years of relevant experience
- Strong hands‑on experience with GCP BigQuery, Pub/Sub, GCS, Dataflow/Dataproc.
- Proficiency in Python/Java/C#, RESTful APIs, and microservice development.
- Experience with Kafka for event-driven ingestion.
- Strong SQL skills and experience with data modeling.
- Expertise in Docker/Kubernetes (GKE) and CI/CD tools (Cloud Build, GitHub Actions, or ADO).
- Experience implementing Data Quality, Metadata management, and Data Governance frameworks.
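As a hedged illustration of the data quality expectation above, the sketch below checks a table's freshness against an SLA using the google-cloud-bigquery client. The project, dataset, and one-hour threshold are assumptions for the example only.

```python
# Minimal sketch: a freshness-SLA check with the google-cloud-bigquery client.
# Table name and SLA threshold below are hypothetical.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

FRESHNESS_SLA = timedelta(hours=1)  # assumed SLA for illustration


def check_freshness(table_id: str = "my-project.logistics.shipment_events") -> bool:
    """Return True if the newest event in the table is within the SLA window."""
    client = bigquery.Client()
    query = f"SELECT MAX(event_time) AS latest FROM `{table_id}`"
    row = next(iter(client.query(query).result()))
    latest = row.latest
    if latest is None:
        return False  # an empty table counts as a freshness violation
    lag = datetime.now(timezone.utc) - latest
    return lag <= FRESHNESS_SLA


if __name__ == "__main__":
    print("fresh" if check_freshness() else "stale")
```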
Preferred Qualifications:
- Experience with Terraform, Cloud Composer (Airflow)
- Experience in Azure Databricks, Delta Lake, ADLS, and Azure Data Factory
- Experience in Knowledge Graph Engineering using Neo4j and/or Stardog
- Familiarity with Data Governance tools or Cataloging systems (e.g., Informatica Axon)
- Logistics domain experience
Education
Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field
Employee Type:
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.