
Palantir Data Engineer

EXL

Posted 12 days ago

About this role

Job Description: Data Engineer with Palantir Expertise

We are seeking a seasoned Data Engineer with 3-6 years of experience in data engineering, AWS, microservices architecture, and AI/ML frameworks, along with hands-on expertise or certification in Palantir Foundry. The ideal candidate will design scalable data solutions, build robust pipelines, and integrate machine learning models into data workflows.

Location - Gurugram

Key Requirements:

  • 3-6 years of hands-on experience in data engineering with a focus on ETL workflows, data pipelines, and cloud computing.
  • Proven expertise or certification in Palantir Foundry is highly preferred.
  • Strong experience with AWS services for data processing and storage (e.g., S3, Glue, Athena, Lambda, Redshift).
  • Proficiency in programming languages and frameworks such as Python and PySpark.
  • Deep understanding of microservices architecture and distributed systems.
  • Familiarity with AI/ML tools and frameworks (e.g., TensorFlow, PyTorch) and their integration into data pipelines.
  • Experience with big data technologies like Apache Spark, Kafka, and Snowflake.
  • Strong problem-solving and performance optimization skills.
  • Exposure to modern DevOps practices, including CI/CD pipelines and container orchestration tools like Docker and Kubernetes.
  • Experience working in agile environments delivering complex data engineering solutions.

Education: Bachelor's degree

Job details

Workplace

Office

Location

Gurugram, Haryana, India

Job type

Full Time

Company

Twitter

@exl_service
