
Data Engineer - Databricks, PySpark, AWS

Comcast.com

Office

Philadelphia, 1701 John F Kennedy Blvd, United States

Full Time

Make your mark at Comcast -- a Fortune 30 global media and technology company. From the connectivity and platforms we provide, to the content and experiences we create, we reach hundreds of millions of customers, viewers, and guests worldwide. Become part of our award-winning technology team that turns big ideas into cutting-edge products, platforms, and solutions that our customers love. We create space to innovate, and we recognize, reward, and invest in your ideas, while ensuring you can proudly bring your authentic self to the workplace. Join us. You’ll do the best work of your career right here at Comcast. (In most cases, Comcast prefers to have employees on-site collaborating unless the team has been designated as virtual due to the nature of their work. If a position is listed with both office locations and virtual offerings, Comcast may be willing to consider candidates who live greater than 100 miles from the office for the remote option.)

Job Summary

This role is responsible for developing data structures and pipelines aligned to established standards and guidelines to organize, collect, standardize, and transform data that helps generate insights and address reporting needs.

Job Description

Data Engineering & Pipeline Development

  • Develops data structures and pipelines aligned to established standards and guidelines.
  • Ensures data quality during ingest, processing, and final load to target tables.
  • Creates standard ingestion frameworks for structured and unstructured data.
  • Checks and reports on the quality of data being processed.
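To make the pipeline-development duties above concrete, here is a minimal, hypothetical sketch (plain Python rather than the team's actual PySpark framework) of an ingest-time data-quality gate: it splits a batch into clean and rejected records and reports counts, the kind of check applied before loading target tables. The schema fields are invented for illustration.

```python
# Hypothetical required schema for an incoming event batch.
REQUIRED_FIELDS = ("customer_id", "event_ts", "event_type")

def quality_gate(records):
    """Split raw records into (clean, rejected) and report quality stats."""
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if rec.get(f) in (None, "")]
        if missing:
            # Quarantine the record along with the fields it failed on.
            rejected.append({"record": rec, "errors": missing})
        else:
            clean.append(rec)
    report = {"total": len(records), "clean": len(clean), "rejected": len(rejected)}
    return clean, rejected, report

batch = [
    {"customer_id": "c1", "event_ts": "2025-10-09T12:00:00Z", "event_type": "purchase"},
    {"customer_id": "",   "event_ts": "2025-10-09T12:01:00Z", "event_type": "refund"},
]
clean, rejected, report = quality_gate(batch)
```

In a real pipeline the same gate would typically run as a PySpark transformation per partition, with the report feeding the quality metrics the role is asked to publish.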

Data Consumption & Access

  • Creates standard methods for end users and downstream applications to consume data, including:
      • Database Views
      • Extracts
      • Application Programming Interfaces (APIs)
  • Develops and maintains information systems (e.g., data warehouses, data lakes), including data access APIs.
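Of the consumption methods listed, a database view is the simplest contract between the warehouse and its consumers. The sketch below uses stdlib sqlite3 purely as a stand-in for the actual warehouse; the table and view names are invented.

```python
import sqlite3

# In-memory database standing in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (customer_id TEXT, event_type TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("c1", "purchase", 9.99), ("c1", "refund", -9.99), ("c2", "purchase", 4.50)],
)

# The view is the consumption contract: downstream users get stable column
# names and pre-applied filters instead of the raw table.
conn.execute("""
    CREATE VIEW v_purchases AS
    SELECT customer_id, amount FROM events WHERE event_type = 'purchase'
""")
rows = conn.execute(
    "SELECT customer_id, amount FROM v_purchases ORDER BY customer_id"
).fetchall()
```

Because consumers query only `v_purchases`, the underlying table can be restructured without breaking them, which is the point of standardizing access this way.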

Platform Implementation & Optimization

  • Implements solutions via data architecture, data engineering, or data manipulation on:
      • On-prem platforms (e.g., Kubernetes, Teradata)
      • Cloud platforms (e.g., Databricks)
  • Determines appropriate storage platforms across on-prem (MinIO, Teradata) and cloud (AWS S3, Redshift) based on privacy, access, and sensitivity requirements.
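The platform-selection duty can be pictured as a routing decision. In this hypothetical sketch the platform names mirror the posting (MinIO/Teradata on-prem, S3/Redshift in the cloud), but the routing rules themselves are invented for illustration, not Comcast policy.

```python
def choose_storage(sensitivity, access_pattern):
    """Pick a storage platform from (sensitivity, access_pattern).

    sensitivity: "restricted" or "general"
    access_pattern: "analytical" (SQL/BI workloads) or "object" (file/blob access)
    """
    if sensitivity == "restricted":
        # Illustrative rule: highly sensitive data stays on-prem.
        return "Teradata" if access_pattern == "analytical" else "MinIO"
    # Non-restricted data can live in the cloud.
    return "Redshift" if access_pattern == "analytical" else "AWS S3"
```

A real implementation would consult a data-classification catalog rather than hard-coded rules, but the shape of the decision is the same.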

Data Lineage & Collaboration

  • Understands data lineage from source to final semantic layer, including transformation rules.
  • Enables faster troubleshooting and impact analysis during changes.
  • Collaborates with technology and platform management partners to optimize data sourcing and processing rules.
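A minimal way to see why lineage speeds up troubleshooting and impact analysis: with a map from semantic-layer fields back to their source columns and transformation rules, "what breaks if this source column changes?" becomes a lookup. All field names below are invented for illustration.

```python
# Hypothetical lineage map: semantic-layer field -> (source column, rule).
LINEAGE = {
    "monthly_revenue":      ("billing.amount", "SUM per customer per month"),
    "active_customer_flag": ("usage.last_event_ts", "1 if event within 30 days"),
    "customer_segment":     ("crm.tier", "mapped via segment lookup"),
}

def impacted_targets(source_field):
    """List semantic-layer fields derived from the given source column."""
    return sorted(t for t, (src, _rule) in LINEAGE.items() if src == source_field)
```

Production lineage usually comes from a catalog tool rather than a hand-kept dict, but the query pattern during a change review is exactly this.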

Design Standards & System Review

  • Establishes design standards and assurance processes for software, systems, and applications development.
  • Reviews business and product requirements for data operations.
  • Suggests changes and upgrades to systems and storage to accommodate ongoing needs.

Data Strategy & Lifecycle Management

  • Develops strategies for data acquisition, archive recovery, and database implementation.
  • Manages data migrations/conversions and troubleshooting of data processing issues.
  • Applies data sensitivity and customer data privacy rules and regulations consistently in all Information Lifecycle Management activities.

Monitoring & Issue Resolution

  • Monitors system notifications and logs to ensure database and application quality standards.
  • Solves abstract problems by reusing data files and flags.
  • Resolves critical issues and shares knowledge such as trends, aggregates, and volume metrics regarding specific data sources.
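As a simplified stand-in for the monitoring duties above, this sketch scans pipeline log lines for failures and aggregates per-source volume metrics (one of the knowledge-sharing items the role mentions). The log format is invented for illustration.

```python
from collections import Counter

def summarize_logs(lines):
    """Return (error_lines, rows_loaded_per_source) from simple log lines.

    Assumed hypothetical formats:
      "<source> ERROR <message>"   -- a failure notification
      "<source> LOADED <row_count>" -- a successful batch load
    """
    errors, volumes = [], Counter()
    for line in lines:
        if " ERROR " in line:
            errors.append(line)
        elif " LOADED " in line:
            source, _, count = line.split()
            volumes[source] += int(count)
    return errors, volumes

logs = [
    "etl ERROR load failed",
    "billing LOADED 100",
    "billing LOADED 50",
    "usage LOADED 7",
]
errors, volumes = summarize_logs(logs)
```

Real monitoring would hook into the platform's notification system (e.g., Databricks job alerts) instead of parsing text, but the aggregate-and-flag pattern carries over.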

Must-Have Technical Skills

  • AWS (including S3, Redshift)
  • PySpark
  • Databricks

Additional Technical Skills

  • Big Data Architecture
  • Python, SQL
  • Apache Spark
  • Data Modeling & Pipeline Design
  • Kafka / Kinesis (Streaming)
  • Apache Airflow
  • GitHub, CI/CD (Concourse preferred)
  • MinIO
  • Tableau
  • Performance Tuning
  • Jira (Ticketing)
  • Shell Commands
  • Data Governance & Best Practices

Disclaimer:

  • This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Skills

Amazon Kinesis, Amazon MQ, Amazon S3, Amazon Web Services (AWS), Databricks Platform, PySpark

We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality—to help support you physically, financially and emotionally through the big milestones and in your everyday life.


Please visit the benefits summary on our careers site for more details.

Education

Bachelor's Degree

While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Certifications (If Applicable)

Relevant Work Experience

5-7 Years

Comcast is an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Posted: October 9, 2025
