Senior Data Engineer

The Fedcap Group

Posted 9 days ago

About this role

Position Summary

The Fedcap Group (TFG) is seeking a transformational and highly strategic Senior Data Engineer to architect and lead the enterprise data warehouse and data capabilities. This role is instrumental in enabling operational excellence, mission alignment, and scalable growth across TFG’s international network, and reports to the Head of Data and Analytics.


Goals of the Position (Hands-On Sr. Data Engineering)

The Sr. Data Engineer will:

  • Deliver Reliable, Analytics-Ready Data Models
    • Design and implement Snowflake/dbt data models that enable BI dashboards, advanced analytics, and AI workloads.
  • Build Secure and Compliant Data Infrastructure
    • Implement role-based access, data masking, and governance controls in both ADF and Snowflake, ensuring compliance with organizational and regulatory requirements.
  • Lead Development of dbt and Transformation Workflows (a minimal dbt model sketch follows this list)

    • Write, test, and deploy dbt models and transformations with automated quality checks.

    • Build reusable macros and packages to accelerate pipeline delivery.

  • Ensure Performance and Cost Optimization
    • Tune Snowflake queries and warehouses to improve efficiency and reduce costs.

    • Build monitoring and alerting frameworks to detect performance or data quality issues.

  • Enable End-to-End Data Pipelines

    • Build ingestion (Snowpipe, Streams, Tasks) and transformation workflows that move data from raw (Bronze) to curated (Gold) layers.

    • Deliver pipelines that are automated, resilient, and production ready.

  • Directly Support Business & Analytics Teams

    • Partner with stakeholders to understand data needs and translate them into solutions.

    • Take ownership from requirements to delivery, ensuring solutions are deployed accurately and follow standard engineering frameworks.
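
As an illustration only, the sketch below shows what a single Silver-layer dbt model on Snowflake might look like for the transformation work described above. The source, model, and column names (raw.program_enrollments, stg_program_enrollments, and so on) are hypothetical placeholders, not actual TFG objects.

  -- models/silver/stg_program_enrollments.sql (hypothetical model name)
  -- Standardizes a raw Bronze table into a typed, deduplicated Silver staging model.

  {{ config(materialized='incremental', unique_key='enrollment_id') }}

  with source as (

      -- the 'raw' source and 'program_enrollments' table are assumed names
      select * from {{ source('raw', 'program_enrollments') }}

  ),

  cleaned as (

      select
          cast(enrollment_id as varchar)   as enrollment_id,
          cast(participant_id as varchar)  as participant_id,
          lower(trim(program_name))        as program_name,
          try_to_date(enrolled_on)         as enrolled_on,
          _loaded_at                       as loaded_at
      from source
      -- keep only the latest record per enrollment
      qualify row_number() over (
          partition by enrollment_id
          order by _loaded_at desc
      ) = 1

  )

  select * from cleaned
  {% if is_incremental() %}
    -- only process rows newer than what is already in the target table
    where loaded_at > (select max(loaded_at) from {{ this }})
  {% endif %}

In practice, unique and not_null tests on enrollment_id would sit in the model's schema.yml so the automated quality checks mentioned above run with every dbt build.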


Key Responsibilities

  • Collaborate with the Head of Data and Analytics to implement the enterprise Medallion Architecture (Bronze → Silver → Gold).

  • Design, build, and maintain data ingestion pipelines in Azure Data Factory (ADF) to move data from diverse sources into Azure Data Lake Storage Gen2 (Bronze).

  • Configure and manage secure integrations between Azure and Snowflake, including external stages, storage integrations, and automated ingestion patterns (Snowpipe, Streams, Tasks); a sketch of these patterns follows this list.

  • Develop and optimize Snowflake data models (fact, dimension, staging tables) aligned to Bronze–Silver–Gold architecture and business KPIs.

  • Implement role-based access control (RBAC), data masking, and row/column-level security in Snowflake to ensure data privacy and compliance.

  • Build and maintain a modular dbt framework, including models, macros, tests, and snapshots, to enforce data quality and accelerate transformations.

  • Create and manage CI/CD pipelines for dbt using GitHub Actions or Azure DevOps, ensuring reliable deployments across environments.

  • Write and optimize complex SQL and Python scripts to automate workflows, monitor data pipelines, and troubleshoot production issues.

  • Implement data validation, quality checks, and monitoring frameworks to ensure freshness, accuracy, and reliability of data products.

  • Collaborate directly with BI, Analytics, and Data Science teams to deliver curated, business-ready datasets.

  • Take end-to-end ownership of assigned data engineering projects: requirements → design → build → deploy → support.

  • Document pipelines, transformations, and models to ensure reproducibility and team-wide adoption.
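
For illustration only, the sketch below shows the Snowflake side of the ingestion and security patterns listed above (external stage over ADLS Gen2, Snowpipe, a masking policy, and RBAC grants). All integration, stage, table, column, and role names are hypothetical placeholders.

  -- 1. Storage integration and external stage over the ADLS Gen2 Bronze container
  create storage integration if not exists adls_bronze_int
    type = external_stage
    storage_provider = 'AZURE'
    enabled = true
    azure_tenant_id = '<tenant-id>'
    storage_allowed_locations = ('azure://<account>.blob.core.windows.net/bronze/');

  create stage if not exists raw.public.bronze_stage
    url = 'azure://<account>.blob.core.windows.net/bronze/'
    storage_integration = adls_bronze_int
    file_format = (type = parquet);

  -- 2. Snowpipe for continuous, event-driven loading into a Bronze table
  --    (the notification integration named here is also a placeholder)
  create pipe if not exists raw.public.enrollments_pipe
    auto_ingest = true
    integration = 'ADLS_EVENT_INT'
    as
    copy into raw.public.program_enrollments
    from @raw.public.bronze_stage/enrollments/
    file_format = (type = parquet)
    match_by_column_name = case_insensitive;

  -- 3. Column-level security: mask PII for every role except a governed reader role
  create masking policy if not exists governance.policies.mask_email
    as (val string) returns string ->
    case
      when current_role() in ('PII_READER') then val
      else '***MASKED***'
    end;

  alter table raw.public.program_enrollments
    modify column participant_email
    set masking policy governance.policies.mask_email;

  -- 4. Role-based access: analysts read only the curated (Gold) layer
  grant usage on database analytics to role analyst;
  grant usage on schema analytics.gold to role analyst;
  grant select on all tables in schema analytics.gold to role analyst;

The corresponding ADF side (pipelines, triggers, linked services) is authored in the ADF designer or ARM/JSON templates rather than SQL, so it is not sketched here.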


Qualifications

Education & Certification

  • Bachelor’s degree in Information Systems, Computer Science, Engineering, or a related field.
  • Advanced degrees in related fields are a plus; however, hands-on experience is strongly preferred.
  • Snowflake SnowPro Advanced: Data Engineer / Architect certification (preferred).
  • Data Governance certifications (preferred).

Professional Experience

  • 5+ years of proven experience in data engineering roles.
  • Deep expertise in enterprise system implementations, data lifecycle management, modular frameworks, and data platform architecture.
  • Strong hands-on experience with dbt, Azure, and Snowflake is a must.
  • Demonstrated ability to design and implement scalable, secure, and modular data pipelines.
  • Experience with data quality frameworks, lineage, and governance practices.
  • Track record of delivering end-to-end data solutions in cloud environments.


Success Metrics (First 6–12 Months)

  • Reliable Ingestion: Design and deploy at least 10 production-ready ADF pipelines that ingest data into ADLS and Snowflake with metadata-driven, reusable templates.
  • Modular Framework Adoption: Establish a dbt modular framework (Bronze → Silver → Gold) with 50+ tested models and reusable macros adopted across the team.
  • Data Quality: Implement automated dbt tests and validation frameworks achieving 95%+ test coverage for curated (Silver/Gold) datasets.
  • Performance & Cost Optimization: Reduce Snowflake warehouse costs by 15–20% through query optimization, partitioning, and warehouse right-sizing (see the monitoring sketch after this list).
  • Security & Compliance: Implement role-based access (RBAC) and masking policies in Snowflake with zero security audit findings.
  • Pipeline SLAs: Deliver pipelines that consistently meet agreed SLAs (e.g., data availability within X hours of source refresh).
  • Business Impact: Enable at least 5–10 critical BI dashboards or analytics use cases by delivering curated, business-ready datasets.
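
For illustration only, the following sketch shows the kind of Snowflake account-usage queries and warehouse settings that typically sit behind a cost-optimization target like the one above. Warehouse names and thresholds are placeholders.

  -- Credits consumed per warehouse over the last 30 days (right-sizing candidates)
  select
      warehouse_name,
      sum(credits_used)                as credits_30d,
      count(distinct start_time::date) as active_days
  from snowflake.account_usage.warehouse_metering_history
  where start_time >= dateadd(day, -30, current_timestamp())
  group by warehouse_name
  order by credits_30d desc;

  -- Long-running or spilling queries over the last 7 days (tuning candidates)
  select
      query_id,
      warehouse_name,
      total_elapsed_time / 1000 as elapsed_seconds,
      bytes_spilled_to_local_storage,
      bytes_spilled_to_remote_storage
  from snowflake.account_usage.query_history
  where start_time >= dateadd(day, -7, current_timestamp())
    and (total_elapsed_time > 300000  -- longer than 5 minutes
         or bytes_spilled_to_remote_storage > 0)
  order by total_elapsed_time desc
  limit 50;

  -- Right-size an oversized warehouse and shorten its idle window (example values)
  alter warehouse transform_wh set
      warehouse_size = 'SMALL',
      auto_suspend = 60;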

Job details

Workplace

Office

Location

Canada

Job type

Full Time

Company

Twitter

@FedcapGroup
