Director of Data Engineering
Human Agency
Location: Remote (U.S.)
Type: Full-Time
About Human Agency
We’re scaling rapidly and have a growing pipeline of opportunities that demand exceptional talent across disciplines. We’re looking to bring on individuals — from creative producers to technical experts to entrepreneurial leaders — who can help us realize this next chapter of growth.
Our team of creators, builders, and investors specializes in marketing, technology, artificial intelligence, and business—so people with big ideas can expand their agency and live lives they find meaningful.
We partner with organizations of all sizes to explore, design, and implement AI strategies that are secure, scalable, and human-centered. From advisory and tooling to implementation and education, we meet clients where they are and help them integrate AI in ways that align with their mission and values.
At Human Agency, we believe AI should amplify human potential—not replace it. Our goal is to empower teams to work smarter, move faster, and unlock new possibilities through thoughtful, responsible innovation.
The Opportunity
Join Human Agency as our Director of Data Engineering to stabilize and grow modern data platforms and lead AI-enabled outcomes across multiple client engagements. You’ll rotate across projects: as needed, you may temporarily coordinate a client’s data function to ensure continuity and accelerate delivery, then shift into hands-on engineering, reliability work, or AI implementation. The role will evolve; you may be a hands-on data engineer one day and an AI implementation leader the next. You’ll pair deep engineering craft with clear executive communication and consulting polish.
Key Responsibilities
Engagement Leadership (As Needed)
- Coordinate across data engineering, analytics, and data science leads; run operating cadences, triage priorities, and manage releases.
- Map ownership and dependencies; reduce single points of failure; maintain a living service catalog and decision log.
- Lead transition planning and knowledge transfer with internal teams and vendors while sustaining delivery.
Platform & Pipeline Ownership
- Build, operate, and improve ELT/ETL pipelines across batch and streaming sources.
- Manage orchestration (e.g., Airflow), transformations, environments, and CI/CD for analytics code.
- Optimize warehouse performance (e.g., Snowflake) and cost.
- Rapidly discover existing pipelines and data contracts; map dependencies, SLAs/SLOs, and single points of failure; propose immediate stabilizations.
Data Reliability & Governance
- Implement monitoring/alerting, data quality checks, and tests with clear SLOs.
- Maintain lineage/metadata visibility and role-based access controls.
- Participate in an incident response rotation; maintain runbooks and postmortems.
- Establish change-management controls (versioning, approvals, environment promotion) for analytics code.
Analytics Enablement
- Partner with analysts and business stakeholders to deliver trusted datasets and semantic models.
- Support BI tools (Looker/Power BI/Tableau) and establish versioned, documented sources of truth.
Client Collaboration & Consulting
- Translate business needs into technical data solutions and clear option sets (impact, risk, effort).
- Facilitate discovery/working sessions; align requirements and prioritize tradeoffs.
- Prepare executive-ready updates: concise narratives, metrics, and decision logs.
- Manage scope and expectations; escalate risks early; build trust and influence across engineering, analytics, and business teams.
Documentation & Communication
- Produce concise technical docs, decision logs, and release notes.
- Translate technical tradeoffs into clear options for non-technical stakeholders.
Hands-On Engineering
- Own day‑to‑day reliability for priority pipelines and critical dashboards; implement pragmatic monitoring/alerting.
- Read, debug, and improve existing pipelines; create new connectors and transformations as needed.
- Standardize patterns (e.g., ELT with versioned transformations, environment promotion, CI/CD for analytics code).
- Recommend and implement pragmatic tooling upgrades without destabilizing production.
- Lead structured knowledge transfer sessions and create handover materials.
Qualifications
Required
- 7+ years in data engineering/analytics engineering with ownership of production pipelines and BI at scale.
- Demonstrated success owning and stabilizing production data platforms and critical pipelines.
- Strong grasp of modern data platforms (e.g., Snowflake), orchestration (Airflow), and transformation frameworks (dbt or equivalent).
- Competence with data integration (ELT/ETL), APIs, cloud storage, and SQL performance tuning.
- Practical data reliability experience: observability, lineage, testing, and change management.
- Ability to operate effectively in ambiguous, partially documented environments and create order quickly through documentation and standards.
- Prior ownership of core operations and reliability for business-critical pipelines with defined SLOs and incident response.
- Demonstrated client-facing experience (consulting/agency or internal platform teams with cross-functional stakeholders) and outstanding written/verbal communication (executive briefings, workshops, decision memos).
Preferred
- Deep interest in Generative AI and Machine Learning.
- Practical Generative AI experience: shipped at least one end-to-end workflow (e.g., RAG) including ingestion, embeddings, retrieval, generation, and evaluation.
- Working knowledge of LLM behavior (tokens, context windows, temperature/top-p, few-shot/tool use) and how to tune for quality/cost/latency.
- Comfort with vector search (e.g., pgvector or a hosted vector store) and hybrid retrieval patterns.
- Evaluation & safety basics: offline evaluation harnesses, lightweight online A/B tests, and guardrails for PII and prompt-injection.
- MLOps for LLMs: experiment tracking, versioning of prompts/configs, CI/CD for data & retrieval graphs, and production monitoring (latency, cost, drift).
- Python scripting for data/LLM utilities and service integration (APIs, batching, retries).
- Familiarity with BI tools (Power BI/Tableau) and semantic layer design.
- Exposure to streaming, reverse ETL, and basic MDM/reference data management.
- Security & governance awareness (role‑based access, least privilege, data retention).
Considerations
- Education: Bachelor’s degree or equivalent experience.
- Ethics: Commitment to ethical practices and responsible AI.
- Travel: Occasional (10–30%) for client activities and events.
- Location: Remote-friendly, with preference for candidates in St. Louis, MO and major tech hubs.
Compensation
Base Salary: $120,000–$200,000/yr + performance-based incentives; final compensation commensurate with experience and location.
Why Work With Human Agency
Join a team of thinkers and builders creating meaningful impact across sectors—with autonomy to lead, the resources to succeed, and room to grow.
Equal Opportunity Commitment
Human Agency is an Equal Opportunity Employer. We value diverse backgrounds and strive to build an inclusive culture where everyone feels welcomed and empowered.
September 23, 2025