
About this role
- Hybrid onsite at HNB, 3 days in office; Orlando, FL (exact address TBD)
- Max bill rate of $55/hr C2C; H-1B candidates accepted
- Skills needed: DataStage, ETL development (data mapping and transformation implementation), strong SQL Server with advanced T-SQL, file-based ingestion, etc.
- 12-month contract (possible extension)
- MUST be able to interview onsite for one of the rounds
**Important submittal instructions:** Please include the following in the notes section when you submit the resume:
- LinkedIn link (include in resume):
- GitHub link (include in resume):
- Rate:
- Location of the candidate (include in resume):
- Prepared for onsite interview (Y/N):
- Open to relocate:
- Start date availability:
- Legally authorized to work in the USA:
- Requires sponsorship:
- Visa status:
- Is this contractor currently on your payroll?
- Will the contractor be on your payroll when they begin working with Capco?
- If the contractor will not be on your payroll, please list the name of the vendor who will be paying the contractor: *this is where any layers should be identified*
- Name of the project, group, and reporting client manager from the most recent two years of work:
- Actual rate being paid to the candidate on W2 (optional, if you are comfortable sharing):
- Interview availability:
Role Description
Role summary
We are seeking a Software Engineer with strong ETL experience to design, build, and support file-to-table data transformations using IBM InfoSphere DataStage. You’ll turn inbound file feeds into reliable, auditable SQL Server table loads with solid performance, clear error handling, and repeatable operations.
Key responsibilities
- Design, develop, and maintain IBM DataStage ETL jobs that ingest file feeds (CSV, fixed-width, delimited) and load curated destination tables in SQL Server.
- Build end-to-end ETL flows, including staging, transformations, validations, and publishing to downstream schemas.
- Perform source-to-target mapping and implement transformation logic based on business and technical requirements.
- Use common DataStage stages and patterns (e.g., Sequential File, Transformer, Lookup, Join/Merge, Aggregator, Sort, Funnel, Remove Duplicates), with attention to partitioning and parallel job design.
- Write, optimize, and tune SQL Server queries, stored procedures, and T-SQL scripts used in ETL workflows.
- Implement restartable and supportable jobs: parameterization, robust logging, reject handling, auditing columns, and reconciliation checks (see the T-SQL sketch after this list).
- Apply data quality controls (format checks, referential checks, null/duplicate checks, threshold checks) and produce clear exception outputs for remediation.
- Monitor and troubleshoot ETL runs using DataStage Director/Operations Console and SQL Server tooling; perform root-cause analysis and fix defects.
- Improve performance through job design tuning (partitioning strategy, sorting choices, buffering, pushdown where appropriate) and SQL tuning (indexes, statistics, set-based logic).
- Participate in code reviews, testing, documentation, and release activities; maintain clear runbooks and operational procedures.
- Collaborate with business analysts, data modelers, QA, and production support to deliver stable pipelines.
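
For illustration only, here is a minimal T-SQL sketch of the kind of audited, reconciled publish step described above. All table, column, and variable names (stg.CustomerFeed, dbo.Customer, etl.LoadAudit, @BatchId, and so on) are hypothetical placeholders, not names from this engagement; in practice a DataStage job would pass in the batch parameters.

```sql
-- Hypothetical staging-to-target publish with audit columns and a
-- reconciliation check; all object names are illustrative.
DECLARE @BatchId INT = 20240101,   -- would arrive as a job parameter
        @SourceCount INT,          -- valid rows landed in staging
        @TargetCount INT;          -- rows published to the target

BEGIN TRY
    BEGIN TRANSACTION;

    -- Publish only rows that passed upstream validation, stamping
    -- audit columns so every target row is traceable to its batch.
    INSERT INTO dbo.Customer (CustomerKey, CustomerName, LoadBatchId, LoadedAt)
    SELECT s.CustomerKey, s.CustomerName, @BatchId, SYSUTCDATETIME()
    FROM stg.CustomerFeed AS s
    WHERE s.BatchId = @BatchId
      AND s.RejectFlag = 0;

    -- Reconciliation: published count must match the valid staging count.
    SELECT @SourceCount = COUNT(*)
    FROM stg.CustomerFeed
    WHERE BatchId = @BatchId AND RejectFlag = 0;

    SELECT @TargetCount = COUNT(*)
    FROM dbo.Customer
    WHERE LoadBatchId = @BatchId;

    IF @SourceCount <> @TargetCount
        THROW 50001, 'Reconciliation failed: staging and target counts differ.', 1;

    -- Audit row makes the run diagnosable without deep forensics.
    INSERT INTO etl.LoadAudit (BatchId, SourceRows, TargetRows, Status, RunAt)
    VALUES (@BatchId, @SourceCount, @TargetCount, 'SUCCESS', SYSUTCDATETIME());

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

    INSERT INTO etl.LoadAudit (BatchId, SourceRows, TargetRows, Status, RunAt)
    VALUES (@BatchId, @SourceCount, @TargetCount, 'FAILED', SYSUTCDATETIME());

    THROW;  -- surface the error to the scheduler / DataStage job
END CATCH;
```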
Required skills and experience
- Hands-on IBM DataStage ETL development experience, including data mapping and transformation implementation.
- Strong SQL Server experience with advanced T-SQL (joins, window functions, CTEs, temp tables, indexing basics, query plans); see the window-function sketch after this list.
- Solid understanding of file-based ingestion and parsing (CSV, fixed-width, headers/trailers, control totals, encoding, delimiters, quoting/escaping).
- Experience designing ETL jobs with good operational characteristics: parameter-driven design, logging, error handling, restart/re-run strategy, and auditability.
- Ability to troubleshoot data issues end-to-end (source file → stage tables → target tables) and communicate findings clearly.
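
As a rough example of the window-function and CTE style this role relies on, the sketch below keeps only the latest record per business key from a raw file load, a common deduplication step in file-based ingestion. Table and column names here are hypothetical:

```sql
-- Hypothetical dedup over a raw file load: keep the latest record per
-- business key. Object names are illustrative.
WITH ranked AS (
    SELECT
        r.CustomerKey,
        r.CustomerName,
        r.FileRowNumber,
        ROW_NUMBER() OVER (
            PARTITION BY r.CustomerKey        -- one winner per key
            ORDER BY r.FileRowNumber DESC     -- later file rows win
        ) AS rn
    FROM stg.CustomerFeedRaw AS r
)
SELECT CustomerKey, CustomerName
FROM ranked
WHERE rn = 1;  -- duplicates fall out; they could also be routed to a reject table
```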
Preferred qualifications
- Experience with DataStage Parallel Jobs tuning (partitioning methods, collect/sort trade-offs, skew handling).
- Familiarity with UNIX/Linux basics and shell scripting for orchestration and file handling.
- Experience with job scheduling/orchestration tools (e.g., Control-M, Autosys) and CI/CD practices.
- Knowledge of common warehousing patterns (incremental loads, slowly changing dimensions, surrogate keys, effective dating); a MERGE-based sketch follows this list.
- Experience with version control (Git) and structured promotion/release processes across environments (dev/test/prod).
- Exposure to data governance practices (metadata, lineage, naming standards) and secure handling of sensitive data.
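
To illustrate the incremental-load pattern mentioned above, here is a minimal MERGE-based upsert from staging into a dimension table. All names are hypothetical, and effective dating is omitted for brevity; a full SCD Type 2 would expire the old row and insert a new effective-dated version rather than updating in place:

```sql
-- Hypothetical incremental upsert from staging into a dimension table.
-- Object names are illustrative placeholders.
MERGE dbo.DimCustomer AS tgt
USING (
    SELECT CustomerKey, CustomerName, Segment
    FROM stg.CustomerFeed
    WHERE RejectFlag = 0
) AS src
    ON tgt.CustomerKey = src.CustomerKey
WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName
               OR tgt.Segment <> src.Segment) THEN
    UPDATE SET tgt.CustomerName = src.CustomerName,
               tgt.Segment      = src.Segment,
               tgt.UpdatedAt    = SYSUTCDATETIME()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, CustomerName, Segment, CreatedAt)
    VALUES (src.CustomerKey, src.CustomerName, src.Segment, SYSUTCDATETIME());
```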
Education
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience.
What success looks like in this role
- File feeds land and load consistently with clear reconciliation results.
- Failures are diagnosable from logs and reject outputs without deep forensics.
- Jobs meet runtime SLAs through solid DataStage design and SQL tuning.
- Mappings and transformations are documented and traceable to requirements.