
About this role
Syffer is an all-inclusive consulting company focused on talent, tech and innovation. We exist to elevate companies and humans all around the world, driving change from the inside out.
We believe that technology + human kindness positively impacts every community around the world. Our approach is simple: we see a world without borders and believe in equal opportunities. We are guided by our core principles of spreading positivity and good energy, promoting equality, and caring for others.
Our hiring process is unique! People are selected for their value, education, talent and personality. We don't consider ethnicity, religion, national origin, age, gender, sexual orientation or identity.
It's time to burst the bubble, and we will do it together!
What You'll do:
- Design, build, and maintain scalable and high-performance data engineering solutions on Azure;
- Develop and manage secure ETL/ELT pipelines for large-volume datasets using Azure Databricks, Azure Data Factory, and Azure Data Lake Gen2;
- Implement and optimize Lakehouse architectures, including Delta Lake and data modeling best practices;
- Ensure data reliability, performance, security, and compliance across data platforms;
- Automate deployments and platform management using Infrastructure as Code (IaC);
- Build and maintain CI/CD pipelines using Azure DevOps (YAML, Bicep, PowerShell);
- Collaborate with cross-functional teams in Agile environments to deliver end-to-end data solutions;
- Provide technical guidance and contribute to architectural decisions and continuous improvement initiatives.
Who You Are:
- 8-10 years of overall IT experience;
- 5+ years of Data Engineering experience designing and building scalable data solutions;
- Minimum 3 years of hands-on experience with Azure, including Databricks, Azure Data Factory, and Azure Data Lake Gen2;
- Strong knowledge of data engineering concepts, including Data Warehousing, Lakehouse architecture, Delta Lake, and data modeling;
- Practical experience with Azure Databricks, Python/PySpark, SQL databases, Delta Lake, Azure Blob Storage, and Parquet;
- Proven experience building secure, scalable, and high-performance ETL pipelines;
- Hands-on experience with Azure DevOps and CI/CD pipelines;
- Experience with Infrastructure as Code, including deployment and management of Azure resources (Resource Groups, Databricks, ADF, ADLS, VNETs, Private Endpoints, and access controls);
- Experience working in Agile teams and collaborating with diverse stakeholders;
- Excellent communication skills;
- Preferred certifications: DP-203 – Azure Data Engineer Associate, Databricks Data Engineering Associate, AZ-400 – Designing and Implementing Microsoft DevOps Solutions;
- Fluent in English;
- Located in Portugal.
What you'll get:
- Wage according to candidate's professional experience;
- Remote Work whenever possible;
- Work equipment suited to your role;
- Benefits plan;
- And others.
Work with expert teams on large-scale, long-term, high-intensity projects alongside our clients, all leaders in their industries.
Are you ready to step into a diverse and inclusive world with us?
Together we will promote uniqueness!