
About this role
Full-time Senior Data Engineer / Big Data Engineer (enterprise) position at iLink Digital in Chennai, India. Apply directly through the link below.
At a glance
- Work mode: Office
- Employment: Full Time
- Location: Chennai, India
- Experience: Senior
Core stack
- System Design
- Optimization
- Architecture
- Performance
- Distributed
- Kubernetes
- Power BI
- MongoDB
- Jupyter
- Python
- Docker
- Presto
- Scala
- Kafka
- Spark
- Agile
- Java
- ETL
- ML
Key Responsibilities
1. Application Development
- Develop and maintain data-driven applications using Python and/or Scala.
- Design modular, scalable, and maintainable backend components.
- Participate in system analysis, technical design, and modelling of information systems.
2. ETL & Data Integration
- Design, develop, and optimize ETL processes (Extraction, Transformation, Load).
- Implement data pipelines using DataStage, Informatica, or similar data integration platforms.
- Ensure data quality, transformation logic validation, and performance optimization.
- Manage batch and streaming data ingestion workflows.
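
The ETL responsibilities above can be sketched as a minimal extract-transform-load pipeline. This is an illustrative Python sketch, not tied to DataStage or Informatica; the record fields and the data-quality rule are hypothetical.

```python
import csv
import io

# Hypothetical raw feed: CSV records with an order id and an amount.
RAW = """order_id,amount
1001,250.00
1002,not-a-number
1003,75.50
"""

def extract(text):
    """Extraction: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transformation: validate and normalize; drop bad records."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # data-quality rule: skip unparseable amounts
        clean.append({"order_id": int(row["order_id"]), "amount": amount})
    return clean

def load(rows, target):
    """Load: append validated records to the target store."""
    target.extend(rows)

warehouse = []
load(transform(extract(RAW)), warehouse)
```

The same extract/transform/load separation scales up to batch jobs in any of the tools named above; the validation step is where transformation-logic checks of the kind listed in this section would live.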
3. Database Development
- Develop solutions using relational and non-relational databases; hands-on experience with at least one of the following is required:
- Oracle
- Informix
- Teradata
- MongoDB
- Hive
- Design optimized schemas, indexes, and queries.
- Implement data modelling (conceptual, logical, physical models).
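
As one concrete illustration of schema, index, and query design, the sketch below uses Python's built-in sqlite3 module (standing in for Oracle, Teradata, or another engine from the list above); the table, columns, and query pattern are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Physical model: a simple orders table with a surrogate key.
cur.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        amount      REAL NOT NULL,
        created_at  TEXT NOT NULL
    )
""")

# Index chosen for the dominant query pattern: lookups by customer.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

cur.executemany(
    "INSERT INTO orders (customer_id, amount, created_at) VALUES (?, ?, ?)",
    [(1, 250.0, "2024-01-01"), (2, 99.9, "2024-01-02"), (1, 75.5, "2024-01-03")],
)

# This aggregate is served by idx_orders_customer rather than a full scan.
cur.execute("SELECT SUM(amount) FROM orders WHERE customer_id = ?", (1,))
total = cur.fetchone()[0]
```

The design choice shown (index the column the hot query filters on) carries over directly to the relational engines named in this posting.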
4. Distributed Architecture & Streaming
- Work within enterprise architectures, including one or more of the following:
- JEE-based systems
- Kubernetes-based containerized deployments
- Kafka-based streaming data platforms
- Support real-time and near-real-time data processing frameworks.
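
A near-real-time processing workflow of the kind described could look like the following sketch. It simulates the consume-and-aggregate loop in plain Python (a real deployment would read these events from a Kafka topic); the event fields and the window size are hypothetical.

```python
from collections import defaultdict

# Simulated event stream; in production these would arrive from a Kafka topic.
events = [
    {"ts": 0, "sensor": "a", "value": 10},
    {"ts": 3, "sensor": "a", "value": 20},
    {"ts": 7, "sensor": "b", "value": 5},
    {"ts": 12, "sensor": "a", "value": 1},
]

WINDOW = 10  # tumbling-window size in seconds (hypothetical)

def window_totals(stream):
    """Aggregate event values per (window, sensor) as events arrive."""
    agg = defaultdict(float)
    for event in stream:
        window_start = (event["ts"] // WINDOW) * WINDOW
        agg[(window_start, event["sensor"])] += event["value"]
    return dict(agg)

totals = window_totals(events)
```

Tumbling-window aggregation like this is the basic building block that Kafka Streams and Spark Structured Streaming provide at scale.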
5. Data Visualization & Reporting
- Develop dashboards using Power BI or equivalent BI tools.
- Translate business requirements into analytical views and KPIs.
- Ensure performance-efficient data models for reporting.
6. Landing Zone & Advanced Data Platform Tools
Work with or support environments leveraging one or more of the following:
- Presto
- Feast (Feature Store)
- Spark
- MLflow
- Kubeflow
- HPE Ezmeral
- Superset
- Ray
- Jupyter / Notebooks
- Java-based services
Required Experience
- Hands-on experience in:
- Python and/or Scala development
- ETL process design and implementation
- Database development (relational & NoSQL including MongoDB)
- Information systems modelling (analysis & design)
- Dashboard development
Required Technical Knowledge
Advanced Knowledge In:
- Enterprise Architectures: JEE, Kubernetes, and/or Kafka
- ETL Tools: DataStage, Informatica, or equivalent
- Databases: Oracle, Informix, Teradata, MongoDB, Hive
Working Knowledge In:
- Scala and/or Python
- Power BI
- Distributed data processing frameworks
Preferred Qualifications
- Experience with Big Data ecosystems (Spark, Hive, Kafka).
- Exposure to ML platforms (MLflow, Kubeflow, Feast).
- Understanding of containerized deployments (Docker + Kubernetes).
- Experience working in Agile environments.
- Strong problem-solving and system design skills.