Job Description:
We are looking for a Data Engineer experienced in Azure Databricks, Azure Data Factory (ADF), and PySpark to build and optimize ETL pipelines for a Data Lakehouse architecture.
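For illustration, here is a minimal sketch of the kind of PySpark ETL step this role involves on Databricks. The paths, table names, and columns are hypothetical placeholders, not a prescribed implementation; Delta output assumes a Databricks runtime or an environment with delta-spark available.

    # Minimal Lakehouse ETL sketch in PySpark on Databricks.
    # All paths, table names, and columns below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw records landed, e.g., by an ADF copy activity.
    raw = spark.read.format("json").load("/mnt/raw/transactions/")

    # Transform: basic cleansing and typing before the curated layer.
    curated = (
        raw.dropDuplicates(["transaction_id"])
           .filter(F.col("transaction_id").isNotNull())
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Load: append to a Delta table in the Lakehouse curated zone.
    curated.write.format("delta").mode("append").saveAsTable("curated.transactions")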
Responsibilities:
- Build and maintain ETL processes using ADF, PySpark, and Databricks
- Convert legacy Informatica ETL workflows to cloud-based pipelines
- Ensure data quality, lineage, and performance
- Create self-service data products using semantic layers
- Work closely with data architects and business teams
Required Skills:
- 8+ years of experience in data engineering
- Strong skills in Databricks, PySpark, SQL, and Azure
- Experience with legacy ETL migrations
- Familiarity with financial risk datasets and data marts
- Agile project exposure and strong problem-solving skills