Bachelor’s degree in Computer Science.
4+ years’ experience in cloud data engineering, data platforms, or analytics engineering.
Strong proficiency with Databricks (Apache Spark, PySpark, Spark SQL).
Expertise in building and optimizing data pipelines with Azure Data Factory.
Solid programming skills in SQL and Python, including handling structured and semi-structured formats such as JSON and XML.
Experience with API integrations and ingestion pipelines.
Strong knowledge of data modeling, warehousing, ETL/ELT, and data quality best practices.
Familiarity with CI/CD pipelines (Azure DevOps, GitHub Actions) and infrastructure-as-code (Terraform).
Strong communication, analytical, and problem-solving skills.
Knowledge of financial data architecture and dimensional modeling.
Familiarity with governance/security frameworks.
Open to contract-based work.
Principals only. Recruiters, please don't contact this poster.