Job Summary
We are seeking a Sr. Developer with 8 to 12 years of experience to join our team. The ideal candidate will have strong technical skills in Spark in Scala, Delta Live Pipelines, Azure Data Lake Store, Azure DevOps, Python, Databricks SQL, Databricks Workflows, and PySpark. Additionally, experience in Asset & Wealth Management is mandatory. This is a hybrid work model with day shifts and no travel required.
Responsibilities
- Develop and maintain scalable data pipelines using Spark in Scala and Delta Live Pipelines
- Implement and manage data storage solutions using Azure Data Lake Store
- Utilize Azure DevOps for continuous integration and continuous deployment (CI/CD) processes
- Write efficient and maintainable code in Python for data processing tasks
- Design and optimize SQL queries in Databricks SQL for data analysis
- Create and manage workflows in Databricks Workflows to automate data processing tasks
- Develop and maintain PySpark applications for large-scale data processing
- Collaborate with cross-functional teams to gather and analyze requirements
- Ensure data quality and integrity throughout the data lifecycle
- Monitor and troubleshoot data pipeline performance and reliability
- Provide technical guidance and mentorship to junior developers
- Stay updated with the latest industry trends and best practices in data engineering
- Contribute to the continuous improvement of development processes and methodologies
Qualifications
- Possess strong experience in Spark in Scala and Delta Live Pipelines
- Demonstrate expertise in Azure Data Lake Store and Azure DevOps
- Have proficiency in Python and PySpark for data processing tasks
- Show advanced knowledge of Databricks SQL and Databricks Workflows
- Exhibit experience in the Asset & Wealth Management domain
- Display excellent problem-solving and analytical skills