Associate Manager Data Analyst - Databricks

Optum

5 - 10 years

Hyderabad

Posted: 16/05/2026

Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.    

Primary Responsibilities:

  • Support the design, development, testing, and deployment of data analytics programs and processes that support various operational data stores using SAS and relational databases
  • Collect, interpret, and aggregate data from traditional and non-traditional sources to support programs and applications serving various data analytics purposes
  • Understand data requirements and business needs in order to develop data tools such as dashboards and data visualizations
  • Use business intelligence, data visualization, query, analytic, and statistical software to build solutions, perform analysis, and interpret data
  • Solve moderately complex problems and translate concepts into practice
  • Recognize problems and make recommendations for solutions
  • Work under minimal guidance and within tight deadlines for deliverables
  • Bring an exploratory mindset
  • Interact effectively with business users on new projects, enhancement projects, and issue resolution, including issues reported against data and existing applications
  • Adopt a structured approach focused on understanding the needs of business users, documenting requirements, and clarifying expectations; this involves active listening, using various techniques to gather information, and communicating clearly to resolve any uncertainties or inconsistencies
  • Create and document high-level design, detailed design, implementation, and standard operating procedure guides
  • Perform production support tasks, including job monitoring, addressing production failures, data analysis, root cause analysis, and issue resolution
  • Design, develop, and maintain scalable ETL/ELT pipelines using ADF, Python, Apache Spark, and PySpark in Databricks (an illustrative sketch follows this list)
  • Develop batch and incremental data processing pipelines handling large-scale structured, semi-structured and unstructured datasets
  • Implement optimized data transformation logic using Spark SQL and DataFrame APIs
  • Ensure pipelines follow enterprise data engineering best practices for performance, scalability, and maintainability
  • Implement reusable ingestion patterns and transformation templates aligned with enterprise architecture standards
  • Ensure compliance with enterprise metadata management, monitoring, and operational standards
  • Design and manage datasets stored in open table formats such as Apache Iceberg and Delta Lake
  • Implement schema evolution, partitioning strategies, and version control for large datasets
  • Optimize data lake storage structures in Azure Data Lake Storage (ADLS)
  • Develop scalable pipelines using Databricks notebooks, jobs, and clusters
  • Manage dataset governance and access controls using Unity Catalog
  • Optimize Spark performance through partitioning, caching, and cluster tuning
  • Develop and schedule ETL pipelines using Apache Airflow
  • Implement dependency management, monitoring, alerting, and failure recovery mechanisms
  • Build pipelines that integrate with Snowflake data warehouse
  • Optimize transformations and data loading using Snowflake SQL and staging techniques
  • Design efficient data models for analytics and reporting
  • Support migration of legacy SAS pipelines to modern Spark-based frameworks and Databricks where applicable
  • Use Unix/Linux commands for common tasks and shell scripting to automate data engineering workflows
  • Support CI/CD deployment processes for ETL pipelines
  • Implement logging, auditing, and monitoring for production pipelines
  • Work with data architects, analysts, and business stakeholders to gather requirements and deliver data solutions
  • Participate in design reviews, architecture discussions, and code reviews
  • Mentor junior data engineers and provide technical guidance
  • Serve as the subject matter expert (SME) for Databricks and conduct knowledge-sharing sessions for the team
  • Focus on building and optimizing large-scale data pipelines using ADF, Apache Iceberg, Delta Lake, cloud data lakes (ADLS/S3), and workflow orchestration tools such as Airflow; work closely with data architects and platform teams to build reliable, governed data solutions aligned with enterprise standards
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
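
For illustration only, the following is a minimal sketch of the kind of PySpark batch pipeline these responsibilities describe, assuming a Databricks-style environment with ADLS and Delta Lake. The storage path, column names, and table name are hypothetical placeholders, not Optum's actual configuration.

    # A minimal sketch, assuming a Databricks runtime with Delta Lake enabled.
    # All paths, keys, and table names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("claims-ingest-sketch").getOrCreate()

    # Batch read of raw files landed in ADLS (placeholder path)
    raw = spark.read.format("json").load(
        "abfss://raw@examplestorage.dfs.core.windows.net/claims/"
    )

    # Transformation logic expressed with the DataFrame API
    cleaned = (
        raw.dropDuplicates(["claim_id"])            # hypothetical business key
           .withColumn("load_date", F.current_date())
           .filter(F.col("claim_amount") > 0)       # hypothetical quality rule
    )

    # Append to a partitioned Delta table for downstream consumption
    (cleaned.write.format("delta")
        .mode("append")
        .partitionBy("load_date")
        .saveAsTable("bronze.claims"))              # hypothetical table name

The same pattern extends naturally to incremental processing (for example, Databricks Auto Loader or watermark-based filters) and to the silver and gold layers of a medallion architecture.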

Required Qualifications:

  • Bachelor's degree in Computer Science, Computer Applications, Analytics, Data Science, or Information Technology
  • Databricks certification
  • 6+ years of experience in ETL / Data Engineering
  • 6+ years of experience working in Unix/Linux environments
  • 6+ years of experience writing Shell scripts
  • 6+ years of experience with ADF
  • 6+ years of experience working with large enterprise datasets
  • 5+ years of experience with programming using Python
  • 5+ years of experience with the Databricks ecosystem, including Lakehouse, Delta Lake, Workflows, Medallion Architecture, Apache Spark, PySpark, Unity Catalog, Delta Sharing, Notebooks, SQL, and Git
  • 4+ years of experience with Snowflake
  • Experience implementing governance using Unity Catalog
  • Experience working with Apache Iceberg or other open table formats
  • Experience working with Azure Data Lake Storage (ADLS) or AWS S3
  • Hands-on experience with Apache Airflow (an illustrative DAG sketch follows this list)
  • Experience developing pipelines for Snowflake
  • Experience migrating SAS ETL pipelines to Spark and Databricks
  • Healthcare experience
  • Solid experience working with Databricks (Lakehouse, Delta Lake, Workflows, Medallion Architecture, Apache Spark, Unity Catalog, Delta Sharing, Notebooks, SQL, Git), PySpark, Python, Snowflake, and ADF frameworks
  • Knowledge of data governance frameworks
  • Good knowledge of Azure services
  • Solid understanding of SAS programming, including the SAS DATA step, SAS macros, and PROC SQL
  • Understanding of cloud data lake architecture
  • Proven solid analytical and troubleshooting skills
  • Proven excellent communication and collaboration abilities
  • Proven ability to work independently and mentor junior analysts
  • Proven solid documentation and design skills
  • Proven solid SQL skills
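
For illustration only, a minimal Airflow DAG of the kind referenced above might look like the following, assuming Airflow 2.4 or later. The DAG id, schedule, and task bodies are hypothetical placeholders.

    # A minimal sketch, assuming Airflow 2.4+; all names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull incremental records from a source system
        print("extracting...")

    def load_to_snowflake():
        # Placeholder: stage and load transformed data into Snowflake
        print("loading...")

    with DAG(
        dag_id="daily_claims_pipeline",   # hypothetical name
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(
            task_id="load_to_snowflake", python_callable=load_to_snowflake
        )

        # Dependency management: the Snowflake load runs only after extraction
        extract_task >> load_task

In a production setting, monitoring, alerting, and failure recovery would typically be handled with Airflow's built-in retries, SLAs, and failure callbacks rather than logic hand-rolled inside the tasks.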

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

About Company

Optum is a leading health services and innovation company, part of UnitedHealth Group. It combines data, technology, and clinical expertise to improve healthcare delivery, reduce costs, and enhance outcomes. Optum operates across three core areas: OptumHealth (care delivery), OptumInsight (data and analytics), and OptumRx (pharmacy care services), serving millions of individuals, employers, and healthcare organizations globally.
