Senior Data Scientist

LogixHealth

5 - 10 years

Bengaluru

Posted: 31/01/2026

Job Description

Job Title: Senior Data Scientist

Location: Bangalore/Coimbatore

Job Type: Full Time, Hybrid

Immediate joiners or candidates with a notice period of less than 15 days are needed.


Purpose

As a Senior Data Scientist at LogixHealth, you will work with a globally distributed team of engineers to design and build cutting-edge solutions that directly improve the healthcare industry. You'll contribute to our fast-paced, collaborative environment, bringing your expertise to deliver innovative technology solutions while mentoring others.


Duties and Responsibilities

As a Senior Data Scientist, you will play a vital role in developing and deploying intelligent solutions that enhance healthcare operations. You will collaborate with engineers, analysts, and business leaders across the globe to turn data into actionable insights. This position is an exciting opportunity to apply your expertise in advanced machine learning, deep learning, and generative AI to solve complex problems. It requires strong technical expertise and a practical understanding of healthcare data systems.



  • Advanced Modeling & Analytics: Utilize traditional and advanced machine learning algorithms to design, build, and improve decision-making systems that drive business outcomes.
  • Scalable Model Implementation: Apply modern software engineering practices to implement and deploy machine learning models in a scalable, robust, and maintainable manner. Ensure models are production-ready and optimized for performance.
  • MLOps & System Integration: Lead technical MLOps projects, including redesigning existing infrastructure, maintaining and improving current models and systems, and integrating the latest technologies into our data science workflows.
  • Work with DevOps teams to deploy, monitor, scale and continuously improve AI Applications within standardized CI/CD pipelines.
  • Collaborate on data warehousing and data lake solutions using tools like BigQuery or Snowflake.
  • Implement robust validation and monitoring processes to ensure data quality and integrity.


Qualifications

To perform this job successfully, an individual must be able to perform each duty satisfactorily. The requirements listed below are representative of the knowledge, skills, and/or abilities required. Reasonable accommodation may be made to enable individuals with disabilities to perform the duties.


Education

(Degrees, Certificates, Licenses, Etc.) BS degree (or higher: MS/PhD) in Computer Science or a related field, or equivalent technical experience.


Experience:

  1. 8+ years of experience in data science or a related field, with a proven track record of developing and deploying scalable machine learning models in a production environment and delivering high-impact projects.
  2. Experience in programming languages such as Python, with extensive experience in machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
  3. Comprehensive experience in full life cycle of ML Model development, including ideation, training/testing, deployment, and monitoring in a production setting.
  4. Azure Databricks implementation experience preferred.
  5. Experience designing and implementing data security and governance platforms adhering to compliance standards (HIPAA, SOC 2) preferred.


Specific Job Knowledge, Skill and Ability


1. Programming & Data Analysis: Proficiency in Python, R, and SQL for data analysis, statistical modeling, and machine learning model development.

2. Data Modeling & Architecture: Experience designing conceptual, logical, and physical data models, including canonical data models, metadata standards, and data dictionaries.

3. Data Modeling Best Practices: Ability to evaluate and implement data modeling best practices across OLTP, OLAP, Master Data Management (MDM), and Change Data Capture (CDC) architectures.

4. Generative AI & Agentic Systems: Hands-on experience with Generative AI, Large Language Models (LLMs), Agentic AI, and multi-agent systems.

5. Responsible AI & Interpretability: Strong understanding of Responsible AI principles, including model explainability, interpretability, fairness, and governance.

6. MLOps & Cloud Platforms: Hands-on experience with MLOps practices for AI model deployment, monitoring, and scaling within Azure, AWS, or Google Cloud Platform (GCP) environments.

7. LLM Optimization Techniques: Practical knowledge of LLM fine-tuning, quantization, and optimization techniques for performance and cost efficiency.

8. Model Optimization & Automation: Familiarity with Bayesian optimization, AutoML frameworks, and hyperparameter tuning tools for model performance optimization.
