Senior AI Engineer – LLM / SLM Fine-Tuning
GrowthPal
8-10 years
Pune
Posted: 29/01/2026
Job Description
About the Company
GrowthPal is building the intelligence layer for M&A deal sourcing, transforming how corporate development teams and investment banks discover acquisition targets. After proving our methodology across 500+ M&A mandates and $2B+ in deal flow, we've launched Aelina: an AI-native platform that combines knowledge graphs, agentic reasoning, and multi-modal search to surface hidden acquisition opportunities at scale.
Our technical challenges span advanced RAG architectures, real-time entity resolution across fragmented data sources, LLM-powered research automation, and building evaluation frameworks for ambiguous business intelligence tasks. We're a lean, product-focused team solving hard problems at the intersection of enterprise AI and financial services, where accuracy, interpretability, and competitive moats matter as much as innovation speed.
About the Role
This role is focused on hands-on model training, adaptation, and alignment, not just API consumption. We are looking for a hands-on AI Engineer (5-10 years of experience) who has previously fine-tuned language models and understands the complete lifecycle of building domain-adapted SLMs. You will own model training workflows end-to-end, from data preparation through evaluation and deployment.
Responsibilities
- Designing and training in-house Small Language Models (SLMs)
- Fine-tuning models on proprietary company skills and capability data
- Building alignment pipelines using SFT, PEFT, and RL-based optimization
- Designing evaluation frameworks to measure relevance, accuracy, and drift
- Optimizing models for cost, latency, and inference efficiency
- Collaborating with data and platform teams on training and deployment pipelines
- Enabling runtime feedback loops for continuous model improvement
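The alignment pipelines above lean on PEFT methods such as LoRA. As a rough illustration only (all dimensions and names here are hypothetical, not from the posting), a LoRA-style layer keeps the base weight frozen and trains a small low-rank correction:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 8, 4, 8  # hypothetical toy dimensions

W = rng.normal(size=(d_in, d_out))          # frozen base weight (not trained)
A = rng.normal(scale=0.01, size=(d_in, r))  # trainable low-rank factor
B = np.zeros((r, d_out))                    # zero-init so the update starts at 0

def lora_forward(x):
    # y = x @ W + (alpha / r) * x @ A @ B : base output plus low-rank update
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(2, d_in))
y = lora_forward(x)
# With B initialized to zero, the adapted model matches the frozen base model
assert np.allclose(y, x @ W)
```

Only `A` and `B` (`r * (d_in + d_out)` parameters) would receive gradient updates, which is what makes fine-tuning large models tractable on modest hardware.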
Qualifications
5-8 years of experience in AI Engineering with a focus on fine-tuning language models.
Required Skills
- Proven experience fine-tuning LLMs or SLMs on domain-specific datasets
- Strong experience with Supervised Fine-Tuning (SFT)
- Practical expertise in PEFT techniques such as LoRA and QLoRA
- Experience with post-training alignment techniques, including RLHF / RLAIF
- Hands-on use of preference-based optimization methods (DPO, IPO, ORPO)
- Strong Python programming skills
- Deep hands-on experience with PyTorch
- Solid understanding of tokenization, embeddings, and training objectives
- Experience curating, cleaning, and preparing large-scale training datasets
- Experience evaluating model quality, bias, and performance degradation
- Ability to debug training instability and performance issues
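Among the preference-based methods listed above, DPO is the most common starting point. A minimal sketch of its per-pair loss, using stdlib math only (function name and arguments are illustrative, not a reference to any specific library API):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, from sequence log-probabilities.

    pi_*  : log-probs under the policy being trained
    ref_* : log-probs under the frozen reference model
    beta  : strength of the KL-style anchor to the reference policy
    """
    # Margin: how much more the policy prefers the chosen response
    # than the reference model does, relative to the rejected response.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Loss is -log(sigmoid(margin)); minimizing it widens the margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When policy and reference agree exactly, the margin is 0 and the loss is log 2; as the policy learns to rank the chosen response higher, the loss falls toward 0.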
Preferred Skills
- Experience with open-source models such as LLaMA, Mistral, Gemma, Phi, or similar
- Knowledge of model compression, quantization, and distillation
- Experience with online or continual learning systems
- Applying RL techniques to ranking, search relevance, or agent behavior
- Familiarity with distributed training, GPUs, and memory optimization
- Exposure to knowledge-augmented systems (RAG vs fine-tuning trade-offs)
- Experience deploying models for production or internal platforms
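For the quantization item above, the core idea fits in a few lines. A sketch of symmetric per-tensor int8 weight quantization (a simplified scheme for illustration; production stacks typically use per-channel or block-wise variants):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-trip error is bounded by half a quantization step
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

This trades a small, bounded reconstruction error for a 4x reduction in weight memory versus float32, which is the lever behind the cost and latency optimization mentioned in the responsibilities.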
Why This Role
This role offers deep ownership of core AI capabilities. You will work on real model-training problems, shape proprietary model capabilities, and influence how AI is embedded into production systems. If you enjoy training and adapting models rather than only consuming them through APIs, this role is for you.