Data Engineer

Pocket FM

3 - 5 years

Bengaluru

Posted: 12/02/2026

Job Description

About Pocket FM

Pocket FM is a leading audio entertainment platform focused on immersive, long-form audio storytelling. The platform offers episodic audio series across genres such as romance, drama, thriller, and fantasy. Pocket FM follows a mobile-first approach, enabling users to listen anytime and anywhere. Founded in India, the company has expanded rapidly across global markets, including the US. It supports multiple regional and international languages to reach a diverse audience. Pocket FM empowers creators through a strong content and monetization ecosystem.



About the Role

We are seeking a talented AI & Data Engineer to join our team. In this role, you will design, build, and maintain robust data pipelines while developing and deploying cutting-edge AI solutions. You will work at the intersection of data engineering and artificial intelligence, using modern cloud platforms and AI frameworks to drive business value.


Responsibilities

  • Design, develop, and optimize scalable ETL/ELT pipelines using Databricks, Apache Spark, and cloud-native services
  • Build and maintain data lakehouse architectures leveraging Delta Lake and Databricks Unity Catalog
  • Develop AI-powered applications using agentic frameworks such as LangChain, LlamaIndex, AutoGen, or CrewAI
  • Fine-tune and deploy Large Language Models (LLMs) for domain-specific use cases using techniques like LoRA, QLoRA, and PEFT
  • Implement RAG (Retrieval-Augmented Generation) systems for enterprise knowledge management
  • Create and manage vector databases for semantic search and embedding storage
  • Collaborate with data scientists and ML engineers to productionize machine learning models
  • Ensure data quality, governance, and security across all pipelines and AI systems
  • Monitor, troubleshoot, and optimize data infrastructure for performance and cost efficiency
  • Document technical designs, processes, and best practices for knowledge sharing


Qualifications

  • 3-5 years of hands-on experience in data engineering, AI/ML engineering, or related roles
  • Strong proficiency with Databricks platform including Delta Lake, MLflow, and Databricks SQL
  • Expert-level knowledge of ETL/ELT processes, data modeling, and pipeline orchestration
  • Experience with AI agentic frameworks (LangChain, LlamaIndex, AutoGen, Semantic Kernel)
  • Hands-on experience with LLMs including GPT-4, Claude, Llama, Mistral, or similar models
  • Practical knowledge of fine-tuning techniques: LoRA, QLoRA, PEFT, and full fine-tuning approaches
  • Proficiency in Python and SQL; familiarity with Scala is a plus
  • Experience with cloud platforms (AWS, Azure, or GCP) and their AI/ML services
  • Understanding of vector databases (Pinecone, Weaviate, Chroma, Milvus)
  • Strong foundation in software engineering principles and version control (Git)


Preferred Skills

  • Experience with prompt engineering and LLM evaluation frameworks
  • Knowledge of MLOps practices and tools (Kubeflow, MLflow, Weights & Biases)
  • Familiarity with streaming data technologies (Kafka, Spark Streaming)
  • Experience with containerization (Docker) and orchestration (Kubernetes)
  • Background in NLP, computer vision, or other AI domains
  • Relevant certifications (Databricks, AWS, Azure, or GCP)


Technology Stack

  • Data Platform: Databricks, Delta Lake, Apache Spark, Unity Catalog
  • AI/ML: LangChain, LlamaIndex, Hugging Face, PyTorch, OpenAI API, Anthropic API
  • Cloud: AWS/Azure/GCP, Terraform, CI/CD pipelines
  • Languages: Python, SQL, PySpark
  • Tools: Git, Docker, MLflow, Airflow/Prefect
