Data Engineer

Job Category: Data Engineer
Job Type: Full Time
Job Location: Pune
Job Experience: 6 years
About the Role:

We are looking for a skilled and experienced Senior Data Engineer to lead the design and implementation of scalable, high-performance data infrastructure. You will play a critical role in building robust ETL pipelines, architecting data models, and enabling real-time data processing systems across our platform. If you’re passionate about big data, cloud-native development, and engineering best practices, this role is for you.

Key Responsibilities:

  • Design, develop, and maintain high-performance ETL/ELT pipelines using Python, Apache Airflow, and dbt.
  • Architect and optimize data models across relational (PostgreSQL), NoSQL (MongoDB), and cloud-based data warehouses such as Amazon Redshift and ClickHouse.
  • Build and maintain APIs for data access and integration using FastAPI.
  • Lead real-time data processing and streaming solutions using Kafka or RabbitMQ.
  • Design and implement scalable Spark-based data processing pipelines and caching solutions with Redis.
  • Implement and manage search and analytics capabilities using Elasticsearch.
  • Containerize and orchestrate data services using Docker and Kubernetes.
  • Leverage AWS cloud services (S3, Lambda, Glue, ECS/EKS, RDS) to ensure secure and efficient data infrastructure.
  • Establish and enforce data quality, governance, and security standards.
  • Mentor junior data engineers and contribute to architectural decision-making and technical leadership.

 

Required Skills & Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 4-6+ years of experience in data engineering, with a strong track record of building and scaling data infrastructure.
  • Extensive experience with workflow orchestration tools like Apache Airflow.
  • Deep expertise in relational (PostgreSQL) and NoSQL (MongoDB) database systems.
  • Proficiency in designing data lakes and data warehouses with ClickHouse and Amazon Redshift.
  • Practical experience with streaming and messaging systems such as Kafka or RabbitMQ.
  • Solid hands-on experience with Spark, Redis, and Elasticsearch.
  • Strong skills in containerization (Docker) and orchestration (Kubernetes).
  • Cloud-native development experience using AWS services.
  • Strong understanding of data modeling, data architecture, and performance optimization.

Preferred Skills:

  • Experience with infrastructure as code (e.g., Terraform, CloudFormation).
  • Knowledge of CI/CD pipelines and DevOps practices.
  • Familiarity with data security, compliance (GDPR, HIPAA), and governance.
  • Leadership experience in cross-functional teams or agile environments.

What We Offer:

  • Free meals for all employees, over and above the CTC
  • A collaborative, innovation-driven work culture
  • Strong learning and growth opportunities
  • Opportunities for leadership and technical mentorship
  • Exposure to cutting-edge technologies and scalable product development
  • Health benefits

Technical stack: Python, FastAPI, PostgreSQL, MongoDB, ETL pipelines, Docker, Kubernetes, Cloud (AWS preferred).

Expected Joining: Within a month.

 
