πŸ“ Location: Gurugram / Bangalore (Hybrid)
πŸ•’ Experience: 4–6 Years
πŸ“„ Employment Type: Full-Time | Permanent | Hybrid (Office + Home)


πŸ” About the Role

We are seeking a Senior Data Engineer to join our growing data engineering team. This is a key role focused on architecting, building, and optimizing robust data pipelines and infrastructure to support our data-driven business processes. If you’re passionate about working on modern data stacks and enabling business insights through clean, scalable, and efficient data systems, we want to hear from you.

You will play a critical part in developing ETL/ELT pipelines, managing cloud data environments, and collaborating with analytics and BI teams to deliver high-quality data solutions across the organization.


πŸ›  Key Responsibilities

1. Data Modeling & SQL Development (PostgreSQL)

  • Design, develop, and optimize complex SQL queries, stored procedures, and indexes
  • Analyze execution plans and perform database performance tuning
  • Contribute to schema design and normalization strategies

2. Data Migration & Transformation

  • Lead data migrations from diverse sources to cloud or ODS platforms
  • Create robust transformation logic and schema mapping
  • Ensure high data quality, integrity, and consistency

3. Python-Based Automation & ETL

  • Build and maintain reusable Python scripts for data ingestion, transformation, and validation
  • Work with various file formats (CSV, JSON, XML) and APIs
  • Leverage cloud SDKs (e.g., Boto3) for automated data operations

4. Workflow Orchestration with Apache Airflow

  • Develop and schedule Airflow DAGs for batch and streaming processes
  • Implement task dependencies, retries, alerting, and error handling
  • Integrate Airflow with cloud storage, data lakes, and warehouses

5. Cloud Data Infrastructure (AWS / Azure / GCP)

  • Manage cloud data resources (e.g., S3, Blob, GCS), IAM roles, encryption, and monitoring
  • Optimize cloud service performance and cost
  • Ensure compliance with security best practices

6. Data Marts & Analytics Enablement

  • Design and maintain dimensional data models and star/snowflake schemas
  • Implement incremental loading and partitioning strategies
  • Enable self-service analytics and data discoverability

7. Modern Data Stack Integration

  • Work with tools such as DBT, Fivetran, Snowflake, Redshift, BigQuery, or Kafka
  • Build metadata-driven and modular pipelines
  • Ensure system scalability and high availability

8. BI Tool Integration & Support

  • Partner with BI teams to build optimized datasets and reporting layers
  • Support Power BI, Apache Superset, or Supertech dashboards
  • Manage user access, refresh schedules, and performance tuning

βœ… Qualifications & Skills

  • 4–6 years of proven experience in a Data Engineering role
  • Strong expertise in PostgreSQL and writing optimized, complex SQL
  • Advanced proficiency in Python for scripting and automation
  • Deep knowledge of Apache Airflow and custom DAG development
  • Practical experience with AWS (or Azure/GCP) cloud services
  • Solid understanding of dimensional modeling and data mart architecture
  • Familiarity with modern data stack tools (e.g., DBT, Kafka, Snowflake, Fivetran)
  • Exposure to BI/reporting tools like Power BI, Superset, or Supertech BI
  • Working knowledge of version control (Git); CI/CD experience is a plus
  • Strong communication, collaboration, and problem-solving skills

πŸš€ Why Join Us?

  • Work with a forward-thinking data engineering team
  • Exposure to cutting-edge cloud and data stack technologies
  • Hybrid work environment with flexibility and autonomy

If interested, please share your CV at iuliana@euroasiarecruiting.com.