Ajay Yadav

Software Engineer

Mumbai, Maharashtra, India · 5 yrs 5 mos experience

Key Highlights

  • Improved pipeline runtime by 300% using optimized workflows.
  • Created 50+ dashboards enhancing data-driven decisions.
  • Automated observability reporting, eliminating manual overhead.

Skills

Core Skills

Data Engineering · Cloud Computing · Data Visualization · Web Development

Other Skills

API Gateway · AWS Lambda · Algorithms · Android Development · Android Studio · Apache Airflow · Azure Databricks · Azure Data Lake Storage · Azure Synapse Analytics · Big Data · Blockchain · C (Programming Language) · C++ · Cascading Style Sheets (CSS) · Continuous Integration and Continuous Delivery (CI/CD)

About

I'm a Senior Data Engineer with 5 years of experience designing scalable data platforms and automated data pipelines across AWS, Azure, and Snowflake ecosystems. My focus is on building robust ELT/ETL workflows, optimizing pipelines, and enabling data-driven decisions through real-time and batch data processing. I bring hands-on expertise in tools like dbt, Apache Airflow, PySpark, Snowflake, Looker, and Qlik Sense, and cloud platforms like AWS (Lambda, S3, API Gateway) and Azure (Databricks, Synapse, Data Factory). My codebase spans Python, SQL, and LookML, and I frequently deploy solutions via Serverless Framework and GitLab CI/CD.

💡 What I've Delivered:

  • Improved pipeline runtime by 300% and reduced data ops costs by 60% using optimized dbt & Airflow workflows
  • Created 50+ dashboards in Looker & Qlik Sense, including an ROI dashboard that identified product gaps
  • Built APIs on AWS to expose Snowflake data, enabling integrated customer views
  • Automated observability reporting with Python & Lambda, eliminating 100% of manual overhead
  • Developed a recommendation POC with Scikit-Learn, cutting decision time by 15%

I'm also passionate about creating self-service data tools, such as a Qlik-based planner tool that cut turnaround time from 15 days to just 1. Let's connect if you're looking for someone who can build, optimize, and scale your modern data stack from day one!

Experience

SailPoint

Senior Software Engineer - Data

Aug 2025 – Present · 7 mos · Pune, Maharashtra, India · Hybrid

Cimpress India

2 roles

Senior Data Engineer

Jan 2025 – Aug 2025 · 7 mos · Mumbai, Maharashtra, India

  • Designed and implemented observability features on GitLab data to track team engineering efficiency, establishing DORA metrics as engineering KPIs using the GitLab REST API, Python, and Airflow
  • Built a POC for a SKU recommendation system using product descriptions, product metadata, and Snowflake Cortex to accelerate product matching and drive personalisation
  • Developed and integrated data quality DAGs into production pipelines using Apache Airflow, dbt's built-in testing framework, and a test-expectations library, enabling early anomaly detection and automated alerting
  • Led cross-functional training sessions to upskill analytics and business teams on Snowflake and Looker, increasing adoption and reducing dependency on our analytics team by 70%
  • Contributed to internal platform documentation and tooling guides to promote knowledge sharing, streamline onboarding, and standardize best practices across teams
GitLab REST API · Apache Airflow · Snowflake · Looker · Python · Data Engineering +1
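The DORA-metrics bullet above can be sketched in a few lines. This is an illustrative example only, not the production code: the deployment records mimic (in simplified form) what GitLab's `GET /projects/:id/deployments` endpoint returns, and the timestamps and function names are hypothetical.

```python
from datetime import datetime

# Simplified deployment records, shaped like GitLab's
# GET /projects/:id/deployments responses (timestamps are made up).
deployments = [
    {"created_at": "2024-01-01T10:00:00", "finished_at": "2024-01-01T10:05:00"},
    {"created_at": "2024-01-03T09:00:00", "finished_at": "2024-01-03T09:12:00"},
    {"created_at": "2024-01-08T14:00:00", "finished_at": "2024-01-08T14:07:00"},
]

def deployment_frequency(records, per_days: int = 7) -> float:
    """Deployments per `per_days`-day window (DORA deployment frequency)."""
    times = sorted(datetime.fromisoformat(r["finished_at"]) for r in records)
    observed_days = max((times[-1] - times[0]).days, 1)  # avoid divide-by-zero
    return len(records) / observed_days * per_days

freq = deployment_frequency(deployments)  # 3 deployments over ~1 week
```

In practice the records would be paged from the GitLab REST API and the resulting metrics loaded into Snowflake on an Airflow schedule.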

Data Engineer

Jan 2023 – Dec 2024 · 1 yr 11 mos · Mumbai, Maharashtra, India

  • Designed and managed cloud-native, scalable data pipelines using dbt, Snowflake, and Apache Airflow, achieving 99.9% data accuracy while reducing data operations costs by 60% through workflow optimization and compute clustering
  • Improved pipeline execution time by 300% via data pipeline automation, task parallelization, and resource orchestration, significantly reducing engineering load and runtime costs
  • Created and optimized 50+ BI dashboards in Looker using LookML, introducing dynamic filters and modular design — which cut dashboard duplication by 50% and improved report load times by 20%
  • Developed a centralized metadata-driven data product with built-in quality checks and lineage tracking, improving data discoverability and platform efficiency by 30%
  • Engineered real-time data APIs using AWS Lambda, API Gateway, and Snowflake, enabling integration with customer-facing tools and supporting seamless data access
  • Built a machine learning-based quote recommender system using Python and Scikit-Learn, helping reduce user decision time by 15% and enhancing personalisation
  • Automated error tracking and alerting via AWS Lambda + New Relic API integration, eliminating 100% of monthly manual checks and reducing issue triage time by 30%
  • Contributed to an overall 15–20% infrastructure cost reduction across pipelines by optimising warehouse sizing, workload scheduling, and improving developer documentation
dbt · Snowflake · Apache Airflow · Looker · AWS Lambda · Data Engineering +1
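The real-time data API bullet above (AWS Lambda + API Gateway in front of Snowflake) could be sketched as a Lambda proxy-integration handler. This is a minimal, hypothetical sketch: the table name and query are invented, and the query runner is injected so the handler can be exercised without a live Snowflake connection (production code would use the `snowflake-connector-python` client instead).

```python
import json

def handler(event, context, run_query=None):
    """Lambda handler behind an API Gateway proxy integration.

    `run_query(sql, params)` is an injected callable standing in for a
    Snowflake cursor execution; this keeps the sketch self-contained.
    """
    params = event.get("queryStringParameters") or {}
    sku = params.get("sku")
    if not sku:
        # API Gateway proxy integrations expect statusCode + string body.
        return {"statusCode": 400, "body": json.dumps({"error": "missing sku"})}
    rows = run_query("SELECT * FROM products WHERE sku = %s", (sku,))
    return {"statusCode": 200, "body": json.dumps({"rows": rows})}
```

The injected query runner also makes the handler trivially unit-testable, which is one reasonable way to structure such an integration.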

Fractal

Data Engineer (Imagineer)

Sep 2020 – Jan 2023 · 2 yrs 4 mos · Mumbai, Maharashtra, India · Remote

  • Built a planner tool from scratch using QlikSense, reducing the overall planning turnaround time from 15 days to a single day.
  • Created BI dashboards in QlikSense using data stored in various data stores, including Azure SQL DB, Oracle, and Azure Synapse Analytics.
  • Drove technical improvements by merging different data layer scripts, resulting in time savings and increased data accuracy.
  • Created a pipeline to pull data from different marketplaces using Azure Databricks and third-party Python APIs, storing it in various Azure storage solutions like Blob and Delta Lake, and creating tables on top of it in Azure Synapse Analytics.
  • Developed a data migration template in Azure Databricks using PySpark to read data stored on the server and write it back to Azure Delta Lake Storage.
  • Participated in continuous improvement by generating suggestions and engaging in problem-solving activities to support teamwork.
QlikSense · Azure Databricks · Python · Data Engineering

The Sparks Foundation

Intern

Jun 2019 – Jul 2019 · 1 mo · Mumbai Metropolitan Region

  • Learned Flask Stack: Acquired foundational knowledge of the Flask framework.
  • Developed REST APIs: Built basic RESTful APIs using Flask, enabling data exchange between applications.
  • Tested APIs: Utilized Postman to test and validate API functionality.
Flask

Education

Veermata Jijabai Technological Institute (VJTI)

Bachelor of Technology — Information Technology

Jan 2017 – Jan 2020

Government Polytechnic Mumbai

High School Diploma — Computer Engineering

Jan 2014 – Jan 2017
