Jagadeesh Dasari

Data Engineer

Bengaluru, Karnataka, India · 2 yrs 10 mos experience

Key Highlights

  • Expert in building scalable data pipelines for real-time analytics.
  • Proven track record in deploying machine learning models.
  • Strong background in cloud technologies and data engineering.

Skills

Core Skills

Data Engineering · Machine Learning · SQL

Other Skills

Cloud Computing · ETL · Data Warehousing · Data Management · Python · Hadoop · Data Analysis · CI/CD · PySpark · Data Quality · Spark · Informatica · Azure · Kafka · Neo4j

About

Dedicated and skilled data enthusiast with two years of experience in the data field. As a Data Engineer in Axis Bank's Business Intelligence Unit, I specialize in building robust, scalable, and automated data pipelines that fuel real-time analytics and decision-making. With deep expertise in PySpark, Kafka, Informatica, SQL, and AWS, I focus on creating systems that ensure data reliability, streamline operations, and unlock business insights, while staying curious and driven to solve real-world data problems.

  • Developed and automated real-time data pipelines for a range of business use cases.
  • Built a dynamic Data Quality & Source Handshake automation framework that reduced manual monitoring effort.

Portfolio: https://dasarijagadeesh.vercel.app/developer

Experience

2 yrs 10 mos
Total Experience
2 yrs 10 mos
Average Tenure
2 yrs 10 mos
Current Experience

Axis Bank Limited

2 roles

Senior Manager - Data Engineer | Cards & Payments

Promoted

Apr 2025 – Present · 1 yr 1 mo · Bengaluru, Karnataka, India · Hybrid

  • Focused on cloud and AWS workloads
  • 1) Dimensional Modelling – Data Warehouse
  • Designed and implemented dimensional data models using star schema architecture for enterprise data warehouse solutions. Implemented Slowly Changing Dimensions (SCD Type 0, Type 1, and Type 2) using techniques such as watermark columns, hash-based change detection, and latest extraction timestamps. Developed data pipelines using SQL, PL/SQL, and ETL orchestration tools to enable efficient data ingestion, transformation, and loading into the warehouse.
  • 2) Data Archival Framework
  • Designed and developed a metadata-driven data archival framework to migrate historical data from multiple BIU datamarts from on-premise HDFS and Oracle systems to BDL object storage. The framework enabled standardized archival across datasets and is utilized across the organization for efficient storage management.
  • 3) Housekeeping and File Management System
  • Developed and implemented a Python-based package deployed on Hadoop servers for BDL table maintenance. The solution automated housekeeping tasks such as removal of redundant files, resolving small file issues, and optimizing table storage management.
  • 4) Customer Propensity Models Deployment
  • Collaborated with analytics and data science teams to deploy machine learning models including propensity, attrition, and revolve models. These models enabled personalized customer engagement strategies, helping drive improved targeting and uplift in credit card acquisition.
  • 5) CI/CD Implementation for Data Pipelines
  • Developed and implemented production-grade CI/CD pipelines for ETL workflows using Bitbucket and Jenkins. Enabled automated code versioning, testing, scheduled builds, and controlled deployment processes to improve development efficiency and release reliability.
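The hash-based SCD Type 2 change detection mentioned in item 1 can be sketched in plain Python (the production implementation would use SQL/PL-SQL or PySpark; the key, column names, and timestamp format here are hypothetical):

```python
import hashlib

def row_hash(row, tracked_cols):
    """Concatenate tracked attribute values and hash them for change detection."""
    payload = "|".join(str(row[c]) for c in tracked_cols)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def apply_scd2(dim_rows, incoming_rows, key, tracked_cols, load_ts):
    """Close out changed current rows and insert new versions (SCD Type 2)."""
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for src in incoming_rows:
        new_hash = row_hash(src, tracked_cols)
        existing = current.get(src[key])
        if existing is None:
            # Brand-new business key: insert first version.
            out.append({**src, "row_hash": new_hash, "valid_from": load_ts,
                        "valid_to": None, "is_current": True})
        elif existing["row_hash"] != new_hash:
            # Tracked attributes changed: expire the old version, add a new one.
            existing["valid_to"] = load_ts
            existing["is_current"] = False
            out.append({**src, "row_hash": new_hash, "valid_from": load_ts,
                        "valid_to": None, "is_current": True})
    return out
```

Comparing a single precomputed hash avoids column-by-column comparisons on wide dimension tables; unchanged rows are simply skipped.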
Cloud Computing · Data Engineering · SQL · ETL · Data Warehousing · Machine Learning
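The small-file housekeeping task from item 3 above can be illustrated with a minimal sketch: find files under a table directory that fall below a size threshold, then greedily bin them into compaction groups. The threshold value and greedy strategy are illustrative assumptions, not the production package:

```python
import os

SMALL_FILE_THRESHOLD = 128 * 1024 * 1024  # hypothetical: roughly one HDFS block

def find_small_files(root, threshold=SMALL_FILE_THRESHOLD):
    """Walk a table directory and collect files smaller than the threshold."""
    small = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) < threshold:
                small.append(path)
    return small

def compaction_plan(paths, sizes, target_size):
    """Greedily bin small files into groups whose combined size fits one target file."""
    groups, current, current_size = [], [], 0
    for path, size in sorted(zip(paths, sizes), key=lambda p: -p[1]):
        if current and current_size + size > target_size:
            groups.append(current)
            current, current_size = [], 0
        current.append(path)
        current_size += size
    if current:
        groups.append(current)
    return groups
```

Each group would then be rewritten as a single larger file, reducing NameNode metadata pressure from many small objects.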

Manager - Data Engineer | Cards & Payments

Jul 2023 – Apr 2025 · 1 yr 9 mos · Bengaluru, Karnataka, India · Hybrid

  • 1. Data Quality and Job Monitoring Framework for Hadoop On-Premise Jobs
  • Designed and implemented a centralized PySpark framework to monitor Hadoop-based batch jobs and perform automated Data Quality validations. The framework captures pipeline execution metadata, performs configurable DQ checks, and stores monitoring metrics to provide enhanced operational visibility and faster issue resolution.
  • 2. Credit Card Data Mart Engineering
  • Maintained and enhanced Credit Card domain data assets across both on-premise Hadoop environments and cloud object storage. Developed and optimized ETL pipelines supporting business intelligence, analytics, and data science use cases while ensuring reliable data availability for downstream consumers.
  • 3. Monthly Credit Card Offer Release Pipeline Modernization
  • Re-engineered a legacy Informatica workflow into a modern Spark-based pipeline, significantly improving job performance and reliability. Reduced runtime from 8 hours to 2 hours, minimized manual intervention, improved failure handling, and implemented a metadata-driven architecture with end-to-end monitoring.
  • 4. Azure Blob Storage Data Availability Dashboard
  • Developed dashboards to monitor data asset availability in Azure Data Lake Storage (ADLS) for downstream processing. Leveraged Informatica Intelligent Cloud Services (IICS) as the cloud ETL engine to efficiently move and process data in cloud storage, improving scalability and data accessibility.
  • 5. Real-Time Kafka Streaming Pipeline for Drop-off Events
  • Engineered a real-time data pipeline using Spark Structured Streaming to capture credit card application drop-off events from Kafka topics. The pipeline enabled automated near real-time data delivery to AVC systems, supporting timely customer engagement initiatives.
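The configurable DQ checks behind the monitoring framework in item 1 can be sketched as a rule table driven by config. The real framework is PySpark-based; this plain-Python version, with hypothetical rule names, only illustrates the config-driven idea:

```python
# Registry of named DQ rules; each takes (rows, column, arg) and returns pass/fail.
DQ_RULES = {
    "not_null": lambda rows, col, _: all(r.get(col) is not None for r in rows),
    "unique":   lambda rows, col, _: len({r[col] for r in rows}) == len(rows),
    "min_rows": lambda rows, _, arg: len(rows) >= arg,
}

def run_dq_checks(rows, config):
    """Evaluate each configured rule and return a metrics record per check."""
    results = []
    for check in config:
        rule = DQ_RULES[check["rule"]]
        passed = rule(rows, check.get("column"), check.get("arg"))
        results.append({"rule": check["rule"],
                        "column": check.get("column"),
                        "passed": passed})
    return results
```

Because checks live in configuration rather than code, new datasets can be onboarded to the monitoring framework without redeploying the pipeline, and the per-check records can be persisted as the operational metrics the bullet describes.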
PySpark · Data Quality · ETL · SQL · Data Engineering

Mechismu Racing Electric

Technical Team Member

Aug 2020 – May 2023 · 2 yrs 9 mos · Dhanbad, Jharkhand, India

Education

Indian Institute of Technology (Indian School of Mines), Dhanbad

Bachelor's degree — Mechanical Engineering

Jun 2019 – May 2023
