Mummurthi G

Data Engineer

Chennai, Tamil Nadu, India
3 yrs 7 mos experience

Key Highlights

  • Achieved 97% reduction in data latency for financial services.
  • Led TB-scale cloud migrations with zero data loss.
  • Built modular ELT frameworks for high-performance data processing.

Skills

Core Skills

Data Engineering · AWS · Real-time Data Processing · Data Architecture

Other Skills

Apache Iceberg · Data Warehousing · AWS Lakehouse · Data Governance · Spark Streaming · AWS Glue · Amazon Redshift · Amazon S3 · ETL Pipelines · Data Quality Monitoring · AWS Lake Formation · Medallion Architecture · Change Data Capture

About

Data Engineer with 4 years of experience designing scalable data platforms and real-time data pipelines on AWS and modern lakehouse architectures. Specialized in building petabyte-scale data warehouses and streaming pipelines using PySpark, Apache Iceberg, Airflow, dbt, and Snowflake. Key achievements include reducing data latency by 97% (22 minutes to 33 seconds), leading TB-scale cloud migrations with zero data loss, and building pipelines processing 10M+ records daily for enterprise platforms including General Motors and Toyota Financial Services. Strong expertise in distributed data systems, dimensional modeling, and modern ELT frameworks. Actively seeking Data Engineer / Senior Data Engineer roles where I can design high-performance data platforms and enable real-time analytics at scale.

Experience

3 yrs 7 mos
Total Experience
1 yr 4 mos
Average Tenure
10 mos
Current Experience

Tekion Corp

Data Engineer

Jul 2025 – Present · 10 mos · Chennai · On-site

Working on General Motors' enterprise legacy-to-cloud data migration, designing scalable AWS Lakehouse data platforms at TB scale.

Key Highlights:
  • Led migration of 35+ PostgreSQL tables and 150+ MongoDB collections with zero data loss during production cutover.
  • Architected a transactional lakehouse on Amazon S3 using Apache Iceberg, enabling ACID transactions and time travel for 200+ TB datasets.
  • Reduced downstream pipeline failures by 40% through data contracts, schema validation, and metadata versioning.
  • Designed Medallion Architecture (Bronze, Silver, Gold layers) with automated schema drift detection and data quality monitoring.
  • Built modular ELT frameworks using dbt and PySpark on Kubernetes (EKS) with automated testing and CDC-based incremental data loads.
  • Implemented enterprise data governance using AWS Lake Formation with fine-grained access controls across datasets.
Apache Iceberg · Data Warehousing · Data Engineering · AWS
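The data contracts and schema drift detection mentioned above can be sketched as a minimal pure-Python check (a hypothetical helper for illustration, not the production framework, which ran on PySpark/dbt):

```python
def detect_schema_drift(expected: dict, observed: dict) -> dict:
    """Compare an expected column->type contract against an observed schema.

    Returns added, removed, and type-changed columns so a pipeline can
    fail fast or quarantine records before bad data reaches Silver/Gold.
    """
    added = sorted(set(observed) - set(expected))
    removed = sorted(set(expected) - set(observed))
    changed = sorted(
        col for col in set(expected) & set(observed)
        if expected[col] != observed[col]
    )
    return {"added": added, "removed": removed, "changed": changed}


# Example contract and an incoming batch whose schema has drifted.
contract = {"id": "bigint", "amount": "decimal(18,2)", "ts": "timestamp"}
incoming = {"id": "bigint", "amount": "double", "ts": "timestamp", "src": "string"}

drift = detect_schema_drift(contract, incoming)
# drift["added"] == ["src"], drift["changed"] == ["amount"]
```

Gating loads on a check like this is one way such contracts reduce downstream pipeline failures: a type change is caught at ingestion rather than surfacing as a broken Gold-layer query.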

Annalect India

Data Engineer

Oct 2024 – May 2025 · 7 mos · Chennai · On-site

Built real-time and batch data pipelines for Toyota Financial Services North America, achieving a 97% reduction in data latency across 200+ TB of financial data on AWS.

Key Highlights:
  • Reduced data latency by 97% (22 minutes → 33 seconds) using Spark Streaming and AWS Glue pipeline architecture.
  • Processed 10M+ daily records from 5+ heterogeneous data sources into Amazon Redshift and AWS Athena.
  • Designed and maintained 15+ Apache Airflow DAGs with retry logic, SLA monitoring, and automated failure alerts ensuring 99%+ pipeline reliability.
  • Reduced archival storage costs by 35% annually using Amazon S3 Intelligent-Tiering and Parquet compression across 200+ TB datasets.
  • Migrated enterprise databases from Oracle, MySQL, and SQL Server to Amazon RDS and Amazon Redshift with minimal downtime.
  • Implemented Apache Iceberg tables with optimized partitioning and compaction strategies for large-scale financial datasets.
Apache Iceberg · Data Warehousing · Data Engineering · Real-Time Data Processing
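The retry logic behind those Airflow DAGs can be sketched as a standalone backoff wrapper (illustrative only; in Airflow itself this is configured declaratively via the task's `retries` and `retry_exponential_backoff` parameters):

```python
import time


def run_with_retries(task, max_attempts=3, base_delay=0.01):
    """Run a task callable, retrying on failure with exponential backoff.

    Re-raises the last exception once max_attempts is exhausted, so an
    orchestrator can mark the task failed and fire its alerting hooks.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff: base_delay, 2x, 4x, ...
            time.sleep(base_delay * 2 ** (attempt - 1))


# A flaky extract that succeeds on its third attempt.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "rows loaded"

result = run_with_retries(flaky_extract)
# result == "rows loaded" after two retries
```

Transient source errors (throttling, brief outages) are absorbed by the retries, while persistent failures still surface, which is how pipelines of this kind sustain 99%+ reliability without masking real defects.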

Cholamandalam Investment and Finance Company Limited

Software Engineer

Aug 2022 – Oct 2024 · 2 yrs 2 mos · Chennai · On-site

Progressed from Software Engineer to Data Engineer, building scalable financial data platforms supporting 15+ investment firms and 10+ commercial banks managing thousands of crores in assets.

Key Highlights:
  • Engineered automated financial document ingestion platform supporting 15+ investment firms managing ₹4,000+ crore assets.
  • Built batch and streaming data pipelines publishing hourly financial rate datasets for 10+ commercial banks covering ₹1,600+ crore assets.
  • Designed cloud data warehouse and data lake architecture storing and processing 200+ TB of structured and semi-structured data.
  • Improved financial analytics query performance by 46% through MySQL schema optimization, indexing, and query tuning.
  • Developed parameterized ETL pipelines with incremental loads and automated backfills, reducing manual intervention.
  • Implemented data quality monitoring frameworks, including row-count reconciliation and schema drift detection, preventing production data failures.
Data Warehousing · Amazon S3 · Data Engineering · Data Architecture
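The row-count reconciliation check mentioned above can be sketched as a small pure-Python comparison (a hypothetical helper for illustration; the table names and counts are invented):

```python
def reconcile_row_counts(source_counts, target_counts, tolerance=0.0):
    """Compare per-table row counts between a source system and its target.

    Returns the tables whose counts differ by more than `tolerance`
    (expressed as a fraction of the source count), flagging likely
    partial or failed loads before they reach consumers.
    """
    mismatches = {}
    for table, src in source_counts.items():
        tgt = target_counts.get(table, 0)
        if abs(src - tgt) > src * tolerance:
            mismatches[table] = {"source": src, "target": tgt}
    return mismatches


# Counts captured after a nightly load (illustrative figures).
src = {"loans": 1_000_000, "payments": 250_000, "rates": 5_000}
tgt = {"loans": 1_000_000, "payments": 249_100, "rates": 5_000}

bad = reconcile_row_counts(src, tgt)
# "payments" is flagged: 900 rows short of the source
```

Run after each load and wired to alerting, a check like this catches silent row loss; a small nonzero `tolerance` accommodates tables where in-flight transactions make exact parity unrealistic.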

Education

Adhiyamaan College of Engineering

Bachelor of Technology - BTech — Information Technology

Aug 2018 – Jun 2022
