Devendra Sawant

Data Engineer

India · 10 yrs 3 mos experience
AI Enabled · AI/ML Practitioner

Key Highlights

  • Engineered data pipelines processing over 2TB daily.
  • Achieved 30% improvements in reporting efficiency.
  • Expertise in AWS and data engineering lifecycle.

Skills

Core Skills

Data Engineering · AWS · Infrastructure as Code · Monitoring · Architecture · Data Migration · Data Analysis · ETL

Other Skills

AWS Glue · Amazon S3 · AWS Lambda · Terraform · SQL · Python · Amazon Redshift · Amazon Kinesis · AWS Step Functions · CloudWatch · Athena · EventBridge · Amazon MWAA · Airflow · Datadog

About

With a Master's in Business Analytics and extensive experience in distributed systems, I architect scalable data solutions that drive measurable business impact. At Tata Consultancy Services, I've engineered data pipelines processing over 2TB daily, achieving 30% improvements in reporting efficiency and 25% performance gains. My expertise spans the entire data engineering lifecycle—from designing robust ETL pipelines using AWS Glue and Step Functions to implementing Data Mesh architectures with Lake Formation. I'm proficient in Python, Java, and SQL, with deep AWS expertise across Lambda, EMR, Redshift, Kinesis, and S3. My infrastructure-as-code approach using Terraform ensures secure, scalable deployments while optimizing costs—recently achieving 20% savings on Lambda functions. Working closely with cross-functional teams including Amazon stakeholders, I translate complex technical requirements into actionable solutions using big data technologies like Spark, Hive, and modern cloud-native approaches. I'm passionate about leveraging emerging technologies to solve complex data challenges.

Experience

10 yrs 3 mos
Total Experience
10 yrs 3 mos
Average Tenure
10 yrs 3 mos
Current Experience

Tata Consultancy Services

3 roles

Data Engineer

Promoted

Aug 2019 – Present · 6 yrs 9 mos · Greater Phoenix Area · Hybrid

  • Engineered AWS Glue ingestion modules processing 500GB+ daily datasets across multiple S3 layers, reducing report generation time by 40% through optimized SQL queries (joins, CTEs) with Athena and QuickSight.
  • Automated 15+ data pipelines by integrating AWS Lambda, Glue, S3, and EventBridge with CI/CD using Terraform and Jenkins, reducing manual processing by 70% and accelerating real-time business insights.
  • Designed scalable workflows using Amazon MWAA (Managed Workflows for Apache Airflow) with validation, transformation, and partitioning steps, increasing throughput by 35% and reducing pipeline errors by 25%.
  • Optimized data extract pipelines to reduce turnaround from 8 hours to 2 hours, supporting 10+ teams with timely, accurate datasets.
  • Enhanced pipelines processing 2TB+ daily, achieving 20% cost reduction and 25% performance improvement in Lambda functions.
  • Developed reusable Terraform modules for AWS services (S3, KMS, IAM, Glue, SQS, SNS) following security best practices, improving infrastructure consistency and reducing provisioning time.
  • Implemented comprehensive monitoring using CloudWatch, CloudTrail, and DataDog, decreasing incident response time by 50% and maintaining 99.9% uptime.
  • Leveraged event-driven architecture with EventBridge, SNS, and SQS to integrate 10+ microservices, improving scalability by 40% and reducing processing latency.
  • Led migration of 5TB+ datasets to DynamoDB using AWS DMS, ensuring data consistency and minimal downtime.
  • Performed data validation using Python scripts ensuring 100% data quality and collaborated cross-functionally as liaison between operations and project management teams.
AWS Glue · Amazon S3 · AWS Lambda · Terraform · SQL · Python +6
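
The validation and date-partitioning steps described in the bullets above can be sketched in plain Python. This is a minimal illustration under assumed details: the record fields, bucket, and prefix are hypothetical, and the real pipelines run on AWS Glue and MWAA rather than this snippet.

```python
from datetime import date

# Hypothetical required fields for an incoming record (illustration only).
REQUIRED_FIELDS = {"order_id", "amount", "event_date"}

def validate(record: dict) -> bool:
    """A record passes if every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)

def partition_key(prefix: str, event_date: date) -> str:
    """Build a Hive-style year=/month=/day= key — the S3 layout that lets
    Glue and Athena prune partitions instead of scanning everything."""
    return (f"{prefix}/year={event_date.year}"
            f"/month={event_date.month:02d}/day={event_date.day:02d}/")

rec = {"order_id": "A1", "amount": 10.5, "event_date": "2024-03-07"}
if validate(rec):
    key = partition_key("s3://raw-bucket/orders", date(2024, 3, 7))
    # key == "s3://raw-bucket/orders/year=2024/month=03/day=07/"
```

Writing objects under keys like this is what makes partition-pruned Athena queries (the joins and CTEs mentioned above) cheap: a `WHERE year = '2024' AND month = '03'` filter only touches the matching S3 prefixes.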

Data Analyst

Promoted

Nov 2015 – Jun 2017 · 1 yr 7 mos

  • Created Hive tables, loaded them with data, and wrote Hive queries; exported the analyzed data to relational databases using Sqoop for visualization and report generation.
  • Interpreted data from multiple sources, consolidated it, and performed ETL processing using Sqoop, Hive, and Pig.
  • Implemented ETL operations using Spark and Scala.
  • Completed 5+ ad-hoc projects that successfully delivered the requested data to data scientists.
Hive · Sqoop · Spark · Scala · Data Analysis · ETL
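
The consolidate-then-transform ETL pattern from this role can be illustrated in plain Python. The sources and field names here are made up for the sketch; the actual work used Sqoop, Hive, Pig, and Spark/Scala.

```python
# Two hypothetical extracts from different source systems.
source_a = [{"id": 1, "revenue": "100"}, {"id": 2, "revenue": "250"}]
source_b = [{"id": 3, "revenue": "80"}]

def consolidate(*sources):
    """Union rows from multiple extracts into one dataset."""
    return [row for src in sources for row in src]

def transform(rows, threshold=100):
    """Cast revenue to int and keep only rows at or above the threshold."""
    return [{**r, "revenue": int(r["revenue"])} for r in rows
            if int(r["revenue"]) >= threshold]

loaded = transform(consolidate(source_a, source_b))
# Only ids 1 and 2 survive the threshold filter.
```

In the Hive/Pig versions, `consolidate` corresponds to a UNION of staged tables and `transform` to a filtered projection before the Sqoop export to the relational store.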

System Engineer

Nov 2013 – Oct 2015 · 1 yr 11 mos

  • Authored test cases and logged, tracked, and resolved defects in an Agile environment using HP ALM and JIRA.
  • Gained strong exposure to the software development life cycle (SDLC), bug life cycle (BLC), and software testing life cycle (STLC), along with testing techniques and test report preparation.
  • Performed smoke, sanity, functional, system integration, regression, end-to-end, and cross-browser compatibility testing.
  • Collaborated with the development team on QA processes to ensure quality software releases, using Scrum (Agile) methodology for the SEPA payment system.
  • Developed, executed, and maintained QTP test scripts for regression and REST API testing.
  • Wrote SQL queries to identify test data using Toad for Oracle Database 11g and DATA-QWIK GT.M in the FIS Profile core banking solution.
  • Analyzed data from multiple sources and designed SQL scripts to verify data accuracy for automated script runs.
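
The source-to-target accuracy checks described above can be sketched as a keyed reconciliation. The table rows and key name here are hypothetical; the original checks were SQL queries run in Toad and GT.M.

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare two extracts by key: report rows missing from the target
    and rows present in both but with differing values."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(set(src) - set(tgt))
    mismatched = sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k])
    return {"missing": missing, "mismatched": mismatched}

source = [{"id": 1, "bal": 100}, {"id": 2, "bal": 50}]
target = [{"id": 1, "bal": 100}, {"id": 2, "bal": 55}]
report = reconcile(source, target)
# report == {"missing": [], "mismatched": [2]}
```

In SQL terms this corresponds to an anti-join for the missing keys and an inner join with a value comparison for the mismatches.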

Education

Naveen Jindal School of Management, UT Dallas

Master's degree — Business Analytics

Jan 2017 – Jan 2019

University of Mumbai

Bachelor of Engineering (B.E.) — Electronics and Telecommunication

Jan 2009 – Jan 2013
