Pritha Shrivastava

Data Engineer

Bengaluru, Karnataka, India · 5 yrs 6 mos experience

Key Highlights

  • Expert in building scalable data pipelines.
  • AWS Certified Cloud Practitioner experienced in designing cloud-native solutions.
  • Strong focus on data governance and automation.

Skills

Core Skills

Data Engineering, Cloud Computing, Data Governance

Other Skills

Extract, Transform, Load (ETL), AWS Glue, Python, PySpark, SQL, GitHub Actions, AWS CloudFormation, Agile Methodologies, AWS Lake Formation, Teamwork, Requirements Gathering, Snowflake, Microsoft Excel, Microsoft Office

About

I'm a Senior Data Engineer with around 5 years of experience, mainly working with Python, SQL, and AWS to build secure, scalable data pipelines. I focus on automating data workflows using tools like AWS Glue, CDK, and PySpark to make data reliable, well-governed, and easy to use across the company. As an AWS Certified Cloud Practitioner, I enjoy designing cloud-native solutions that improve business processes and support analytics at scale. I'm hands-on with everything from writing transformation logic to setting up infrastructure as code and configuring fine-grained data access controls. I'm always learning and exploring new technologies, and I love solving real-world data problems in ways that are both practical and impactful.

Experience

5 yrs 6 mos
Total Experience
4 yrs 10 mos
Average Tenure
8 mos
Current Experience

Fidelity International

Modern Data Engineer III

Sep 2025 – Present · 8 mos · Bengaluru, Karnataka, India · Hybrid

Principal Global Services

3 roles

Senior Software Engineer

Promoted

Mar 2024 – Sep 2025 · 1 yr 6 mos

  • With over 5 years of experience building scalable data pipelines and delivering secure, analytics-ready datasets on the cloud, I lead end-to-end data engineering initiatives, from data ingestion and transformation to governance and automation. I focus on building reusable frameworks, improving data access, and ensuring security and performance across our data platforms.

Key responsibilities:
  • Build and maintain scalable ETL/ELT pipelines using AWS Glue, Python, and PySpark to process structured and semi-structured data from Amazon S3.
  • Monitor and debug AWS Glue and Snowflake pipelines to ensure reliability and performance.
  • Automate infrastructure and deployment using GitHub Actions and AWS CloudFormation for CI/CD.
  • Collaborate with business analysts and stakeholders to gather requirements and deliver production-ready datasets.
  • Implement secure, fine-grained access controls using AWS Lake Formation to meet data governance standards.
  • Actively contribute to Agile practices including sprint planning, stand-ups, and retrospectives.
Extract, Transform, Load (ETL), AWS Glue, Python, PySpark, SQL, GitHub Actions, +4

Software Engineer

Nov 2021 – Mar 2024 · 2 yrs 4 mos

Agile Methodologies, SQL

Trainee Analyst

Nov 2020 – Nov 2021 · 1 yr

Teamwork, Requirements Gathering

Vodafone Idea Limited

Summer Intern

May 2019 – Jun 2019 · 1 mo · Bhopal

  • Worked in the core network switching service department and gained insight into the workings of the telecom industry.

Education

Shri G S Institute of Technology & Science

Bachelor of Engineering - BE

Jan 2016 – Jan 2020

Bal Bharti School, Rewa

Class 12, PCM (Physics, Chemistry, Mathematics)

Jan 2000 – Jan 2015
