Pragya Bhardwaj

Data Engineer

Delhi, India · 4 yrs 8 mos experience

Key Highlights

  • 4+ years of experience in data engineering.
  • Expert in AWS and big data technologies.
  • Passionate about mentoring aspiring data engineers.

Skills

Core Skills

Data Engineering · AWS · Database Management

Other Skills

Apache Airflow · Big Data Technologies · Cassandra · Data Analysis · Data Ingestion · Data Pipeline · Data Transformation · Database Schema · Databases · ETL Processes · Grafana · Hadoop · Hive · Machine Learning · MongoDB

About

4+ years of experience building data pipelines, automating workflows, and optimizing data solutions in cloud environments. I specialize in turning raw data into powerful insights using tools like SQL, Python, Spark, and AWS (S3, Glue, Athena, EMR, Lambda, SQS, SNS), and now I'm channeling that experience into helping others grow in this space.

What I do:

  • Design scalable data pipelines for analytics and reporting
  • Work hands-on with cloud-native tools and big data platforms
  • Break down complex data engineering concepts into simple, actionable content
  • Mentor and share career tips for aspiring data engineers

What I'm building: As I grow in my own data engineering journey, I'm sharing what I learn through practical tips, beginner-friendly tutorials, and real-world project insights, with a mission to make data engineering more accessible for everyone.

Let's connect if you're:

  • Exploring a career in data engineering
  • Curious about AWS + Big Data workflows
  • Looking to collaborate on content, mentorship, or learning communities

Experience

Ameriprise Financial Services, LLC

Associate Data Engineer

Apr 2025 – Present · 11 mos · On-site

Macquarie Group

Senior Associate

Oct 2023 – Apr 2025 · 1 yr 6 mos · Gurugram, Haryana, India · Hybrid

  • Integral part of the Financial Crime and Risk team, focusing on the Sanctions Screening Project to deliver a high-precision, pre-screened list of securities with potential sanctions or trade restrictions for Macquarie’s Risk Management Group.
  • Engineered scalable data ingestion pipelines, leveraging in-house Simple Ingestion Framework for seamless integration of diverse data sources into the data lake.
  • Implemented ELT workflows using Apache Airflow DAGs to automate end-to-end data orchestration, including extraction of data from vendors (SIX, Golden Source, IHS Markit) via MFT (Macquarie File Transfer) onto AWS EBS volumes mounted on AWS EC2 instances. Pipelines handled ingestion into the Corporate DataHub (CDH) and applied complex transformation logic using Spark SQL to meet business requirements.
  • Developed advanced monitoring solutions using Grafana, coding alerts and metrics for proactive infrastructure management. Monitored critical metrics such as CPU and memory utilization, triggering automated alerts at a predefined 80% threshold to ensure optimal system performance and resource allocation.
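The 80%-threshold alerting rule described above can be sketched in plain Python. This is an illustrative stand-in only: the actual alerts were defined in Grafana, and the function and metric names here are hypothetical.

```python
# Minimal sketch of an 80%-utilization alert rule (illustrative;
# the real alerts were configured in Grafana, not in code like this).

ALERT_THRESHOLD = 0.80  # alert when CPU or memory utilization exceeds 80%

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return one alert message per metric whose value exceeds the threshold."""
    alerts = []
    for name, value in metrics.items():
        if value > ALERT_THRESHOLD:
            alerts.append(f"ALERT: {name} at {value:.0%} exceeds {ALERT_THRESHOLD:.0%}")
    return alerts

# Example: memory is over the threshold, CPU is not.
print(check_metrics({"cpu": 0.65, "memory": 0.91}))
```

A fixed threshold like this keeps the rule simple and predictable; production setups often add a sustained-duration condition so short spikes do not page anyone.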
AWS · Apache Airflow · Spark SQL · Data Ingestion · Monitoring Solutions · Grafana

Cognizant

3 roles

Data Engineer

Promoted

Jul 2022 – Oct 2023 · 1 yr 3 mos

  • Ingested data from multiple SORs into the data lake using pipelines built on the AWS cloud platform, involving services such as S3, Lambda, Glue, EMR, Athena, SNS, and SQS.
  • Developed functions using PySpark and Spark SQL to generate multiple customer campaigns, fetching information about current credit and debit card statuses.
  • Developed feature sets based on groups of clients, combining 12+ aggregates and grouping similar clients into one set of users with similar financial interests.
  • Integrated and updated existing Lambda code with Amazon RDS by creating new triggers using SNS and SQS; created and deployed with CloudFormation Templates (CFT), using Bitbucket for version control.
  • Worked on existing Spark applications, adding filter conditions to isolate primary clients across 10+ code files that generate insights for the advisors who guide clients on financial planning.
  • Developed 10+ new insights from scratch using data curation: extracted data from multiple tables in the data lake, performed transformations using PySpark and Spark SQL, and segregated the data into two tables (core and presentation) based on business logic and requirements shared by data analyst teams. Deployed all of them to production without any issues or failures.
  • Actively involved in fixing production issues and enhancing the existing insight-generation code, debugging logic and setting up new logic to deliver correct data to advisors and business teams.
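The feature-set idea above (aggregate per-client activity, then group clients with similar financial interests) can be illustrated with a pure-Python sketch. The production code used PySpark and Spark SQL at scale; the field names, categories, and grouping rule below are hypothetical and chosen only to show the shape of the logic.

```python
# Illustrative pure-Python sketch of building client feature sets:
# aggregate activity per client, then bucket clients by dominant interest.
# The real pipeline used PySpark/Spark SQL; all names here are hypothetical.
from collections import defaultdict

def build_feature_sets(transactions: list[dict]) -> dict[str, list[str]]:
    """Group client ids by the spending category with their largest total."""
    # Step 1: per-client aggregates (the real job combined 12+ of these).
    totals: dict[str, defaultdict] = {}
    for txn in transactions:
        client = txn["client_id"]
        totals.setdefault(client, defaultdict(float))[txn["category"]] += txn["amount"]

    # Step 2: assign each client to the group for its dominant category.
    groups: dict[str, list[str]] = defaultdict(list)
    for client, by_category in totals.items():
        dominant = max(by_category, key=by_category.get)  # largest aggregate wins
        groups[dominant].append(client)
    return dict(groups)

txns = [
    {"client_id": "c1", "category": "travel", "amount": 500.0},
    {"client_id": "c1", "category": "dining", "amount": 120.0},
    {"client_id": "c2", "category": "travel", "amount": 300.0},
    {"client_id": "c3", "category": "dining", "amount": 900.0},
]
print(build_feature_sets(txns))  # c1 and c2 share the "travel" interest
```

In Spark the same two steps would typically be a `groupBy().agg()` followed by a categorization expression, but the logic is the same.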
AWS · PySpark · Spark SQL · Data Pipeline · Data Transformation · Data Engineering

Programmer Analyst Trainee

Jul 2021 – Jul 2022 · 1 yr

  • Worked on basic CRUD operations for a telecom client.
  • Developed and implemented database schemas using AWS services such as Glue and Athena.
  • Worked with data analysts to create SQL queries that extract data from the databases for generating business reports.
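The analyst-support workflow above (write SQL extracts against managed tables, hand the results to reporting) can be sketched with the standard library's sqlite3 as a stand-in for Glue/Athena. The table, columns, and data below are entirely hypothetical.

```python
# Stand-in sketch of a reporting-style SQL extract, using stdlib sqlite3
# in place of Glue/Athena; table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (customer TEXT, minutes INTEGER)")
conn.executemany(
    "INSERT INTO usage VALUES (?, ?)",
    [("alice", 120), ("bob", 45), ("alice", 30)],
)

# A typical analyst extract: total minutes per customer, for a report.
report = conn.execute(
    "SELECT customer, SUM(minutes) FROM usage GROUP BY customer ORDER BY customer"
).fetchall()
print(report)  # [('alice', 150), ('bob', 45)]
conn.close()
```

With Athena the query text is much the same; the difference is that the tables are defined in the Glue Data Catalog and the data lives in S3.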
AWS · SQL · Database Schema · Database Management

Intern - Artificial Intelligence & Analytics

Apr 2021 – Jun 2021 · 2 mos

  • Allocated to the Artificial Intelligence & Analytics domain; trained in data modelling, ETL processes, data analysis, and data lake concepts.
  • Gained knowledge of Big Data technologies such as Hadoop, Spark, and Hive.
Data Analysis · ETL Processes · Big Data Technologies

Education

Jaypee Institute Of Information Technology

Master of Technology — Data Analytics

Jul 2019 – Jul 2021

Inderprastha Engineering College

Bachelor of Technology — Computer Science Engineering

Jan 2015 – Jan 2019

ASN Senior Secondary School - India

Jan 2000 – Jan 2015
