Divya Gopal

Senior Software Engineer

South Delhi, Delhi, India · 6 yrs 1 mo experience

Key Highlights

  • Over 5 years of experience as a Data Engineer.
  • Expertise in AWS and Azure data technologies.
  • Proven track record of optimizing data workflows.

Skills

Core Skills

Data Engineering · AWS · Azure

Other Skills

AWS Glue · AWS Lambda · AWS Step Functions · Amazon CloudWatch · Amazon Elastic MapReduce (EMR) · Amazon Redshift · Amazon S3 · Amazon Web Services (AWS) · Apache Spark · Azure Data Factory · Azure Databricks · Azure Functions · Azure SQL · Big Data · Big Data Analytics

About

As a graduate in Electronics and Communication Engineering, I’ve always had a strong inclination toward problem-solving and analytical thinking. My academic journey sparked my passion for data, which led me to explore the world of big data technologies and cloud computing. Earning certifications in Azure Fundamentals (AZ-900) and Azure Data Fundamentals (DP-900) further solidified my foundation and commitment to continuous learning.

With over 5 years of experience as a Data Engineer, I’ve had the opportunity to work across diverse industries such as healthcare and retail. At Carelon Global Solutions, I led the development of scalable ETL pipelines using AWS services like S3, Redshift, and Step Functions, optimizing large-scale data workflows for accuracy and performance. My work at Mindtree involved transforming clickstream data using Azure Data Lake and Databricks, delivering impactful insights for global retail clients. I specialize in PySpark, SQL, and Python, and have a proven track record of reducing pipeline runtime and improving query efficiency.

Looking ahead, I aim to deepen my expertise in cloud-native data architectures and real-time analytics. My goal is to take on roles where I can architect end-to-end data solutions that not only scale but also deliver measurable business outcomes. I’m particularly excited about contributing to data-driven innovation in domains like healthcare and finance, while mentoring upcoming data professionals and staying at the forefront of evolving technologies.

Experience

Clairvoyant

Assistant Manager-Senior Software Engineer

May 2025 – Present · 10 mos · Gurugram, Haryana, India · Remote

Carelon Global Solutions

2 roles

Software Engineer III

Apr 2023 – May 2025 · 2 yrs 1 mo · Gurugram, Haryana, India

  • Spearheaded the development of enterprise-grade ETL solutions to support analytics in the healthcare domain using AWS and PySpark.
  • Improved operational efficiency by 40% through performance optimization of Spark jobs and intelligent partitioning strategies.
  • Automated over 60 tables in production, significantly cutting down manual dependencies and operational risk.
  • Enabled faster business decisions by optimizing data pipelines for analytics-ready storage in S3 and Redshift.
  • Contributed to migration and modernization initiatives by implementing scalable data frameworks and CDC logic.
  • Facilitated business testing and validation phases to ensure alignment with stakeholder expectations and compliance standards.
AWS · PySpark · ETL · Data Engineering · Data Quality
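The partitioning strategies mentioned above can be illustrated in plain Python (a conceptual sketch, not the production PySpark code; the `member_id` key and record shape are hypothetical):

```python
from collections import defaultdict

def partition_by_key(records, num_partitions, key="member_id"):
    """Assign each record to a partition by hashing its key, so rows
    sharing a key land in the same bucket and downstream joins or
    aggregations avoid redistributing data."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[hash(rec[key]) % num_partitions].append(rec)
    return dict(partitions)

# Hypothetical healthcare-claim rows for illustration.
claims = [{"member_id": f"M{i}", "amount": i * 10} for i in range(8)]
parts = partition_by_key(claims, num_partitions=4)

# Every record lands in exactly one of the 4 buckets.
assert sum(len(rows) for rows in parts.values()) == len(claims)
```

In Spark the same idea is expressed with `repartition` on a join key; the payoff is the same, i.e. co-locating related rows so wide operations shuffle less data.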

Software Engineer II

Nov 2022 – Apr 2023 · 5 mos · Gurugram, Haryana, India

  • Designed and implemented scalable ETL pipelines using AWS services (S3, Redshift, Lambda, Step Functions) to process historical and incremental healthcare data.
  • Improved Spark job performance by 40% through advanced optimization techniques including partitioning, caching, bucketing, and broadcast joins.
  • Automated end-to-end data workflows using AWS Step Functions, significantly reducing manual intervention and operational overhead.
  • Built and maintained robust data quality frameworks to ensure accurate, clean, and validated healthcare datasets.
  • Implemented Change Data Capture (CDC) strategies to handle high-volume data updates, ensuring efficient synchronization across Redshift and Hive.
  • Led unit, system, and UAT testing for ETL workflows, collaborating with stakeholders to validate data accuracy before production deployment.
  • Enhanced data retrieval efficiency by 20% by optimizing data storage patterns across S3 and Redshift.
  • Proactively monitored and debugged data pipelines to ensure stability, reliability, and high performance.
  • Documented cloud architecture and data workflows to support knowledge sharing and transparency across teams.
AWS · PySpark · ETL · Data Quality · Change Data Capture · Data Engineering
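The Change Data Capture strategy referenced above can be sketched in plain Python, with the target table modeled as a dict keyed by `id` (an illustrative analogy for an upsert into Redshift or Hive; the event shape is assumed, not taken from the actual pipeline):

```python
def apply_cdc(target, changes):
    """Apply a batch of CDC events to a target table held as a dict.
    Inserts and updates are treated as upserts (last write wins);
    deletes of rows that no longer exist are tolerated."""
    for change in changes:
        op, row = change["op"], change["row"]
        if op in ("insert", "update"):
            target[row["id"]] = row           # upsert
        elif op == "delete":
            target.pop(row["id"], None)       # idempotent delete
    return target

table = {1: {"id": 1, "status": "active"}}
batch = [
    {"op": "update", "row": {"id": 1, "status": "inactive"}},
    {"op": "insert", "row": {"id": 2, "status": "active"}},
    {"op": "delete", "row": {"id": 1}},
]
apply_cdc(table, batch)
assert table == {2: {"id": 2, "status": "active"}}
```

Applying events in order and making deletes idempotent is what keeps high-volume replication safe to retry, which is the practical core of any CDC synchronization.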

Mindtree

Software Engineer C1

Jan 2020 – Oct 2022 · 2 yrs 9 mos · Kolkata, West Bengal, India · Remote

  • Developed and optimized ETL pipelines to process clickstream data for Adidas apps using Azure Data Lake and Databricks.
  • Leveraged PySpark and Spark SQL to transform large datasets across multiple formats (Parquet, ORC), improving pipeline performance by 25%.
  • Created and managed external Hive tables for reporting and analytical use cases, enabling faster data access.
  • Enhanced data processing efficiency by optimizing queries and schema designs, reducing storage and compute costs.
  • Integrated pipelines with SQL-based systems to support downstream analytics, ensuring end-to-end data flow consistency.
  • Supported the Ship Expense Management system through data-driven insights, resulting in a 12% reduction in shipping costs.
  • Ensured high data quality and availability through robust data validations and regular maintenance of ingestion workflows.
  • Collaborated with cross-functional teams to analyze raw data and translate business requirements into technical solutions.
  • Actively participated in agile development cycles, contributing to continuous integration and delivery of data engineering tasks.
Azure · Databricks · ETL · Data Quality · SQL · Data Engineering
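The date-partitioned layout behind the clickstream work above can be sketched in plain Python (a toy stand-in for writing Parquet partitioned by day; the event fields are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_day(events):
    """Group raw clickstream events into per-day buckets, mirroring a
    date-partitioned Parquet layout so queries for one day only touch
    that day's files."""
    buckets = defaultdict(list)
    for ev in events:
        day = datetime.fromisoformat(ev["ts"]).date().isoformat()
        buckets[day].append(ev)
    return dict(buckets)

events = [
    {"ts": "2022-03-01T10:00:00", "page": "home"},
    {"ts": "2022-03-01T11:30:00", "page": "cart"},
    {"ts": "2022-03-02T09:15:00", "page": "checkout"},
]
parts = bucket_by_day(events)
assert sorted(parts) == ["2022-03-01", "2022-03-02"]
```

This is the same partition-pruning idea that lets a Databricks query on a single day skip the rest of the data lake entirely.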

Education

Asansol Engineering College

Bachelor of Technology

Jan 2015 – Aug 2019
