Ankita Roy Choudhury

CEO

Bengaluru, Karnataka, India · 11 yrs 6 mos experience

Key Highlights

  • 10+ years of experience in data engineering
  • Expertise in building scalable cloud data platforms
  • Proven track record in optimizing data processing workloads

Skills

Core Skills

Data Engineering · Microsoft Azure

Other Skills

Azure Data Factory · Azure Databricks · Delta Lake · PySpark · Spark SQL · Azure Data Lake Storage Gen2 · Databricks · SQL · BODS · Data Migration · Data Modeling · Data Analytics · Extract, Transform, Load (ETL)

About

Senior Data Engineer with 10+ years of experience designing and building scalable cloud data platforms on Microsoft Azure. Expertise in Azure Data Factory, Azure Databricks, PySpark, Delta Lake, and Azure Data Lake Storage Gen2 for building enterprise-grade data pipelines and analytics platforms. Proven track record of optimizing distributed data processing workloads, reducing cloud infrastructure costs, and delivering reliable data platforms supporting Power BI and enterprise analytics solutions. Strong experience in data pipeline development, ETL/ELT frameworks, medallion architecture, CDC implementation, and Spark performance optimization. I am an Australian Permanent Resident (PR).

Experience

11 yrs 6 mos
Total Experience
3 yrs 7 mos
Average Tenure
9 mos
Current Experience

Career break

Health and well-being

Jul 2025 – Present · 9 mos · India

G-P (Globalization Partners)

Senior Data Engineer

May 2023 – Jul 2025 · 2 yrs 2 mos · India · Remote

  • Designed and implemented 20+ Azure-based data pipelines using Azure Data Factory and Databricks, processing 2 TB+ of data monthly across multiple enterprise systems and reducing data latency by 40%.
  • Built Medallion architecture (Bronze, Silver, Gold) using Delta Lake on Azure Data Lake Storage Gen2.
  • Developed Databricks Autoloader ingestion pipelines to automate incremental data ingestion.
  • Implemented data transformations using PySpark and Spark SQL to apply complex business rules.
  • Implemented Delta Live Tables pipelines for SCD Type 2 historical data tracking.
  • Optimized Spark workloads using Photon acceleration, Catalyst optimizer, and Adaptive Query Execution.
  • Reduced Databricks compute costs by ~35% through performance optimization and cluster configuration tuning.
Azure Data Factory · Azure Databricks · Delta Lake · PySpark · Spark SQL · Data Engineering +1

Walmart

2 roles

Software Engineer III

Promoted

Aug 2019 – May 2023 · 3 yrs 9 mos

  • Built enterprise ETL pipelines using Azure Data Factory, integrating data from 17+ systems including Oracle, SQL Server, and SFTP sources.
  • Designed metadata-driven ingestion frameworks using ADF activities such as Lookup, Copy Data, Execute Pipeline, and Get Metadata.
  • Processed data using Databricks, PySpark DataFrames, and Spark SQL transformations.
  • Implemented CDC pipelines using SCD Type 2 and Type 4 to maintain historical records.
  • Loaded transformed datasets into Azure SQL Database to support analytics workloads.
  • Created SQL views and data models enabling enterprise Power BI dashboards.
Azure Data Factory · Databricks · PySpark · SQL · Data Engineering · Microsoft Azure

Software Engineer II

Oct 2017 – Aug 2019 · 1 yr 10 mos

  • Performed data migration for multiple markets using a single dynamic BODS job, significantly reducing vendor-partner involvement and saving a substantial share of the project budget.
  • Developed multiple customized Python reports on top of accounting data for finance book closure activities, helping leadership make informed decisions.
  • Developed a Python-based script to perform post-load data validation at the individual field level.
BODS · Data Migration · Data Engineering

Accenture

Software Engineer

Oct 2014 – Oct 2017 · 3 yrs · Bengaluru, Karnataka, India

  • Developed a Data Quality Dashboard to monitor the quality of data flowing through a system: SAP Data Services jobs feed SAP HANA calculation views that handle the complex metric calculations, presented through a front end built in SAP Design Studio. The tool displays data quality in graphical and tabular formats to support well-informed decisions.
  • Worked in the Enterprise Information Management area on an ERP solution, using SAP Data Services as the base job and pushing complex validation logic down to the SAP HANA layer via scripted calculation views. Used a wide range of Data Services transforms, including Row Generation, Query, Case, Merge, Pivot/Unpivot, Table Comparison, and Validation.
  • Contributed to the design and development of a data migration solution for merger, acquisition, and divestiture (M&A) scenarios. Created the FD, TD, FMD, and TMD for four finance objects before laying out scenarios for the M&A situation. The solution combined several Accenture data migration assets with SAP Data Services and became a baseline reused by multiple clients.
  • Maintained tasks and timelines for the entire project in a tool called ADT, giving leadership visibility into current project status, and acted as the team's point of contact for maintaining these details.
BODS · Data Migration · Data Engineering

Education

West Bengal University of Technology, Kolkata

Bachelor of Technology - BTech

Aug 2010 – Jul 2014

1% Club

Personal Finance

May 2024 – Present
