Saket Sharma

Data Engineer

Noida, Uttar Pradesh, India · 5 yrs 6 mos experience

Key Highlights

  • Experienced in Azure Data Engineering and ETL processes.
  • Proven track record in designing scalable data solutions.
  • Microsoft Certified: Azure Data Engineer Associate.

Skills

Core Skills

Data Engineering · Azure Data Services

Other Skills

Ad Hoc Reporting · Azure Active Directory · Azure Cloud · Azure Cosmos DB · Azure Data Factory · Azure Data Lake · Azure DevOps · Azure Logic Apps · Azure SQL · CI/CD · Cloud · Cloud Services · Communication · Continuous Integration and Continuous Delivery (CI/CD) · Dashboards

About

I am an experienced Azure Data Engineer with a solid background in cloud data engineering, data transformation, ETL processes, and big data technologies. Over the course of my career, I have built extensive expertise in designing, developing, and optimizing data solutions in the Microsoft Azure ecosystem.

With a proven track record of delivering data solutions that are scalable, efficient, and aligned with business requirements, I have honed my skills in cloud-based data services such as Azure Data Factory, Azure Databricks, Azure Data Lake, and other Azure-based technologies. My experience spans industries such as financial services and telecommunications, where I have worked closely with clients and cross-functional teams to solve complex data challenges.

To complement my hands-on experience, I hold the Microsoft Certified: Azure Data Engineer Associate (DP-203) certification, which validates my expertise in Azure data engineering, including data storage, processing, security, and integration.

Experience

Optum

Senior Data Engineering Analyst

Mar 2025 – Present · 1 yr · Noida, Uttar Pradesh, India · Hybrid

Quadrant Technologies

Azure Data Engineer

Jul 2024 – Feb 2025 · 7 mos · Remote

  • Domain: Financial Services
  • Roles & Responsibilities:
  • Designed and developed applications on the Data Lake to transform data for business users to perform analytics.
  • Managed data coming from different sources and was involved in HDFS maintenance and the loading of structured and unstructured data.
  • Worked with different file formats such as CSV, Parquet, and fixed width to load data from various sources into raw tables.
  • Gained extensive experience working with large datasets and creating ETL pipelines using PySpark and Spark SQL.
  • Conducted data model reviews with team members and captured technical metadata through modeling tools.
  • Implemented the ETL process; wrote and optimized SQL queries to extract and merge data from a SQL Server database.
  • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, Databricks, Azure Functions, PySpark, and Spark SQL.
  • Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
Azure Data Lake · ETL · PySpark · Spark SQL · Azure Data Factory · SQL · +2
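The fixed-width loading mentioned above can be sketched as follows. This is a minimal, hedged illustration: the field names and widths are hypothetical, and in the actual pipelines this slicing would typically run inside PySpark (e.g. via `substr()` on a single-column DataFrame) rather than in plain Python.

```python
# Hypothetical fixed-width record layout: (field name, width in characters).
FIELDS = [("account_id", 8), ("txn_date", 10), ("amount", 12)]

def parse_fixed_width(line, fields=FIELDS):
    """Slice one fixed-width record into a dict keyed by field name."""
    row, pos = {}, 0
    for name, width in fields:
        # Take the next `width` characters and strip the padding spaces.
        row[name] = line[pos:pos + width].strip()
        pos += width
    return row

# One sample record: 8-char id + 10-char date + 12-char right-aligned amount.
record = "00012345" + "2024-01-15" + "     1250.75"
parsed = parse_fixed_width(record)
```

The same per-field slicing logic maps directly onto Spark column expressions when loading such files into raw tables at scale.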

Capgemini

Associate Consultant

Sep 2020 – Jul 2024 · 3 yrs 10 mos · Noida, Uttar Pradesh, India · Hybrid

  • Domain: Telecom
  • Roles & Responsibilities:
  • Developed PySpark code in Databricks notebooks to populate the Bronze, Silver, and Gold layers; for each layer, a single generic notebook was developed using a metadata-driven approach.
  • Implemented Autoloader (a Databricks feature) for a file-based source system to handle incremental data, resulting in an 80% improvement to the existing pipeline for that source system.
  • Developed Spark jobs for data cleaning and data quality checks, handling issues such as duplicate records and null values.
  • Developed and scheduled dynamic ADF pipelines for data extraction and ingestion from Azure or on-premises sources to Azure storage, using Azure Data Factory activities such as Copy, Stored Procedure, Get Metadata, Lookup, ForEach, Set Variable, and Until.
  • Worked with various file formats, including CSV, JSON, Avro, and Parquet.
  • Worked with ARM templates and REST APIs to load and modify datasets in ADF and ADB.
  • Performed production deployments using CI/CD pipelines after thorough integration testing across the DEV and QA environments.
  • Used Azure DevOps Boards for sprint planning and progress tracking.
  • Used PySpark and SQL to process data in Spark.
PySpark · Databricks · Azure Data Factory · SQL · CI/CD · Data Engineering · +1
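The metadata-driven medallion approach described above can be sketched as follows. The container name, storage account, path convention, and metadata shape are all assumptions for illustration, not the actual project setup; the point is that one generic routine resolves Bronze, Silver, and Gold locations from metadata, so a single notebook can serve every layer.

```python
LAYERS = ("bronze", "silver", "gold")

def layer_path(source, table, layer,
               root="abfss://datalake@account.dfs.core.windows.net"):
    """Build the Data Lake path for a table in a given medallion layer.

    `root` is a hypothetical ADLS Gen2 URI; a real deployment would take
    it from pipeline metadata or a key-vault-backed configuration.
    """
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return f"{root}/{layer}/{source}/{table}"

# In Databricks, the generic notebook would then read incrementally with
# Autoloader against the resolved path, e.g.:
#   spark.readStream.format("cloudFiles") \
#        .option("cloudFiles.format", "csv") \
#        .load(layer_path("crm", "customers", "bronze"))
bronze = layer_path("crm", "customers", "bronze")
```

Keeping the path logic in one pure function is what makes the single generic notebook possible: each pipeline run only supplies a metadata row (source, table, layer) rather than its own hard-coded locations.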

NTPC Limited

Summer Trainee

Jun 2018 – Jun 2018 · Patna Area, India

Bharat Sanchar Nigam Limited

Implant

Dec 2017 – Dec 2017 · Chennai, Tamil Nadu, India

Education

SRM IST Chennai

Bachelor's degree — Electrical and Electronics Engineering

Jan 2016 – Jan 2020
