Siladitya Sarkar

Data Engineer

Pune, Maharashtra, India
4 yrs 10 mos experience
Highly Stable

Key Highlights

  • Expert in cloud data engineering and migration.
  • Proven track record in optimizing ETL workflows.
  • Strong experience in real-time data monitoring solutions.

Skills

Core Skills

Data Engineering · Cloud Migration · ETL Development · Operational Reliability · Data Migration

Other Skills

ADF pipelines · Airflow · Amazon Web Services (AWS) · Azure Data Factory · Azure Data Lake Storage · Azure SQL Data Warehouse · BODS · BigPanda · CI/CD · Databricks · Hadoop · JDBC connectivity · Liquid Clustering · Microsoft Azure

About

As an Authorised Officer - Data Engineer at UBS, I focus on modernizing data workflows, leveraging Databricks, Azure Data Factory, and PySpark to optimize performance and ensure seamless data migration. My recent contributions include transitioning on-premises databases to cloud-based solutions, enhancing data governance through Unity Catalog, and integrating real-time monitoring for operational reliability. Holding a Bachelor of Technology in Electronics and Communications Engineering from Maulana Abul Kalam Azad University of Technology, I am skilled in Apache Spark, Microsoft SQL Server, and Apache Airflow. My goal is to continue driving innovation in cloud data engineering and deliver impactful solutions that enhance operational efficiency and data processing capabilities.

Experience

UBS

Authorised Officer - Data Engineer

Aug 2024 – Present · 1 yr 7 mos · Pune, Maharashtra, India · Hybrid

  • Migrated 200+ on-prem database tables to Databricks, creating Delta tables, transitioning storage to ADLS, and implementing Unity Catalog for governance and security. Optimized performance using Z-ordering and Liquid Clustering, improving query efficiency.
  • Modernized 25+ ETL workflows, converting shell-scripted jobs into optimized PySpark workflows orchestrated via ADF pipelines. Designed 10+ ADF pipelines to integrate and manage daily data loads into Azure SQL Data Warehouse.
  • Integrated ADF with BigPanda for real-time monitoring and alerting, ensuring operational reliability.
  • Developed optimized PySpark scripts and Azure SQL queries, improving data processing, transformation, and storage for large-scale datasets. Established JDBC connectivity between Databricks and Azure SQL Data Warehouse, optimizing migrated tables and ensuring seamless data transfer.
  • Implemented the Medallion Architecture in Azure Data Lake Storage for newly created Delta tables, enhancing data organization, quality, and accessibility.
  • Reorganized Delta tables using PySpark and optimized migrated tables by populating data from Azure SQL Database into Databricks, leveraging JDBC connectivity with Azure SQL Data Warehouse for seamless data transfer and improved query performance.
  • Automated Databricks cluster and job creation using Databricks Asset Bundles (DAB) and DevPod, enabling seamless environment promotion from Dev to Pre-Prod to Prod. Streamlined notebook deployment and version control via Git integration, improving CI/CD efficiency and reducing manual effort.
  • Streamlined end-to-end deployment of Databricks notebooks using Databricks Asset Bundles and UBS Deploy, including orchestration from DevPods, version control via Git (feature → main branch), and environment promotion from Dev to Staging to Pre-Prod through a custom-built UBS Deploy pipeline.
Databricks · Azure Data Factory · PySpark · Unity Catalog · Azure SQL Data Warehouse · JDBC connectivity
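As an illustration of the Delta table tuning described above, here is a minimal sketch of the Databricks SQL statements involved. The table and column names are hypothetical, invented for illustration, and the helpers simply build the statements a notebook would pass to `spark.sql`:

```python
def optimize_zorder(table: str, cols: list[str]) -> str:
    """Databricks SQL: compact a Delta table's files, co-locating rows on the Z-order columns."""
    return f"OPTIMIZE {table} ZORDER BY ({', '.join(cols)})"

def enable_liquid_clustering(table: str, cols: list[str]) -> str:
    """Databricks SQL: switch a Delta table to Liquid Clustering on the given columns
    (a table uses either Z-ordering or Liquid Clustering, not both)."""
    return f"ALTER TABLE {table} CLUSTER BY ({', '.join(cols)})"

# Hypothetical usage inside a Databricks notebook:
# spark.sql(optimize_zorder("gold.trades", ["trade_date", "account_id"]))
# spark.sql(enable_liquid_clustering("silver.positions", ["book_id"]))
```

Z-ordering pays off when queries filter on a few high-cardinality columns; Liquid Clustering removes the need to re-run a full rewrite as data and filter patterns change.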

Deloitte

2 roles

Data Engineer II - SQL, Python, PySpark, Snowflake, AWS, Hadoop, Airflow, CI/CD, Data Migration

Promoted

Nov 2021 – Jul 2024 · 2 yrs 8 mos · Hyderabad, Telangana, India

  • Managed a healthcare project within Deloitte's Data Management team, focusing on cloud migration, ETL pipeline orchestration, and data optimization, delivering significant revenue growth and efficiency improvements.
SQL · Python · PySpark · Snowflake · AWS · Hadoop
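The Snowflake-based cloud migration in this role can be illustrated with a small sketch: building the `COPY INTO` statement used to bulk-load staged files into a Snowflake table. The table, stage, and format names are assumptions for illustration only, not details from the profile:

```python
def copy_into(table: str, stage: str, file_format: str = "PARQUET") -> str:
    """Snowflake SQL: bulk-load files from a named stage (e.g. backed by S3)
    into a target table, continuing past bad rows."""
    return (
        f"COPY INTO {table} FROM @{stage} "
        f"FILE_FORMAT = (TYPE = {file_format}) ON_ERROR = 'CONTINUE'"
    )

# Hypothetical usage, e.g. from an Airflow task that runs SQL against Snowflake:
# cursor.execute(copy_into("claims.raw_events", "claims_stage"))
```

In practice a statement like this would be one templated task in an orchestrated pipeline, with upstream tasks landing the extracts in the stage.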

Data Engineer I - Deloitte Consulting

Mar 2021 – Oct 2021 · 7 mos · Hyderabad, Telangana, India

  • Led data migration projects from SAP BOM to SAP HANA using BODS, ensuring data accuracy and integrity.
  • Developed and deployed jobs in SAP BODS to meet client requirements, showcasing strong problem-solving skills.
  • Automated complex BOM analysis and product chain hierarchy, improving efficiency and generating insightful reports.
SAP BOM · SAP HANA · BODS · Data Migration · Data Engineering
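The BOM-analysis automation mentioned above can be sketched in miniature: expanding a bill of materials into its full component chain. The data model and part names here are invented for illustration and assume an acyclic BOM:

```python
def explode_bom(bom: dict[str, list[str]], root: str) -> list[str]:
    """Depth-first expansion of a bill of materials:
    list every component reachable from the root assembly."""
    out: list[str] = []
    stack = [root]
    while stack:
        part = stack.pop()
        for child in bom.get(part, []):  # leaf parts have no entry
            out.append(child)
            stack.append(child)
    return out

# Hypothetical product chain: a pump assembled from a motor and a housing.
bom = {"PUMP": ["MOTOR", "HOUSING"], "MOTOR": ["ROTOR", "STATOR"]}
components = explode_bom(bom, "PUMP")
```

A report generator would then join each expanded component back to its master data to produce the hierarchy and analysis outputs.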

Education

Maulana Abul Kalam Azad University of Technology, West Bengal (formerly WBUT)

Bachelor of Technology (BTech), Electronics and Communications Engineering

Aug 2016 – Jul 2020
