Mayur Rao

CEO

Pune, Maharashtra, India · 4 yrs 7 mos experience
Highly Stable

Key Highlights

  • 4 years of experience in Data Engineering.
  • Expertise in designing and optimizing data pipelines.
  • Proficient in batch processing with PySpark and Snowflake.
Stackforce AI infers this person is a Data Engineer specializing in cloud migration and big data processing.

Skills

Core Skills

Data Engineering · Cloud Migration · ETL Development · Big Data Processing

Other Skills

Amazon Elastic MapReduce (EMR) · Amazon S3 · Apache Kafka · Apache Spark Streaming · Azure Databricks · Big Data · Data Ingestion · Data Transformation · Databricks · Distributed Computing · ETL · Hadoop · HDFS · Hive · IBM Db2

About

Data Engineer with 4 years of experience designing, building, and optimising data pipelines for data extraction, transformation, and loading (ETL). Experienced in batch processing with technologies such as PySpark (Databricks), Snowflake, and Hadoop, and in implementing business logic in PySpark to produce critical business reports. Strong analytical and problem-solving skills, with the ability to analyse and interpret data to uncover insights and trends.

Experience

The Citco Group Limited

2 roles

Lead Developer

Dec 2025 – Present · 3 mos · Pune, Maharashtra, India

Data Engineer

Aug 2024 – Dec 2025 · 1 yr 4 mos · Pune, Maharashtra, India

Tata Consultancy Services

2 roles

Data Engineer

Promoted

May 2022 – Jul 2024 · 2 yrs 2 mos · Pune, Maharashtra, India

  • Primarily working on a Cloud Migration project, moving the on-premise setup to Snowflake.
  • As a Data Engineer, involved in designing and developing ingestion and transformation pipelines per business requirements.
  • For the ingestion pipeline, developed a framework to read data from multiple RDBMS and file-based sources via Spark and write it to the Snowflake refined area.
  • For the transformation pipeline, used Databricks as the cloud processing platform to read data from Snowflake, apply heavy and complex transformations on datasets of 20+ billion records, and write the final datasets back to Snowflake (Domain Objects generation).
Snowflake · Spark · Databricks · ETL · Data Ingestion · Data Transformation · +2
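In practice the ingestion framework described above would be a Spark job reading JDBC and file sources into Snowflake; as a rough, library-free sketch of the config-driven dispatch idea (all names, tables, and paths here are hypothetical, not taken from the profile):

```python
# Hypothetical sketch of a config-driven ingestion framework: each source
# entry declares a type (rdbms / file), the framework dispatches to the
# matching reader, then "writes" the rows to a refined area (modelled
# here as a plain dict standing in for Snowflake).

def read_rdbms(cfg):
    # Stand-in for a Spark JDBC read from e.g. DB2/Sybase.
    return [{"id": i, "source": cfg["table"]} for i in range(cfg["rows"])]

def read_file(cfg):
    # Stand-in for a Spark file read (CSV/Parquet) from a landing path.
    return [{"id": i, "source": cfg["path"]} for i in range(cfg["rows"])]

READERS = {"rdbms": read_rdbms, "file": read_file}

def ingest(sources):
    refined = {}                      # stand-in for the Snowflake refined area
    for cfg in sources:
        rows = READERS[cfg["type"]](cfg)   # dispatch on source type
        refined[cfg["name"]] = rows        # write step
    return refined

refined = ingest([
    {"name": "trades", "type": "rdbms", "table": "TRADES", "rows": 3},
    {"name": "prices", "type": "file", "path": "/landing/prices.csv", "rows": 2},
])
```

The design choice being illustrated is that adding a new source type means registering one reader function, not changing the pipeline itself.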

Hadoop ETL Developer

Jul 2021 – May 2022 · 10 mos · Pune, Maharashtra, India

  • Application setup in the Hadoop cluster; created Hive/Impala databases with Sentry roles.
  • Gathered requirements from users and created objects in Hive with partitioning schemes useful to users.
  • Used the Sqoop tool to import data from various RDBMS sources (DB2/Sybase) into HDFS.
  • ETL development with scripts written in Python.
  • 24x7 running jobs performing incremental loads from various RDBMS environments into the Hadoop file system.
Hadoop · Hive · Impala · Sqoop · Python · ETL Development · +1
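The 24x7 incremental loads mentioned above typically track a high-water mark, which is conceptually how Sqoop's `--incremental append --check-column` mode works; a minimal pure-Python sketch of that idea, with all names hypothetical:

```python
# Hypothetical sketch of incremental loading via a high-water mark:
# each run pulls only rows whose id exceeds the highest id already
# loaded, so repeated runs move just the delta from RDBMS to HDFS
# (both modelled here as plain lists).

def incremental_load(source_rows, target_rows):
    # Highest id already present in the target (HDFS stand-in);
    # -1 means the target is empty and a full load happens.
    watermark = max((r["id"] for r in target_rows), default=-1)
    new_rows = [r for r in source_rows if r["id"] > watermark]
    target_rows.extend(new_rows)
    return new_rows

source = [{"id": i, "val": i * 10} for i in range(5)]
target = []

first = incremental_load(source, target)    # initial full load
source.append({"id": 5, "val": 50})         # a new row arrives in the RDBMS
second = incremental_load(source, target)   # only the new row moves
```

The watermark makes each run idempotent with respect to already-loaded rows, which is what keeps an always-on job from duplicating data.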

Education

Shri Vaishnav Institute of Technology and Science, Indore

Bachelor of Technology - BTech — Computer Science

Jan 2017 – Jan 2021

Vindhyachal Academy

High School Diploma — Maths

Jan 2015 – Jan 2017
