Siddhant Gupta

Product Engineer

Noida, Uttar Pradesh, India · 9 yrs 9 mos experience

Key Highlights

  • Over 10 years of Azure cloud experience.
  • Expert in architecting scalable data solutions.
  • Strong command of Hadoop ecosystem tools.

Skills

Core Skills

Data Engineering · Team Leadership · Azure · Big Data · Hadoop

Other Skills

Architecture · Azure Data Factory · Big Data applications · Confluence · Data ingestion · Data quality parameters · Databricks · Denodo · DevOps · Employee Benefits · Flume · HDFS · Hive · Jira · Learning

About

Azure Solution Architect @Fractal and Certified Azure Data Engineer with over 10 years of hands-on experience across the Azure cloud ecosystem, including Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse, and Azure Fabric, plus a strong command of the Hadoop ecosystem (Hive, Pig, Sqoop, Flume, HBase, Oozie), along with Spark and Scala. Proven expertise in architecting and implementing scalable data solutions, developing advanced Spark logic, and collaborating with stakeholders to align cloud-based data strategies with business objectives. Skilled at enabling organizations to leverage Azure capabilities to drive data-driven decision-making and digital transformation. Actively seeking opportunities that offer challenging problem-solving environments and team-building/managerial roles, with scope to apply cloud technologies to deliver impactful business outcomes while continuously learning and evolving with emerging tools and platforms.

Experience

Fractal

2 roles

Architect

Promoted

Apr 2025 – Present · 11 mos

Senior Engineer

Feb 2022 – Mar 2025 · 3 yrs 1 mo

  • Managed the team and worked extensively on data ingestion from different source systems, across the ingestion types supported at Telstra.
  • Collected requirements from clients on data ingestion and assessed the feasibility of solutions in the data core environment, following established standards.
  • Wrote data quality parameters for ingested data using Databricks and maintained data quality across the platform.
  • Worked as a team lead, supporting junior engineers.
  • Used Confluence extensively to document the data ingestion process and stored code in DevOps.
  • Ran sprint planning, daily stand-ups, and retrospectives, and created user stories in Jira.
Data ingestion · Data quality parameters · Confluence · DevOps · Jira · Data Engineering +1

Wipro

Senior Software Engineer

Jan 2021 – Feb 2022 · 1 yr 1 mo · Bangalore Urban, Karnataka, India

  • Worked as a developer.
  • Developed data pipelines in Azure Data Factory for various business rules, involving data movement with the copy activity and data transformations using Databricks or Hive activities.
  • Developed Hive scripts and Hive transactional tables to update and conditionally delete data.
  • Created Denodo views for data virtualization over Hive table data.
  • Created Power BI dataflows on top of Denodo views for reporting needs.
  • Wrote Spark code in Databricks for various transformations over incoming data.
  • Automated email notifications on data-pipeline failures using Logic Apps.
  • Enhanced existing Spark applications to add functionality based on business demand.
Azure Data Factory · Databricks · Hive · Denodo · Power BI · Logic Apps +3

Infosys

Technology Analyst

Apr 2019 – Dec 2020 · 1 yr 8 mos · Pune, Maharashtra, India

  • Worked with Telenet.
  • Worked mainly on Big Data applications (Sqoop, Hive, Flume, Pig, MapReduce, HBase) and the Spark framework.
Big Data applications · Sqoop · Hive · Flume · Pig · MapReduce +3

Cognizant

Hadoop Developer

Nov 2015 – Oct 2018 · 2 yrs 11 mos · Chennai, Tamil Nadu, India

  • Worked with Deutsche Telekom on a project that stored data from various channels, processed it, and persisted it in HDFS.
  • Created Sqoop jobs to load data from HDFS to Oracle RDBMS and vice versa.
  • Used Sqoop to load data into Hive, applying compression techniques.
  • Used HCatalog to transfer data from Pig to Hive.
  • Worked on MapReduce tasks, developing Java code for data extraction and processing per user and business requirements.
  • Involved in KT sessions as the project migrated to Spark; created RDDs and worked with Spark SQL.
HDFS · Sqoop · Hive · MapReduce · Spark · Hadoop +1

Education

Galgotias University

Bachelor of Technology - BTech — Computer Science

Jan 2011 – Jan 2015
