Arshad Ahmad

CTO

Bengaluru, Karnataka, India · 13 yrs 8 mos experience

Key Highlights

  • Expert in Big Data and Cloud technologies.
  • Proven ability to lead complex data projects.
  • Strong analytical skills with a focus on data architecture.
Stackforce AI infers this person is a Big Data and Cloud Architect with expertise in enterprise-level data solutions.

Skills

Core Skills

Data Architecture · Snowflake Cloud · Apache Spark · Data Engineering · Big Data · ETL

Other Skills

Akka · Apache Airflow · Apache Flink · Apache Flume · Apache Hive · Apache Kafka · Apache NiFi · Apache Oozie · Apache Phoenix · Apache Pig · Apache Solr · Apache Spark Streaming · Apache Sqoop · AppDynamics · Azure

About

Experienced Hadoop, Big Data and Cloud consultant with a demonstrated history of working in the information technology and services industry. Designed conceptual and logical models for enterprise applications. Excellent knowledge of data modelling with strong analytical skills. Skilled in Cloudera, Hive, Apache Spark, HBase, Hortonworks, Hadoop, NiFi, Oozie, Sqoop, HDFS, MapReduce, Azure HDInsight, Apache Airflow, Cosmos DB, ADF, Event Hub and other Big Data-related technologies. Proven ability to work independently as well as in a team, motivated to face challenges and meet deadlines. Adapts readily to evolving technology, with a strong sense of responsibility and accomplishment, and excels at learning new technologies. Lead Data Engineer at EDC | Azure Cloud | Apache Spark | Databricks | NoSQL | Python/Scala | ADF | NiFi | Snowflake

Experience

Ecolab Digital Center

4 roles

Lead Architect

Promoted

Jan 2025 – Present · 1 yr 2 mos

Senior Data Architect

Apr 2024 – Dec 2024 · 8 mos

  • Worked as Senior Data Architect for the One Customer Initiative.
Data Architecture · Fivetran · Data Build Tool (DBT) · Snowflake Cloud · Lucidchart · Snowflake REST API +1

Data Architect

Jan 2022 – Apr 2024 · 2 yrs 3 mos

  • Worked as Data Architect for the Ecolab3D platform and the CDS organization.
  • Analyzed problem statements and created technical documents and taxonomy standards documentation.
  • Conceptualized and implemented POCs and pilot projects for various initiatives at EDC.
  • Performed cost-optimization analysis of different approaches and reviewed them as part of the COE team.
  • Implemented design documents and approaches for different use cases on the Modern Analytics Platform in Azure.
  • Implemented POCs to verify the workability of suggested approaches and the data modelling for different use cases.
SQL · Apache Spark · Python (Programming Language) · Kafka Connect · PySpark · Data Engineering +16

Lead Data Engineer

Apr 2019 – Dec 2021 · 2 yrs 8 mos

  • Worked as Lead Big Data Engineer, handling complex big data/cloud implementations in real-time and batch mode (a representative streaming sketch follows this role).
  • Responsibilities:
  • As Data Engineering Lead, guided the team on the technical front, clearing technical impediments and resolving issues.
  • Acted as a liaison between the various stakeholders of a project.
  • Analyzed problem statements and created technical documents and taxonomy standards documentation.
  • Conceptualized and implemented POCs and pilot projects for various projects at EDC.
  • As a reviewer, reviewed the ETL pipelines built on ADF and Snowflake and ensured that standards and guidelines were followed strictly.
  • Technologies worked on: Azure Data Factory, Apache NiFi, Apache Spark, Event Hub, Azure Functions, Cosmos DB, Azure SQL Server, Azure DevOps, Hortonworks.
SQL · Cloudera · Apache Spark · Python (Programming Language) · PySpark · Data Engineering +15
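The sketch below is only an illustration of the kind of real-time pipeline described in this role: reading events from Azure Event Hubs through its Kafka-compatible endpoint with Spark Structured Streaming and landing them as Parquet. The namespace, hub name, connection string, schema and paths are placeholders, not details of the actual EDC implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("eventhub-stream-sketch").getOrCreate()

# Placeholders: substitute a real namespace, event hub name and connection string.
EH_NAMESPACE = "my-namespace"
EH_NAME = "telemetry"
EH_CONN_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."

# Assumed event payload; a real message schema would differ.
schema = StructType().add("deviceId", StringType()).add("reading", DoubleType())

raw = (spark.readStream
       .format("kafka")  # Event Hubs exposes a Kafka-compatible endpoint on port 9093
       .option("kafka.bootstrap.servers", f"{EH_NAMESPACE}.servicebus.windows.net:9093")
       .option("subscribe", EH_NAME)
       .option("kafka.security.protocol", "SASL_SSL")
       .option("kafka.sasl.mechanism", "PLAIN")
       .option("kafka.sasl.jaas.config",
               'org.apache.kafka.common.security.plain.PlainLoginModule required '
               f'username="$ConnectionString" password="{EH_CONN_STR}";')
       .load())

# Parse the JSON payload and keep only the typed columns.
events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Land micro-batches as Parquet; paths are placeholders.
query = (events.writeStream
         .format("parquet")
         .option("path", "/mnt/lake/telemetry")
         .option("checkpointLocation", "/mnt/lake/_checkpoints/telemetry")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```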

Tredence Inc.

Technology Consultant

Aug 2018 – Apr 2019 · 8 mos · Bengaluru, Karnataka, India

  • Worked as Technology Consultant at Tredence Analytics and handled projects related to Big Data architecture design and implementation.
  • Worked on the MDM and data-cleansing project for the client Barracuda Inc.
  • As Technical Consultant/Data Architect, was responsible for designing the ETL strategy and architecture for data integration in the MDM implementation.
  • Understood the business requirements from the client and translated them into technical documents.
  • Created high-level and low-level data and ETL design documentation, developed mappings from source to staging database and from staging to target database, and was involved in designing the database model.
  • Developed efficient Apache Spark jobs for data cleansing of the Account, Contact and Lead objects in Salesforce data.
  • Used Spark ML libraries to perform predictive analysis on the data for deduplication purposes (see the sketch below).
  • Built data models to store the cleansed and deduplicated data in MySQL.
  • Technologies worked on: Azure HDInsight, Python, PySpark, MySQL, Talend, Hive.
SQL · Apache Airflow · Python (Programming Language) · Shell Scripting · PySpark · Data Engineering +8
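As a hedged illustration of the deduplication approach mentioned above, the sketch below uses Spark MLlib's MinHashLSH to flag near-duplicate contact records by approximate Jaccard similarity. The sample records, column names and threshold are invented for the example and are not taken from the project.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml.feature import HashingTF, MinHashLSH, Tokenizer

spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()

# Invented sample rows standing in for concatenated Salesforce Contact fields.
contacts = spark.createDataFrame(
    [(1, "john smith acme corp new york"),
     (2, "jon smith acme corporation new york"),
     (3, "jane doe example ltd boston")],
    ["id", "blob"],
)

# Tokenize the text and hash tokens into sparse feature vectors.
tokens = Tokenizer(inputCol="blob", outputCol="tokens").transform(contacts)
feats = HashingTF(inputCol="tokens", outputCol="features", numFeatures=1 << 18).transform(tokens)

# Fit MinHash LSH and self-join to find candidate duplicate pairs.
lsh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=5)
model = lsh.fit(feats)

pairs = (model.approxSimilarityJoin(feats, feats, 0.6, distCol="jaccard_dist")
         .filter("datasetA.id < datasetB.id")  # drop self-matches and mirrored pairs
         .select(col("datasetA.id").alias("id_a"),
                 col("datasetB.id").alias("id_b"),
                 "jaccard_dist"))

pairs.show()  # pairs within Jaccard distance 0.6 are deduplication candidates
```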

Capgemini

Senior Hadoop Consultant

Mar 2017 – Aug 2018 · 1 yr 5 mos · Bangalore

  • Worked as Senior Hadoop Consultant for a US internet provider.
  • Responsibilities:
  • As a senior consultant, led the team on the technical front, bridged the gap between the onsite and offshore teams, and helped them understand the requirements and technical debt.
  • End-to-end implementation of data pipelines for real-time as well as batch data, based on the use cases.
  • Was part of the COE team and delivered multiple POCs for different clients across domains to finalize the tools to be used for optimal performance.
  • Explored new tools in the big data space and carried out performance-optimization work on different tools and technologies.
  • Technologies and tools worked on: Apache Spark, Apache Phoenix, Apache NiFi, Apache Hive, Apache Oozie, Apache Sqoop, Akka HTTP, Apache Kafka, Apache HBase, Play Framework, Scala and Hortonworks.
SQL · Python (Programming Language) · Shell Scripting · Data Engineering · Apache Phoenix · Apache NiFi +13

IBM India Pvt Ltd

2 roles

Application Developer BigData BigInsights

Promoted

Jun 2015 – Mar 2017 · 1 yr 9 mos

  • Worked for one of the telecommunications services and network providers in Indonesia.
  • Responsibilities:
  • End-to-end implementation of big data pipelines.
  • Part of the team that took the transition from the client and documented the whole process.
  • Learned new tools and technologies as part of the transition to Big Data Developer and completed certification from IBM University.
  • Prepared the mapping, HLD, LLD and unit test case documentation.
  • Worked on the below proofs of concept to help the team build the big data architecture:
  • Apache Solr integration with Apache HBase to create a search engine.
  • Continuous file ingestion from a source using Flume.
  • Built a store-wise revenue report for clients of a multinational bank to give them insight into low-performing areas or stores.
  • Tools used: Cloudera, Apache Spark, Hive, Oozie, HBase, Flume, Solr.
SQL · Cloudera · Apache Flume · Shell Scripting · Data Engineering · Cloudera Impala +11

Package Solution Consultant - CRM

Mar 2015 – May 2015 · 2 mos

SQL · Cloudera · Apache Flume · Shell Scripting · Data Engineering · Cloudera Impala +11

L&T Infotech

Siebel CRM Consultant

Feb 2012 – Feb 2015 · 3 yrs · Vashi, Navi Mumbai

  • Worked as a Siebel configurator for Freescale Semiconductor.
  • Responsibilities:
  • Implementation of Siebel configuration objects and workflows.
  • Migration of the repository from one environment to another for releases.
  • Root-cause analysis of issues reported by the support team.
  • Documentation of unit test cases, mapping documents and support documents.
  • Configured the system for parallel Open UI and HI implementation.
SQL · Siebel CRM · Stored Procedures

Education

Dev Bhoomi Uttarakhand University

Bachelor of Technology (BTech) — Information Technology

Jan 2007 – Jan 2011

Kendriya Vidyalaya
