Maneesh K Bishnoi

Software Engineer

Bengaluru, Karnataka, India · 9 yrs 11 mos experience

Key Highlights

  • 10 years of expertise in data-driven solutions.
  • Proficient in AWS and GCP for data management.
  • Databricks Certified Associate Developer for Apache Spark.

Skills

Core Skills

Data Engineering · Cloud Computing · Cost Optimization · Big Data Solutions · Big Data · SEO Optimization · Big Data Processing

Other Skills

Agile Methodologies · Amazon Relational Database Service (RDS) · Amazon Web Services (AWS) · Apache Airflow · Apache Kafka · Apache Spark · Artificial Intelligence (AI) · Big Data Analytics · Cloud Cost · Cloudera Impala · Data Architecture · Data Lake · Data Lakes · Data Warehouse Architecture · Distributed Systems

About

  • Engineer with 10 years of expertise in designing and implementing data-driven solutions. Skilled in addressing complex architectural and scalability challenges, with a focus on building robust data platforms for large-scale batch and real-time processing.
  • Proficient in the AWS and GCP cloud platforms, adept at leveraging their services for efficient data management and processing.
  • Domain expertise spans the Banking & Finance, EdTech, and E-commerce industries, enabling tailored solutions to industry-specific challenges.
  • Databricks Certified Associate Developer for Apache Spark 2.4 and AWS Certified Developer Associate; proficient in Python 3.
  • Well-versed in data ingestion, transformation, and analysis using Hadoop-ecosystem components such as Spark, Hive, MapReduce, Hudi, Sqoop, Impala, HDFS, YARN, NiFi, and Airflow.
  • Experienced in building data lake pipelines for both batch and near-real-time processing using Hadoop and Spark technologies.
  • Skilled in the programming languages Java, Python, Scala, and Go.
  • Hands-on experience importing and exporting data between databases such as Oracle and MySQL and HDFS, Hive, and Redshift.
  • Expertise in collecting and storing stream data, such as log data, in HDFS using Kinesis/Kafka, ensuring real-time data availability for analysis.
  • Knowledgeable in NoSQL databases including HBase, Cassandra, and DynamoDB, enabling diverse data storage solutions based on project requirements.
  • Experienced with Airflow to automate and parallelize ETL pipelines, enhancing operational efficiency.
  • Strong understanding of the Software Development Life Cycle (SDLC), ensuring systematic development and deployment of data solutions.
  • Good exposure to Core Java, Servlets, JSP, Hibernate, Spring, and Spring MVC, complementing data engineering expertise with full-stack development capabilities.

Experience

9 yrs 11 mos
Total Experience
2 yrs 1 mo
Average Tenure
1 yr 5 mos
Current Experience

Walmart Global Tech India

Staff Engineer

Nov 2024 – Present · 1 yr 5 mos · Bengaluru, Karnataka, India · On-site

  • As a Staff Engineer at Walmart, I design and scale cloud-native data pipelines for real-time enrichment and distribution of Walmart’s global product catalog. My work powers the ingestion and processing of billions of daily events—hundreds of terabytes—leveraging GCP, Kafka, and Scala. I enable seamless sharing of enriched catalog data with ad partners like Google, Meta, and TikTok, ensuring high reliability and performance at web scale.
Apache Spark · Data Lakes · Systems Design · Apache Kafka · Artificial Intelligence (AI) · Data Engineering · +1

Unacademy

2 roles

Technical Lead - Data Engineering

Jul 2023 – Nov 2024 · 1 yr 4 mos · Bengaluru, Karnataka, India

  • 📌 Leading all data engineering initiatives at Unacademy.
  • 📌 Driving cost optimization across all data engineering initiatives.
  • 📌 Designing and maintaining scalable data pipelines, and ensuring data infrastructure aligns with business needs.
  • 📌 Driving innovation by implementing new technologies, optimizing data processing systems, and collaborating with cross-functional teams to support data-driven decision-making.
Data Lakes · Data Warehouse Architecture · Distributed Systems · Data Architecture · Data Engineering · Cost Optimization

Senior Software Engineer- Big Data

Jun 2022 – Jun 2023 · 1 yr · Bengaluru, Karnataka, India

  • 📌 Developed high-quality, modern pipelining, warehousing, and reporting solutions and delivered them to customers across teams, making data more useful to the enterprise.
  • 📌 Designed data solutions that help customers, and thereby the organization, grow fast.
  • 📌 Developed a transactional data lake using Hudi and Spark, supporting a wide range of sources and formats:
  • ➡️ ACID compliance using Apache Hudi.
  • ➡️ Support for any data source: RDBMS, REST APIs, WebSockets, streaming data, flat files, and clickstreams.
  • ➡️ Cost-efficient solution.
  • ➡️ Universal platform for all data.
  • ➡️ Built adaptable pipelines for the data lake using Spark and Airflow, facilitating schema evolution and data validations.
  • ➡️ Integrated the Hive query engine for BI reporting on the data lake and transitioned analytical SQL workloads from Redshift using a DSL.
  • 📌 Constructed ETL/reverse-ETL pipelines transferring terabytes of CRM data between RDBMSs while preserving entity relationships.
Java · Amazon Web Services (AWS) · Amazon Relational Database Service (RDS) · Hudi · Scala · Apache Spark · +3

Walmart Global Tech India

Software Engineer III - Big Data

Feb 2021 – Jul 2022 · 1 yr 5 mos · Bengaluru, Karnataka, India

  • As a data engineer on Walmart's SEO product, I worked to drive organic traffic and enhance the company's online visibility on Google and other search engines. Through innovative data engineering approaches, we created a powerful keyword data lake that offers a comprehensive 360-degree view of all keywords with retail intent being searched by users. With this tool, we were able to make strategic decisions and optimize our SEO strategy, ultimately improving Walmart's search engine rankings and driving more organic traffic to the site.
  • Tech Stack: Java, Python, Scala, Apache Spark, Hadoop, Google Cloud Platform (GCP), GraphQL, Hive, Cassandra, Airflow, and others.
Java · PySpark · Big Data · Amazon Relational Database Service (RDS) · GraphQL · Apache Airflow · +5

Clairvoyant LLC

Big Data Engineer

Feb 2018 – Jan 2021 · 2 yrs 11 mos · Pune Area, India

  • Current project: responsibilities include data ingestion, data transformation, and data analysis using various Hadoop techniques.
  • Important insights below:
  • Dealt with about a billion events per day, with an overall data size of around 4 TB in the data lake.
  • Built an automated ingestion pipeline to handle snapshots of the SOR tables.
  • Built common libraries that can be reused across various projects for the client.
  • Implemented ETL pipelines that span different networks.
  • Delivered the project in phases.
  • Worked with the client's technical team to understand the integration points.
  • Provided production support.
  • Previous project: worked on an Internet of Things (IoT) solution using Big Data technologies.
  • Important insights below:
  • Developed a reliable historical-data (sensor/calibrator data) load pipeline from SQL Server to Apache Kudu using Sqoop, Apache Hive, and Apache Impala.
  • Predicted abnormalities in real-time sensor data arriving through Eclipse Kura, using an OSGi bundle embedding a PMML file that contains the machine learning model for prediction.
  • Worked on Eclipse Kapua, an IoT cloud platform.
  • Developed shell scripts to monitor the various processes.
  • Stored the same real-time data directly in the cloud (Eclipse Kapua) and then sent it to Apache Kudu for business analysis using Kafka and Spark Streaming.
Amazon Relational Database Service (RDS)

Tata Consultancy Services

3 roles

Hadoop Developer

Promoted

Mar 2017 – Feb 2018 · 11 mos

  • Roles and responsibilities: developing, and helping team members complete, POCs for the client, including Sensex log data processing (Excel file processing in MapReduce) and a health care management system.
  • Big Data stack: MapReduce, Pig, Hive, MySQL, Sqoop, HBase.
  • Key points:
  • Analyzed and transformed stored data by writing MapReduce and Pig jobs based on business requirements.
  • Participated in multiple big data POCs to evaluate different architectures, tools, and vendor products.
  • Experimented with dumping data from MySQL to HDFS using Sqoop.
  • Involved in developing Pig scripts for processing customer data.
  • Involved in creating Hive tables and loading and analyzing data using Hive queries.
  • Handled importing of customer data from various data sources.
  • Loaded and transformed large sets of structured and semi-structured data.
  • Implemented partitioning, dynamic partitions, and buckets in Hive.
  • Worked with Sqoop for importing metadata from relational databases.
MapReduce · Pig · Hive · MySQL · Sqoop · HBase · +1

Java Developer

Mar 2016 – Feb 2018 · 1 yr 11 mos

  • Worked on Barclays' Online Payment Services project.
  • The project was mainly based on the development and enhancement of online payments, funds transfer, and bill payment.
  • Key points:
  • Reintegrated code from the live branch to the existing branch using Git.
  • Used JIRA to track task completion.
  • Used Jenkins to deploy project releases to local testing environments (SIT).
  • Used Confluence to create a repository of information gained while working on the project.
  • Used HP Quality Center to track bugs/defects for each release.
  • Unit tested each scenario.
  • Supported monthly releases for new changes and fixes.
  • Monitored new changes in production using Splunk.
  • Fixed problem records to resolve live incidents in payments.
  • Used FATWIRE/ZBTT for content management.

Assistant System Engineer-(Trainee)

Jan 2016 – Mar 2016 · 2 mos

  • Started my career with TCS at Garima Park, Gandhinagar (Gujarat).
  • Completed the Initial Learning Program (ILP) in the Java stream. Training mainly focused on Java, Servlets, Collections, JDBC, Oracle Database, HTML, CSS, JSP, jQuery, AJAX, and Bootstrap.
  • At the end of the training, I completed an online trading project using the above technologies.

Bodacious IT Hub

Engineering Intern

Feb 2014 – Jun 2014 · 4 mos · Greater Jaipur Area

  • Completed my internship at Bodacious IT Hub Pvt. Ltd. in Core Java technology.

Education

Government Engineering College Bikaner

Bachelor of Technology (B.Tech.) — Computer Science Engineering

Jan 2011 – Jan 2015

Apex International School - India

12th — Science & Mathematics

Jan 2010 – Jan 2011
