Ravi Kumar N K

Engineering Manager

Bengaluru, Karnataka, India · 13 yrs 4 mos experience

Key Highlights

  • Over a decade of experience in large-scale distributed systems.
  • Expert in Big Data and Data Lakehouse architectures.
  • Proven track record in building high-performing engineering teams.

Skills

Core Skills

Data Engineering · Cloud Data Platforms · Big Data · Software Engineering

Other Skills

AI-driven automation · AWS · Airflow · Algorithms · Apache Iceberg · Apache Phoenix · Apache Pig · Apache Spark · Architectural Design · Avro · Azure Blob Storage · C · C++ · Cloud Applications · Cloud Computing IaaS

About

With over a decade of experience designing, building, and managing large-scale distributed systems, I specialize in architecting resilient, scalable software solutions with a strong focus on technical excellence, performance optimization, and strategic architectural planning.

As a seasoned engineering leader, I have built and scaled high-performing software and data engineering teams from the ground up. I foster a culture of innovation, collaboration, and technical rigor, enabling the development of cutting-edge, data-driven solutions that drive business success.

My expertise lies in Big Data, Data Engineering, and modern Data Lakehouse architectures, where I have led platform modernization efforts, migrating legacy architectures to Lakehouse frameworks for improved scalability, governance, and analytics capabilities. By partnering closely with Product Management, Sales, and customers, I have consistently delivered data-driven solutions aligned with strategic objectives. I am passionate about leveraging advanced technologies, driving operational efficiency, and inspiring teams to achieve engineering excellence in software, data, and cloud-driven ecosystems.

Skill Set:

  • Software Engineering & Architecture: Java, Scala, Python, Microservices, Distributed Systems, Cloud-Native Applications
  • Big Data & Data Engineering: Apache Spark, Apache Iceberg, Delta Lake, Data Pipelines, ETL/ELT Optimization
  • Lakehouse & Cloud Data Platforms: Databricks, Snowflake, AWS, Azure, Google Cloud, Legend (FINOS)
  • Data Governance & Compliance: Data Lineage, Audit Trails, Regulatory Compliance, Data Quality Frameworks
  • DevOps & Infrastructure: Kubernetes, Docker, Terraform, CI/CD, Observability, Performance Tuning
  • Leadership & Strategy: Team Building, Engineering Best Practices, Cross-Functional Collaboration, Digital Transformation

Let’s connect to explore how I can contribute to your vision.

Experience

Sigmoid

2 roles

Engineering Manager

Promoted

Mar 2024 – Present · 2 yrs · Bengaluru, Karnataka, India

  • Leading multiple global Data Engineering teams delivering high-performance, scalable data solutions across cloud and on-prem ecosystems.
  • Driving Data Lakehouse modernization initiatives using Snowflake, Databricks, and Apache Iceberg, and establishing Centers of Excellence (COEs) for reusable frameworks and accelerators.
  • Managing multiple architects and senior engineers to ensure architectural consistency, technical excellence, and solution scalability.
  • Spearheading Agentic AI solutioning, integrating AI-driven automation and reasoning agents into Data Engineering workflows for intelligent orchestration, observability, and performance optimization.
  • Overseeing end-to-end project delivery with a focus on quality, timelines, and cost optimization across multi-region engagements.
  • Leading pre-sales and strategic client engagements with CXOs and technology stakeholders across geographies to shape large-scale data modernization programs.
  • Defining reference architectures and best practices for cloud-native data platforms on AWS and Azure, integrating Spark, Snowflake, Airflow, and AI capabilities.
  • Collaborating with cross-functional teams to translate business needs into robust, production-grade Data + AI Engineering solutions.
  • Implementing delivery governance frameworks emphasizing agility, automation, AI-assisted monitoring, and observability for large-scale data platforms.
  • Driving capability building, innovation, and thought leadership across COEs, focusing on Lakehouse, Snowflake, and Agentic AI initiatives.
Data Engineering · Snowflake · Databricks · Apache Iceberg · Cloud-native applications · AI-driven automation +1

Senior Technical Lead - Data Engineering

Nov 2021 – Feb 2024 · 2 yrs 3 mos · Bengaluru, Karnataka, India

  • Architected and developed end-to-end data engineering solutions leveraging Snowflake, Apache Spark, AWS, DBT, Databricks, Python, Java, and Airflow.
  • Led a team of 10–15 engineers, driving technical excellence, design reviews, and scalable solution development.
  • Designed data pipelines and frameworks enabling real-time and batch ingestion, transformation, and analytics at enterprise scale.
  • Built high-performance, fault-tolerant, and cost-optimized data architectures across AWS and hybrid environments.
  • Collaborated with business and product teams to translate analytical use cases into technical data models and workflows.
  • Implemented CI/CD pipelines, infrastructure automation, and testing frameworks for rapid, reliable delivery.
  • Optimized data modeling, performance tuning, and query efficiency in Snowflake and Spark ecosystems.
  • Championed data governance, observability, and engineering best practices, improving platform reliability.
  • Mentored junior engineers and nurtured a culture of learning, innovation, and technical depth.
  • Developed solution accelerators and reusable frameworks reducing project delivery timelines by up to 40%.
Snowflake · Apache Spark · AWS · DBT · Databricks · Python +4

Intel Corporation

2 roles

Technical Lead - Big Data

Promoted

May 2019 – Nov 2021 · 2 yrs 6 mos

  • Served as Architect/Team Lead, working closely with business customers and architects to design scalable solutions handling terabyte- to petabyte-scale data.
  • Interacted with various BUs and customers to gather and understand requirements and deliver solutions.
  • Designed and developed streaming applications for manufacturing data at scale using Spark, Kafka, and HBase.
  • Led the team implementing various data plugins using big data technologies, improving project effectiveness by 80%; reduced turnaround time for productionizing the ingestion system by making it configuration-driven with easy plug-in capability, cutting production downtime.
Spark · Kafka · HBase · Big Data

Big Data Engineer

Oct 2016 – Apr 2019 · 2 yrs 6 mos

  • Developed data extraction plugins for various data sources (file share, REST API, SFTP, SDK API, etc.) for an in-house data engineering framework, using big data tools such as Hive, Impala, Spark, Kafka, and HBase.
  • Built a schema evolution framework to maintain dynamically changing datasets across various sources, one of the key capabilities enabling the data-driven systems behind the Data Lake.
Hive · Spark · Kafka · HBase · Big Data

Happiest Minds Technologies

Senior Software Engineer

Dec 2015 – Oct 2016 · 10 mos · Bangalore

  • Technologies Used: Apache Spark, Apache Kafka, Apache HBase, Confluent, Azure Blob Storage, Apache Phoenix, REST API, Spring Boot with MyBatis, SQL
  • Programming Languages: Java, Scala
  • E-Learning domain: developed a real-time ingestion and alerting framework leveraging Kafka, Spark, HBase, and Phoenix. Key contributions include:
  • Designed and implemented scalable, out-of-the-box Kafka connectors using the Confluent platform for real-time analytics. Utilized Azure Blob Storage for data persistence and post-processing, scaling the infrastructure to handle petabytes of data.
  • Built transformation modules for streaming data from source systems using Apache Spark and Hadoop MapReduce.
  • Developed an HBase-Phoenix integration component to enhance analytical processing, enabling efficient querying for reporting.
  • Created APIs and microservices using Spring Boot with MyBatis for seamless external consumption.
  • As one of the earliest adopters and contributors to Kafka Connect Pipelines, I developed both source and sink Kafka Connect solutions, optimizing data flow across the ecosystem.
Apache Spark · Kafka · HBase · Azure Blob Storage · Spring Boot · MyBatis +1

Oracle India Pvt. Ltd.

2 roles

Solutions Engineer

Promoted

Sep 2014 – Dec 2015 · 1 yr 3 mos

  • Technologies Used: MapReduce, REST API, Hive, Kafka, HBase, ZooKeeper, Flume, Oracle 11g, Sqoop, ADF
  • Programming Language Used: Java
  • Developed ADF/Java applications for SMBs to provide proofs of concept using Oracle products.
  • Worked with internal analytics teams, developing POCs to showcase the integration of Oracle products with open-source big data tools.
  • Worked on data extraction from Oracle products such as Oracle 11g DB and Oracle CRM, using MapReduce APIs to store data as readable files; developed integrations between Hive and HBase to perform CRUD operations.
  • Implemented orchestration and scheduler jobs using Oozie for seamless data flows between Oracle systems and Hadoop.
MapReduce · Hive · Kafka · HBase · Big Data

Associate Solutions Engineer

Oct 2012 – Sep 2014 · 1 yr 11 mos

Education

International Institute of Information Technology Bangalore

Post Graduate Diploma — Machine Learning and Artificial Intelligence

Jan 2018 – Jan 2019

PES University

Bachelor's degree — Computer Science

Jan 2008 – Jan 2012
