Rajat K.

Software Engineer

Bengaluru, Karnataka, India · 12 yrs 3 mos experience

Key Highlights

  • Expert in Big Data tools and frameworks.
  • Strong leadership experience in tech teams.
  • Proficient across multiple programming languages.

Skills

Core Skills

Big Data · Scala · API Development

Other Skills

Spark · SQL · dbt · Team Leadership · Docker · Kubernetes · Git · CI/CD · Databricks · Amazon Redshift · Java · Machine Learning · Hadoop · HTML · Python

About

Experienced engineer with a demonstrated history of working across multiple tech industries. A platforms person at heart. Skilled in Big Data tools such as Hadoop, Hive, Spark, Falcon, and Lens. Fluent in Java, Scala, Python, JS, and Bash. Strong tech professional with a Bachelor of Technology in Computer Science from IIT Delhi.

Experience

12 yrs 3 mos
Total Experience
2 yrs 2 mos
Average Tenure
5 mos
Current Experience

Google

Staff Software Engineer

Dec 2025 – Present · 5 mos · Bengaluru, Karnataka, India · Hybrid

Career break

Full-time parenting

Sep 2025 – Nov 2025 · 2 mos · Bengaluru, Karnataka, India

  • Took dedicated time to support and care for my newborn twins. This period allowed me to reset, prioritize family, and return to work with renewed energy and clarity, while informally keeping up with industry trends and technical learning.

Prophecy.io

Architect

Mar 2020 – Aug 2025 · 5 yrs 5 mos · Bengaluru, Karnataka, India

  • Execution (Mar 2020 – Jun 2021)
    Developed key features for integrating with Spark runtime systems from scratch:
      • Enabled execution on Databricks and on-prem Spark
      • Introduced step-by-step debug mode with interim data visualization
      • Implemented file system browsing and data preview capabilities
  • IDE (Jul 2021 – Jul 2022)
    Worked on critical areas of the compiler side of the product:
      • Developed a code optimizer for cleaner code generation
      • Built an Incremental Compilation Framework and Schema Analysis engine
      • Enabled recursive structures (subgraphs) and editing of reusable pipeline entities
  • Platform
    As an early engineer, contributed to foundational processes:
      • Designed release processes for the Scala monorepo and shared libraries
      • Customized Docker builds to leverage layer caching for faster pushes and deploys
      • Added monitoring capabilities to all Scala services
      • Enabled cross-compilation for prophecy-libs, supporting multiple Spark versions and variants
  • Team Lead: Execution (Aug 2022 – Present)
    Responsible for driving features and enhancements on the runtime side of the product:
      • Shipped multiple runtime-side enhancements in collaboration with the UI team
      • Mentored engineers on Scala architecture, code reviews, and task decomposition
      • Delivered a new product line, Jobs IDE, and customer code release pipelines
      • Translated customer feedback into feature improvements and documentation updates
      • Partnered with the product team to shape the roadmap and drive execution
      • Led multiple product revamps as scale and usage grew
      • Started and facilitated biweekly internal knowledge-sharing sessions
Spark · SQL · dbt · Big Data · Scala · Team Leadership +5

Sprinkle data

Founding Engineer

Sep 2018 – Feb 2020 · 1 yr 5 mos · Bengaluru, Karnataka, India

  • Worked across a wide spectrum of problems for our SaaS Data Platform offering, broadly categorized below:
  • ► Product
  • Pure SaaS: Added multi-tenancy support so that multiple organisations can hold accounts on one installation, significantly reducing initial setup time for new trials.
  • Ingestion product: End-to-end design and implementation of an ingestion product for batch, stream, and hybrid data sources. The abstractions developed for this enabled faster iteration when implementing new data sources; the product ingests tens of billions of rows daily.
  • Incremental processing: Added micro-batching support in the pipeline product and the ingestion product.
  • Embedded Dashboards: Led the effort to enable embedded Dashboards for customers.
  • Notebook Compute: Led the effort to embed Jupyter Notebooks in the product to enable advanced exploratory analyses.
  • ► Platform
  • Form framework: Designed a framework for creating dynamic forms for entities having a hybrid schema. The framework is heavily used for external integrations.
  • Command framework: Designed a functional framework based on reactive streams for running and monitoring hierarchical tasks.
  • Migration Service: Embedded service for managing schema changes across version upgrades.
  • ► Customer Success and Operations
  • Actively involved in customer interactions from trial to conversion to solutioning.
  • Helped customers meet SLAs by making timely suggestions on Hadoop cluster configuration and capacity.
  • The product serves thousands of users every day, ingesting tens of billions of rows and running tens of thousands of jobs daily.
API Development · Amazon Redshift

Inmobi

3 roles

Technical Lead

Jun 2016 – Jul 2018 · 2 yrs 1 mo

  • ► Summarization Framework
  • Led a team of 3 in building a generalized Summarization Framework for big data.
  • The framework uses Apache Spark and Apache Falcon to generate micro-batching pipelines that summarize data across arbitrary dimension cuts and multiple time rollups.
  • Processes hundreds of GBs of data every day.
  • ► Data 2.0
  • Led development from the front for a major organizational restructuring.
  • Involved in end-to-end changes for the data platform from pipelines to reporting.
  • Provided critical support in the face of another challenge: DC migration.
  • ► ML Platform
  • Explored and benchmarked existing model-serving frameworks.
  • Involved in the development of an in-house ML model-serving framework.
  • The framework serves models developed on a variety of platforms, including Spark, scikit-learn, VW, and TensorFlow, with low latency and high throughput in an online serving path.
  • Technologies used: Spark, Scala, Docker, Mesos, VW
  • ► Elastic Yarn on Cloud
  • Involved in building auto-scaling into Apache YARN on top of Azure's cloud platform with the help of containerization.

Senior Software Engineer

Promoted

Jun 2015 – Jun 2016 · 1 yr

  • ► Unified reporting:
  • Involved in building a Unified Analytics and Reporting platform from the ground up.
  • The platform can intelligently choose among multiple data sources in a cost-effective way.
  • Contributed extensively to the core engine, a RESTful backend API, a CLI tool, a Python client, and a Slack bot.
  • Technologies used: Hadoop, Hive, Lens, Vertica.
  • Powers thousands of users, who run 200k+ queries every day.

Software Engineer

Jul 2013 – Jun 2015 · 1 yr 11 mos

  • Part of the team that built Yoda, the in-house ad-hoc analytics platform: http://technology.inmobi.com/projects/yoda

The Apache Software Foundation

3 roles

Contributor at Apache Falcon

Promoted

Nov 2015 – Aug 2016 · 9 mos · Bengaluru, Karnataka, India

  • Designed a new CLI for Apache Falcon and implemented it in a team of 2. Tracking JIRA: https://issues.apache.org/jira/browse/FALCON-1596

Contributor at Apache Hive

Promoted

Jun 2015 – Aug 2016 · 1 yr 2 mos · Bengaluru, Karnataka, India

  • Formalized and contributed features required by InMobi to Apache Hive. My contributions: https://github.com/apache/hive/commits?author=prongs.
  • Major Changes:
  • Enriching task status with MR-level details
  • Enabling a client to connect, disconnect and reconnect.

PMC and Committer for Apache Lens

Dec 2014 – Nov 2017 · 2 yrs 11 mos · Bengaluru, Karnataka, India

  • Lens provides a unified analytics interface. It aims to cut across data-analytics silos by providing a single view of data across multiple tiered data stores and an optimal execution environment for each analytical query, seamlessly integrating Hadoop with traditional data warehouses so they appear as one.
  • My contributions to Apache Lens: https://github.com/apache/lens/commits?author=prongs
  • Major Contributions:
  • Partition Timeline
  • Retry Framework
  • Monitoring framework
  • Cube-Segmentation
  • CLI Revamp, enhancements
  • Python, JS Clients, Bot
  • Related Time Dimensions
  • Particularly meaty commits:
  • https://github.com/apache/lens/commit/b58749e2061e0e731fc1855e0bf4a3b37c601c38
  • https://github.com/apache/lens/commit/d6b1216922f1438a7fc000dfcdd9121d87c65149
  • https://github.com/apache/lens/commit/38ab6c6082b6221502daac979551e8c5fca72241
  • https://github.com/apache/lens/commit/dc1fafa91c8407db8e0f1bfdde24df8f5fec7bce
  • https://github.com/apache/lens/commit/803448a18e901a5c3dd9e4a70aa03935be05790b
  • https://github.com/apache/lens/commit/b805ee989188c5889e6472b48572d126b6515414
  • https://github.com/apache/lens/commit/2e5748a8c5dcff9627b8d125820371f1d4667d61
  • https://github.com/apache/lens/commit/640f0568eb35f12b8e674a6a5fee2acd7616d66c

Intuit

Software Engineer Intern

May 2012 – Jul 2012 · 2 mos · Bangalore

  • Worked on Internationalization of an online product.
  • Developed a platform from scratch for sharing of locale-specific settings.
  • Built the UI with HTML and the Dojo JavaScript framework, and the backend with SQL and Java Beans.

Education

Indian Institute of Technology, Delhi

Bachelor of Technology (B.Tech.) — Computer Science

Jan 2009 – Jan 2013

Swami Keshwanand Senior Secondary School, Laxmangarh, Alwar

Senior Secondary — PCM

Jan 2006 – Jan 2009
