Abhishek Raviprasad

Product Manager

Bengaluru, Karnataka, India · 9 yrs 9 mos experience

Key Highlights

  • Expert in real-time data architectures and AI integration.
  • Proven track record in scalable data platform design.
  • Strong technical advisor for enterprise-level solutions.

Skills

Core Skills

Apache Kafka · Real-time Data Architecture · ETL · Cloud Architecture · Data Engineering

Other Skills

Apache Flink · Artificial Intelligence (AI) · Large Language Models (LLM) · Retrieval-Augmented Generation (RAG) · Vector Search · Extract, Transform, Load (ETL) · Kubernetes · Architecture · Amazon Web Services (AWS) · Apache Spark Streaming · Databricks Products · Terraform

About

Solutions Architect with 9+ years of experience building scalable data platforms and advising enterprises on modern data and AI architectures. Experienced in designing real-time streaming systems using Kafka and Flink to power low-latency data pipelines, event-driven applications, and AI inference workflows. Hands-on with modern LLM ecosystems and AI development workflows, including API-based model integration, prompt-driven applications, evaluation frameworks, and AI-assisted development tools such as Claude Code. Experienced in architecting systems that integrate enterprise data platforms with LLM-powered applications and intelligent agents. Strong background in data engineering, building scalable data warehouses and production data pipelines using technologies such as Spark, Snowflake, Airflow, DBT, Python, and AWS. Recognized for translating complex technical concepts into practical architectures, delivering technical deep dives, and helping enterprise teams successfully adopt event-driven and AI-driven systems.

Experience

9 yrs 9 mos
Total Experience
4 yrs 2 mos
Average Tenure
1 yr 5 mos
Current Experience

Confluent

Advisory Solutions Engineer

Nov 2024 – Present · 1 yr 5 mos · Bengaluru, Karnataka, India · Remote

  • Partnered with Account Executives across digital-native and mid-market segments to drive new customer acquisition, leading technical discovery, architecture design, and product evaluations for event-driven and real-time data platforms.
  • Acted as a trusted technical advisor to enterprise customers, guiding them from initial evaluation through production deployment of real-time data platforms.
  • Built deep technical expertise across Confluent Cloud and Confluent Platform, advising customers on distributed systems architecture, Kubernetes deployments, networking, connectors, cluster linking, and real-time streaming pipelines.
  • Contributed to internal enablement by creating reusable architecture guides, troubleshooting playbooks, and networking documentation used across the global team.
  • Helped customers design real-time data architectures for AI and intelligent applications, leveraging Kafka and Flink to process streaming data and power low-latency decision systems.
  • Partnered with sales teams to translate business requirements into technical architectures, ensuring alignment between customer objectives and scalable platform design.
  • Collaborated closely with product, engineering, and support teams to resolve complex technical challenges and feed customer insights back into product improvements.
  • Presented at Kafka community meetups, sharing expertise on real-time data architectures and streaming platforms.
Apache Flink · Apache Kafka · Artificial Intelligence (AI) · Large Language Models (LLM) · Retrieval-Augmented Generation (RAG) · Vector Search · +1

Infoworks.io

4 roles

Principal Customer Solutions Engineer

Feb 2024 – Nov 2024 · 9 mos

Extract, Transform, Load (ETL) · Kubernetes · Architecture · Amazon Web Services (AWS) · Cloud Architecture

Senior Customer Solutions Engineer

Jan 2022 – Jul 2024 · 2 yrs 6 mos

  • Provided subject matter expertise to stakeholders on the product and the broader big data/data engineering stack.
  • Built and implemented scalable, centralized data engineering solutions on on-prem, cloud, and hybrid infrastructures, integrating with technologies such as Hadoop, Spark, and Hive.
  • Identified and improved end-to-end feature velocity for data extraction, transformation, and orchestration to optimize enterprise data pipelines and workflows.
  • Built internal and external solutions that translated into cost savings for both internal and external teams.
  • Developed tools and features to aid operations and maintenance.
  • Maintained current knowledge of industry trends and the big data ecosystems deployed by large enterprises.
  • Delivered technical and product training for salespeople, estimators, and engineers at targeted accounts.
  • Resolved customer issues in a manner consistent with the company's mission, values, and financial objectives.
Extract, Transform, Load (ETL) · Amazon Web Services (AWS) · Apache Spark Streaming · Data Engineering

Senior Solutions Engineer

Promoted

Jan 2020 – May 2022 · 2 yrs 4 mos

  • Extensive experience with Hadoop and related processing frameworks such as Spark, Hive, and Sqoop.
  • Hands-on experience with performance and scalability tuning.
  • Strong programming experience in Python.
  • Worked with APIs to extract data for data pipelines.
  • Experience working in AWS, Azure, and GCP clouds, with the ability to architect, design, and implement solutions using cloud components.
  • Troubleshot production issues and performed on-call duties as needed.
  • Working knowledge of workflow orchestration tools such as Apache Airflow.
  • Professional experience with source control, merging strategies, and coding standards, specifically Git.
  • Professional experience in data design and modeling.
  • Developed in a continuous integration environment using tools such as Jenkins and GitLab.
  • Maintained the build and deployment process using build integration tools.
  • Worked and communicated with business stakeholders and architects.
  • Hands-on experience with real-time streaming using Confluent Kafka and Spark Streaming.
  • Experience with data migration and deployment from on-prem to cloud environments.
  • Strong understanding of NoSQL and SQL databases and data warehouses handling terabytes of structured and unstructured data, such as text, images, video, and audio captured over varying time ranges.

Customer Technical Support Engineer

Aug 2018 – Jan 2020 · 1 yr 5 mos

  • Provided comprehensive technical support to customers: understood the business impact of issues and provided resolutions and/or workarounds as appropriate.
  • Assisted customers through Hadoop, Spark, and other distributed computing challenges to get past blocking issues.
  • Resolved issues ranging from simple product queries to questions about achieving specific technical objectives, troubleshooting operational issues, and suggesting best practices for a given scenario.
  • Ensured that Service Level Agreements were met at all times.
  • Reproduced customer issues for diagnosis and further analysis, passing acknowledged product issues to the Engineering team for fixing and QA.
  • Took complete ownership: tracked support issues through to closure with full customer agreement (including some account management responsibilities), providing all relevant internal and external progress updates.
  • Assisted the Solutions (Professional Services) team and other teams as needed, and visited customers when required.
  • Shared knowledge through contributions to knowledge base articles, documentation, forums, blogs, etc.

Oracle

Software Analyst

Jul 2016 – Aug 2018 · 2 yrs 1 mo · Bengaluru Area, India

  • Worked on Oracle Cloud Platform and automation tools such as Ansible and Puppet.
  • Troubleshot, debugged, and resolved OS issues.
  • Good knowledge of Python scripting and task automation.
  • Hands-on experience writing auto-fix scripts and uploading them to Oracle Enterprise Manager (EM) to resolve tickets.
  • Good understanding of Hadoop and data warehousing.
  • Worked on data ingestion and transformation applications in Hadoop as required.
  • Worked with Hadoop/HDFS, YARN/MapReduce, Apache Hive, Apache Sqoop, and Spark as required.
  • Developed custom scripts and packages to report the health of remote servers.
  • Hands-on experience creating dashboards and reports.
  • Good knowledge of ETL (Extract, Transform, Load) for data warehousing.

Education

N M A M Institute of Technology, NITTE

Bachelor's degree

Jan 2012 – Jan 2016

Poornaprajna College, Udupi - 576102

PU — PCMB

Jan 2010 – Jan 2012
