Vikash Prasad

DevOps Engineer

Bengaluru, Karnataka, India · 9 yrs 5 mos experience

Key Highlights

  • Expert in designing scalable cloud-based data solutions.
  • Led successful data migration projects with significant performance improvements.
  • Passionate about optimizing data platforms for efficiency and cost.

Skills

Core Skills

Data Engineering · Cloud Technologies · Data Migration · ETL · Cloud Engineering · Big Data Processing · Data Warehousing

Other Skills

HVR · Snowflake · Informatica · Google Cloud Platform (GCP) · Python · PySpark · Extract, Transform, Load (ETL) · Databricks · Data Vault · Apache Druid · Google BigQuery · Google Data Studio · Infoworks · Splunk · Computer Science

About

With nearly 10 years of experience in data engineering, I specialize in designing and optimizing scalable, cloud-based data solutions. My expertise spans Databricks, Spark, Azure, Snowflake, SQL, ETL, and real-time data pipelines. I have led enterprise-grade data initiatives focused on cloud migration, data ingestion, transformation, and performance tuning. I work with structured and semi-structured data, ensuring reliability, efficiency, and scalability, and my experience includes large-scale data ingestion, complex ETL processes, and advanced analytics solutions for diverse industries.

Key Expertise:

  • Cloud & Big Data Technologies: Databricks, Spark, Azure, Snowflake, HVR, Fivetran
  • Data Engineering & ETL: Real-time and batch processing, SQL, performance optimization
  • Leadership & Delivery: End-to-end project ownership, stakeholder collaboration, solution architecture

Passionate about building high-performance data platforms, optimizing cloud costs, and enabling data-driven decision-making at scale. Always eager to explore cutting-edge technologies and solve complex data challenges. For deeper dives into data engineering trade-offs and production realities, I write here: https://vixbyte.substack.com/subscribe

Experience

Total Experience: 9 yrs 5 mos
Average Tenure: 3 yrs 1 mo
Current Experience: 5 yrs 6 mos

Quantiphi

2 roles

Associate Architect

Promoted

Jul 2023 – Present · 2 yrs 10 mos

  • As Lead Data Engineer, I lead the development of a large-scale insurance data platform covering real-time and batch data ingestion, transformation, and analytics. The project replicates data from multiple source systems into Snowflake so that stakeholders can access and use data efficiently.

Key Responsibilities:

  • Data Ingestion & Replication: Designed and optimized HVR pipelines for real-time and batch data ingestion from multiple sources.
  • Data Processing & Transformation: Parsed and processed complex JSON data from S3, landing structured, query-ready data in Snowflake.
  • Performance Optimization: Tuned Snowflake queries through clustering, partitioning, and caching.
  • End-to-End Project Delivery: Managed requirement gathering, task execution, and stakeholder collaboration, ensuring smooth data availability.

Key Achievements:

  • Reduced ingestion latency by 40% through optimized HVR replication strategies.
  • Improved query performance by implementing efficient data partitioning.
  • Automated JSON processing, improving data accessibility.
  • Delivered real-time data solutions that support better business decision-making.

This project plays a key role in providing seamless, real-time, and scalable data availability to multiple stakeholders.
HVR · Snowflake · Data Engineering · Cloud Technologies
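The JSON-processing step described above can be illustrated in miniature. The profile doesn't show the actual schemas, so the record layout and `flatten` helper below are hypothetical — a minimal sketch of turning nested S3 JSON into flat rows suitable for a Snowflake landing table:

```python
import json

def flatten(record, parent="", sep="_"):
    """Recursively flatten a nested JSON record into a flat dict
    whose keys map onto columns of a landing table."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

# Hypothetical insurance record; real payloads would be richer.
raw = '{"policy_id": 42, "holder": {"name": "A. Rao", "city": "Pune"}}'
row = flatten(json.loads(raw))
# row == {"policy_id": 42, "holder_name": "A. Rao", "holder_city": "Pune"}
```

In practice Snowflake can also query semi-structured data directly via its VARIANT type; flattening up front trades that flexibility for simpler, column-oriented access.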

Senior Data Engineer

Nov 2020 – Jul 2023 · 2 yrs 8 mos

Project 1: TDM – Teradata to Snowflake Migration

Objective: Migrate all existing databases from Teradata to Snowflake, including conversion of Informatica mappings to equivalent SnowSQL queries.

Key Responsibilities:

  • Developed and implemented data migration pipelines to transfer tables from Teradata to AWS S3 using Infoworks, ensuring a seamless transition.
  • Designed and optimized SnowSQL queries to replicate Informatica mappings, preserving functionality and performance in Snowflake.
  • Validated and tested data integrity post-migration, ensuring consistency between Teradata and Snowflake.
  • Collaborated with cross-functional teams to resolve migration challenges and optimize Snowflake query performance.
Informatica · Snowflake · Data Migration · ETL
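One recurring chore in a Teradata-to-Snowflake migration is translating column types in DDL. The mapping below is a hypothetical, abbreviated sketch (real migrations cover many more types and edge cases), but it shows the shape of the work:

```python
# Hypothetical, abbreviated type map — not the project's actual table.
TERADATA_TO_SNOWFLAKE = {
    "BYTEINT": "TINYINT",
    "INTEGER": "INTEGER",
    "DECIMAL": "NUMBER",
    "VARCHAR": "VARCHAR",
    "TIMESTAMP": "TIMESTAMP_NTZ",
    "CLOB": "VARCHAR",
}

def convert_column(name, td_type):
    """Translate one Teradata column definition to Snowflake DDL,
    preserving any precision/scale suffix like (18,2)."""
    base = td_type.split("(")[0].upper()
    suffix = td_type[len(base):]
    sf_type = TERADATA_TO_SNOWFLAKE.get(base, "VARIANT")
    return f"{name} {sf_type}{suffix}"

print(convert_column("premium", "DECIMAL(18,2)"))  # premium NUMBER(18,2)
```

Falling back to VARIANT for unmapped types is one defensive choice; a production tool would more likely fail loudly and force a reviewed mapping.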

Capgemini

Senior Consultant (I&D)

Feb 2019 – Apr 2020 · 1 yr 2 mos · Bengaluru, Karnataka, India · On-site

  • Led the end-to-end development of a cloud-based data engineering project on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance.
  • Designed and implemented ETL workflows using Pub/Sub, BigQuery, and Bigtable, enabling efficient ingestion, transformation, and storage of high-volume data.
  • Automated data pipeline orchestration by developing daily and monthly job scheduling using Apache Airflow (Cloud Composer), reducing manual intervention and improving operational efficiency.
  • Optimized data workflows by fine-tuning query performance in BigQuery and implementing partitioning and clustering strategies to enhance cost-effectiveness.
  • Developed multiple audit and compliance reports using Google Data Studio, providing key insights for stakeholders and ensuring data integrity.
  • Collaborated with cross-functional teams, including data analysts and business stakeholders, to define data requirements and deliver impactful solutions.
Google Cloud Platform (GCP) · Python · Cloud Engineering · ETL
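The Pub/Sub-to-BigQuery ingestion path described above has a typical core transform: decode a message, shape it into a row, and emit a DATE partition column so BigQuery can prune scans. The field names below are illustrative, not from the original project:

```python
import json
from datetime import datetime, timezone

def to_bq_row(message_data: bytes) -> dict:
    """Shape one Pub/Sub-style JSON payload into a row dict for a
    date-partitioned BigQuery table (hypothetical schema)."""
    event = json.loads(message_data)
    ts = datetime.fromtimestamp(event["epoch_s"], tz=timezone.utc)
    return {
        "event_id": event["id"],
        "event_type": event.get("type", "unknown"),
        # Partitioning on a DATE column lets BigQuery prune partitions
        # when queries filter on event_date.
        "event_date": ts.date().isoformat(),
        "payload": json.dumps(event.get("attrs", {})),
    }

msg = json.dumps({"id": "e1", "type": "audit", "epoch_s": 1600000000}).encode()
row = to_bq_row(msg)
```

In the real pipeline this transform would sit inside a Cloud Composer (Airflow) task or a streaming subscriber rather than a standalone function.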

Infosys

Big Data Developer

Sep 2018 – Dec 2018 · 3 mos · Bengaluru, Karnataka, India · On-site

  • Developed data pipelines using PySpark, Hive, and Apache Druid to enable efficient data processing and analytics.
  • Optimized ETL workflows to ensure scalability and performance across large datasets.
  • Assisted in data ingestion and transformation processes, improving data accessibility for business needs.
PySpark · Python · Big Data Processing
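A PySpark job needs a cluster to demonstrate, but the core aggregation step such pipelines apply can be sketched in plain Python. This is the same shape as a PySpark `rdd.reduceByKey(lambda a, b: a + b)` stage; the event data is hypothetical:

```python
from collections import defaultdict

def aggregate_by_key(rows):
    """Sum values per key — the single-machine equivalent of a
    PySpark reduceByKey over (key, value) pairs."""
    totals = defaultdict(int)
    for key, value in rows:
        totals[key] += value
    return dict(totals)

# Hypothetical event counts; a real job would read these from Hive.
events = [("clicks", 3), ("views", 10), ("clicks", 2)]
# aggregate_by_key(events) == {"clicks": 5, "views": 10}
```

Spark runs this same logic per partition and then merges partial results across executors, which is why the combine function must be associative and commutative.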

NTT Data Services

System Integration Analyst

Jul 2015 – Apr 2018 · 2 yrs 9 mos · Bangalore, India

  • Designed and developed Data Vault models to support scalable and flexible data warehousing.
  • Built and optimized ETL mappings, sessions, and workflows using Informatica PowerCenter to ensure efficient data integration.
  • Conducted performance tuning for sources, targets, mappings, and sessions to enhance processing efficiency.
  • Led ETL workflow deployments in production, ensuring seamless data movement and integrity.
  • Played a key role in migrating ETL workflows across different environments, ensuring minimal downtime and optimal performance.
ETL · Informatica · Data Warehousing
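A small concrete piece of the Data Vault modeling mentioned above: hubs are keyed by a hash of the normalized business key. The sketch below follows one common Data Vault 2.0 convention (trim, uppercase, delimit, MD5); projects vary in delimiter and hash choice, and the example key is hypothetical:

```python
import hashlib

def hub_hash_key(*business_keys: str) -> str:
    """Build a Data Vault hub hash key: trim and uppercase each
    business key, join with a delimiter, then MD5-hash the result
    so the key is deterministic and fixed-width."""
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Hypothetical customer business key; whitespace/case don't matter.
key = hub_hash_key(" cust-001 ")
```

Hashing the business key lets hubs, links, and satellites be loaded in parallel, since each loader can compute the same key independently without a sequence generator.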

Education

International Institute of Information Technology Bangalore

PG Diploma in Data Analytics — Data Science

Jan 2017 – Jan 2018
