
Bidyadhar Barik

Product Manager

Bengaluru, Karnataka, India · 15 yrs 3 mos experience

Key Highlights

  • 16+ years in the software industry, with data engineering expertise.
  • Led large-scale global projects across multiple industries.
  • Proven track record in cloud data architecture and migration.

Skills

Core Skills

Data Engineering · Cloud Data Architecture · Cloud Data Migration · Data Migration · Database Management

Other Skills

AWS · Agile Methodologies · Airflow · Amazon Redshift · Amazon S3 · Amazon Web Services (AWS) · Apache Sqoop · Azure · Big Data · BigQuery · Cassandra · DWH · Data Analysis · Data Engineering Architecture · Data Lakehouse

About

16+ years of experience in the software industry, specializing in end-to-end SDLC for delivering Data Engineering and Data Analytics solutions across On-Premises, GCP, Azure, and AWS environments. Proven expertise in Data Modernization, Cloud Data Architecture, and Cloud Data Migration, with a strong track record of leading large-scale global projects across the Telecom, E-commerce, Supply Chain, and Automobile industries.

  • Led and mentored teams of 20+ professionals across Data Engineering, Testing, and DevOps.
  • Over 7 years of experience leading engineering teams, with proven success in hiring, mentoring, and developing top talent.
  • 8+ years of experience building, managing, and operating highly scalable services and teams in the cloud (GCP, Azure, AWS).
  • 8+ years designing and implementing product roadmaps for Data Architecture, Data Modernization, and Cloud Data Migration initiatives.
  • Mitigated technical debt, including long-term technical architecture decisions, while balancing the product roadmap.
  • Demonstrated technical concepts effectively via live demos and presentations, influencing leadership and key stakeholders.
  • Worked closely with engineering and product leaders on strategic planning, resource prioritization, and team allocation.
  • Led and contributed to technical, product, and design discussions, as well as status reviews.
  • Defined and documented end-to-end data platform architecture in GCP, covering ingestion (Pub/Sub, Dataflow), storage (BigQuery, Cloud Storage), and orchestration (Composer, Workflows).
  • Established data modeling standards (star and snowflake schemas, partitioning, clustering) to optimize query performance and control costs.
  • Developed high-volume migration frameworks from on-premises (structured/unstructured) sources to GCP using Dataproc, PySpark, BigQuery, Cloud SQL, Bigtable, Spanner, Cloud Functions, Dataflow, Pub/Sub, Cloud Shell, GCS buckets, and Python.
  • Built large-scale migration pipelines to Azure using Data Factory, Dataflow, Databricks, Data Lake, PySpark, Blob Storage, Event Hubs, Kafka, Airflow, Cloud Storage, Pandas, and Python.
  • Proficient in Agile, CI/CD, and DevOps practices, with expertise in Kubernetes, Docker, and pipelines.
  • Expertise in Data Analysis & Reporting.
  • Strong understanding of Data Governance, Security & Compliance.
  • Adept in Automation, Budgeting, Cost Optimization, and Resource Planning.
  • Leadership in building high-performance teams and driving a culture of excellence.

Contact: +91 9730650597, bidyadharbarik320@gmail.com

Experience

Tech Mahindra

Technical Architect - GCP/Azure

Jun 2024 – Present · 1 yr 9 mos · Bengaluru, Karnataka, India · Hybrid

  • Designed end-to-end Data Modernization, Data Engineering Architecture, and Cloud Data Migration from Teradata to GCP BigQuery.
  • Modernized 2000+ DataStage ETL jobs to GCP Dataflow/BigQuery stored-procedure ETL for improved performance and scalability.
  • Designed ELT/ETL data pipeline processes to efficiently load data from both cloud and on-premises servers into data lakes.
  • Migrated scheduling tools from Autosys to Astronomer, enhancing workflow automation.
  • Planned resources and estimated project timelines to optimize delivery and execution.
  • Led a team of 20+ professionals across India and the USA, driving collaborative data engineering initiatives.
  • Developed technical roadmaps for Big Data & Analytics solutions, aligning with business goals and industry best practices.
  • Managed stakeholders, ensuring effective communication and alignment of project objectives.
  • Handled escalations, providing strategic solutions to mitigate risks and ensure smooth project execution.
  • Focused on engineering excellence, incorporating automation, CI/CD, and best practices to improve efficiency.
  • Provided mentorship (technical and non-technical) to junior team members, fostering skill development and growth.
Data Modernization · Data Engineering Architecture · Cloud Data Migration · GCP · Dataflow · BigQuery +2

A.P. Moller - Maersk

Data Engineer Architect - Azure

Dec 2022 – Jun 2024 · 1 yr 6 mos · Bengaluru, Karnataka, India · Hybrid

  • Designed data migration architecture from SQL Server to PostgreSQL.
  • Built ELT/ETL pipelines for cloud and on-premises data loading.
  • Migrated 500+ ETL jobs from SQL Server stored procedures and Informatica to Azure Databricks.
  • Developed high-volume migration frameworks to Azure using Databricks, ADF, PySpark, Data Lakehouse, and Blob Storage.
  • Migrated 20 TB of data from SQL Server to Azure SQL Server using DMA (Data Migration Assistant).
  • Designed CDC pipelines from on-premises SQL Server to Azure SQL Managed Instance.
  • Conducted code reviews and implemented best practices.
  • Delivered POCs on ChatGPT AI and OpenMetadata for data catalog, profiling, governance, and lineage.
  • Developed scalable ETL/ELT pipelines using Azure Data Factory and Databricks to ingest, cleanse, and transform large volumes of structured and semi-structured data.
  • Configured a self-hosted integration runtime to enable secure and reliable data migration from on-premises SQL Server databases to Azure Data Lake.
  • Established a folder structure and partitioning strategy in ADLS to optimize storage and query performance.
  • Automated data ingestion workflows with monitoring, logging, and error handling, improving data availability.
  • Collaborated with data scientists and analysts to provide clean, curated datasets, accelerating analytics and reporting.
  • Cleansed, standardized, and transformed raw data into usable formats.
  • Applied partitioning, compression, and schema evolution strategies.
  • Tuned queries and pipelines for faster performance and lower cost.
  • Managed storage tiers (hot, cold, archive) in Azure Data Lake Storage.
  • Ensured data met governance policies (naming standards, PII handling).
  • Performed data quality checks, deduplication, and anomaly detection.
Data Migration · Azure · Databricks · Data Lakehouse · ETL · Data Engineering +1
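The CDC pipelines described above can be sketched as a watermark-based delta pull. This is a generic illustration only; the field names (`updated_at`, `id`) and the in-memory "table" are assumptions, not details of the actual SQL Server-to-Azure pipelines.

```python
from datetime import datetime

def extract_delta(rows, watermark):
    """Return rows changed after the watermark, plus the new watermark.

    Each row is a dict with an 'updated_at' datetime, mimicking a
    watermark column on a source table in a CDC-style incremental load.
    """
    delta = [r for r in rows if r["updated_at"] > watermark]
    # Advance the watermark to the latest change seen; keep it unchanged
    # when no new rows arrived.
    new_watermark = max((r["updated_at"] for r in delta), default=watermark)
    return delta, new_watermark

# Example: a small in-memory stand-in for a source table.
rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 1, 5)},
    {"id": 3, "updated_at": datetime(2024, 1, 9)},
]
delta, wm = extract_delta(rows, datetime(2024, 1, 2))
```

In a real pipeline the watermark would be persisted between runs and the filter pushed down to the source query; log-based CDC tools replace the polling entirely.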

Johnson Controls

2 roles

Data Engineer Architect- GCP/AWS

Promoted

Mar 2021 – Dec 2022 · 1 yr 9 mos

  • Provided technical leadership to data engineering team members in developing, constructing, testing, and maintaining first-class cloud data architectures.
  • Led and mentored a team throughout design, development, and delivery phases, keeping the team intact in high-pressure situations.
  • Worked as a software professional specializing in Oracle 12c, performance tuning, MySQL, PostgreSQL, MongoDB, Python, Google Cloud, AWS, Airflow, Pandas, PySpark, Hadoop, Sqoop, Hive, SQL*Loader, and Unix shell scripting.
  • Developed a framework for high-volume data migration from on-premises sources (structured and unstructured data) to Google Cloud Platform using PySpark and Python.
  • Hands-on experience deploying data analytics and data pipeline solutions in GCP and AWS using BigQuery, Redshift, Datastore, Dataproc, Data Pipeline, Airflow, S3 buckets, and Cloud Storage.
  • Strong experience working with data warehouses, data lakes, Hadoop Dataproc, and PySpark clusters (GCP).
  • Working knowledge of creating CI/CD pipelines using Jenkins and Team Foundation Server (repository).
  • Hands-on experience in Change Data Capture (CDC), data migration, transformation, PL/SQL programming, Python for ETL, Unix shell scripting, Google Cloud, Agile, and microservices as a platform.
  • Created automated unit tests and data validation scripts; implemented system health monitoring and alerts.
  • Experience in various ingestion (batch and real-time) and curation techniques for structured/unstructured data in distributed environments.
  • Hands-on experience using tools like Jenkins, SVN, Git, Bitbucket, Jira, Tspace, and DevOps.
  • Developed automated data delivery pipelines and services to integrate data from the data lake into internal and external consuming applications and services.
Data Engineering · GCP · AWS · Data Pipeline · Airflow · Cloud Data Architecture
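The data validation scripts mentioned above can be sketched as a small quality-check pass over a batch of records. This is a minimal generic sketch; the column names and the report shape are assumptions, not the project's actual scripts.

```python
def quality_report(rows, required):
    """Count nulls per required column and duplicate rows.

    rows: list of dicts (one per record); required: columns that form
    the record's identity and must be populated.
    """
    # Null/empty counts per required column.
    nulls = {c: sum(1 for r in rows if r.get(c) in (None, "")) for c in required}
    # Duplicate detection on the required-column tuple.
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(r.get(c) for c in required)
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"nulls": nulls, "duplicates": dupes}

rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "a"},   # exact duplicate
    {"id": 2, "name": None},  # missing name
]
report = quality_report(rows, ["id", "name"])
```

In practice checks like these run inside the pipeline and raise alerts when thresholds are breached, which is how the "monitoring and alerts" bullet above would consume them.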

Senior Data Engineer

Jun 2018 – Mar 2021 · 2 yrs 9 mos

  • Worked as a software professional specializing in Oracle 12c, performance tuning, MySQL, PostgreSQL, MongoDB, Python, Google Cloud, AWS, Airflow, Pandas, PySpark, Hadoop, Sqoop, Hive, SQL*Loader, and Unix shell scripting.
  • Developed a framework for high-volume data migration from on-premises sources (structured and unstructured data) to Google Cloud Platform using PySpark and Python.
  • Developed Google Cloud automation scripts for data export and import from GCP Cloud SQL & BigQuery production to test environments.
  • Hands-on experience deploying data analytics and data pipeline solutions in GCP and AWS using BigQuery, Redshift, Datastore, Dataproc, Data Pipeline, Airflow, S3 buckets, and Cloud Storage.
  • Strong experience working with data warehouses, data lakes, Hadoop Dataproc, and PySpark clusters (GCP).
  • Working knowledge of creating CI/CD pipelines using Jenkins and Team Foundation Server (repository).
  • Analyzed complex, high-volume, high-dimensionality data from varying sources using a variety of ETL and data analysis techniques.
  • Experience in various ingestion (batch and real-time) and curation techniques for structured/unstructured data in distributed environments.
  • Designed, developed, and implemented ETL pipelines for the enterprise DWH and data marts, ingesting both structured and unstructured data.
  • Strong experience designing ETL, DWH, and data lake solutions with large datasets using Oracle databases and big-data technologies; expert in advanced SQL query writing and PL/SQL logic design in Oracle, MySQL, and PostgreSQL, and in analytics databases like Amazon Redshift and GCP BigQuery.
  • Hands-on experience using tools like Jenkins, SVN, Git, Bitbucket, Jira, Tspace, and DevOps.
  • Developed automated data delivery pipelines and services to integrate data from the data lake into internal and external consuming applications and services.
Oracle · PL/SQL · Performance Tuning · Data Migration · Database Management

IBM India Private Limited

Tech Lead

Jun 2016 – Jun 2018 · 2 yrs · Bangalore

  • Responsible for requirement gathering, analysis, design, and development of enhancements in the application.
  • Prepared low-level design (LLD) documents for the application.
  • Involved in preparation of design documents for all impacted methods and new functionality.
  • Developed Oracle procedures, functions, packages, materialized views, triggers, and shell scripts to capture CDC (Change Data Capture) deltas.
  • Developed a data integration framework (real-time, near-real-time, and batch).
  • Managed tables, indexes, constraints, views, sequences, synonyms, and stored program units.
  • Designed the overall architecture for data migration from Oracle to Oracle and from Oracle to MongoDB.
  • Used Bulk Collect and bulk binds to improve performance by minimizing the number of context switches between the PL/SQL and SQL engines.
  • Participated in design discussions with application architects and suggested design changes to improve database performance.
  • Performed Oracle performance tuning using SQL_TRACE, DBMS_PROFILER, and EXPLAIN PLAN.
  • Wrote complex queries and subqueries for analysis work and generated reports to validate results.
  • Created MongoDB collections along with collection schema validation.
  • Wrote complex queries to fetch delta records for near-real-time (NRT) processing.
  • Performance-tuned long-running queries and the objects involved.
  • Developed shell scripts to load data through SQL*Loader and developed an alert mechanism for the file system.
  • Developed an automation script that generated generic source code for triggers and procedures to identify delta records.
  • Attended daily Scrum of Scrums calls and addressed blockers.
  • Achievements:
  • Designed an event generation framework for real-time data processing.
  • Developed an automation script that generated generic source code for triggers and procedures to identify delta records.
  • Developed automation scripts to reduce manual work, batch processing effort, and resource usage.
PL/SQL · Unix Shell Scripting · Data Migration · Database Management
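The Bulk Collect technique above amounts to fetching rows in batches instead of one at a time. A rough Python analogy (not the original Oracle PL/SQL) is `cursor.fetchmany()`, shown here against an in-memory SQLite table; the table and batch size are illustrative assumptions.

```python
import sqlite3

# In-memory table standing in for an Oracle source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(10)])

cur = conn.execute("SELECT id, val FROM src ORDER BY id")
batches = []
while True:
    # fetchmany(n) plays the role of BULK COLLECT ... LIMIT n: each call
    # pulls a chunk of rows, reducing per-row round trips.
    batch = cur.fetchmany(4)
    if not batch:
        break
    batches.append(batch)
conn.close()
```

The PL/SQL version gains the same way: fewer engine context switches per row, at the cost of holding one batch in memory at a time.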

Atos

System Analyst

Jun 2014 – Jun 2016 · 2 yrs · Pune, Maharashtra, India

  • Involved in requirement gathering, analysis, and design phases.
  • Enhanced the application by developing and delivering new demands to support the business.
  • Attended calls and meetings with clients and internal stakeholders, providing technical and functional subject-matter expertise.
  • Performed migration activity to support the Disha program.
  • Created PL/SQL procedures, functions, and packages per business requirements, providing technical solutions and development for requirements raised by clients.
  • Developed Unix shell scripts with embedded SQL*Loader calls and PL/SQL statements to extract data from the legacy application as flat files and load the extracted data into the new application.
  • Worked with product managers on work estimates and design.
  • Wrote complex SQL scripts and analytical functions.
  • Improved query performance through query optimization, tracing the query execution plan (EXPLAIN PLAN).
  • Used job scheduling tools such as cron.
  • Performed SIT, UAT, production rollout, post-production sanity, and handover to the production team, and resolved production issues.
  • Developed shell scripts to pull files from different sources through FTP.
  • Environment: PL/SQL, Unix shell scripting, Oracle 12c, Oracle 11g, TOAD, SQL Developer, SQL*Plus, PuTTY, and WinSCP.
PL/SQL · Unix Shell Scripting · Data Migration · Database Management

Synova

2 roles

Senior Software Engineer

Promoted

Jul 2012 – Jun 2014 · 1 yr 11 mos

  • Involved in requirement gathering, analysis, and design phases.
  • Created PL/SQL procedures, functions, packages, and triggers per business requirements, providing technical solutions and development for requirements raised by clients.
  • Wrote complex SQL scripts and analytical functions.
  • Improved query performance through query optimization, tracing the query execution plan (EXPLAIN PLAN).
  • Developed shell scripts to pull files from different sources through FTP.
  • Used job scheduling tools such as cron.
  • Analyzed the existing database and was involved in extensive analysis.
  • Responsible for unit testing, prepared unit test cases, and supported SIT and UAT.
  • Environment: PL/SQL, Unix shell scripting, Oracle 11g, Oracle 10g, TOAD, SQL Developer, SQL*Plus, PuTTY, and WinSCP.
PL/SQL · Unix Shell Scripting · Data Migration · Database Management

Software Developer

Dec 2010 – Jul 2012 · 1 yr 7 mos

  • Coded and debugged stored procedures, packages, and views in Oracle databases using SQL and PL/SQL, which were called by user-oriented application modules.
  • Extensive querying using SQL*Plus/TOAD to monitor quality and integrity of data.
  • Developed shell scripts to pull files from different sources through FTP.
  • Developed shell scripts to load data through SQL*Loader and developed an alert mechanism for the file system.
  • Analyzed the existing database and was involved in extensive analysis.
  • Developed Oracle Reports and PL/SQL packages to perform certain specialized functions.
  • Involved in unit-level, module-level, and integration testing at the primary level before modules were delivered to the quality department.
  • Analyzed queries using the SQL Trace facility and the EXPLAIN PLAN utility to obtain the execution process; optimized queries by modifying data access methods, index strategies, join types and operations, and providing hints.
  • Environment: PL/SQL, Unix shell scripting, Oracle 11g, Oracle 10g, TOAD, SQL Developer, SQL*Plus, PuTTY, and WinSCP.
PL/SQL · Unix Shell Scripting · Database Management
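The explain-plan-driven tuning above follows one loop: read the plan, change access paths (e.g. add an index), read the plan again. A small analogy using SQLite's `EXPLAIN QUERY PLAN` (standing in for Oracle's EXPLAIN PLAN; the table and index names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 50) for i in range(1000)])

def plan(sql):
    # Each EXPLAIN QUERY PLAN row's 4th column is the human-readable step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE customer_id = 7")  # full table scan
conn.execute("CREATE INDEX idx_cust ON orders(customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 7")   # index search
```

The `before` plan reports a table scan and the `after` plan an index search, the same before/after comparison one would do with Oracle's plan table when tuning access methods and index strategies.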

Education

MCA PTU

Master of Computer Applications (MCA), Computer Software Engineering
