Satish K

Data Engineer

India · 3 yrs 10 mos experience

Key Highlights

  • Expert in building scalable data pipelines.
  • Proficient in optimizing ETL processes for performance.
  • Strong collaboration skills in Agile environments.

Skills

Core Skills

Data Pipeline Development & Maintenance · ETL Optimization

Other Skills

Amazon Web Services (AWS) · Apache Spark · Azure DevOps · Control-M · HDFS · Hadoop · Hive · Hive SQL · PuTTY · PySpark · SQL · Snowflake · Spark · Sqoop · WinSCP

About

Data Engineer delivering solutions across on-premises and cloud platforms. At Standard Chartered Bank (Singapore GBS), I specialized in on-premises data engineering: building and optimizing ETL workflows using Hive SQL and Control-M, managing secure data transfers with PuTTY and WinSCP, and deploying workflows via Azure DevOps. I also have strong cloud experience with AWS, designing scalable ETL pipelines using Spark, Glue, and S3. I ensured high-quality, timely reporting through data validation, troubleshooting, and change-request handling, while collaborating with stakeholders in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives.

Key Skills & Expertise:

✓ Data Pipeline Development & Maintenance: Designing, building, and maintaining data pipelines using AWS, Apache Spark, and other modern data technologies.
✓ ETL Optimization: Tuning and optimizing Spark jobs and ETL processes for better performance, scalability, and reliability.
✓ ETL Tools: Hands-on experience with AWS Glue, PySpark, Airflow, Azure DevOps (VSTS), Jenkins, and Dataflow.
✓ SQL & Data Validation: Strong proficiency in writing complex SQL queries for data validation, ensuring accuracy and completeness post-transformation.
✓ Testing & Quality Assurance: Designing and executing comprehensive test cases and plans to ensure data integrity throughout the ETL process.
✓ Problem Solving & Troubleshooting: Diagnosing and resolving issues, collaborating with development teams to ensure high-quality data integration and flow.
✓ Tools Knowledge: Familiar with Heartland, Postman, Rally, Qtest, WinSCP, PuTTY, and other software tools essential for data engineering and testing.

I am passionate about driving data-driven decision-making through automation and the development of high-performing, scalable data systems. My goal is to continuously optimize processes, reduce bottlenecks, and ensure data integrity at every stage of the pipeline.
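The post-transformation validation described above can be sketched with standard-library SQL. This is a minimal illustration only: the table, columns, and rows are invented stand-ins, not taken from any actual project.

```python
import sqlite3

# Hypothetical post-transformation validation: the table and columns
# below are illustrative stand-ins, not from a real pipeline.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders_transformed (
        order_id INTEGER,
        customer_id INTEGER,
        amount REAL
    );
    INSERT INTO orders_transformed VALUES
        (1, 101, 250.0),
        (2, 102, NULL),   -- null amount: should be flagged
        (2, 102, NULL);   -- duplicate order_id: should be flagged
""")

# Check 1: no NULLs allowed in mandatory columns after transformation.
null_count = conn.execute(
    "SELECT COUNT(*) FROM orders_transformed WHERE amount IS NULL"
).fetchone()[0]

# Check 2: key uniqueness (no duplicate order_id values).
dup_count = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT order_id FROM orders_transformed
        GROUP BY order_id HAVING COUNT(*) > 1
    )
""").fetchone()[0]

print(f"null amounts: {null_count}, duplicate keys: {dup_count}")
```

The same null and uniqueness checks translate directly to Hive SQL or Spark SQL; only the connection layer differs.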

Experience

Tata Consultancy Services

Senior Data Engineer

Jun 2025 – Present · 9 mos · Bengaluru, Karnataka, India · Hybrid

  • Abacus Staffing and Services Pvt Ltd
  • I specialized in on-premises data engineering by building and optimizing ETL workflows using Hive SQL, scheduling and monitoring batch jobs through Control-M, and managing secure data transfers via PuTTY and WinSCP. I was also responsible for deploying workflows using Azure DevOps pipelines, ensuring smooth CI/CD integration. My role involved performing data validation, reconciliation, and quality checks to deliver accurate and timely business reporting, as well as handling bug fixes and change requests. In addition, I collaborated with cross-functional teams and actively participated in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to align deliverables with business requirements.
Hive SQL · Control-M · PuTTY · WinSCP · Azure DevOps · Data Pipeline Development & Maintenance +1
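The reconciliation step mentioned in this role can be illustrated with a minimal sketch. The partition names and row counts below are invented for illustration; in practice the counts would come from source and target queries.

```python
# Minimal sketch of source-vs-target reconciliation after an ETL load.
# The per-partition counts below are invented for illustration.
source_counts = {"2024-01-01": 1200, "2024-01-02": 1350, "2024-01-03": 1100}
target_counts = {"2024-01-01": 1200, "2024-01-02": 1349, "2024-01-03": 1100}

def reconcile(source, target):
    """Return partitions whose row counts differ between source and target."""
    mismatches = {}
    for partition, expected in source.items():
        loaded = target.get(partition, 0)
        if loaded != expected:
            mismatches[partition] = (expected, loaded)
    return mismatches

mismatches = reconcile(source_counts, target_counts)
print(mismatches)  # → {'2024-01-02': (1350, 1349)}
```

A mismatch like the one above would typically trigger a reload or a bug-fix change request rather than silently passing the load.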

Neostats

Big Data Developer

Aug 2024 – Apr 2025 · 8 mos · Bengaluru, Karnataka, India

  • Worked on a data engineering project for Standard Chartered Bank – Singapore Global Business Services, delivering solutions across on-premises and cloud platforms.
  • Built and optimized ETL workflows using Hive SQL and Control-M.
  • Managed deployments via Azure DevOps and handled secure data transfers with PuTTY and WinSCP to ensure high-quality, timely business reporting.
Hive SQL · Control-M · Azure DevOps · PuTTY · WinSCP · Data Pipeline Development & Maintenance +1

Capgemini

Consultant

Dec 2021 – Jun 2022 · 6 mos · Bengaluru, Karnataka, India · Remote

  • Worked on a data engineering project for Unilever using on-premises big data platforms.
  • Developed ETL workflows with Apache Spark and Hive SQL, implemented data quality checks, automated processes via shell scripting, and managed data integration using HDFS and Sqoop to support accurate and timely business reporting.
Apache Spark · Hive SQL · HDFS · Sqoop · Data Pipeline Development & Maintenance · ETL Optimization

Wipro

2 roles

Senior Analyst

Promoted

Feb 2021 – Jan 2022 · 11 mos · Hyderabad, Telangana, India · Hybrid

  • Worked as an Analyst for a Google client, handling geospatial data processing and analytics.
  • Developed and optimized workflows using SQL and big data tools such as Hive and Spark for POI (Points of Interest), geocoding, and road-mapping datasets.
  • Performed data validation, quality checks, and transformation to enhance mapping accuracy and support global location-based services for Asian and European regions.
SQL · Hive · Spark · Data Pipeline Development & Maintenance · ETL Optimization
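One common check in POI and geocoding validation work of the kind described above is flagging near-duplicate records by great-circle distance. The coordinates and the 100 m threshold below are hypothetical, chosen only to illustrate the technique.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Two hypothetical POI records roughly 50 m apart: likely the same place.
d = haversine_km(12.9716, 77.5946, 12.9720, 77.5949)
is_duplicate = d < 0.1  # flag pairs closer than 100 m as candidate duplicates
```

In a Spark workflow the same function would typically run as a UDF or be expressed in SQL over candidate pairs produced by a coarse spatial join.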

Analyst

Jan 2020 – Feb 2021 · 1 yr 1 mo · Hyderabad, Telangana, India · Hybrid
