Gaurav Rawat — Data Engineer
As a Data Engineer focused on transforming data into growth, I have developed expertise in Hadoop, Hive, Python, Spark, Kafka, SQL, AWS, Databricks, Snowflake, Terraform, Airflow, Spark Streaming, CI/CD, data modeling, and data pipeline technologies.
- Experience designing, developing, deploying, debugging, and maintaining Big Data pipelines using Hadoop, Apache Spark, Python, SQL, AWS, Airflow, and Hive.
- Experience developing high-performance, scalable solutions that extract, transform, and load Big Data.
- Experience developing PySpark modules using the DataFrame API and Spark SQL, including their optimization.
- Proficiency in writing and optimizing SQL queries.
- Experience building and optimizing Big Data pipelines, architectures, and datasets at terabyte and petabyte scale.
- Experience working with unstructured and semi-structured data.
- Experience with the AWS stack: EC2, EMR, S3, Airflow, Athena, CloudFormation.
Skills: Big Data, Python, SQL, Spark, AWS EMR, Databricks, Snowflake, DBT (Data Build Tool), Hadoop, Hive, Docker, Airflow, CI/CD, Terraform, GitHub, Data Quality, Kafka, Spark Streaming, ETL Development, and Data Warehousing.
Feel free to reach out to me at gauravrawat141999@gmail.com
Location: Bengaluru, Karnataka, India
Experience: 3 yrs 11 mos
Skills
- Data Engineering
- ETL Development
Career Highlights
- Expert in building scalable Big Data pipelines.
- Proficient in AWS and Databricks for data solutions.
- Strong background in data quality and ETL frameworks.
Work Experience
Autodesk
Software Engineer 2 - Data Engineering (2 yrs)
Nike
Data Engineer (1 yr 7 mos)
Protium
Data Engineer-Intern (3 mos)
Education
Bachelor of Technology - BTech at J.C. Bose University of Science and Technology, YMCA