Rohan Bhagwatkar — Data Engineer
I currently work as a Data Engineer, building big data processing pipelines that turn raw data into meaningful insights powering analytical dashboards. Over approximately five years, I have designed, developed, and delivered high-performing real-time and batch data processing pipelines that handle terabytes to petabytes of data daily in the cloud. These pipelines have enabled teams to run real-time analytics on user experience metrics and to supply processed data for search and recommendation services. The journey so far has given me exposure and proficiency in:
Programming Languages - Java, Scala, and Python
Big Data Frameworks - Flink, Spark, Hadoop, and Kafka (Core, Connect, Streams)
Cloud - AWS (Amazon Web Services): S3, EC2, EMR, Lambda, Redshift, Kinesis (Streams, Firehose, and KDA), SQS, SNS, Athena, Glue, and CloudWatch
Databases - PostgreSQL, Elasticsearch, and Druid
Dashboarding Tools - Kibana, Grafana, Redash, and Superset
APM - New Relic
CI/CD - Jenkins
SCM - GitHub and GitLab
Location: Bengaluru, Karnataka, India
Experience: 5 yrs 8 mos
Skills
- Apache Flink
- Apache Kafka
- PySpark
- Apache Spark
Career Highlights
- Designed high-performing data processing pipelines.
- Reduced AWS costs by $2.5M annually.
- Built real-time analytics for improved user experience.
Work Experience
DISH Network
Senior Data Engineer (3 yrs 10 mos)
Data Engineer (1 yr 10 mos)
Saathi Global Education Network
Software Developer (5 mos)
Credit Suisse
Technology Analyst (2 mos)
IvLabs, VNIT
Summer Research Intern (2 mos)
Education
BTech (Bachelor of Technology), Visvesvaraya National Institute of Technology