Varun Ajmera

Product Engineer

Bengaluru, Karnataka, India · 8 yrs 10 mos experience
AI Enabled · AI/ML Practitioner

Key Highlights

  • Expert in building scalable data pipelines and applications.
  • Proven track record of optimizing data processing efficiency.
  • Strong experience in both backend development and data engineering.

Skills

Core Skills

Data Engineering · Apache Spark · AWS · Databricks · Backend Development · Web Development

Other Skills

API Development · API Integration · AWS Lambda · Adobe Photoshop · Airflow · Amazon DynamoDB · Amazon S3 · Amazon Simple Notification Service (SNS) · Amazon Web Services (AWS) · Apache Kafka · Azure Databricks · Blockchain Developer · C · Cascading Style Sheets (CSS) · CodeIgniter

About

An enthusiastic engineer working in Data Engineering, with 5+ years of experience building data- and web-intensive applications and tackling challenging architectural and scalability problems across domains, using technologies such as Python, Hadoop, Kafka, Spark, Git, SQL, HQL, and AWS.

Roles & Responsibilities

  • Building scalable big-data pipelines, applications, and data warehouses with Hadoop, Kafka, SQL, and PL/SQL.
  • Knowledge of relational database design concepts and coding, with experience in Apache HAWQ and PostgreSQL (creating/modifying tables, views, sequences, aggregate functions, etc.).
  • Proficient in Agile and Waterfall methodologies.
  • Experienced in interfacing with onsite stakeholders, including onsite support.
  • Knowledge of installing, configuring, and using Hadoop ecosystem components such as MapReduce, HDFS, Hive, Spark, and Kafka.
  • Used Kafka to collect, aggregate, and store web log data from sources such as web servers, and pushed it to HDFS.
  • Used Hive for data processing and batch data filtering.
  • Implemented a distributed messaging queue using Apache Kafka.
  • End-to-end ownership of developing, unit testing, and migrating ETL code with quality and minimal supervision.
  • Able to quickly and independently learn new technologies under the pressure of consistent, high-profile project deliverables.
  • Expertise in writing complex SQL queries for table partitioning and data cleansing.
  • Handled slowly changing dimensions (SCD Type 1 and Type 2).
  • Involved in batch scheduling and processing (full and incremental) using Informatica, ensuring on-time delivery of extract reports to business analysts even during non-business hours.

Skill Set

  • Big Data: Apache Hadoop, Databricks
  • File System: Hadoop Distributed File System (HDFS)
  • Computing Paradigm: MapReduce, Apache Spark
  • Data Store: Hive
  • Streaming: Kafka
  • RDBMS: MySQL, Apache HAWQ, PostgreSQL
  • Schema Evolution & File Formats: Parquet, ORC, CSV
  • Languages: Python, PHP, C
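The SCD handling mentioned above was done with SQL and Informatica; as an illustration only, here is a minimal plain-Python sketch of an SCD Type 2 merge (function and field names are hypothetical, not from the actual implementation):

```python
def apply_scd2(dimension, incoming, today):
    """Minimal Slowly Changing Dimension Type 2 merge: expire current rows
    whose attributes changed and append a new current version, so the
    dimension preserves full history. Rows are dicts for illustration."""
    latest = {row["key"]: row for row in incoming}
    result, handled = [], set()
    for row in dimension:
        change = latest.get(row["key"])
        if row["is_current"] and change and change["value"] != row["value"]:
            # Close out the old version...
            result.append(dict(row, is_current=False, end_date=today))
            # ...and open the new one, effective from today.
            result.append({"key": row["key"], "value": change["value"],
                           "start_date": today, "end_date": None,
                           "is_current": True})
            handled.add(row["key"])
        else:
            result.append(dict(row))
            if row["is_current"]:
                handled.add(row["key"])
    # Keys never seen before get a fresh current row (the insert path).
    for key, change in latest.items():
        if key not in handled:
            result.append({"key": key, "value": change["value"],
                           "start_date": today, "end_date": None,
                           "is_current": True})
    return result
```

An SCD Type 1 update would instead overwrite `value` in place, keeping no history.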

Experience

Growtharc

Lead Data Engineer

Nov 2024 – Present · 1 yr 4 mos · Bengaluru, Karnataka, India · Remote

  • Designed and implemented an LLM-powered solution using the OpenAI API to interpret complex business logic and automatically generate summarized reporting stored procedures, reducing manual effort and significantly accelerating data analysis and business decision-making.
  • Re-architected a Kotlin-based data pipeline from Snowflake to Elasticsearch using Apache Spark, reducing processing time for 262 million records from 32 hours to 12 hours, enabling faster reporting cycles and enhancing overall operational efficiency.
  • Enhanced another pipeline transferring data from Oracle to Elasticsearch via Logstash, reducing processing time for 1.4 million records from 8 hours to just 14 minutes with Apache Spark, leading to reduced infrastructure costs and faster data availability for downstream systems.
  • Led the successful delivery of a Proof of Concept (POC) for scalable ETL pipelines leveraging Snowflake and DBT, resulting in improved data accuracy, performance, and maintainability.
Python (Programming Language) · Snowflake · Data Build Tool (DBT) · Microsoft SQL Server · Data Engineering · Apache Spark

Dish Network Technologies

Senior Engineer

Oct 2022 – Nov 2024 · 2 yrs 1 mo · Bengaluru, Karnataka, India · Hybrid

  • Architected and executed a 5G data pipeline following Medallion Architecture using AWS (Lambda, DynamoDB, S3, Glue Crawler, Parquet), enhancing storage efficiency and data processing.
  • Engineered and optimized a Gold Layer with AWS EMR Serverless & Iceberg, eliminating AWS Glue, cutting cloud costs by 25%, and boosting query performance by 40%, enabling real-time analytics.
  • Optimized AWS Lambda memory usage and streamlined logging, cutting cloud costs by 12% and improving execution efficiency.
  • Designed and deployed a data quality framework, detecting 40% of schema changes and monitoring multi-source data arrivals, reducing data inconsistencies by 20%.
  • Collaborated with stakeholders to define and implement scalable data solutions, accelerating project delivery by 20%.
Python (Programming Language) · Amazon S3 · AWS Lambda · Iceberg · Amazon DynamoDB · Amazon Simple Notification Service (SNS) · +5
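The profile does not show how the data quality framework detects schema changes; as a hedged sketch (names and structure are illustrative, not the actual framework), the core check can be as simple as diffing the column/type maps of successive batches:

```python
def diff_schemas(old, new):
    """Compare two {column: type} schemas from successive batches and
    report added, removed, and retyped columns -- the kind of check a
    data-quality gate can run before promoting a batch between layers
    of a Medallion architecture."""
    return {
        "added":   {c: new[c] for c in new.keys() - old.keys()},
        "removed": {c: old[c] for c in old.keys() - new.keys()},
        "retyped": {c: (old[c], new[c])
                    for c in old.keys() & new.keys() if old[c] != new[c]},
    }
```

A non-empty result would then trigger an alert (e.g. via SNS) instead of silently loading inconsistent data.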

Enquero

Senior Data Engineer

Dec 2019 – Oct 2022 · 2 yrs 10 mos · Bengaluru, Karnataka, India · Hybrid

  • Engineered complex Databricks scripts replicating PostgreSQL stored procedures to process 1 TB of data daily, delivering accurate data transformations and enabling a 20% improvement in report generation speed.
  • Achieved a 30% speed increase in Databricks data processing through caching and data partitioning, using salting and broadcast joins for efficient data distribution and reduced latency.
Apache Spark · Amazon Web Services (AWS) · PostgreSQL · HiveQL · Databricks · Hive · +3
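The salting technique named in the bullet above is not shown in the profile; as an illustrative, framework-free sketch (function names hypothetical), the key transformation behind join salting looks like this — the large, skewed side gets a random salt suffix, while the small side is replicated once per salt so every variant still finds its match:

```python
import random

def salt_key(key, hot_keys, num_salts=8, rng=random):
    """Skewed (fact) side: append a random salt suffix to hot keys so
    their rows spread across num_salts partitions instead of piling
    onto a single executor."""
    if key in hot_keys:
        return f"{key}#{rng.randrange(num_salts)}"
    return key

def replicate_key(key, hot_keys, num_salts=8):
    """Small (dimension) side: emit every salted variant of a hot key
    so each salted fact row still finds its join partner."""
    if key in hot_keys:
        return [f"{key}#{i}" for i in range(num_salts)]
    return [key]
```

In Spark these would run inside the join-key expressions; broadcasting the replicated small side then avoids a shuffle entirely.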

Talocity: Touchless Hiring

Back End Developer

Sep 2019 – Nov 2019 · 2 mos · Mumbai, Maharashtra, India · On-site

  • Developed scalable APIs for an AI-powered interview platform, enabling seamless user authentication and data processing and onboarding 10+ clients. Integrated 2 DAGs in Apache Airflow to automate workflows, improving platform efficiency and reliability.
Python (Programming Language) · Airflow · Backend Development

Startups Club

Backend Developer

Feb 2018 – Aug 2019 · 1 yr 6 mos · Bengaluru Area, India · On-site

  • Contributed to the Razorpay Python SDK: https://tinyurl.com/razorpaycontribution
  • Engineered and launched a social network for 25K+ professionals and entrepreneurs using Python, Django Rest Framework (DRF), and React, ensuring scalability and security with JWT-based authentication.
  • Developed and integrated a mentor-mentee system, facilitating domain-specific guidance and increasing mentor-mentee connections by 20%.
  • Incorporated the Razorpay Payment Gateway and introduced a subscription model, boosting revenue by 16% and streamlining payment processing.
  • Optimized website performance, cutting load times by 15% and boosting user engagement, while enhancing Razorpay Python SDK by implementing the Settlement API, streamlining payments for thousands of businesses.
Django · Django REST Framework · Python (Programming Language) · Razorpay · Backend Development · Web Development

Thought Chimp

Web Developer

Dec 2016 – Jan 2018 · 1 yr 1 mo · Bengaluru Area, India · On-site

  • Built a data collection pipeline integrating YouTube, Facebook, and Twitter APIs, enabling a recommendation algorithm that increased user engagement by 15% through personalized content based on likes and comments.
PHP · Python (Programming Language) · Web Development

Education

Vivekanand Education Society's Institute Of Technology

Master of Computer Applications (MCA)

Jan 2014 – Jan 2017

Lachoo Memorial College, Jodhpur

BCA

Jan 2010 – Jan 2012
