
Abhishek Ranjan

Senior Software Engineer

Gurugram, Haryana, India · 6 yrs 7 mos experience

Key Highlights

  • Expert in backend development with extensive experience in logistics.
  • Proven track record of optimizing complex systems for performance.
  • Strong leadership skills in mentoring junior developers.

Skills

Core Skills

Backend Development · System Design · System Optimization · Data Engineering · Big Data

Other Skills

Python (Django) · Go · RabbitMQ · Celery Workers · Kubernetes · PostgreSQL · Amazon S3 · Redshift · Elasticsearch · Kibana · Redis · Celery · Batch Processing · AWS Lambda · Kafka

About

I’d like to share some information to help you get to know me better. Currently, I am a Senior Backend Engineer at Trademo, a global AI-driven supply chain company. Prior to this role, I worked with Delhivery, a leading logistics and supply chain unicorn in India, and with Danske Bank. I hold a Bachelor's degree in Computer Science (BSc) and a Master's in Computer Applications (MCA) from VIT, Vellore. I bring strong expertise in Data Structures, Algorithms, and designing and developing large-scale systems.

1. Good understanding of developing products from scratch using sound engineering methods.
2. Good understanding of designing/architecting and developing products at scale.
3. Experience leading and mentoring junior developers in a team.
4. Good experience supporting the support team on production issues and Root Cause Analysis (RCA).
5. Good experience gathering requirements from the product team/clients and implementing them in the product to improve or add features.
6. Solid understanding of data structures, algorithms, and problem solving.
7. Proficient with version control systems (e.g., Git).
8. Good understanding of design patterns and system design concepts.
9. Good experience working on AWS.
10. Solid experience working in Python (Flask & Django frameworks).
11. Good experience with Celery for asynchronous task processing and building products on Celery workers with a queue (RabbitMQ/Kafka); a minimal sketch follows below.
12. Good at High-Level Design (HLD) and Low-Level Design (LLD).

My tech stack: Python (frameworks: Flask & Django), AWS Lambda, DynamoDB (NoSQL), Amazon S3, Amazon SNS, Amazon API Gateway, Redis for distributed caching, Elasticsearch, Kibana, EKS, AWS Kinesis (streaming), Kafka, RabbitMQ, AWS EventBridge, AWS SQS, AWS SES, serverless architecture, AWS CloudWatch, Jenkins, Docker, MongoDB, Memgraph (graph DB), PostgreSQL, Celery, Go.
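
To ground item 11 above, here is a minimal Celery-over-RabbitMQ sketch. It is illustrative only: the broker URL, task name, and task body are placeholder assumptions, not code from any of the products described on this profile.

```python
# Minimal Celery-over-RabbitMQ sketch: a task is published to a broker queue and
# executed asynchronously by a worker process. Broker URL and task body are
# illustrative placeholders (assumes this file is saved as tasks.py).
from celery import Celery

app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")  # RabbitMQ broker

@app.task
def generate_invoice(waybill_id: str) -> str:
    # Placeholder for real work (PDF generation, billing, notifications, ...).
    return f"invoice generated for {waybill_id}"

# Caller side: enqueue the task without blocking the web request.
#   generate_invoice.delay("WB123456")
# Or target a specific queue explicitly:
#   generate_invoice.apply_async(args=["WB123456"], queue="invoices")
# Start a worker with:
#   celery -A tasks worker --loglevel=info --concurrency=2
```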

Experience

Total Experience: 6 yrs 7 mos
Average Tenure: 1 yr 9 mos
Current Experience: 1 yr 4 mos

DeHaat

Senior Software Engineer

Jan 2025 - Present · 1 yr 4 mos · Gurugram, Haryana, India · On-site

  • Currently working as a Senior Software Engineer in the Core Backend Engineering team at DeHaat, where I lead the engineering initiatives for Supply Chain Technology.
  • Key Responsibilities:
  • Dispatch (Logistics Module): Spearheading the development and optimization of DeHaat’s logistics platform to streamline end-to-end supply chain operations.
  • Vehicle Management System (VMS): Enhancing vehicle tracking, allocation efficiency, and route optimization for seamless logistics coordination.
  • Purchase Order Flow: Building and scaling systems that empower the Ground Team to upload bulk orders, automate approval workflows, and manage warehouse transfers across pan-India locations.
  • Tech Stack:
  • Backend & Infrastructure: Python (Django), Go, RabbitMQ, Celery Workers, Kubernetes
  • Database & Storage: PostgreSQL, Amazon S3, Redshift
  • Observability & Search: Elasticsearch, Kibana
  • Other Tools: Frappe Framework
Python (Django) · Go · RabbitMQ · Celery Workers · Kubernetes · PostgreSQL +6

Trademo

SDE 2

Mar 2024 - Dec 2024 · 9 mos · Gurugram, Haryana, India · On-site

  • At Trademo, I worked as a Senior Backend Engineer focused on backend product development and optimization. My role involved improving technical workflows and designing a product from scratch, taking it through architecture, development, and production deployment.
  • 1. One major challenge I tackled was resolving an Out of Memory (OOM) issue in the email generation flow. Users couldn’t download more than 75,000 records at once due to high memory usage and slow processing, caused by large Elasticsearch queries and sheet generation tasks. To address this, I implemented batch processing and optimized Celery by setting max-tasks-per-child = 1, ensuring memory was freed after each task. This allowed efficient task handling on limited resources, with 2 CPU cores processing at most two tasks simultaneously (a simplified sketch of the batching approach follows this entry).
  • These optimizations not only improved system stability and enabled seamless large-record processing but also reduced memory allocation from 4 GB to 2 GB, cutting infrastructure costs while enhancing user experience.
  • 2. The Finscreen application is architected to enable comprehensive risk assessment, advanced screening, and efficient data processing, with a primary focus on identifying potential red flags in financial and trade datasets. Leveraging a microservices architecture, I used Python (Django) for API development, Elasticsearch for distributed search and analytics, Redis for caching and real-time data storage, and Memgraph for graph database operations. To ensure asynchronous processing and scalability, I wrote an async engine capable of processing 50+ functions in a single run using an event loop with async/await and asyncio in Python (see the second sketch after this entry). Combined with RabbitMQ for message brokering and Celery workers for task orchestration, this enabled seamless data ingestion, high-throughput processing, and rapid scaling to manage complex workflows efficiently.
Python (Django) · Elasticsearch · Redis · Celery · RabbitMQ · Batch Processing +2
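
Below is a rough sketch of the batching approach described in point 1 above. It is a simplified illustration, not Trademo code: the index, query, output format, batch size, and broker URL are assumptions; the Celery settings shown (`worker_max_tasks_per_child`, `worker_concurrency`) are the standard options for the behaviour described.

```python
# Sketch: stream Elasticsearch results in fixed-size batches instead of loading
# all records into memory, and recycle Celery workers after every task so memory
# is returned to the OS. Index, fields, and batch size are placeholders.
import csv
from celery import Celery
from elasticsearch import Elasticsearch, helpers

app = Celery("exports", broker="amqp://guest:guest@localhost:5672//")
app.conf.worker_max_tasks_per_child = 1      # restart the worker process after each task
app.conf.worker_concurrency = 2              # at most two tasks in flight on 2 CPU cores

es = Elasticsearch("http://localhost:9200")

@app.task
def export_records(index: str, query: dict, out_path: str, batch_size: int = 5000):
    """Write matching documents to a CSV file one batch at a time."""
    with open(out_path, "w", newline="") as fh:
        writer = None
        batch = []
        # helpers.scan pages through results with the scroll API instead of one huge query
        for hit in helpers.scan(es, index=index, query=query, size=batch_size):
            row = hit["_source"]
            if writer is None:
                writer = csv.DictWriter(fh, fieldnames=list(row.keys()))
                writer.writeheader()
            batch.append(row)
            if len(batch) >= batch_size:
                writer.writerows(batch)
                batch.clear()                # release the batch before fetching more
        if batch:
            writer.writerows(batch)
```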
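
And a minimal illustration of the async engine pattern from point 2: many independent screening checks scheduled concurrently on one event loop with asyncio.gather. The check functions and their inputs are hypothetical.

```python
# Sketch: run many independent async "screening" functions in a single pass
# using asyncio.gather. Function names and data are illustrative only.
import asyncio

async def check_sanctions(entity: dict) -> dict:
    await asyncio.sleep(0.1)        # stand-in for an async ES / graph / API call
    return {"check": "sanctions", "flagged": False}

async def check_trade_volume(entity: dict) -> dict:
    await asyncio.sleep(0.1)
    return {"check": "trade_volume", "flagged": True}

CHECKS = [check_sanctions, check_trade_volume]   # in practice, 50+ such functions

async def run_screening(entity: dict) -> list:
    # Schedule every check concurrently; gather preserves the order of CHECKS
    # and returns exceptions per check instead of aborting the whole run.
    return await asyncio.gather(
        *(check(entity) for check in CHECKS),
        return_exceptions=True,
    )

if __name__ == "__main__":
    results = asyncio.run(run_screening({"name": "Acme Exports"}))
    print(results)
```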

Delhivery

Software Development Engineer 👨‍💻

Apr 2021 - Mar 2024 · 2 yrs 11 mos · Gurugram, Haryana, India

  • At Delhivery, I worked on large-scale event-driven architectural systems in the logistics domain, handling millions of shipments and ensuring seamless, high-performance operations.
  • 1. Billing and Invoice Management
  • Designed and developed a Billing Service to automate invoice generation for B2B clients.
  • Triggered by waybill pickups at the First Mile or deliveries at the Last Mile.
  • Managed billing details, supported invoice overrides, and consolidated invoices during billing cycles.
  • Processed ₹70Cr+ worth of invoices monthly with seamless automation (a simplified event-consumer sketch follows this entry).
  • 2. PDF Generation and Dispatch Services:
  • Built a PDF Generation Service for invoices, LR copies, and Last Mile POD copies.
  • Used Amazon S3 for document storage and shortened URLs for efficient access (a simplified upload-and-link sketch follows this entry).
  • Generated PDFs for ₹70Cr+ worth of B2B invoices monthly.
  • 3. Invoice Dispatch Service (B2B Clients):
  • Developed an Invoice Dispatch Service to physically dispatch invoices, mimicking shipment flow from First Mile to Last Mile. Enabled seamless tracking and payment collection for clients.
  • 4. Modern Architecture and System Migration:
  • Architected systems using a modern tech stack: AWS Lambda, Kafka, Elasticsearch, DynamoDB, S3, Redis, Flask, EKS, ensuring scalability, reliability, and operational excellence.
  • Led the migration of several services from serverless architecture to EKS (Elastic Kubernetes Service).
  • Enabled dynamic pod scaling based on CPU usage to handle high-throughput workloads during peak times. Achieved cost efficiency and improved resource utilization for critical services.
AWS Lambda · Kafka · Elasticsearch · DynamoDB · S3 · Redis +4
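
A simplified sketch of the event-driven billing trigger described in point 1. The topic name, consumer group, event schema, and the kafka-python client are assumptions for illustration, not necessarily the exact Delhivery stack.

```python
# Sketch: consume waybill events from Kafka and trigger invoice generation
# for B2B clients on first-mile pickup or last-mile delivery.
# Topic name, group id, and event fields are illustrative placeholders.
import json
from kafka import KafkaConsumer   # pip install kafka-python

BILLABLE_EVENTS = {"FIRST_MILE_PICKUP", "LAST_MILE_DELIVERY"}

def generate_invoice(event: dict) -> None:
    # Placeholder: create/update the invoice record, apply overrides,
    # and consolidate it into the client's current billing cycle.
    print(f"billing waybill {event['waybill_id']} for client {event['client_id']}")

consumer = KafkaConsumer(
    "waybill-events",
    bootstrap_servers=["localhost:9092"],
    group_id="billing-service",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    if event.get("event_type") in BILLABLE_EVENTS and event.get("client_type") == "B2B":
        generate_invoice(event)
```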
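
And a simplified sketch of the PDF storage-and-access flow from point 2, shown here with an S3 upload and a time-limited pre-signed URL. The bucket, key layout, and expiry are assumptions, and the production flow used its own URL-shortening step on top.

```python
# Sketch: upload a generated invoice / LR / POD PDF to S3 and return a
# time-limited link to it. Bucket name, key layout, and expiry are placeholders.
import boto3

s3 = boto3.client("s3")

def store_and_share_pdf(pdf_bytes: bytes, invoice_id: str,
                        bucket: str = "b2b-invoice-pdfs") -> str:
    key = f"invoices/{invoice_id}.pdf"
    s3.put_object(Bucket=bucket, Key=key, Body=pdf_bytes,
                  ContentType="application/pdf")
    # Time-limited link; a URL shortener can wrap this for a more compact link.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=7 * 24 * 3600,   # 7 days
    )
```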

Danske Bank

Associate Software Engineer

Aug 2019 - Mar 2021 · 1 yr 7 mos · Bangalore

  • At Danske IT, I was part of the Big Data and Advanced Analytics team.
  • Data Pipeline: Designed and built a data pipeline that transfers data from Hadoop to the data warehouse, owning it end to end from initial design and development through deployment to production (a rough Airflow sketch follows this entry).
  • Technologies used: Python, Hive, Sqoop, Jenkins (pipeline), Git, Airflow (scheduling).
  • Oozie Job Tracker: Designed and created an application that checks the status of all Oozie workflow jobs before and after completion on a daily schedule (similar to a cron job). It lets us track jobs and reprocess them if needed, saving the time previously spent manually checking every job ingesting data into Hadoop (a hedged sketch of this checker also follows this entry).
  • Technologies used: Python, Oozie, Git, Jenkins (pipeline), Airflow.
Python · Hive · Sqoop · Git · Airflow · Data Engineering +1
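
A rough Airflow sketch of the Hadoop-to-warehouse pipeline shape described above; the schedule, Hive and Sqoop commands, and connection details are placeholders rather than the bank's actual configuration.

```python
# Sketch: a daily Airflow DAG that prepares data with Hive, then exports it from
# Hadoop to the warehouse with Sqoop. Commands and connections are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hadoop_to_warehouse",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_export_table = BashOperator(
        task_id="build_export_table",
        bash_command=(
            "hive -e 'INSERT OVERWRITE TABLE staging.daily_export "
            "SELECT * FROM raw.transactions WHERE dt = \"{{ ds }}\"'"
        ),
    )
    export_to_warehouse = BashOperator(
        task_id="export_to_warehouse",
        bash_command=(
            "sqoop export --connect jdbc:oracle:thin:@warehouse-host:1521/DWH "
            "--table DAILY_TRANSACTIONS --export-dir /warehouse/staging/daily_export "
            "--username $SQOOP_USER --password-file /user/etl/.sqoop_pwd"
        ),
    )
    build_export_table >> export_to_warehouse   # Hive step runs before the Sqoop export
```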
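
And a hedged sketch of the Oozie job-tracker idea: polling the Oozie Web Services API for recent workflow jobs and listing the ones that need attention. The host, time window, and alerting step are assumptions.

```python
# Sketch: query the Oozie REST API for recent workflow jobs and report the ones
# that did not succeed, so they can be investigated or re-run. Host URL, job
# window, and what "alert" means here are illustrative placeholders.
import requests

OOZIE_URL = "http://oozie-host:11000/oozie"

def failed_workflows(limit: int = 200) -> list:
    resp = requests.get(
        f"{OOZIE_URL}/v2/jobs",
        params={"jobtype": "wf", "len": limit},
        timeout=30,
    )
    resp.raise_for_status()
    jobs = resp.json().get("workflows", [])
    return [job for job in jobs if job.get("status") not in ("SUCCEEDED", "RUNNING")]

if __name__ == "__main__":
    for job in failed_workflows():
        # Placeholder for alerting / re-submission logic.
        print(job.get("id"), job.get("appName"), job.get("status"))
```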

Education

Vellore Institute of Technology

Master of Computer Applications - MCA — Computer Science

Jan 2017 - Jan 2019

Vellore Institute of Technology

BSc — Computer Science

Jan 2014 - Jan 2017

St. Joseph's High School

12th Board

Jun 2012 - Jun 2014
