Amulyam Agrawal

Senior Software Engineer

Gurgaon, Haryana, India · 14 yrs experience

Key Highlights

  • Expert in building scalable backend systems.
  • Strong experience with big data technologies.
  • Proven track record in developing real-time data solutions.
Stackforce AI infers this person is a Backend-focused Software Engineer with expertise in Fintech and Big Data Analytics.

Skills

Core Skills

Kafka · Streaming API · REST API · Prometheus · Big Data Analytics · Hadoop · Backend Development

Other Skills

API Development · AWS S3 · Aerospike · Algorithms · C · C++ · CSS · Cassandra · Core Java · Data Aggregation · Data Reconciliation · Data Reporting · Data Structures · Data Publishing · Data Service

About

I am currently a software engineer at Tower Research Capital Market India Pvt Ltd, building post-trade software that reconciles trade data provided by the trading teams against the data provided by brokers, enabling the calculation of any fees or dues pending with the brokers. I have been an enthusiastic programmer since high school and earned my Bachelor of Engineering in Information Technology from the National Institute of Technology, Durgapur, India. I have strong experience with Java and building scalable systems, including frameworks such as Spring Boot and Vert.x, and I have worked with various big data technologies like Kafka, Spark, Hive, HBase, Cassandra and Elasticsearch. I like working on the backend and infrastructure of a system and would love to take the initiative in leading and solving challenging problems. I am always interested in learning about new opportunities and challenges; please reach out to me at: anshulamul@gmail.com

Experience

Tower Research Capital

2 roles

Senior Software Engineer

Jan 2025 – Present · 1 yr 2 mos · Gurugram, Haryana, India

Software Engineer III

Mar 2022 – Jan 2025 · 2 yrs 10 mos · Gurugram, Haryana, India

Goldman Sachs

Associate

Jan 2020 – Mar 2022 · 2 yrs 2 mos · Bengaluru, Karnataka

  • Developed Live and EOD Publishers, which take data from a GS-proprietary table data structure and publish it to Kafka in a GS-proprietary format.
  • Worked single-handedly with the Arctic team to integrate their Streaming API, and wrote consumers for Live and EOD data that read from Kafka and persist it, via this API, to Chunkstore, a GS-proprietary time-series database, for long-term storage.
  • Restructured many repositories to simplify microservices.
  • Developed an API that serves historical market-data trends.
  • Developed APIs for a governance app that assigns responsibility for assets and tracks the users who change them.
  • Wrote an EOD Market Service that provides data to clients via streaming or a REST API.
  • Improved an existing system by 92% through a POC that pulled data from an external system and made it readily available in our cache.
  • Designed a Prober system that sends metrics to a Prometheus client to provide alerting in case of failures.
Kafka · Streaming API · Microservices · REST API · Prometheus
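The prober pattern mentioned above can be sketched in a few lines. This is an illustrative stand-in, not the GS system: the class and metric names are hypothetical, and a real deployment would use the official Prometheus Java client rather than hand-formatting the exposition text.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/** Minimal prober sketch: runs named health checks and records 1/0 gauges. */
public class Prober {
    private final Map<String, Integer> gauges = new ConcurrentHashMap<>();

    /** Run a probe and store its result as a gauge (1 = healthy, 0 = failing). */
    public void probe(String name, Supplier<Boolean> check) {
        int value;
        try {
            value = check.get() ? 1 : 0;
        } catch (RuntimeException e) {
            value = 0; // a throwing probe counts as a failure
        }
        gauges.put(name, value);
    }

    /** Expose gauges in Prometheus text format, e.g. behind a /metrics endpoint. */
    public String scrape() {
        StringBuilder sb = new StringBuilder();
        gauges.forEach((name, v) ->
            sb.append("probe_up{probe=\"").append(name).append("\"} ").append(v).append('\n'));
        return sb.toString();
    }

    public static void main(String[] args) {
        Prober prober = new Prober();
        prober.probe("kafka_consumer", () -> true);   // stand-in for a real check
        prober.probe("chunkstore_write", () -> { throw new RuntimeException("down"); });
        System.out.print(prober.scrape());
    }
}
```

An alerting rule on the scraped `probe_up` gauge (fire when it stays 0) then turns failed probes into alerts.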

Mobileum

2 roles

Senior Software Engineer - Big Data Analytics

Promoted

Apr 2018 – Jan 2020 · 1 yr 9 mos

  • Generic Cube Framework: Developed an in-house OLAP cube framework that lets users slice, dice and aggregate data while joining against various data sources such as HBase and RDBMSs, plugging in user-defined functions and exploding rows with user-defined code. The framework also offered a rich set of user-defined aggregate functions, including tricky unique-count UDAFs built on HyperLogLog.
  • Feature Factory Framework: Developed the Feature Factory, which combines the in-house OLAP aggregation cube framework with pluggable features that can also involve machine learning. The framework is used across teams at Mobileum for reporting and fraud detection.
  • Generic Reconciliation Engine: Built a framework that can reconcile any two data streams based on rules specified in a configuration file. It was used to reconcile roughly 100–110 streams for our client Progresif Cellular, Brunei.
  • IRSF DB: Developed a framework to reconcile and resolve number ranges from various streams.
  • SIP Voice Data Model: Wrote Hadoop record readers to decode captured packets (pcaps), developed a Spark Streaming application to extract dialogs in real time, and wrote a batch job to generate all voice calls from the data.
  • CLI Fraud Detection: Designed and wrote rules for detecting Caller-Line Identification frauds from voice data.
  • Ported the Roaming Insights legacy backend from Pig to Hadoop, Oozie and Spark, which required generalizing the time-granularity configuration for each stream being reported on.
  • Performed numerous optimizations, such as reducing HBase read times, Spark job runtimes and Hive query times.
Hadoop · Spark · OLAP · Machine Learning · Big Data Analytics
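A rule-driven reconciliation engine like the one described above boils down to matching records from two streams by key and flagging differences. The sketch below is a minimal illustration under assumed inputs (key-to-amount maps, with a numeric tolerance standing in for the rules a configuration file would supply), not the Mobileum implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative key-based reconciliation of two record streams (names hypothetical). */
public class Reconciler {
    /** Result buckets: matched, mismatched, and keys present on only one side. */
    public record Result(List<String> matched, List<String> mismatched,
                         List<String> leftOnly, List<String> rightOnly) {}

    /** Reconcile two key -> amount streams, flagging differences above a tolerance. */
    public static Result reconcile(Map<String, Double> left, Map<String, Double> right,
                                   double tolerance) {
        List<String> matched = new ArrayList<>(), mismatched = new ArrayList<>();
        List<String> leftOnly = new ArrayList<>(), rightOnly = new ArrayList<>();
        for (var e : left.entrySet()) {
            Double other = right.get(e.getKey());
            if (other == null) leftOnly.add(e.getKey());                       // missing on right
            else if (Math.abs(e.getValue() - other) <= tolerance) matched.add(e.getKey());
            else mismatched.add(e.getKey());                                   // amounts disagree
        }
        for (String k : right.keySet())
            if (!left.containsKey(k)) rightOnly.add(k);                        // missing on left
        return new Result(matched, mismatched, leftOnly, rightOnly);
    }

    public static void main(String[] args) {
        Map<String, Double> trades = Map.of("T1", 100.0, "T2", 55.5, "T3", 9.0);
        Map<String, Double> broker = Map.of("T1", 100.0, "T2", 55.9, "T4", 1.0);
        Result r = reconcile(trades, broker, 0.01);
        System.out.println("matched=" + r.matched() + " mismatched=" + r.mismatched()
            + " leftOnly=" + r.leftOnly() + " rightOnly=" + r.rightOnly());
    }
}
```

Generalizing the key extractor, the compared fields and the tolerance into configuration is what makes such an engine reusable across many stream pairs.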

Software Engineer - Big Data Analytics

Jul 2017 – Mar 2018 · 8 mos

  • Solved various use cases with big data analytics.

Snapdeal

2 roles

Software Engineer II

Apr 2017 – Jun 2017 · 2 mos · Gurgaon, India

Software Engineer

Jul 2015 – Mar 2017 · 1 yr 8 mos · Gurgaon, India

  • Worked on serving and optimizing Brand and Product Ads.
  • Used the Netty-based RestExpress framework for REST APIs.
  • Designed a data flow to ensure data consistency across all servers using Cassandra and AWS S3.
  • Implemented Lucene-based indexing to reduce heap size and load on the ad servers.
  • Designed a Codahale-metrics-based monitoring framework to identify production issues and monitor system health.
  • Worked on a system that manages campaign budgets.
  • Used Aerospike for a component of the ad server.
Cassandra · AWS S3 · REST APIs · Backend Development
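The Codahale-style monitoring mentioned above centers on metric primitives such as timers. As a minimal sketch (a real system would use the Dropwizard Metrics library; this hand-rolled class is hypothetical), a timer counts events and tracks elapsed time so per-request latency can be monitored:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Minimal Codahale-style timer sketch: counts events and tracks total elapsed time. */
public class SimpleTimer {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    /** Time a task and record its duration, even if it throws. */
    public void time(Runnable task) {
        long start = System.nanoTime();
        try {
            task.run();
        } finally {
            totalNanos.addAndGet(System.nanoTime() - start);
            count.incrementAndGet();
        }
    }

    public long getCount() { return count.get(); }

    /** Mean recorded duration in nanoseconds (0 if nothing recorded). */
    public long meanNanos() {
        long c = count.get();
        return c == 0 ? 0 : totalNanos.get() / c;
    }

    public static void main(String[] args) {
        SimpleTimer requests = new SimpleTimer();
        requests.time(() -> { /* stand-in for serving an ad request */ });
        requests.time(() -> { /* another request */ });
        System.out.println("count=" + requests.getCount());
    }
}
```

A production version would also track percentiles (Dropwizard uses reservoir sampling for this) and report via a registry to dashboards or alerts.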

NIT Durgapur

2 roles

Intern

May 2014 – Jun 2014 · 1 mo

Student

Aug 2011 – May 2015 · 3 yrs 9 mos

Oravel

Data Research Intern

Jun 2013 – Jul 2013 · 1 mo

Education

NIT Durgapur

Bachelor of Technology (B.Tech.) — Information Technology

Jan 2011 – Jan 2015

Children's College, Azamgarh
