AYUSH NIGAM

Co-Founder

Bengaluru, Karnataka, India · 13 yrs 5 mos experience

Key Highlights

  • Expert in building scalable backend systems.
  • Proven track record in data analytics and insights generation.
  • Strong leadership experience as Co-Founder and CTO.
Stackforce AI infers this person is a Backend Engineer specializing in SaaS and Data Engineering.

Skills

Core Skills

Scala · Akka · AWS · Java · Hadoop

Other Skills

Airflow · Algorithms · ANTLR · Apache Atlas · Apache Kafka · Apache Spark · Big Data · Business Strategy · C · C++ · Compilers · Data Structures · EMR · English · Entrepreneurship

About

Skilled in Core Java, Scala, Akka Actors, AWS, GraphQL, Hadoop, Spring Framework, RESTful API architecture, microservices, design patterns, data structures and algorithms, Spark, Kafka, Elasticsearch, Gremlin, and JanusGraph.

Experience

Prepairo

Co-Founder and CTO

Jan 2024 – Present · 2 yrs 2 mos · Bengaluru, Karnataka, India · On-site

  • Bringing a Revolution to EdTech

Career break

Health and well-being

Aug 2023 – Dec 2023 · 4 mos

Prophecy

Founding Software Engineer

Jun 2021 – Aug 2023 · 2 yrs 2 mos · Bengaluru, Karnataka, India

  • Built Prophecy's core backend metadata system on Akka, Scala, Git, and GraphQL, creating the system from scratch along with the ecosystem around it to make it robust (a minimal actor sketch follows this entry).
Akka · Scala · Git · GraphQL
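
As a minimal illustration of the actor-based metadata service described above — the actual Prophecy code is not public, and the `MetadataActor` and its message types are hypothetical — a classic Akka actor owning a metadata store might look like:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical message protocol, assumed for illustration only.
final case class PutEntity(id: String, payload: String)
final case class GetEntity(id: String)
final case class EntityResult(id: String, payload: Option[String])

// A single actor owns the metadata map, so all reads and writes are
// serialized through its mailbox and need no explicit locking.
class MetadataActor extends Actor {
  private var store = Map.empty[String, String]

  def receive: Receive = {
    case PutEntity(id, payload) => store += (id -> payload)
    case GetEntity(id)          => sender() ! EntityResult(id, store.get(id))
  }
}

object MetadataService extends App {
  val system = ActorSystem("metadata")
  val actor  = system.actorOf(Props[MetadataActor](), "metadata-actor")
  actor ! PutEntity("pipeline-1", """{"kind": "Pipeline"}""")
}
```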

Bloomreach

Member of Technical Staff | Analytics

Apr 2020 – Jun 2021 · 1 yr 2 mos · Bengaluru, Karnataka

  • Worked on a complex analytics pipeline that tracks clickstream data and generates insights from user interactions on e-commerce sites.
  • Worked on attribution logic that credits the queries, searches, or page views (depending on the attribution model) that generate revenue, so that best-selling and most relevant products can be ranked highest and every user interaction can feed extensive, insightful reporting (a minimal sketch follows this entry).
  • Hundreds of reports are generated from the analytics data and need fast retrieval, so data is cached extensively based on smart usage patterns.
  • The pipeline processes billions of events and terabytes of data each day, with multiple layers of processing and joins in between, generating insights at every level in parallel on a cluster of roughly 500 nodes.
  • Tech used: MapReduce, AWS, GCP, EMR, Nginx, Java, ANTLR, Redshift, Postgres, Redis, Airflow, and Zinc, an in-house file-versioning system.
MapReduce · AWS · GCP · EMR · Nginx · Java
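
As a minimal sketch of the attribution idea described above — last-touch attribution here; the `Event` and `Purchase` models are hypothetical, and the real pipeline's attribution models and schemas are not public:

```scala
// Hypothetical clickstream types, assumed for illustration only.
final case class Event(userId: String, kind: String, target: String, ts: Long)
final case class Purchase(userId: String, product: String, revenue: Double, ts: Long)

// Last-touch attribution: credit the most recent qualifying interaction
// (query, search, or page view) that happened before the purchase.
def attribute(events: Seq[Event], purchase: Purchase): Option[Event] =
  events
    .filter(e => e.userId == purchase.userId && e.ts <= purchase.ts)
    .filter(e => Set("query", "search", "pageview").contains(e.kind))
    .sortBy(_.ts)
    .lastOption
```

Other attribution models (first-touch, linear, time-decay) would change only the selection and weighting step.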

The Apache Software Foundation

Apache Atlas Contributor

Oct 2018 – Present · 7 yrs 5 mos

Intuit

Software Engineer 2

Jul 2018 – Mar 2020 · 1 yr 8 mos · Bengaluru, Karnataka, India

  • Built a next-generation data catalog platform that automatically captures metadata from different sources into a common metadata repository, Apache Atlas, hosted on AWS.
  • Used various AWS services such as S3, EMR, EC2, and Lambda; most of the codebase is in Java.
  • 1) Designed and wrote a highly concurrent framework to migrate metadata from Alation to Atlas.
  • 2) Designed and wrote a parser and model for parsing Avro schemas and persisting them to Atlas, adding various features on top. It can deeply traverse an Avro schema, including self-references, and persist it to Atlas along with field classifications (a traversal sketch follows this entry).
  • 3) Worked on integrating the Feature Management Platform with Atlas: exposed Spring Boot-based APIs to create machine-learning features and capture the process by which they are created, and exposed search and graph-traversal APIs over them.
  • 4) Did various POCs, such as:
  • Automatic migration of AWS Glue Data Catalog metadata to S3
  • Capturing column-level, real-time lineage of Hive metadata
  • 5) Wrote complete CI/CD pipelines for various projects using Jenkins and Terraform, designed the functional test framework end to end, and ensured complete automation.
Java · AWS · Apache Atlas · Spring Boot
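
A minimal sketch of the cycle-safe deep traversal described in item 2, using the standard org.apache.avro.Schema API; the Atlas persistence and classification steps are omitted, and this is illustrative rather than the actual Intuit code:

```scala
import org.apache.avro.Schema
import scala.jdk.CollectionConverters._

// Recursively walk an Avro schema, collecting record names; the `seen`
// set breaks cycles so self-referencing schemas still terminate.
def traverse(schema: Schema, seen: Set[String] = Set.empty): Set[String] =
  schema.getType match {
    case Schema.Type.RECORD if seen.contains(schema.getFullName) => seen
    case Schema.Type.RECORD =>
      schema.getFields.asScala.foldLeft(seen + schema.getFullName) {
        (acc, field) => traverse(field.schema(), acc)
      }
    case Schema.Type.UNION =>
      schema.getTypes.asScala.foldLeft(seen)((acc, s) => traverse(s, acc))
    case Schema.Type.ARRAY => traverse(schema.getElementType, seen)
    case Schema.Type.MAP   => traverse(schema.getValueType, seen)
    case _                 => seen // primitives, enums, fixed: nothing nested
  }
```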

@WalmartLabs India

Software Engineer

Aug 2016 – Jun 2018 · 1 yr 10 mos · Bengaluru Area, India

  • Part of the Data Lake team, building data pipeline and data quality frameworks in Java using technologies such as Hadoop, Spark, Kafka, and Hive.
  • 1) Added a Data Lake-to-MariaDB connector to the data pipelining framework. It supports moving data, driven by configuration, from both Hive tables and HDFS to MariaDB, internally using Sqoop export.
  • 2) Added delta data-extraction logic common to all the connectors (Cassandra, MongoDB, REST calls, MySQL), with corresponding auditing in the metadata table.
  • 3) Added code to push metadata as JSON to a Kafka queue after each job run, so the queue serves as a buffer before the metadata is posted to the metadata server.
  • 4) Added Red/Amber/Green thresholds to the data quality framework: Red means a rule failed, Amber is a warning but the quality checks still pass, and Green means all tests passed (a minimal sketch follows this entry).
  • 5) Added an Informix-to-Data Lake connector to the data pipelining framework.
  • 6) Built a regression framework that automatically tests code merged to master every morning and evening, using Java custom annotations and posting the resulting JSON to a Tomcat server.
  • 7) Added a REST connector that downloads data from a REST API in parallel to both HDFS and the local file system, using the Java executor framework and connection pooling.
  • 8) Developed a mail-notification framework that exposes several APIs: it uses reflection to dynamically analyze an object and mail its fields in tabular form, and it supports externalized mail templates.
  • 9) Patched the Sqoop source code to skip the metadata transaction-isolation checks when importing data from MS PDW to Hadoop.
Java · Hadoop · Spark · Kafka · Hive
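
A minimal sketch of the Red/Amber/Green threshold semantics from item 4; the type names are hypothetical, since the actual framework is internal to Walmart:

```scala
// Hypothetical types, assumed for illustration only.
sealed trait Severity
case object Red   extends Severity // rule failed: the quality check fails
case object Amber extends Severity // warning only: the check still passes
case object Green extends Severity // rule passed cleanly

final case class RuleResult(rule: String, severity: Severity)

// The run fails only when some rule is Red; Amber results surface as
// warnings without blocking the pipeline.
def checksPass(results: Seq[RuleResult]): Boolean =
  !results.exists(_.severity == Red)
```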

MNNIT

CSE student

Jul 2012 – May 2016 · 3 yrs 10 mos

  • Student

Education

NIT Allahabad

Bachelor of Technology (B.Tech.) — Computer Science

Jan 2012 – Jan 2016

Bishop Johnson School, Allahabad

Basic Skills and Developmental

Jan 2006 – Jan 2011
