Akhil B.

AI Researcher

West Lafayette, Indiana, United States · 7 yrs 9 mos experience

Key Highlights

  • Expert in securing distributed cyber-physical systems.
  • Proficient in big data engineering and data security.
  • Strong background in real-time data processing and decision engines.
Stackforce AI infers this person is a Big Data and Cybersecurity expert with strong software development skills.

Skills

Core Skills

Data Security · Big Data Engineering · Data Streaming · Deployment Management · API Development · Application State Management · Documentation Management · Real-time Processing · Quality Assurance · Performance Optimization

Other Skills

Authentication · Authorization · Data profiling · OData-V4 API service · State machine · Documentation generation · LDAP · Kerberos · Apache Ranger · Apache Spark · Apache Kafka · Apache Avro · Apache Ambari · Apache Maven · Apache Olingo

About

Doctoral student in Computer Science at Purdue University, West Lafayette. Interested in cybersecurity research, particularly securing distributed cyber-physical systems. Please have a look at my profile!

Experience

Chainlink Labs

Research Intern

May 2024 – Present · 1 yr 10 mos · Remote

Adobe

Data Science Research Intern

May 2021 – Aug 2021 · 3 mos · San Jose, California, United States

  • Systems for Deep Learning

Purdue University

Graduate Student Researcher

Aug 2020 – Present · 5 yrs 7 mos · West Lafayette, Indiana, United States

Amazon Web Services (AWS)

SDE Intern

May 2020 – Aug 2020 · 3 mos · Dallas, Texas, United States

Open Insights

2 roles

Senior Engineer

Promoted

Apr 2018 – Jun 2019 · 1 yr 2 mos · Pune Area, India

  • Data security in the big data space: worked extensively with authentication providers such as LDAP and Kerberos, and authorization software such as Apache Ranger, to secure data in the big data space. Also experienced in managing the security of data container tools such as HDFS, Hive, and Kafka, and of processing tools such as Apache NiFi.
  • Data profiler on streams: designed and developed an engine that computes statistics such as mean, quantiles, null counts, and minimum and maximum values over real-time streams, primarily targeting small clusters. Built using Apache Spark Stateful Streaming, Apache Kafka, and Apache Avro; implemented the Greenwald-Khanna algorithm for computing quantiles over streams. The engine consumed roughly 1/20th of the memory used by a batch data profiler built on Spark itself (about a 20× reduction).
  • Packaging and deployment: worked on the "Single-click deployment" project, which configures a cluster, sets it up, and brings up all its services in a single click. Worked extensively with Apache Ambari for deployment and service management, and with Apache Maven as a build tool to manage the build lifecycle of services and generate the necessary deployables.
  • OData-V4 API service on Apache Hive/JDBC-driven databases: designed and developed a robust OData-V4 data query service for databases with JDBC drivers, using Apache Olingo, Apache Hive, and Java 8 streams. Developed a solution to stream data over HTTP in constant space.
  • State machine: developed a state machine to track the application state of a native Open Insights product, designed and built with the Spring State Machine project of the Spring Framework.
  • Apache Maven-driven documentation generation: wrote documentation in AsciiDoc format and used Apache Maven to generate it automatically on every release cycle.
Data security · Authentication · Authorization · Data profiling · OData-V4 API service · State machine +3
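The streaming data profiler described above can be sketched as follows. This is a minimal single-process Python illustration of the idea (constant-memory running statistics per column), not the original Apache Spark Stateful Streaming implementation; class and variable names are illustrative.

```python
import math

class StreamProfiler:
    """Incrementally tracks mean, min, max, and null count over a stream
    using constant memory, regardless of stream length."""

    def __init__(self):
        self.count = 0
        self.nulls = 0
        self.mean = 0.0
        self.min = math.inf
        self.max = -math.inf

    def update(self, value):
        if value is None:
            self.nulls += 1
            return
        self.count += 1
        # Welford-style running mean: no need to retain past values.
        self.mean += (value - self.mean) / self.count
        self.min = min(self.min, value)
        self.max = max(self.max, value)

profiler = StreamProfiler()
for v in [3, 1, None, 4, 1, 5]:
    profiler.update(v)

print(profiler.mean, profiler.min, profiler.max, profiler.nulls)
# → 2.8 1 5 1
```

Streaming quantiles are harder than the statistics shown here, since exact quantiles require retaining the data; the Greenwald-Khanna algorithm mentioned above trades a bounded approximation error for sublinear memory.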

Big Data Engineer

Jul 2017 – Mar 2018 · 8 mos · Pune Area, India

  • Data ingestion framework on a big data platform: developed a data ingestion framework using Apache NiFi, HDFS, Apache Kafka, and the Spring Boot framework. Studied Apache NiFi in depth as a tool to ingest, stream, and transform data. Also worked with Spring Framework tools including Spring Boot, Spring Data JPA, Spring Cloud, and Spring Security.
  • Real-time decision engine on Apache Flink: built a decision engine over data streams using Apache Flink. Business rules authored with JBoss Drools are applied to the data stream on the fly via Apache Flink's custom filter function.
  • Automated testing: used Java testing tools such as JUnit, Mockito, and PowerMockito to test every line of code written. Maintained average code coverage of 86% across 20,000 lines of Java code spanning multiple modules.
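The decision-engine pattern above (rules applied to a stream via a filter) can be sketched in miniature. This is a hedged Python illustration with hypothetical rules, not the original Flink/Drools code: in the real system the rules come from JBoss Drools and the filtering runs inside Flink's `filter` operator.

```python
# Illustrative stand-in for a rule-driven stream filter.
# The rules below are hypothetical examples, not Barclays/Open Insights logic.
RULES = [
    lambda txn: txn["amount"] <= 10_000,          # reject unusually large transfers
    lambda txn: txn["country"] in {"US", "IN"},   # restrict to allowed regions
]

def decide(event):
    """Return True only if the event satisfies every business rule."""
    return all(rule(event) for rule in RULES)

stream = [
    {"amount": 500, "country": "US"},
    {"amount": 50_000, "country": "US"},   # fails the amount rule
    {"amount": 200, "country": "FR"},      # fails the region rule
]

approved = [event for event in stream if decide(event)]
print(approved)
# → [{'amount': 500, 'country': 'US'}]
```

Externalizing the rules (here a plain list, in the original system Drools) is what lets business logic change without redeploying the streaming job itself.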

Barclays

Big Data Engineering Internship

May 2016 – Jul 2016 · 2 mos · Pune Area, India

  • Primarily worked on improving the performance of Barclays' production real-time decision engine. Using Apache Kafka, Elasticsearch, Logstash, and Kibana, designed and developed a logging framework to identify performance-critical bottlenecks in the stream pipeline.

Economic Development Board, Government of Andhra Pradesh

Software Development Internship

May 2015 – Aug 2015 · 3 mos · Hyderabad Area, India

  • Designed and developed the organisation's internal IT systems and website from scratch. The website was built so that anyone familiar with Microsoft Word can maintain it, with no web development knowledge required.

DesiCrew Solutions Private Limited

Software Development Internship

Sep 2014 – Nov 2014 · 2 mos · Chennai Area, India

  • Worked as a PHP and JavaScript developer.

Education

Purdue University

Doctor of Philosophy - PhD — Computer Science

Jan 2021 – Jan 2025

Purdue University

Master of Science - MS — Computer Science

Jan 2019 – Jan 2020

Indian Institute of Technology, Madras

Bachelor of Technology (B.Tech.)

Jan 2013 – Jan 2017

Narayana Junior College

High School

Jan 2011 – Jan 2013

Bhavans Sri Ramakrishna Vidyalaya, Sainikpuri, Hyderabad

CBSE School Certificate

Jan 2003 – Jan 2011
