Shikha Madaan

DevOps Engineer

Flevoland, Netherlands · 14 yrs 10 mos experience

Key Highlights

  • Over 7 years of experience in Big Data and Scala development.
  • Expertise in building scalable data processing solutions.
  • Proven track record in developing microservices for fintech applications.

Skills

Core Skills

Scala · Microservices · Spark · Big Data · ETL · Data Migration

Other Skills

DevOps · Ansible · Akka · Hadoop · HBase · MapReduce · Hive · Kafka · Informatica · Azure Data Factory · Azure Databricks · Synapse Analytics · Microsoft Azure · Azure Data Lake · Data Warehousing

About

Software professional with strong problem-solving and analytical capabilities, a clear technical vision, and an active interest in understanding customer requirements. Results-oriented, skillfully balancing resource and time constraints. Information technology professional with 7+ years of experience across various domains and technologies.

Experience developing applications using Play Framework, Akka, type-level programming, Cats, Scalaz, monad transformers, Slick, and Liquibase. Experience in the design and development of Big Data projects that perform large-scale distributed data processing/analytics using Scala, Spark, Akka, Hadoop, Hive, Impala, HDFS, YARN, Kite SDK, Avro, Parquet, Kafka, Oozie, and other Big Data ecosystem tools.

TECHNICAL HIGHLIGHTS

  • Scala Ecosystem: Play Framework, Akka, Cats, Scalaz, Microservices, Monad Transformers, Higher-Order Functions, Design Patterns, Finagle, Type Classes
  • Languages: Scala, Functional Programming
  • Big Data Technologies: Spark, Hive, Pig, Kafka, MapReduce, Hadoop, Oozie, YARN
  • Web Technologies: HTML, CSS, JavaScript
  • Containerization: Docker
  • CI/CD Tools: Jenkins, Nolio, Ansible
  • Testing Tools: JMeter
  • Scripting: Shell Scripting
  • Relational Databases: Oracle, MySQL
  • ETL/Monitoring Tools: Informatica PowerCenter 9.5.0, Grafana
  • Source Control Systems: Git/GitHub, SVN, Perforce, GitLab

Builds solutions and algorithms for:

  • Hadoop and Big Data infrastructure
  • Data analytics
  • Data pipelines, EDW, and ETL
  • Log processing frameworks
  • Migration of data from RDBMS to NoSQL
  • Ingesting and analysing social data
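The monad transformers mentioned above can be sketched in plain Scala. This is a minimal, hand-rolled `OptionT` for illustration only (Cats and Scalaz ship production versions); the `Monad` trait and `vectorMonad` instance are simplified stand-ins, not code from any of the projects listed:

```scala
// Minimal monad type class (assumption: simplified stand-in for cats.Monad).
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

// Example instance: Vector as the outer monad.
implicit val vectorMonad: Monad[Vector] = new Monad[Vector] {
  def pure[A](a: A): Vector[A] = Vector(a)
  def flatMap[A, B](fa: Vector[A])(f: A => Vector[B]): Vector[B] = fa.flatMap(f)
}

// OptionT wraps F[Option[A]] and gives it one combined map/flatMap,
// so a None short-circuits without nested pattern matching at every call site.
final case class OptionT[F[_], A](value: F[Option[A]]) {
  def flatMap[B](f: A => OptionT[F, B])(implicit M: Monad[F]): OptionT[F, B] =
    OptionT(M.flatMap(value) {
      case Some(a) => f(a).value
      case None    => M.pure(Option.empty[B])
    })

  def map[B](f: A => B)(implicit M: Monad[F]): OptionT[F, B] =
    flatMap(a => OptionT(M.pure(Option(f(a)))))
}
```

With a library such as Cats, the same shape is `cats.data.OptionT`; the point of the pattern is that the two monadic layers (F and Option) are traversed with a single `flatMap`.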

Experience

14 yrs 10 mos
Total Experience
2 yrs 7 mos
Average Tenure
6 yrs 8 mos
Current Experience

KPN

Scala/DevOps Developer

Aug 2019 – Present · 6 yrs 8 mos · The Randstad, Netherlands

Lunatech Labs

Full Stack Developer

Aug 2017 – Aug 2019 · 2 yrs · Netherlands

  • Created central components for an authentication platform that is crucial to ING's vision of a single, global, as-a-service security platform for every country in which the bank is active. The platform is based on microservices, with the components written in Scala. Also involved on the DevOps side: took the initiative to deploy the application using Ansible across the tribe and gave workshops to the other teams.

Barclays Investment Bank

Scala/Spark/Hadoop Big Data Consultant (Contractor)

Sep 2016 – Aug 2017 · 11 mos · London Area, United Kingdom

  • Worked on a CDH 5.4.0 cluster with 19 data nodes.
  • Involved in the E2E architecture for the RISK IT Octon RWA calculation project and developed all the components using Scala, Akka, and Spark (FileWatcher, Assembler, Executor, Launcher, Spark ETLs, HBaseUtil, HdfsFileService).
  • Extensively used the Spark DataFrame APIs and Scala case classes to process 100+ GB datasets.
  • Enhanced the Spark DataFrame class's capabilities using a Scala implicit class and developed DSL-like APIs using Scala implicit functions.
  • Designed the row key for the event_log table and developed the framework-hbase module to perform all HBase table operations (create/delete/import CSV).
  • Mapped HBase tables to Hive tables using the HBaseStorageHandler SerDe.
  • Created multi-module projects using SBT, developed the build scripts, and used various sbt plugins to manage build dependencies.
  • Configured Spark 1.5 in the CDH 5.4.0 cluster to make use of all 1.5 features even though it was not part of the Cloudera 5.4.0 stack.
  • Very good knowledge of Cloudera clusters, Cloudera Manager, YARN ResourceManager/ApplicationMaster, the properties of each component, and config and log locations.
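The implicit-class technique in the bullets above is the standard "enrich my library" pattern: an implicit class pins extra, DSL-like methods onto an existing type wherever its import is in scope. A minimal, Spark-free sketch of the pattern (Row and RichRow are hypothetical stand-ins for DataFrame, not the actual project code):

```scala
// Hypothetical stand-in for a Spark DataFrame row.
final case class Row(values: Map[String, String])

object RowSyntax {
  // Implicit class: Row gains these methods wherever RowSyntax._ is imported.
  implicit class RichRow(row: Row) {
    // DSL-style accessor that returns a default instead of throwing.
    def col(name: String): String = row.values.getOrElse(name, "")

    // Chainable transformation, mirroring DataFrame.withColumn.
    def withCol(name: String, value: String): Row =
      Row(row.values + (name -> value))
  }
}
```

Against a real DataFrame the same pattern enriches `org.apache.spark.sql.DataFrame` instead of `Row`; the compiler rewrites `row.withCol(...)` into `new RichRow(row).withCol(...)` whenever the method is not found on the original type.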

Cognizant Technology Solutions

Big Data Hadoop Developer

Aug 2014 – Aug 2017 · 3 yrs · Gurugram, Haryana, India

  • Collaborated with portfolios across the client's technology domains to identify areas with suitable Hadoop use cases.
  • Designed and developed an application to process data using Spark.
  • Developed MapReduce jobs and Hive & Pig scripts for a data warehouse migration project.
  • Designed and developed a system to collect data from multiple portals using Kafka and then process it using Spark.
  • Developed MapReduce jobs and Hive & Pig scripts for a Risk & Fraud Analytics platform.
  • Developed a data ingestion platform using Sqoop and Flume to ingest Twitter and Facebook data for a Marketing & Offers platform.
  • Designed and developed an automated process using shell scripting for data movement and purging.
  • Installation and configuration management of a small multi-node Hadoop cluster.
  • Installation and configuration of other open-source software such as Pig, Hive, Flume, and Sqoop.

CSC

ETL & Big Data Developer

Jul 2013 – Aug 2014 · 1 yr 1 mo · Greater Hyderabad Area

  • In-depth knowledge of data warehousing and analytics.
  • Worked extensively with data migration, data profiling, ETL processes, and data mining.
  • Implemented continuous data integration using Informatica.
  • Hands-on experience writing MapReduce jobs in Java, Pig, and Hive.
  • Good working experience using Sqoop to import data into HDFS from RDBMS and vice versa.
  • Expertise in using job scheduling and monitoring tools such as Oozie and ZooKeeper.
  • Experience managing all stages of the data migration cycle, from data analysis to pre-go-live.

Verizon Data Services India Pvt Ltd

ETL Developer

Jun 2011 – Jul 2013 · 2 yrs 1 mo · Greater Hyderabad Area

  • Hands-on experience with ETL methodology supporting data extraction, transformation, and loading in a corporate-wide ETL solution using Informatica.
  • Responsible for designing and developing complex mappings, mapplets, sessions, and workflows for loading data from source to target databases using Informatica PowerCenter, and tuned mappings to improve performance.

Education

Amritsar College of Engineering & Technology, Amritsar, Punjab

Bachelor of Technology (BTech) — Computer Science

Jan 2007 – Jan 2011

DAV Collegiate Senior Secondary School
