Ram Linga

CEO

Bengaluru, Karnataka, India · 7 yrs 11 mos experience
Highly Stable

Key Highlights

  • 8 years of experience in Hadoop administration and DevOps.
  • Expert in managing AWS cloud services and data lakes.
  • Proficient in automating infrastructure and application deployments.
Stackforce AI infers this person is a Cloud Infrastructure and Big Data Specialist with strong DevOps capabilities.

Skills

Core Skills

Hadoop Administration · Cloud Infrastructure Management · DevOps · Data Analysis

Other Skills

ARM Templates · Amazon Web Services (AWS) · Ambari · Azure · Azure DevOps · Bash · Bicep · Big Data · Cloudera · Data Warehouse Management · Databases · Docker · HDFS · Hadoop

About

8 years of overall experience: 6.2 years in Hadoop administration (Hortonworks and Cloudera distributions) and Linux, and 2 years in cloud and DevOps.

Experience

Kyndryl India

Support Team Lead

Sep 2023 – Present · 2 yrs 6 mos · Bengaluru, Karnataka, India · Remote

  • Provide Hadoop L2 application support for Prod/Non-Prod/R&D environments from offshore; responsible for managing the complete Hortonworks data-lake cluster, a three-cluster environment (PROD, QA, Dev).
  • Manage the Hadoop cluster through Ambari: adding and managing services, attaching and detaching nodes as required, and upgrading services.
  • Hadoop cluster administration, including cluster maintenance, monitoring, and troubleshooting.
  • Check JobTracker, ResourceManager, and MapReduce job status; performance tuning of MapReduce/YARN jobs.
  • Create encryption zones in the production cluster to provide an extra level of security for customer data.
  • Experienced in DevOps/Agile operations processes and tooling (code review, unit-test automation, build and release automation, incident and change management).
  • Create complex pipelines that trigger target pipelines via the REST API from PowerShell, passing pipeline variables and parameters.
  • Mirror existing pipelines in a different tenant and provision the infrastructure by triggering the target pipeline.
  • Experience creating APIM instances to deploy an API gateway in front of backend APIs.
  • Experienced in automating infrastructure and application deployments using PowerShell scripts, Bicep, ARM templates, and Terraform.
  • Provision data-lake access for new users using LDAP authentication and Ranger for Hive and HDFS; grant RStudio and JupyterHub access to data scientists and handle other requests per the Coca-Cola ITIL process.
  • The environment runs entirely on AWS; manage all data-lake services (Linux EC2 instances, S3 buckets, SNS, SQS, Redshift, DynamoDB, EMR clusters, CloudFormation stacks) from the AWS console.
  • NiFi and Kylo administration: exporting and importing templates and process groups between environments, and creating new NiFi controller services per project requirements.
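The encryption-zone work above follows the standard HDFS transparent-encryption flow; a minimal sketch, assuming a KMS is already configured (key name and paths are illustrative, not taken from the actual cluster):

```shell
# Create an encryption key in the Hadoop KMS (key name is illustrative)
hadoop key create custdata_key -size 256

# An encryption zone can only be created on an empty directory
hdfs dfs -mkdir -p /data/secure
hdfs crypto -createZone -keyName custdata_key -path /data/secure

# Verify: files written under /data/secure are now encrypted at rest
hdfs crypto -listZones
```

Files copied into the zone are encrypted transparently; clients with Ranger/KMS access decrypt on read without application changes.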
Hadoop · Hortonworks · Cloudera · Linux · AWS · PowerShell +5

Tookitaki

Implementation Engineer

Jul 2023 – Aug 2023 · 1 mo · India

  • Involved in designing and implementing the architecture for cloud infrastructure and applications.
  • Performed continuous integration and continuous deployment using Azure DevOps: established pipelines with Azure tools, spun up multiple environments, and enabled build and release.
  • Developed Azure resources such as web applications, Application Gateway, Docker containers, Kubernetes, storage accounts, Key Vault, virtual machines, Service Bus, Event Hubs, networking, and Azure Active Directory.
  • Experienced in DevOps/Agile operations processes and tooling (code review, unit-test automation, build and release automation, incident and change management).
  • Involved in designing a hub-and-spoke network topology and implementing the hub-and-spoke model in our organization.
  • Created and maintained application infrastructure using Terraform; built and automated infrastructure CI/CD pipelines for different applications using Jenkins and Azure Pipelines.
  • Worked on deployment automation of all microservices: pulling images from a private Docker registry or Azure Container Registry (ACR) and deploying to Kubernetes (on-prem K8s) and AKS clusters using Jenkins and Azure Pipelines.
  • Worked with Python, Bash, PowerShell, and Groovy; developed shell and PowerShell scripts to automate day-to-day tasks and the build and release process.
  • Deployed Java applications to application servers in an Agile continuous-integration environment and automated the whole process.
  • Used Azure Kubernetes Service to deploy to a managed AKS cluster in Azure; created AKS clusters from the Azure portal and the Azure CLI, and also used template-driven deployment options such as ARM and Terraform.
  • Experienced with both YAML and classic pipelines.
  • Deployed web applications as Docker containers; configured container registries and service connections to Docker Hub or Azure Container Registry for deployment from pipelines.
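Creating an AKS cluster from the Azure CLI and wiring it to ACR, as described above, can be sketched roughly as follows (resource-group, registry, cluster names, and region are placeholders):

```shell
# Resource group and container registry (names are placeholders)
az group create --name rg-demo --location centralindia
az acr create --resource-group rg-demo --name acrdemo01 --sku Basic

# AKS cluster with image-pull access to the registry
az aks create --resource-group rg-demo --name aks-demo \
  --node-count 2 --attach-acr acrdemo01 --generate-ssh-keys

# Fetch kubeconfig and deploy an image from ACR
az aks get-credentials --resource-group rg-demo --name aks-demo
kubectl create deployment web --image=acrdemo01.azurecr.io/web:latest
```

`--attach-acr` grants the cluster's managed identity the AcrPull role, so no imagePullSecrets are needed for that registry.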
Azure DevOps · Terraform · Kubernetes · Docker · Python · Bash +3

UnitedHealth Group

Senior Software Engineer / Data Analyst

Mar 2020 – Jul 2023 · 3 yrs 4 mos · Hyderabad, Telangana, India · On-site

  • Monitoring alarms regularly and fixing them; gained expertise in troubleshooting job errors.
  • Data-warehouse management using Hive; worked on creating both managed and external Hive tables with partitioning and bucketing.
  • Handling tickets and service requests using the Coca-Cola ServiceNow ticketing tool.
  • Provisioning data-lake access for new users using LDAP authentication and Ranger for Hive and HDFS; granting RStudio and JupyterHub access to data scientists and handling other requests per the Coca-Cola ITIL process.
  • The environment runs entirely on AWS; managing all data-lake services (Linux EC2 instances, S3 buckets, SNS, SQS, Redshift, DynamoDB, EMR clusters, CloudFormation stacks) from the AWS console.
  • NiFi and Kylo administration: exporting and importing templates and process groups between environments, and creating new NiFi controller services per project requirements.
  • Managing all SFTP-related activities, file retention, and IP whitelisting for external vendors.
  • Designing and testing new pipelines under the supervision of big-data architects and modifying existing pipelines per vendor and business requirements.
  • Involved in benchmarking Hadoop cluster file systems under various batch jobs and workloads.
  • Involved in minor and major upgrades of Hadoop and the Hadoop ecosystem.
  • Installation of various Hadoop ecosystem components and Hadoop daemons in Cloudera and EMR.
  • Involved in installing and configuring Kerberos for authentication of users and Hadoop daemons.
  • Optimizing performance of Hive, HBase, and Spark jobs.
  • Working with application teams on scaling up instances and troubleshooting application issues.
  • Adding users and groups to Azure DevOps and setting up branch permissions for users and groups; working with Azure Boards, configuring work items, and linking work items to branches.
  • Reviewing repository history and comparing versions when investigating issues.
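The managed-versus-external Hive table distinction mentioned above can be sketched as follows (the JDBC URL, table, and column names are illustrative only):

```shell
# Managed table: Hive owns the data, so DROP TABLE deletes the files.
# Partitioned by ingest date, bucketed by member id for join performance.
beeline -u "jdbc:hive2://hiveserver:10000" -e "
CREATE TABLE claims (member_id BIGINT, amount DECIMAL(10,2))
PARTITIONED BY (ingest_date STRING)
CLUSTERED BY (member_id) INTO 16 BUCKETS
STORED AS ORC;

-- External table: Hive tracks only metadata; DROP TABLE keeps the files,
-- which suits landing zones written by NiFi or SFTP ingestion.
CREATE EXTERNAL TABLE claims_raw (line STRING)
LOCATION '/data/landing/claims';"
```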
Hive · AWS · NiFi · Kylo · Hadoop · Data Warehouse Management +2

Reliance Jio

L2 Hadoop Administrator

Oct 2017 – Nov 2019 · 2 yrs 1 mo · Mumbai, Maharashtra, India · On-site

  • Good experience in Hadoop administration and Hadoop application support.
  • Provide Hadoop L2 application support for Prod/Non-Prod/R&D environments from offshore; responsible for managing the complete Hortonworks data-lake cluster, a three-cluster environment (PROD, QA, Dev).
  • Manage the Hadoop cluster through Ambari: adding and managing services, attaching and detaching nodes as required, and upgrading services.
  • Commissioning and decommissioning cluster nodes; importing and exporting data from HDFS.
  • Check JobTracker, ResourceManager, and MapReduce job status; performance tuning of MapReduce/YARN jobs.
  • Troubleshooting Hadoop issues and resolving tickets within SLA for the PROD cluster.
  • Implementing patches found on the platform in consultation with the product vendor.
  • Monitoring alarms regularly and fixing them; gained expertise in troubleshooting job errors.
  • Data-warehouse management using Hive; worked on creating both managed and external Hive tables with partitioning and bucketing.
  • Provisioning data-lake access for new users using LDAP authentication and Ranger for Hive and HDFS; granting RStudio and JupyterHub access to data scientists and handling other requests per the Coca-Cola ITIL process.
  • The environment runs entirely on AWS; managing all data-lake services (Linux EC2 instances, S3 buckets, SNS, SQS, Redshift, DynamoDB, EMR clusters, CloudFormation stacks) from the AWS console.
  • NiFi and Kylo administration: exporting and importing templates and process groups between environments, and creating new NiFi controller services per project requirements.
  • Managing all SFTP-related activities, file retention, and IP whitelisting for external vendors.
  • Designing and testing new pipelines under the supervision of big-data architects and modifying existing pipelines per vendor and business requirements.
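Decommissioning a node, as in the commissioning/decommissioning bullet above, typically goes through the HDFS/YARN exclude files; a rough sketch (hostname and config path are placeholders):

```shell
# Add the node to the exclude file referenced by dfs.hosts.exclude
echo "dn-07.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode and ResourceManager to re-read their host lists;
# HDFS re-replicates the node's blocks before marking it decommissioned
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes

# Watch until the node reports "Decommissioned", then shut it down
hdfs dfsadmin -report
```

Commissioning is the reverse: remove the host from the exclude file, add it to the include file if one is configured, and refresh again.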
Hadoop · AWS · Linux · HDFS · NiFi · Kylo +1

Education

Jawaharlal Nehru Technological University

Computer Science

Jan 2009 – Jan 2013

Narayana Junior College - India

Bachelor's degree — Junior High/Intermediate/Middle School Education and Teaching

Jan 2007 – Jan 2009

Bhashyam Public School

Bachelor's degree — SSC

Jan 2007 – Jan 2007
