
Radha Harika

Big Data Administrator
Hyderabad

Summary

Production Support Engineer - Big Data

  • 9.5 years of experience in Big Data technologies
  • Contributed to higher client satisfaction rates by ensuring consistent product quality through rigorous testing procedures.
  • Mentored junior production support engineers, providing guidance on best practices and helping them develop the skills necessary for long-term career growth within the industry.
  • Implemented effective incident management strategies that minimized disruption to business operations during system outages or failures.
  • Streamlined work processes through the implementation of automation tools and scripts.
  • Reduced downtime by proactively monitoring systems for potential issues and addressing them before they escalated.
  • Assisted project managers with the planning, execution, and evaluation of production support activities, ensuring successful outcomes for all projects.

Overview

10 years of professional experience

Work History

Production Support Engineer - Big Data

HSBC Software Development
06.2019 - Current

Project - Global Bigdata Services (GBDS)

  • Responsible for implementation and administration of the existing Hadoop infrastructure
  • Cluster maintenance
  • File system management and monitoring
  • Working with data delivery teams to set up new Hadoop users, which includes creating users in Active Directory, setting up Kerberos principals, and creating Ranger policies for HDFS and Hive access
  • Collaborating with Linux, patching, network, and backup teams
  • Data ingestion - supporting ingestion by scheduling Control-M jobs and fixing issues
  • Creating SFTP users for new ingestions and scheduling jobs using the Chronox tool
  • Creating disk drives, setting up new users, onboarding new projects, and creating source and target connections for new ingestions
  • Monitoring and fixing the Java microservices responsible for smooth data ingestion
  • Alerting and monitoring using Ansible and AppDynamics
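The user-onboarding steps above (Kerberos principal, HDFS home directory, Ranger policy) can be sketched in shell. This is a minimal illustration, not the actual HSBC tooling: the realm, Ranger endpoint, and helper names are assumptions.

```shell
#!/bin/sh
# Illustrative sketch of Hadoop user onboarding. The realm EXAMPLE.COM,
# the ranger-host endpoint, and the function names are assumptions.

# Build the JSON payload for a Ranger HDFS policy granting a new user
# read/write/execute on a given path.
build_ranger_policy() {
  user="$1"
  path="$2"
  printf '{"service":"hdfs","name":"%s_home","resources":{"path":{"values":["%s"]}},"policyItems":[{"users":["%s"],"accesses":[{"type":"read"},{"type":"write"},{"type":"execute"}]}]}' \
    "$user" "$path" "$user"
}

# Typical onboarding flow; cluster-dependent commands are commented out
# because they need a live cluster and admin credentials.
onboard_user() {
  user="$1"
  # 1. Create the Kerberos principal (assumes kadmin access):
  # kadmin -q "addprinc -randkey ${user}@EXAMPLE.COM"
  # 2. Create and own the HDFS home directory:
  # hdfs dfs -mkdir -p "/user/${user}" && hdfs dfs -chown "${user}" "/user/${user}"
  # 3. Push the Ranger policy via its public REST API (endpoint assumed):
  build_ranger_policy "$user" "/user/${user}"
  # | curl -u admin -X POST -H 'Content-Type: application/json' \
  #     -d @- "http://ranger-host:6080/service/public/v2/api/policy"
}
```

In practice each step would also need error handling and an Active Directory lookup, which are omitted here for brevity.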


Level II Engineer – Hadoop Administrator

DXC Technology
09.2014 - 05.2019

Project – BDPaaS (BigData platform as a Service)

Client – State Farm

BDPaaS provides Hadoop, NoSQL database, and stream-processing software with a fully managed, as-a-service delivery and support model.

Responsibilities:

  • Involved in Hadoop cluster setup, performance fine-tuning, monitoring, structure planning, scaling, and administration
  • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and Hadoop log files
  • Implemented and monitored changes during change management
  • Responsible for troubleshooting and development on Hadoop technologies such as HDFS, Hive, Pig, Flume, MongoDB, and Sqoop
  • Experienced in managing and reviewing Hadoop log files
  • Installed various Hadoop ecosystem components and Hadoop daemons
  • Fine-tuned applications and systems (ZooKeeper, Spark, MapReduce, YARN, and HBase) for high performance and higher-volume throughput
  • Monitored and troubleshot Linux memory, CPU, OS, storage, and network issues
  • Involved in data migration between Hadoop clusters
  • Communicated effectively with the onsite team and coordinated offshore team activities accordingly
  • Analyzed system failures, identified root causes, and recommended courses of action
  • Automated manual tasks using shell scripts and Splunk alerts
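The log-review and automation bullets above can be illustrated with a small shell sketch: a function that counts ERROR/FATAL lines in a daemon log and decides whether to raise an alert. The log path, threshold, and function name are illustrative assumptions, not details from this role.

```shell
#!/bin/sh
# Illustrative log-monitoring helper. The threshold default and the
# check_log name are assumptions for the sketch.

# Count ERROR/FATAL lines in a Hadoop daemon log; print ALERT when the
# count reaches the threshold, otherwise OK.
check_log() {
  logfile="$1"
  threshold="${2:-5}"
  errors=$(grep -cE 'ERROR|FATAL' "$logfile")
  if [ "$errors" -ge "$threshold" ]; then
    echo "ALERT: $errors errors in $logfile"
  else
    echo "OK: $errors errors in $logfile"
  fi
}
```

Run from cron, the ALERT lines from a script like this could be indexed by Splunk and turned into the kind of automated alerts the bullet describes.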

Education

B.Tech - Digital Techniques in Design and Planning

JNAFAU

Skills

Incident Management

Hadoop - HDFS, YARN, Hive, ZooKeeper, Spark, HBase, Oozie, NiFi

Scheduling tools - Control-M and Chronox

Shell scripting, Ansible, AppDynamics
