Experienced Kafka Administrator with 3.7 years of expertise in managing and optimizing Confluent Platform (Apache Kafka) clusters. Expert in configuring and maintaining brokers, implementing schema management, and securing access to data through RBAC.
Proficient in utilizing Kafka connectors and monitoring cluster health across cloud platforms such as Google Cloud Platform (GCP) and AWS.
Demonstrated ability to troubleshoot issues effectively, optimize performance, and maintain high availability in both development and production environments.
Additionally, for the Open Banking project, I have developed and managed microservices using Node.js and JavaScript, focusing on the MOD and ecom APIs.
I have utilized IBM DataPower for API management and implemented CI/CD pipelines with Codefresh, Bitbucket, and Bamboo.
I also have experience with the APIMesh platform, as well as deploying applications on GCP and monitoring performance with Splunk and GCP tooling, ensuring reliability and efficiency.
Overview
6 years of professional experience
Work History
Application Developer
IBM
Bengaluru
08.2021 - Current
Deployed, configured, and managed Confluent Kafka clusters on Google Cloud Platform, OpenShift Container Platform, and AWS.
Configured and administered Confluent Kafka components including Schema Registry, Replicator, Kafka Connect, and Control Center.
Created, configured, and maintained Kafka topics and partitions to optimize performance and support business requirements.
Implemented Confluent RBAC with Active Directory for authentication and authorization.
Hands-on experience with MQ connectors for Kafka Connect.
Configured Kafka Replicator for high availability across multiple data centers.
Configured and maintained Schema Registry, ensuring schema validation and management for the Avro, JSON, and Protobuf formats.
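As an illustration of the kind of schema managed through Schema Registry, a minimal Avro record might look like the following sketch (the record and field names are hypothetical, not taken from an actual project):

```json
{
  "type": "record",
  "name": "PaymentEvent",
  "namespace": "com.example.banking",
  "fields": [
    {"name": "transactionId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "USD"}
  ]
}
```

Registering such a schema on a topic's value subject lets Schema Registry reject producers whose records do not match, and its compatibility rules govern how fields like the defaulted `currency` can evolve.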
Assisted in troubleshooting Kafka consumer and producer issues, providing support for real-time and batch data processing pipelines.
Administered Kafka clusters in production; worked with CI/CD tools including Git, Helm charts, Bamboo, and Artifactory; configured and monitored clusters with Confluent Control Center.
Improved cluster performance through regular tuning of Kafka parameters (e.g., retention policies, replication factors, log segment sizes).
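Tuning of this kind typically comes down to per-topic configuration overrides; a minimal sketch is shown below (the values are illustrative, not from a specific cluster):

```properties
# Per-topic overrides (illustrative values, not from a real deployment)
retention.ms=604800000       # retain messages for 7 days
segment.bytes=536870912      # roll log segments at 512 MB
min.insync.replicas=2        # with replication factor 3, tolerate one broker loss
cleanup.policy=delete        # time/size-based deletion rather than compaction
```

Raising `min.insync.replicas` trades some produce availability for durability, while retention and segment sizing balance disk usage against how far consumers can replay.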
Experienced in requirement gathering, analysis, and designing mapping documents; worked closely with the client on requirements and deliverables.
Accountable for delivering integration design for critical banking applications, as per the industry standards and business requirements.
Worked on creating service design documents, production implementation plans, and checklists.
Followed Agile/Scrum methodology and served as Scrum Master, running the daily stand-up and ensuring the squad had no blockers.
For the Open Banking project, I utilized Node.js and JavaScript to create and manage microservices, improving application scalability and efficiency.
I played a key role in developing and integrating MOD and ecom APIs, ensuring seamless data interactions. By employing APIMesh and IBM DataPower, I enhanced API security and governance.
My expertise includes implementing CI/CD pipelines using Codefresh, Bitbucket, and Bamboo, as well as streamlining deployment processes with uDeploy, and deploying applications on Google Cloud Platform (GCP).
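A minimal sketch of what such a Codefresh pipeline can look like (the repository, image, and service names are hypothetical, and the deploy step assumes Cloud Run purely for illustration):

```yaml
# codefresh.yml -- illustrative pipeline; names and deploy target are assumptions
version: "1.0"
stages:
  - clone
  - build
  - deploy
steps:
  clone_repo:
    type: git-clone
    stage: clone
    repo: example-org/openbanking-service
    revision: main
  build_image:
    type: build
    stage: build
    image_name: example-org/openbanking-service
    tag: ${{CF_SHORT_REVISION}}
  deploy_gcp:
    stage: deploy
    image: google/cloud-sdk:slim
    commands:
      # hypothetical deploy command; actual target depends on the environment
      - gcloud run deploy openbanking-service --image gcr.io/example-org/openbanking-service:${{CF_SHORT_REVISION}}
```

Tagging the image with the short commit revision keeps each deployment traceable back to the exact source change that produced it.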
Intern
KPIT Technologies
06.2019 - 04.2020
The aim of this project was to advance research by integrating into a realistic driving simulator a system that can detect, recognize, and track road objects and make decisions that help make driving safer and more convenient.
Roles and responsibilities:
Experienced in developing with Python.
Worked with large datasets using Hadoop, HBase, Docker, and Apache Spark.
We used the YOLO deep-learning algorithm, combined with sensor data, to detect and classify objects and estimate the positions of objects around the car.
The algorithm was developed and tested using a camera dataset collected via ROS bags.
Performance was evaluated in controlled scenarios of increasing difficulty, using metrics provided by the CARLA simulator.
Education
Master of Engineering in Big Data & Data Analytics
Manipal Academy of Higher Education
Manipal
05-2020
B.Tech in Computer Science Engineering
Gitam University
Bengaluru
04-2020
Skills
Kafka (Confluent / Apache), Hadoop, Zookeeper
Event-Driven Architecture Implementation
Codefresh CI/CD Experience
Cloud Platforms: GCP, AWS
Node.js, JavaScript
Microservices and APIs
APIMesh, IBM DataPower
uDeploy
Timeline
Application Developer
IBM
08.2021 - Current
Intern
KPIT Technologies
06.2019 - 04.2020
Master of Engineering in Big Data & Data Analytics
Manipal Academy of Higher Education
B.Tech in Computer Science Engineering
Gitam University