Santosh Kumar Bugude

Bangalore

Summary

  • 11+ years of experience in software development, with proficiency in design and development
  • Experience managing a squad of 4 team members: assigning tasks, design, implementation, and code reviews
  • 5+ years of experience in the Hadoop big data ecosystem
  • 2 years of experience in Azure cloud and one year in AWS
  • Experience with streaming sources (Azure Event Hubs, Kafka) using the Spark Streaming and Kafka APIs in Scala and Java
  • Experience with batch processing applications: understanding cluster compute and setting data limits in the batch process
  • Experience with storage services such as Azure Blob Storage and S3
  • Experience deploying containerized applications on Pivotal Kubernetes and AWS EKS
  • Experience developing Spring Boot applications; strong experience in SQL
  • Experience migrating legacy Oracle PL/SQL procedures into Spark SQL
  • Experience addressing Spark application performance issues based on Spark UI resource utilization

Overview

13 years of professional experience
3 Certifications

Work History

Java Full Stack Developer

JP Morgan Chase
Bangalore
01.2024 - Current
  • Java development professional with a solid foundation in full stack technologies, bringing experience in both front-end and back-end frameworks. Proven ability to deliver efficient, scalable solutions while maintaining high standards for code quality and performance. Team-focused, committed to collaborative success, and adaptable to evolving project requirements.
  • Developed comprehensive technical and problem-solving skills in a fast-paced software development environment, with expertise in Java programming, front-end and back-end integration, and collaborative project management. Seeking to apply these transferable skills to drive innovation and efficiency.

Data Engineer

JP Morgan Chase
04.2022 - Current
  • Worked on the data ingestion flow to read UBM and UTM messages from AMPS and OG sources into the PTR Kafka topic using the Kafka API
  • Worked on the Streaming Loader job to read data from Kafka and write to HDFS, handling the duplicate check by storing the key in HBase (see the sketch after this list)
  • Worked on the DBWriter component to read the HBase metadata table, process the new batch of Parquet files from HDFS, and write to the Oracle database
  • Worked on the IDET process to read from Oracle and publish impact-related data to MQ
  • Migrating the existing IDET Spring Boot application to AWS EKS, optimizing the code by configuring the batch size
  • Provisioned AWS infrastructure using Terraform
  • Involved in the AWS design plans for the application
  • Migrating legacy PL/SQL stored procedure reports into Spark SQL using a generic service
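
A minimal sketch of the Streaming Loader pattern above, assuming Spark Structured Streaming. The broker address, topic name, and HDFS paths are hypothetical, and the production duplicate check against an HBase key table is approximated here with a watermarked dropDuplicates on the message key.

import org.apache.spark.sql.SparkSession

// Kafka -> HDFS streaming loader (sketch). Broker, topic, and paths are
// hypothetical; the real job tracked seen keys in HBase for deduplication.
object StreamingLoader {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("streaming-loader").getOrCreate()

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "ptr-topic")                 // hypothetical topic
      .load()

    // Kafka rows carry binary key/value columns plus a timestamp.
    val records = raw.selectExpr(
      "CAST(key AS STRING) AS msgKey",
      "CAST(value AS STRING) AS payload",
      "timestamp")

    // Approximate the HBase duplicate check: keep only the first occurrence
    // of each key seen within the watermark window.
    val deduped = records
      .withWatermark("timestamp", "10 minutes")
      .dropDuplicates("msgKey")

    deduped.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/ptr")              // hypothetical path
      .option("checkpointLocation", "hdfs:///chk/ptr") // hypothetical path
      .start()
      .awaitTermination()
  }
}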

Data Engineer

DELL EMC
03.2020 - 04.2022
  • Spark ETL job that reads messages from Azure Event Hubs, using a manually maintained offset file in Blob Storage to determine the offset to read from for each partition
  • Decompressing the Event Hub messages using Google Protocol Buffers
  • Applying data massaging and storing the output as Parquet files in Blob Storage
  • Accumulating all row-wise errors into a Spark Dataset and, at the end of the application run, pushing them to the error Event Hub (see the sketch after this list)
  • Capturing data validation results for the final DataFrame and storing them as CSV files for reporting purposes
  • Logging the job status in Azure Application Insights using the telemetry API
  • Validating the invalid collection and pushing it to Event Hubs
  • Deploying the Spark/Scala application on PKS
  • Real-time streaming application that reads data from Event Hubs and updates the latest status of each file in Elasticsearch
  • Moving on-prem data into the Azure cloud using a Spark Streaming solution deployed on an on-prem Kubernetes cluster
  • Gathering the weekly issues and addressing them in the next sprint
  • Querying Application Insights telemetry using KQL and creating Azure dashboards to track errored records and success ratios
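
A minimal sketch of the row-wise error accumulation pattern above: each record either parses into a typed row or is captured as an error, and the two sides are written to separate sinks at the end of the run. Record, the parse rule, and the output paths are illustrative stand-ins; the real job decoded protobuf payloads and pushed errors to a dedicated error Event Hub.

import org.apache.spark.sql.{Dataset, SparkSession}
import scala.util.{Failure, Success, Try}

// Illustrative row and error types; the production schema differed.
case class Record(id: String, amount: Double)
case class RowError(raw: String, reason: String)
case class Parsed(record: Option[Record], error: Option[RowError])

object ErrorSplit {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-error-split").getOrCreate()
    import spark.implicits._

    val raw: Dataset[String] = Seq("a,1.5", "b,notanumber", "c,3.0").toDS()

    // Try to parse each line; keep failures as data instead of dropping them.
    val parsed: Dataset[Parsed] = raw.map { line =>
      Try {
        val Array(id, amt) = line.split(",")
        Record(id, amt.toDouble)
      } match {
        case Success(r) => Parsed(Some(r), None)
        case Failure(e) => Parsed(None, Some(RowError(line, e.getMessage)))
      }
    }

    // Good rows to Parquet (stand-in for blob storage); errors to a second
    // sink (stand-in for the error Event Hub).
    parsed.flatMap(_.record.toSeq).write.mode("overwrite").parquet("/tmp/etl/good")
    parsed.flatMap(_.error.toSeq).write.mode("overwrite").json("/tmp/etl/errors")
  }
}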

Senior Consultant

Oracle Financial Services
12.2019 - 02.2020
  • Analyzing functional specifications and providing estimations
  • Connecting with data architects and users to design reports
  • Ingesting data into Hive tables and extracting it
  • Implementing Hive tables to store the flow of daily data
  • Used Spark SQL to process large volumes of structured data in Hive tables
  • Analyzing Spark jobs and tuning them when performance issues are seen
  • Data cleansing activities
  • Experienced in implementing Spark RDD transformations and actions for business analysis
  • Worked on Spark DataFrames, Spark data sources, and Spark SQL using Scala
  • Preparing SFTP and FTP scripts to import data from different source systems
  • Writing custom UDFs for some of the requirements (see the sketch after this list)
  • Validating the datasets, sending them for UAT sign-off, and moving them to production
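
A minimal sketch of the custom UDF pattern above: register a Scala function, then call it from Spark SQL over Hive tables. The UDF body and the database, table, and column names are hypothetical.

import org.apache.spark.sql.SparkSession

object UdfExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-udf")
      .enableHiveSupport() // query Hive tables directly
      .getOrCreate()

    // Hypothetical UDF: normalise free-text codes before aggregation.
    spark.udf.register("normalise",
      (s: String) => Option(s).map(_.trim.toUpperCase).orNull)

    spark.sql(
      """SELECT normalise(account_code) AS account, SUM(amount) AS total
        |FROM finance_db.daily_txn
        |GROUP BY normalise(account_code)""".stripMargin
    ).show()
  }
}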

Associate Software Engineer

Cognizant
08.2016 - 11.2018
  • Involved in all phases of the SDLC, from analysis, design, development, and testing through implementation and maintenance, with timely delivery
  • Migrating data using Sqoop from HDFS to relational database systems and vice versa
  • Migrating Oracle PL/SQL stored procedures into Spark applications using DataFrames and Datasets (see the sketch after this list)
  • Developing Spark 2.3 applications using DataFrames/SQL/Datasets and RDDs for data aggregation and queries, writing data back into the OLTP system through Sqoop
  • Loading data from flat files into Hive tables
  • Designing the database data model for an application to meet business requirements, customizing the application using PL/SQL and normalization techniques
  • Analyzing the best access methods, join methods, and predicates, using hints for complex queries based on the execution plan
  • Creating indexes on tables based on cardinality: B-tree, bitmap, unique, functional, composite, global, prefixed local, and non-prefixed local indexes
  • Worked on partitioning tables for SQL performance as well as archiving old data
  • Used PL/SQL tables and cursors to process huge volumes of data, and used BULK COLLECT for mass inserts as a performance improvement
  • Loading data from external sources into the database using SQL*Loader and external tables
  • Gathering table stats after bulk updates, deletes, and inserts on a table, and rebuilding the table's indexes if necessary
  • Basic knowledge of SQL Trace and TKPROF
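
A minimal sketch of the stored-procedure-to-Spark migration pattern above: the body of a PL/SQL aggregation is re-expressed as a declarative Spark SQL query and the result is written back to the OLTP database. The JDBC URL, credentials, and table names are placeholders, and the write-back here uses plain JDBC where the original flow moved data through Sqoop.

import org.apache.spark.sql.SparkSession

object PlsqlToSpark {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("plsql-migration").getOrCreate()

    val jdbcUrl = "jdbc:oracle:thin:@//dbhost:1521/ORCL" // placeholder URL

    val orders = spark.read.format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "sales.orders")   // placeholder source table
      .option("user", "app_user")          // placeholder credentials
      .option("password", "app_pass")
      .load()

    orders.createOrReplaceTempView("orders")

    // What was procedural cursor logic in PL/SQL becomes a single query.
    val summary = spark.sql(
      """SELECT customer_id, COUNT(*) AS order_cnt, SUM(order_total) AS revenue
        |FROM orders
        |GROUP BY customer_id""".stripMargin)

    summary.write.format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "sales.customer_summary") // placeholder target table
      .option("user", "app_user")
      .option("password", "app_pass")
      .mode("append")
      .save()
  }
}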

System Engineer

TCS
01.2015 - 08.2016
  • Creating views for the QlikView team to pull data into QlikView QVDs
  • Developed stored procedures, functions, and packages for the migration of Virgin client data into the Tesco system
  • Developed one-time SQL and anonymous block scripts to handle data issues in the production environment
  • Worked on Data Pump (EXPDP, IMPDP) and SQL*Loader for processing Tata Sky data into the Tesco system
  • Worked in a Linux environment, retrieving secured data files using SFTP and SCP commands

Systems Engineer

TCS
11.2012 - 12.2014
  • Working with business analysts to understand requirements
  • Preparing low-level design documents
  • Defect fixing
  • Production support of the application
  • Involved in testing

System Engineer Trainee

TCS
06.2012 - 10.2012
  • A three-month training program on Java/J2EE
  • Understanding the fundamentals of Java syntax, variables, data types, and control structures
  • Exploring the principles of object-oriented programming, including classes, objects, inheritance, polymorphism, encapsulation, and abstraction
  • Introduction to HTML, CSS, and JavaScript for front-end development
  • Learning the basics of Servlets and JavaServer Pages (JSP) for server-side web development
  • Building simple dynamic web applications using Servlets and JSP

Education

B.Tech

G. Pulla Reddy Engineering College
Kurnool
01.2012

12th

Sir Ravindranath Tagore Junior College
Hyderabad
01.2008

10th

Montessori High School
Kurnool
01.2006

Skills

  • SPARK SQL/STREAMING
  • REACT
  • SCALA
  • JAVA
  • PYTHON
  • SPRING BOOT
  • SQL
  • PLSQL
  • KAFKA
  • KUBERNETES
  • DOCKER
  • MICROSERVICES

Certification

  • Certified Kubernetes Application Developer (CKAD), 1.24, 05/20/23
  • Terraform Associate, 07/16/23
  • Oracle Certified Professional, Java SE 6 Programmer Certification, 12/27/14

Personal Information

  • Date of Birth: 07/27/90
  • Gender: Male
  • Nationality: Indian
  • Marital Status: Single
