
Satish Mudde

Hyderabad

Summary

Extensive experience with Big Data technologies. Confident in Hadoop, Apache Spark, Scala, Python, AWS, HDFS, Hive, and PL/SQL. Experienced with Hadoop ecosystem tools such as Hive, Sqoop, Pig, and Oozie. Strong knowledge of backups, restore, recovery models, database shrink operations, DBCC commands, and clustering. Experienced in troubleshooting and resolving database integrity issues, log shipping issues, blocking and deadlocking issues, and connectivity and security issues. Domains: Banking, Retail, and Telecom.

Overview

9 years of professional experience

Work History

Consultant

JP Morgan Chase
09.2021 - Current
  • Home Lending Servicing LOB initiated an effort to consolidate all data in MSP Clients 156 and 465 into one, with MSP Client 465 as the only instance going forward. This code deployment and testing effort relates to the MSP 2-to-1 initiative for the CFL reports. From a firm-wide perspective, completing this will streamline and improve current processes, reduce testing requirements and timelines to implement updates, and increase efficiency for Chase applications processing data from MSP, helping us service our customers' requests efficiently.
  • Consolidated all logic/code from both clients where the 156 code differs from the 465 code. Refer to the parent epic for broader requirements.
  • Develop and implement pipelines that extract, transform, and load data into an information product that helps the organization reach its strategic goals
  • Focus on ingesting, storing, processing, and analyzing large datasets
  • Create scalable, high-performance web services for tracking data
  • Investigate alternatives for data storing and processing to ensure implementation of the most streamlined solutions
  • Serve as a mentor for junior staff members by conducting technical training sessions and reviewing project outputs
  • Develop and maintain data pipelines and take responsibility for Apache Hadoop development and implementation
  • Work closely with data science team to implement data analytic pipelines
  • Maintain security and data privacy, working closely with data protection officer
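The extract-transform-load pattern behind the pipelines described above can be sketched in plain Python. This is a minimal illustration only: the function and field names (`extract`, `transform`, `load`, `account`, `balance`) are invented for the example, and a production pipeline of this kind would run on Spark or a similar engine rather than in-memory lists.

```python
# Minimal ETL sketch: extract raw records, transform them, load the result.
# All function and field names are illustrative, not from any real system.

def extract(raw_rows):
    """Parse raw CSV-like strings into dicts (the 'extract' step)."""
    return [dict(zip(("account", "balance"), row.split(","))) for row in raw_rows]

def transform(records):
    """Normalize types and drop malformed rows (the 'transform' step)."""
    out = []
    for r in records:
        try:
            out.append({"account": r["account"].strip(), "balance": float(r["balance"])})
        except (KeyError, ValueError):
            continue  # skip rows that cannot be parsed
    return out

def load(records, target):
    """Append cleaned records to an in-memory 'target' store (the 'load' step)."""
    target.extend(records)
    return len(records)

store = []
loaded = load(transform(extract(["A100, 250.75", "A101, bad", "A102, 99.50"])), store)
print(loaded)  # prints 2: the malformed row is dropped in transform
```

The same three-stage shape carries over directly to a Spark job, where each stage becomes a read, a set of DataFrame transformations, and a write.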

Senior Software Engineer

Virtusa
Hyderabad
09.2021 - Current
  • Technologies & Tools: Big Data, PySpark, Apache Spark, Scala, SQL, Core Java, Spark development, Hadoop, Amazon Web Services (AWS), JUnit, PyCharm, Control-M, Bitbucket, Git, Oracle SQL Developer, Jira, Jules

Consultant

AT&T
Chennai
11.2018 - 08.2021
  • The AT&T project is generally consistent with the Town Plan insofar as it expands wireless broadband service into areas of the Town with poor service quality, while minimizing and mitigating adverse impacts to historic districts, public parks, necessary wildlife habitats, special flood hazard areas, primary agricultural soils, and designated scenic roadways.
  • Create Scala/Spark jobs for data transformation and aggregation
  • Produce unit tests for Spark transformations and helper methods
  • Write Scala doc-style documentation with all code
  • Design data processing pipelines
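A transformation-and-aggregation job of the kind listed above can be illustrated in plain Python. This is a hedged sketch: the actual work used Scala/Spark, and the record shape and field names (`region`, `amount`) are invented for the example.

```python
from collections import defaultdict

# Sketch of a group-by aggregation, analogous in shape to a Spark
# df.groupBy("region").agg(sum("amount")) job. Field names are
# illustrative only.

def aggregate_by_region(rows):
    """Sum `amount` per `region`, mirroring a Spark groupBy/agg."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

# Unit-test-style check, in the spirit of the unit tests the role
# required for Spark transformations and helper methods:
rows = [
    {"region": "south", "amount": 10.0},
    {"region": "south", "amount": 5.0},
    {"region": "north", "amount": 7.5},
]
assert aggregate_by_region(rows) == {"south": 15.0, "north": 7.5}
```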

Software Trainee

Mars Infosol
Chennai
02.2016 - 08.2018
  • Technologies & Tools: Java Spark, Core Java, PL/SQL, JUnit, Bitbucket, Git, Oracle SQL Developer, Postman, Jira

Consultant

Disney
02.2017 - 06.2018
  • Company Overview: Disney Streaming Services is a technology subsidiary of The Walt Disney Company located in Manhattan, New York City. Disney Streaming is a business unit within Disney Media and Entertainment Distribution (DMED), managing operations of The Walt Disney Company's streaming services including Disney+, Hulu, ESPN+ and STAR+.
  • Working on migration tasks from source to destination.
  • Creating and monitoring tasks in different environments, including Prod/Dev/QA.
  • Good knowledge of creating Qlik Replicate tasks with source and target endpoints.
  • Experienced in adding tables to tasks.
  • Monitoring full load and change processing modes, apply throughput, latency issues, and incoming data changes.
  • Managing task settings and endpoint connections; importing and exporting tasks.
  • Supporting on-call on weekends.
  • Supporting nearly 200+ databases across Prod/Dev/QA environments.
  • Creating logins, assigning appropriate permissions to them, mapping logins to databases, and granting appropriate access.
  • Planning backup and recovery strategies for various database environments.
  • Monitoring space and adding space to databases when needed.
  • Performing daily SQL Server health checks and preparing reports based on them.
  • Design data processing pipelines
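The space-monitoring and daily health-check duties above could be sketched as follows. This is illustrative only: the thresholds, database names, and sizes are invented, and the real checks ran against SQL Server rather than in-memory data.

```python
# Sketch of a daily health check: flag databases whose free space
# falls below a threshold. All names and numbers are illustrative.

def health_report(databases, min_free_pct=10.0):
    """Return (name, free%) pairs for databases below the free-space threshold."""
    alerts = []
    for db in databases:
        free_pct = 100.0 * (db["size_mb"] - db["used_mb"]) / db["size_mb"]
        if free_pct < min_free_pct:
            alerts.append((db["name"], round(free_pct, 1)))
    return alerts

dbs = [
    {"name": "prod_orders", "size_mb": 1000, "used_mb": 950},
    {"name": "qa_orders", "size_mb": 500, "used_mb": 200},
]
print(health_report(dbs))  # prints [('prod_orders', 5.0)]
```

In practice a check like this would query the server's own metadata (for SQL Server, system views and DBCC commands) rather than a hand-built list, and feed the alerts into the daily report.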

Education

B-Tech - Computer Science Engineering

Vishnu Institute of Technology
Bhimavaram, AP
03.2016

Higher Secondary School - Intermediate

Narayana Junior College
Vijayawada, AP
03.2012

Primary School - SSC

Adarsh Public School
Akividu, AP
04.2010

Skills

  • Hadoop
  • Apache Spark
  • Scala
  • SQL
  • Spark development
  • Hive
  • Airflow
  • PySpark
  • Python
  • Shell/Bash scripting
  • Cloud (GCP)
  • Kafka
  • Core Java
  • Kubernetes
  • Docker
  • Dataflow
  • Junit
  • Git
  • Linux
  • Jules
  • Jira

Personal Information

Title: Data Engineer

Projects

CBD HLT REDS MSP2 Merge, Consultant, JP Morgan Chase, 09/06/21 - Present
  • Consolidation of all data in MSP Clients 156 and 465 into a single instance (465), including code deployment and testing for the MSP 2-to-1 initiative for the CFL reports; pipeline development, large-dataset processing, and mentoring as described under Work History.

AT&T IOTA Migration, Consultant, AT&T, 11/15/18 - 08/28/21
  • Scala/Spark jobs for data transformation and aggregation, unit tests for Spark transformations and helper methods, Scala doc-style documentation, and data processing pipeline design.

Disney Streaming Services, Consultant, Disney, 02/02/17 - 06/30/18
  • Migration tasks from source to destination using Qlik Replicate; creating and monitoring tasks across Prod/Dev/QA; SQL Server administration (logins and permissions, backup and recovery planning, space monitoring, daily health checks) across nearly 200+ databases.

Mars Infosol, Software Trainee, 02/01/16 - 08/31/18, Chennai, Tamil Nadu, India
  • Technologies & Tools: Java Spark, Core Java, PL/SQL, JUnit, Bitbucket, Git, Oracle SQL Developer, Postman, Jira

Architecture

  • Batch processing of big data sources at rest.
  • Real-time processing of big data in motion.
  • Interactive exploration of big data.
  • Predictive analytics and machine learning.
  • Store and process data in volumes too large for a traditional database.
  • Transform unstructured data for analysis and reporting.
  • Capture, process, and analyze unbounded streams of data in real time, or with low latency.

Domain

  • Banking
  • Retail
  • Telecom

Timeline

Consultant

JP Morgan Chase
09.2021 - Current

Senior Software Engineer

Virtusa
09.2021 - Current

Consultant

AT&T
11.2018 - 08.2021

Consultant

Disney
02.2017 - 06.2018

Software Trainee

Mars Infosol
02.2016 - 08.2018

B-Tech - Computer Science Engineering

Vishnu Institute of Technology

Higher Secondary School - Intermediate

Narayana junior college

Primary School - SSC

Adarsh Public School