
Mohith Kumar Reddy Sannareddy

Senior Software Engineer
Hyderabad

Summary

  • To be a key player in delivering quality solutions for data-oriented products that add real value to the business.
  • Around 5 years of professional experience in developing, implementing, configuring, and unit testing the Big Data/Spark ecosystem and its related components.
  • Sound knowledge of the Hadoop ecosystem and its related technologies.
  • Well experienced with Amazon Web Services and its components (S3, EMR, Step Functions, Glue, Athena), as well as Terraform.
  • Good exposure to the Big Data ecosystem and its components, such as HDFS, YARN, and Databricks. Experience working with different file formats, such as Parquet, text, ORC, and Avro.
  • Good working knowledge of Apache Spark (Spark Core and Spark SQL) and Scala. Excellent knowledge of distributed storage (HDFS) and distributed processing (MapReduce, YARN).
  • Experienced in importing and exporting data between HDFS and databases such as MySQL and Oracle using Sqoop.
  • Experience with build-management tools such as Maven and SBT.
  • Team player with good technical, analytical, and communication skills.
  • Willing and able to quickly learn and adapt to any new technology or software.

Overview

  • 5 years of professional experience
  • 1 language

Work History

Senior Software Engineer

Coforge Pvt Ltd
07.2021 - Current
  • The purpose of this project is to calculate risk parameters and contract behavior to improve consistency between the margin projections performed by financial management and those used for capital-planning exercises.
  • Responsibilities include requirement analysis, fixing code issues, writing Spark using SCALA code, creating unit test cases, deployment pipelines, code quality evaluation, job scheduling, performance optimization, and client collaboration.

Senior Software Engineer

Recvue India Pvt Ltd
06.2019 - 07.2021
  • Recvue Billing System is a product that calculates recurring revenue for various organizations, using Big Data (Spark) for business-logic implementation and high-speed performance.
  • Responsibilities include performance tuning, code development and enhancement, performance testing, sprint planning, and ticket analysis.

Skills

    AWS stack, Spark, Scala, Python, HiveQL, Hadoop, Linux, UNIX, and Windows


Personal Qualifications

  • Master's in Telecommunication Systems, 2019, BTH (Sweden)
  • Bachelor of Electrical and Electronics Engineering, 2017, JNTUH

Assignments

Project 1: Capital Regulatory

Duration: July 2021 to date

Team Size: 15 Members

Description:

The purpose of this project is to calculate risk parameters and contract behavior to improve consistency between the margin projections performed by financial management and those used for capital-planning exercises.

Role: Senior Software Engineer

Solution Environment: Cloudera, AWS

Tools: Scala, Hadoop, Spark, Spark SQL, Hive, Impala, Jenkins, Control-M, S3, EMR, Step Functions, Athena, Glue, Terraform.

Responsibilities:

• Requirement analysis of identified problems.

• Working on production-critical incidents to find the root cause of code issues and fix them swiftly.

• Writing Spark code in Scala as per the requirements.

• Creating unit test cases to ensure that all requirements are met and function as intended.

• Creating deployment pipelines using Jenkins and GitHub.

• Identifying problematic code by using SonarQube to evaluate code quality and coverage.

• Creating and scheduling jobs and managing dependencies using Control-M.

• Performance optimization and fine-tuning of long-running jobs in Apache Spark.

• Collaborating with the client to understand requirements and documenting the project in Confluence.



Project 2: Recvue Billing System

Duration: Jan 2020 to July 2021

Team Size: 5 Members

Description:

Recvue Billing System is a product that calculates recurring revenue for various organizations, using Big Data (Spark) for business-logic implementation and high-speed performance.

Role: Data Engineer

Solution Environment: Linux, Windows

Tools: Scala, Hadoop, Spark, Spark SQL, Databricks, AWS

Responsibilities:

• Performance tuning of Spark jobs processing millions of records, bulk testing, and sending the statistics to the client.

• Owned code development and enhancement of the Compensation Management, Delivery Imports, and Price Deliveries modules.

• Performance testing and fine-tuning of code on a Databricks cluster.

• Led sprint planning with onsite stakeholders.

• Analyzed tickets assigned in the current sprint based on work priority.


Project 3: Raptor

Duration: Jun 2019 to Dec 2019

Team Size: 5 Members

Description:

Reynolds American (RAI) is on a journey to build out a next-generation analytics platform. As part of this journey, RAI is looking to replace an existing on-premises data warehouse with a cloud-based data warehousing system. RAI has selected the Amazon Web Services (AWS) platform to house this data warehouse and plans to leverage the AWS Big Data technology stack (Postgres, Spring Boot, S3, EMR, etc.) to build out this analytics platform.

Role: Data Engineer

Solution Environment: Linux

Tools: Scala, Hadoop, Spark, Spark SQL, Oracle Big Data Cloud, Databricks

Responsibilities:

• Building data pipelines.

• Writing data models in Spring Boot.

• Converting legacy system transformations into Spark transformations.

• Optimizing Spark jobs.

• Process automation using shell scripting.

Personal Information

  • Gender: Male
  • Nationality: Indian
  • Marital Status: Single

Timeline

Senior Software Engineer

Recvue India Pvt Ltd
06.2019 - 07.2021

Senior Software Engineer

Coforge Pvt Ltd
07.2021 - Current