Vattipalli Bharath Kumar Reddy

Senior Advisory Consultant
Hyderabad, TG

Summary

Experienced and enthusiastic consultant with a proven track record of success across multiple industries. Brings exceptional interpersonal, problem-solving, and analytical skills to client engagements, providing advice and expertise that drive measurable improvements in business performance. Experienced leader with a strong background in guiding teams, managing complex projects, and achieving strategic objectives, with particular expertise in developing efficient processes, maintaining high standards, and aligning delivery with organizational goals. Known for a collaborative approach, adaptability, and an unwavering commitment to quality.

Overview

10 years of professional experience
Post-secondary education completed in 2015
2 certifications
3 languages

Work History

Senior Advisory Consultant

IBM India Pvt Ltd
06.2022 - Current


  • Developed and maintained end-to-end data pipelines for processing structured and unstructured data from multiple sources.
  • Collaborated with cross-functional teams to gather data requirements and deliver real-time insights to support business operations.
  • Implemented data transformation and cleaning processes using PySpark, SQL, and Shell Scripting to ensure data quality and consistency.
  • Conducted performance tuning for data queries and optimized ETL processes for faster data retrieval.
  • Reduced data processing time by 30% by optimizing ETL workflows and applying performance techniques such as Adaptive Query Execution (AQE), partition pruning, scanning only relevant data, OPTIMIZE, ZORDER, liquid clustering, and VACUUM.
  • Successfully delivered multiple data integration projects ahead of schedule, supporting critical business initiatives.
  • Streamlined data quality checks, resulting in a 25% reduction in data errors reported by business users.
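The Delta Lake maintenance commands named in the bullets above can be sketched as the SQL statements a scheduled Databricks job might run. This is a minimal illustration: the table name `sales_events`, the ZORDER column, and the retention period are assumptions, not details from the projects described here.

```python
def maintenance_statements(table: str, zorder_col: str, retain_hours: int = 168):
    """Build OPTIMIZE/ZORDER and VACUUM statements for a Delta table."""
    return [
        # Compact small files and co-locate rows by the given column
        f"OPTIMIZE {table} ZORDER BY ({zorder_col})",
        # Remove data files no longer referenced by the table
        # (168 hours matches Delta's default 7-day retention)
        f"VACUUM {table} RETAIN {retain_hours} HOURS",
    ]

# Adaptive Query Execution is a Spark session setting, not a SQL command:
AQE_CONF = {"spark.sql.adaptive.enabled": "true"}

stmts = maintenance_statements("sales_events", "event_date")
```

In a Databricks notebook these strings would be executed with `spark.sql(...)`, typically from a periodic maintenance job rather than the hot path of the pipeline.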

Sr Data Engineer

Optum Global Solutions India Pvt Limited
05.2017 - 06.2022
  • Delivered exceptional results under tight deadlines, consistently prioritizing tasks effectively to meet project timelines without compromising quality or accuracy.
  • Participated in strategic planning sessions with stakeholders to assess business needs related to data engineering initiatives.
  • Aligned business objectives with technical requirements through close collaboration with product owners providing insights based on available data sources.
  • Established standard procedures for version control, code review, deployment, and documentation to ensure consistency across the projects.
  • Followed Agile/Scrum methodology across projects, taking part in sprint planning, capacity planning, feature and user-story creation, and daily scrum calls; decreased time to value (TTV) by 20% in the current project compared to previous projects.
  • Achieved cost savings by streamlining workflows and automating repetitive tasks across various projects.
  • Reengineered existing ETL workflows to improve performance by identifying bottlenecks and optimizing code accordingly.


Data Engineer

Artech Infosystems Pvt Ltd
09.2016 - 05.2017


  • Ensured compliance with corporate standards by adhering to established guidelines for naming conventions, version control, code reviews, and testing protocols during PySpark development activities.
  • Reduced error rates in ETL workflows by implementing comprehensive error handling and logging mechanisms within PySpark jobs.
  • Conducted root cause analysis on recurring issues within the ETL environment, identifying problem areas and implementing robust solutions to prevent future occurrences.
  • Developed reusable components for common ETL tasks, reducing development time and promoting consistency across projects.

Software Consultant

Dynpro India Pvt Ltd
07.2015 - 09.2016
  • Facilitated knowledge sharing among team members through documentation of best practices and conducting training sessions on DataStage techniques.
  • Improved data quality by designing and implementing DataStage jobs for data cleansing, validation, and transformation.
  • Successfully managed multiple concurrent projects by prioritizing tasks effectively and collaborating closely with stakeholders to ensure timely delivery of high-quality solutions.
  • Streamlined the data extraction process by automating data load schedules using IBM Tivoli Workload Scheduler.
  • Enhanced ETL performance through tuning DataStage parallel jobs and optimizing database queries.

Education

M. Tech - Embedded Systems

JNTU University

B. Tech - Electronics and Communication Engineering (ECE)

JNTU University

Skills

PySpark, SQL, Python, Shell scripting, Azure Data Factory, Azure Databricks, ADLS Gen2, Airflow, IBM DataStage, Data warehousing, Teradata, Perl

Projects Summary

  • Project Field: Telecom
  • Project Name: Mpower Reporting
  • Skills Used: PySpark, ADLS Gen2, Azure ADF, Databricks
  • Client: Vodafone Idea Limited

Developed and implemented 21 key performance indicators (KPIs) to streamline business reporting and retailer billing, ensuring precise and timely data delivery. KPIs were presented in multiple time frames such as Month to Date, Last Month to Date, File to Date, and Last to Last Month to Date, leveraging CSV files for efficient reporting. Enhanced data processing performance through optimization techniques, including Adaptive Query Execution (AQE), partitioning, and predicate pushdown. Utilized performance tuning best practices by eliminating unnecessary actions, optimizing executor configurations, and adjusting driver settings. Delivered KPIs on a D-1 basis, facilitating timely, data-driven decision-making.
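The reporting windows named above (Month to Date, Last Month to Date) can be sketched as date-range calculations for a D-1 run. The exact window definitions used in the project are assumptions; this only illustrates the idea of deriving comparable current-month and prior-month windows from the run date.

```python
from datetime import date, timedelta

def reporting_windows(run_date: date):
    """Return (start, end) date pairs for MTD and LMTD, ending at D-1."""
    d1 = run_date - timedelta(days=1)            # KPIs are reported on D-1 data
    mtd = (d1.replace(day=1), d1)                # Month to Date
    last_month_end = d1.replace(day=1) - timedelta(days=1)
    lmtd_day = min(d1.day, last_month_end.day)   # clamp e.g. day 31 -> 28
    lmtd = (last_month_end.replace(day=1),
            last_month_end.replace(day=lmtd_day))  # Last Month to Date
    return {"MTD": mtd, "LMTD": lmtd}

w = reporting_windows(date(2024, 3, 15))
```

Clamping the prior-month end day keeps the two windows comparable when the previous month is shorter than the current one.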

  • Project Field: Telecom
  • Project Name: ASP-GSP Sunset
  • Skills Used: PySpark, ADLS Gen2, Azure ADF, Databricks
  • Client Name: Vodafone Idea Limited

Successfully migrated critical billing data reporting from on-premises systems to Azure Data Factory (ADF) to ensure accurate, error-free compliance reporting for ASP-GSP. Consolidated billing data from various streams and circles on a monthly and daily basis, with reports for VBS and NON-VBS pushed to the ASP portal for final reporting. Addressed key challenges around scheduling and triggering processes based on data availability from the EBPP system. Developed master scripts to automate the trigger process, ensuring data is only processed when available, and skipping unnecessary steps to accommodate varying billing cycles across different circles.
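The availability-gated trigger described above can be sketched as a check that processes a circle's billing data only when its input has landed, and skips it otherwise. Directory layout, marker-file naming, and circle codes are all illustrative assumptions, not the project's actual conventions.

```python
import os

def circles_to_process(landing_dir: str, circles: list) -> list:
    """Return the circles whose daily extract has arrived in the landing area."""
    ready = []
    for circle in circles:
        # A trigger/marker file signals that the EBPP extract is available
        marker = os.path.join(landing_dir, f"{circle}_billing.done")
        if os.path.exists(marker):
            ready.append(circle)
        # Circles without a marker are skipped, not failed, because
        # billing cycles differ across circles
    return ready
```

A master script built this way can run on a fixed schedule yet only trigger the downstream load for circles whose data actually exists, which matches the varying billing cycles described above.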

  • Project Field: Healthcare
  • Project Name: Billing Receivable Management system data ingestion and integration into UHC
  • Skills Used: PySpark, Data Warehousing, Perl, SQL, Python, Airflow
  • Client: United Health Care (UHC)

The project scope was to ingest four extracts from the upstream Billing Receivable Management system, integrate them into the enterprise data warehouse tables, and send six extracts to the downstream PSGL system and one extract to FDW after applying the required transformations. We developed PySpark jobs, shell scripts, and Airflow DAGs, and prepared SQL queries to unit-test the data. The project follows an ELT approach: all data is first staged into four staging tables, and the loads apply several checks to confirm complete delivery, including a duplicate-file check, a reject-data check, a tagging-values check, and a trigger-file-to-stage validation of counts and amounts.

For the PSGL requirement, we extracted data from the staging layer into a work table with the A side and B side kept separate, applied the required transformations (such as grouping by segment/business unit), prepared header, trailer, and line records, and sent the output to PSGL through the electronic communication gateway.

For the integration requirement, we loaded data into a source repository (which preserves history), generated surrogate keys to uniquely identify records, loaded the data into a common-format layer with an ETL load indicator marking each record as an insert or an update, loaded it into the base tables, and reconciled monetary amounts between the base tables and the source repository to confirm no data loss. Finally, we built the Subledger view, a flattened, user-facing view on top of the 3NF enterprise tables, and applied roles to the PSGL report views so users can access them read-only.
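The staging-load checks named above (duplicate-file check plus trigger-file count and amount validation) can be sketched as a small validation routine. The function name, trigger-file shape, and tolerance are assumptions for illustration, not the project's actual implementation.

```python
def validate_stage_load(file_name: str, loaded_files: set,
                        trigger_counts: tuple,
                        staged_rows: int, staged_amount: float) -> list:
    """Return the list of failed checks for one staged extract."""
    failures = []
    if file_name in loaded_files:                    # duplicate-file check
        failures.append("duplicate file")
    expected_rows, expected_amount = trigger_counts  # from the trigger file
    if staged_rows != expected_rows:                 # count validation
        failures.append("row count mismatch")
    if abs(staged_amount - expected_amount) > 0.01:  # amount validation
        failures.append("amount mismatch")
    return failures
```

Running such checks before the staging commit is what lets downstream layers (PSGL and the warehouse integration) assume the staged data is complete and non-duplicated.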

  • Project Field: Healthcare Insurance
  • Project Name: Cirrus Data extraction from Lake Server and Integrating into UDW Data warehouse
  • Skills Used: PySpark, Data Warehousing, Shell Script, Python, Airflow
  • Client: United Health Care (UHC)

The project scope was to eliminate DataStage processing from the data acquisition flow, reduce the time for data to become available in the staging area, and lower the total cost of ownership (TCO). For this purpose, we developed the UDW Lake DA framework: a Spark framework that picks up files from the lake environment and, after basic control checks, loads them into the Teradata database. The Spark jobs are config-driven, so ingesting a new file format only requires preparing a new configuration and reusing the existing common Spark job; this reduced the time needed to onboard new files while improving quality.
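The config-driven pattern described above can be sketched as one common job that looks up a per-feed configuration instead of requiring bespoke code for each file format. The config fields, feed name, and lake path below are illustrative assumptions.

```python
# Per-feed configuration: onboarding a new file means adding an entry here,
# not writing a new job.
FEED_CONFIGS = {
    "claims_daily": {
        "path": "/lake/claims/daily/*.csv",   # illustrative lake path
        "delimiter": "|",
        "target_table": "stg.claims_daily",
        "min_rows": 1,                        # basic control check
    },
}

def load_feed(feed_name: str, read_rows):
    """Common job: resolve the feed's config, read, check, and report the load."""
    cfg = FEED_CONFIGS[feed_name]
    rows = read_rows(cfg["path"], cfg["delimiter"])
    if len(rows) < cfg["min_rows"]:           # control check before loading
        raise ValueError(f"{feed_name}: too few rows")
    return {"table": cfg["target_table"], "row_count": len(rows)}

# `read_rows` is injected so the same job works for any reader (Spark, test stub)
result = load_feed("claims_daily", lambda path, delim: [["a"], ["b"]])
```

In the real framework the reader would be a Spark read and the final step a Teradata load; the point of the sketch is that only the config varies per feed.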

Roles And Responsibilities

  • Attend feature grooming sessions to understand the file/table layout, volume, frequency, number of files, and timelines for each stage of the project: mapping (including business requirements), development, QA testing, UAT testing, and production deployment.
  • Understand business requirements, communicate them to the team, and discuss the design architecture and flow accordingly.
  • Collaborate with business system analysts on mapping-document preparation and with developers on code design.
  • Develop PySpark scripts and ETL pipelines using Azure ADF and Databricks.
  • Create security requests for setting up FTP or SFTP with new downstream servers and for applying the required read/write roles to user-exposed tables.
  • Help team members with technical issues, roadblocks, or concerns.
  • Attend the daily scrum, as well as the scrum-of-scrums and program management calls every Wednesday and Thursday.
  • Track project deliverables and progress, and continually seek customer feedback.
  • Call out risks or blockers in the program management and scrum-of-scrums meetings.
  • Play an active role in capacity and forecast planning for the project.
  • Act as the client-facing contact for technical issue resolution, requirement clarification, and confirmation.
  • Motivate team members by clearly defining expectations and actively seeking their feedback.
  • Work with the DBA on storage capacity planning, new database object creation, DDL modification, and history reprocessing.
  • Review the deployment plan with the team, suggest best practices, and forward the deployment plan document to the scrum master for change-ticket creation and production deployment.

Certification

Azure Fundamentals (AZ-900)

Work Availability

Monday through Sunday: morning, afternoon, and evening

Accomplishments

Received various recognition and awards for the quality of projects delivered.

Work Preference

Work Type

Full Time

Work Location

On-Site, Hybrid

Important To Me

Career advancement, Work-life balance, Company culture, Healthcare benefits
