Shravan Kumar

Azure Data Engineer
Hyderabad

Summary

  • 8+ years of IT experience, including 4+ years in Azure Data Engineering, 2+ years in SQL Development, and 1 year in Python Development.
  • Expertise in Azure (ADF, Databricks, ADLS, Synapse, Delta Lake) and AWS (S3, Glue, Lambda, Redshift) data services. Strong in ETL/ELT pipeline design, Medallion Architecture (Bronze–Silver–Gold), incremental loads, and SCD Type 2 implementations.
  • Skilled in Python, SQL, and PySpark for transformations, reconciliation, and workflow automation. Delivered large-scale projects across Healthcare, Banking, Finance, and Retail/E-Commerce domains for global clients such as UnitedHealth Group, Wells Fargo, Deluxe US, and Amazon. Experienced in CI/CD (Azure DevOps, Git), Agile methodologies, and data governance (RBAC, Key Vault, encryption, masking).
  • Proven record of leading projects end to end, aligning technical solutions with business objectives, and contributing to organizational growth.

Overview

10 years of professional experience

Work History

Azure Data Engineer (Client: UHG – Healthcare Domain)

Mphasis
11.2024 - Current
  • Project: Healthcare Data Integration & Analytics Platform
  • Overview: Built a HIPAA-compliant Azure platform for patient admissions, claims, and treatment data integration, enabling predictive insights and secure reporting.
  • Key Contributions:
  • Designed ADF pipelines for ingesting HL7/JSON/CSV data from hospital systems into ADLS Gen2.
  • Developed PySpark transformations for cleansing, deduplication, and enrichment.
  • Implemented Medallion Architecture (Bronze–Silver–Gold) for structured datasets.
  • Built fact_patient, dim_provider, and claims models in Synapse Analytics.
  • Configured incremental loads and Delta time travel for historical tracking.
  • Enforced data governance using Unity Catalog, RBAC, and Key Vault.
  • Provided curated datasets for BI and ML teams for patient risk analysis.
  • Created Power BI dashboards for diagnosis and hospital KPI reporting.
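The incremental-load and SCD Type 2 pattern noted above was implemented as Delta Lake merges in Databricks; the core idea can be sketched in plain Python for illustration. This is a minimal, hypothetical sketch (field names such as `provider_id` are invented), not project code:

```python
from datetime import date

def apply_scd2(dim_rows, incoming, key="provider_id", today=None):
    """Slowly Changing Dimension Type 2: close out changed current rows
    and append new current versions, preserving history."""
    today = today or date.today().isoformat()
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)  # shallow copy; closed-out rows are updated in place
    for rec in incoming:
        existing = current.get(rec[key])
        changed = existing is not None and any(
            existing.get(c) != rec.get(c) for c in rec if c != key
        )
        if existing is None or changed:
            if changed:
                existing["valid_to"] = today   # close out the old version
                existing["is_current"] = False
            out.append({**rec, "valid_from": today, "valid_to": None,
                        "is_current": True})
    return out
```

In the production pipeline the same upsert-with-history logic ran as a Delta Lake MERGE, which also enabled the time-travel queries used for historical tracking.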

Azure Data Engineer (Client: Deluxe US – Retail Domain)

Randstad
12.2023 - 11.2024
  • Project: Customer Behaviour & Recommendation Analytics
  • Overview: Built a retail analytics platform for personalized recommendations and churn prediction, processing clickstream and transactional datasets for Deluxe US.
  • Key Contributions:
  • Designed ADF pipelines for ingesting clickstream and transaction data.
  • Developed PySpark jobs for churn analysis and customer segmentation.
  • Implemented Delta Lake within Medallion Architecture for incremental updates.
  • Created curated datasets for machine learning models.
  • Applied partitioning, bucketing, and caching for faster processing.
  • Built Tableau dashboards for customer KPIs and churn trends.
  • Automated pipeline alerts using ADF triggers and Azure Monitor.
  • Secured pipelines using Key Vault & RBAC.

Azure Data Engineer (Client: Wells Fargo – Banking Domain)

Randstad
07.2021 - 11.2023
  • Project: Loan & Transaction Analytics Platform
  • Overview: Built scalable Azure pipelines for loan portfolio analysis and fraud detection, enabling real-time risk monitoring and compliance.
  • Key Contributions:
  • Developed ADF pipelines for ingesting loan, transaction, and account data.
  • Created PySpark notebooks for customer risk profiling and fraud detection.
  • Built fact_loan and dim_customer models in Synapse.
  • Enabled incremental data loads with Delta Lake.
  • Developed Power BI dashboards for loan delinquency and fraud KPIs.
  • Automated monitoring using ADF triggers and Azure Monitor alerts.
  • Applied data encryption and masking for PCI compliance.
  • Collaborated with risk teams for curated compliance datasets.
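Column-level masking of the kind used for PCI compliance can be illustrated with a small Python helper (display masking of a card number to its last four digits). This is a hypothetical sketch of the technique, not the project's implementation:

```python
import re

def mask_pan(pan: str) -> str:
    """Return the card number with all but the last four digits masked."""
    digits = re.sub(r"\D", "", pan)  # strip spaces, dashes, etc.
    return "*" * (len(digits) - 4) + digits[-4:]
```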

AWS Data Engineer (Retail/E-Commerce Domain)

Amazon UK
06.2020 - 11.2020
  • Project: Product Catalogue Optimization
  • Overview: Built AWS data pipelines for product listings, pricing, and inventory feeds using Glue, Lambda, and S3, ensuring catalogue accuracy and real-time visibility.
  • Key Contributions:
  • Designed AWS Glue jobs for ingesting and transforming product/pricing data.
  • Used Lambda functions to trigger real-time data updates.
  • Implemented S3-based staging and partitioning for optimized storage.
  • Integrated Redshift for analytical querying and reporting.
  • Built Athena queries for validation and ad-hoc analysis.
  • Developed pricing automation scripts to reduce latency by 20%.
  • Configured CloudWatch alerts for job failures and metrics monitoring.
  • Implemented IAM policies for secure data access.
  • Delivered accurate catalogue datasets to supply chain and marketing teams.

Python Developer (Retail/E-Commerce Domain)

Amazon India
04.2019 - 04.2020
  • Project: Payment Transactions & Reconciliation Analytics
  • Overview: Built Python-based ETL workflows for payments, refunds, and settlements, supporting fraud detection and financial reconciliation.
  • Key Contributions:
  • Developed Python ETL scripts for ingesting payments from APIs, Kafka, and flat files.
  • Designed SQL procedures for reconciling payments and settlements.
  • Built fraud detection logic for failed transactions and chargebacks.
  • Automated daily ETL workflows with alerts and error handling.
  • Provided clean datasets to Power BI dashboards for fraud insights.
  • Implemented data quality rules to ensure completeness and accuracy.
  • Partnered with finance teams for reconciliation and reporting.
  • Reduced ETL runtime by 30% through parallel processing optimization.
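The parallel-processing optimization mentioned above can be sketched with Python's standard `concurrent.futures`; `extract` here is a hypothetical placeholder for the real API, Kafka, and flat-file readers:

```python
from concurrent.futures import ThreadPoolExecutor

def extract(source):
    # Hypothetical stand-in for the real extract step (API pull,
    # Kafka consumer, or flat-file read); here it just echoes the source.
    return [f"{source}-rec{i}" for i in range(3)]

def run_parallel(sources, workers=4):
    """Fan independent extracts out across a thread pool and flatten the
    results; I/O-bound steps benefit most from this pattern."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batches = pool.map(extract, sources)
        return [rec for batch in batches for rec in batch]
```

Running otherwise-sequential, I/O-bound extracts concurrently is what produced the runtime reduction cited above.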

SQL Developer (Retail/E-Commerce Domain)

Amazon India
01.2016 - 11.2019
  • Project: Sales & Transaction Data Warehouse
  • Overview: Developed SQL-based ETL and reporting solutions for sales, tax, and transaction analytics supporting retail decision-making.
  • Key Contributions:
  • Wrote complex SQL queries, stored procedures, and triggers for transaction processing.
  • Designed fact/dimension tables for sales and tax reporting.
  • Implemented incremental loads to optimize data refresh cycles.
  • Built materialized views for downstream analytics.
  • Conducted data reconciliation between OLTP and warehouse layers.
  • Optimized SQL jobs for faster query performance.
  • Supported Power BI dashboards for finance and sales analytics.
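The OLTP-to-warehouse reconciliation described above ran as SQL procedures; the core check can be sketched in plain Python (the `order_id` and `amount` field names are hypothetical):

```python
from collections import Counter

def reconcile(source_rows, warehouse_rows, key="order_id", amount="amount"):
    """Compare per-key amount totals between OLTP and warehouse rows and
    return the keys whose totals disagree (a missing key counts as 0)."""
    src, tgt = Counter(), Counter()
    for r in source_rows:
        src[r[key]] += r[amount]
    for r in warehouse_rows:
        tgt[r[key]] += r[amount]
    return {k: (src[k], tgt[k])
            for k in src.keys() | tgt.keys() if src[k] != tgt[k]}
```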

Education

Bachelor of Technology (B.Tech) - Computer Science

TRR College of Engineering (JNTU-H)
01.2015

Skills

  • Programming: Python, SQL, PySpark
  • Azure Services: Data Factory, Databricks, ADLS Gen2, Synapse, Delta Lake, Key Vault, Monitor
  • AWS Services: S3, Glue, Lambda, Redshift, CloudWatch, IAM
  • Big Data & Frameworks: Apache Spark, Kafka
  • Databases: Azure SQL, MySQL, PostgreSQL, Oracle
  • Visualization: Power BI, Tableau
  • DevOps & Tools: Git, Azure DevOps, Jira
  • Security: RBAC, Key Vault, Data Masking
