Hi, I'm
Jay Patel
Sujatha Nagar

Summary

  • Overall 5 years of experience designing, implementing, and optimizing data solutions using Azure Data Factory, Data Lake Storage, and Synapse Analytics to support complex data processing and analytics requirements
  • Skilled in leveraging Azure Data Lake to architect and deploy scalable data storage solutions, ensuring durability and accessibility for analytics and reporting
  • Proficient in Azure Data Factory, orchestrating end-to-end data pipelines for seamless data integration and transformation across cloud environments
  • Seasoned in Azure Databricks, employing PySpark and Python to develop advanced analytics solutions and optimize data processing workflows
  • Expertise in designing and managing Azure Data Warehouse and Azure SQL databases, ensuring optimal performance and data integrity
  • Specialized in developing ETL pipelines using Azure Data Pipeline, facilitating efficient data movement and transformation
  • Accomplished in implementing data governance and compliance measures using Azure Cosmos DB and Azure Purview, ensuring data quality, integrity, and security in alignment with regulatory standards
  • Knowledgeable in Snowflake and Azure Synapse Analytics, designing and optimizing data warehousing solutions for advanced analytics
  • Technically proficient in utilizing Azure Blob Storage and Azure Key Vault for secure data storage and management
  • Worked in an Agile environment with good insight into Agile methodologies and Lean working techniques; participated in Agile ceremonies and Scrum meetings
  • Team player with excellent interpersonal skills; self-motivated, dedicated, familiar with the demands of 24/7 system maintenance, and experienced in customer support

Overview

  • 6 years of professional experience
  • 4 years of post-secondary education
  • 3 certifications

Work History

LTI Mindtree

Azure Data Engineer
01.2022 - Current

Job overview

  • Developed and maintained robust data pipelines using Azure Data Factory, ensuring seamless extraction, transformation, and loading (ETL) of data from multiple sources into data warehouses and lakes
  • Utilized Azure SQL for database management and optimization, including schema design, query optimization, and performance tuning, ensuring efficient data storage and retrieval
  • Implemented data transformation and manipulation tasks using Python and PySpark within Azure Databricks, enabling advanced analytics and machine learning capabilities on big data sets (a minimal illustrative sketch follows this list)
  • Implemented data governance and metadata management using Azure Purview, ensuring data quality, compliance, and lineage tracking across the data lifecycle
  • Leveraged Azure Key Vault for secure storage and management of cryptographic keys, secrets, and certificates, ensuring data security and compliance with regulatory requirements
  • Automated business processes and workflows using Azure Logic Apps, enabling seamless integration and orchestration of data and services across cloud environments
  • Led full lifecycle development activities on the Azure platform, from initial design to deployment, ensuring seamless go-live transitions
  • Involved in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure technologies
  • Created and maintained highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory and Databricks
  • Developed and optimized data pipelines leveraging Azure Blob Storage, Azure Cosmos DB, and Snowflake, enabling efficient data storage, retrieval, and processing at scale
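
The Databricks work described above centers on PySpark-based transformation of data landed in Azure Data Lake Storage. Below is a minimal sketch of that kind of job, assuming hypothetical storage account, container, path, and column names (examplelake, raw/curated, order_ts, customer_id, amount); it is illustrative only, not the actual project code.

```python
# Minimal PySpark transformation sketch (Azure Databricks style).
# All paths, account names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-etl-sketch").getOrCreate()

# Read raw CSV files landed in Azure Data Lake Storage (path is illustrative).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/2024/")
)

# Basic cleansing and derivation: drop incomplete rows, normalize types,
# and aggregate revenue per customer per day.
curated = (
    raw.dropna(subset=["customer_id", "amount"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Write the curated output back to the lake as Parquet, partitioned by date,
# where downstream Synapse / Power BI workloads can pick it up.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales_daily/")
)
```

In a setup like this, the curated Parquet output would typically be registered as a table or external source so that Synapse Analytics and Power BI can query it directly.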

Genpact

Data Engineer
09.2019 - 12.2021

Job overview

  • Designed and developed components and features for the ingestion framework, based on functional and non-functional requirements in Python/PySpark within the Azure platform
  • Worked on full end-to-end development activities on the Azure platform, from initial development through go-live
  • Developed full pipeline ETL applications to move data within a corporate data management platform, with both MS Azure and MS SQL Server tools
  • Participated with corporate data user teams, developing data validation and test plans, performing user acceptance testing, and providing feedback to development and sustainment teams
  • Involved in core company data pipeline, responsible for scaling up data processing flow to meet the rapid data growth
  • Developed data processing pipelines in the Azure environment (receiving content, transforming data, and sending to a target)
  • Responsible for data collection and ingestion from external sources, and optimizing data flow between different systems and environments
  • Designed and built reusable data extraction, transformation, and loading processes to handle data from various sources (see the sketch after this list)
  • Responsible for managing a growing cloud-based data ecosystem and reliability of our corporate data lake and analytics data mart
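
The reusable extract/transform/load processes mentioned above can be pictured as a small parameterized wrapper around Spark's JDBC reader. The sketch below is an illustration under assumed names: the JDBC URL, dbo.customers table, and abfss:// target path are placeholders, the SQL Server JDBC driver is assumed to be on the cluster, and in practice credentials would come from an Azure Key Vault-backed secret scope rather than literals.

```python
# Reusable ETL helper sketch: extract from Azure SQL over JDBC, apply a
# caller-supplied transform, and load Parquet into the data lake.
# All connection details and names below are hypothetical placeholders.
from typing import Callable
from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.appName("reusable-etl-sketch").getOrCreate()

# Placeholders; real values would be resolved from a Key Vault-backed secret scope.
JDBC_URL = "jdbc:sqlserver://example-srv.database.windows.net:1433;database=corp"
JDBC_USER = "<sql-user>"
JDBC_PASSWORD = "<sql-password>"


def run_etl(source_query: str,
            transform: Callable[[DataFrame], DataFrame],
            target_path: str) -> None:
    """Extract with a SQL query, transform the DataFrame, load Parquet to the lake."""
    extracted = (
        spark.read.format("jdbc")
        .option("url", JDBC_URL)
        .option("query", source_query)
        .option("user", JDBC_USER)
        .option("password", JDBC_PASSWORD)
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .load()
    )
    transform(extracted).write.mode("overwrite").parquet(target_path)


# Example usage: deduplicate customer records and land them in a curated zone.
run_etl(
    source_query="SELECT customer_id, email, updated_at FROM dbo.customers",
    transform=lambda df: df.dropDuplicates(["customer_id"]),
    target_path="abfss://curated@examplelake.dfs.core.windows.net/customers/",
)
```

Parameterizing the query, transform, and target path is what makes the process reusable across sources: each new feed only supplies those three pieces instead of a new pipeline.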

Education

JNTU

Bachelor's in Computer Science and Engineering (CSE)
06.2015 - 05.2019


Skills

Azure Blob Storage

Certification

DP-900: Microsoft Certified: Azure Data Fundamentals

Technical Skills

Azure Blob Storage, Azure Data Lake, Cosmos DB, Azure Synapse, Azure Data Catalog, Snowflake, Azure SQL, Azure HDInsight, Azure Databricks, Azure Stream Analytics, Azure Log Analytics, Azure Data Pipelines, Azure Data Factory, ETL, Azure Monitor, Azure Key Vault, Python, PySpark, SQL, Power BI

Timeline

Azure Data Engineer
LTI Mindtree
01.2022 - Current
Data Engineer
Genpact
09.2019 - 12.2021
JNTU
Bachelor's in Computer Science and Engineering (CSE)
06.2015 - 05.2019