Jagadeeswara Pulikanti

Bengaluru

Summary

Experienced Data Engineer with over 7 years of hands-on expertise in data engineering and a total of 13.5 years in the IT industry. Proven success in designing and implementing scalable data pipelines, developing cloud-native data warehousing solutions, and optimizing big data workflows. Skilled in Databricks, Snowflake, dbt, Azure Data Factory (ADF), AWS Glue, and PySpark. Adept at building robust ETL processes that enable data-driven decision-making and drive business outcomes.

Overview

14 years of professional experience
1 Certification

Work History

Senior Lead Data Engineer

Brillio Ltd
02.2021 - 09.2024
  • Designed and implemented a data pipeline framework for Nokia’s Software License Management project using Azure Data Factory (ADF), Databricks, and SharePoint in the Azure cloud.
  • Configured secure authentication and authorization between ADF, ADLS, SharePoint, Azure Key Vault, and Databricks services.
  • Developed ADF pipelines to ingest data from SharePoint and Oracle into Azure Data Lake Storage (ADLS), followed by PySpark transformations.
  • Built PySpark notebooks in Databricks to process raw data following the Medallion Architecture, transforming and loading it into Unity Catalog fact and dimension tables with robust data quality checks (a minimal sketch of this pattern follows this list).
  • Maintained audit and metadata tables to ensure data quality, traceability, and fault tolerance across data pipelines.
  • Collaborated with cross-functional teams to ensure platform readiness, validate the Power BI semantic layer and reports, and support the transition to production operations.
  • Participated in user acceptance testing (UAT) with business stakeholders to ensure that solution outputs aligned with business requirements.
  • Optimized PySpark notebooks in Azure Synapse for Nokia’s DAAS product, reducing execution costs by approximately 60%.
  • Implemented CI/CD pipelines using Azure DevOps to deploy and monitor ADF and Databricks jobs.
  • Led the SQL Server EDW to Snowflake migration using Fivetran, ADF, and dbt, ensuring minimal disruption and high data fidelity.
  • Developed ADF pipelines to extract and ingest large volumes of historical EDW data into Snowflake.
  • Analyzed legacy SSIS packages and PL/SQL procedures, translating business logic into maintainable dbt models.
  • Set up and configured dbt in Snowflake, migrating Talend jobs into reusable, testable dbt models.
  • Created Databricks workflows integrating dbt jobs with Slack and email alerts for proactive job monitoring and orchestration.
  • Participated in change management processes to release development changes into the production environment.
  • Created detailed data mappings, technical design documents, and support documentation to ensure alignment with business and technical stakeholders across all project phases.
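
For illustration, a minimal PySpark sketch of the bronze-to-silver Medallion step referenced above; the catalog, table, and column names are hypothetical placeholders, not the actual project schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw (bronze) data previously ingested by ADF into the lake.
bronze = spark.read.table("catalog.bronze.licenses_raw")

# Cleanse and standardize: trim keys, cast dates, drop exact duplicates.
silver = (
    bronze
    .withColumn("license_id", F.trim(F.col("license_id")))
    .withColumn("valid_from", F.to_date("valid_from", "yyyy-MM-dd"))
    .dropDuplicates(["license_id"])
)

# Simple data quality gate: fail the batch if any business key is null.
null_keys = silver.filter(F.col("license_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"DQ check failed: {null_keys} rows with null license_id")

# Persist to a Unity Catalog silver table for downstream fact/dim builds.
silver.write.mode("overwrite").saveAsTable("catalog.silver.licenses")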

Tech Lead

Tech Mahindra
08.2019 - 12.2020
  • Led the development of AWS Glue ETL jobs using PySpark to process data in the AWS Data Lake for the Ingram client project.
  • Created Python scripts for data ingestion in EC2, and automated workflows with Lambda functions.
  • Performed data modeling in Redshift and deployed code with AWS Lake Formation.
  • Developed a reusable Glue ETL framework to handle multiple data types from different sources, cutting development time by up to 50% (a minimal job skeleton in this style follows this list).
  • Orchestrated end-to-end, event-driven data pipelines integrating Lambda functions, Glue jobs, and EC2 scripts.
  • Created knowledge-base (KB) and support documents to transition the solution to the support team.
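
A minimal AWS Glue job skeleton in the style of this framework; the database, table, and S3 path are hypothetical placeholders, not the actual Ingram configuration.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="datalake_db", table_name="orders_raw"
)

# Apply a simple transformation via the underlying Spark DataFrame.
df = source.toDF().dropDuplicates(["order_id"])

# Write the curated result back to the data lake as Parquet.
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()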

Data Engineer

Mindtree Ltd
01.2017 - 08.2019
  • Developed MapReduce and Spark jobs for data processing in the Equifax project.
  • Scheduled MapReduce jobs with the Control-M tool.
  • Created Spark jobs for data transformation and loading into the Cassandra database (sketched after this list).
  • Troubleshot failed jobs as part of operational maintenance.
  • Set up log and job monitoring in Datadog.
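
A minimal Spark-to-Cassandra sketch of the load step referenced above, using the spark-cassandra-connector; host, keyspace, table, and column names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("cassandra-load")
    # Assumes the spark-cassandra-connector package is on the classpath.
    .config("spark.cassandra.connection.host", "cassandra-host")
    .getOrCreate()
)

# Transform: aggregate raw events per account before loading.
events = spark.read.parquet("hdfs:///data/events/")
summary = events.groupBy("account_id").agg(F.count("*").alias("event_count"))

# Load the aggregate into a Cassandra table.
(summary.write
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="analytics", table="account_summary")
    .mode("append")
    .save())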

IT Analyst

Tata Consultancy Services
03.2011 - 01.2017
  • Developed web applications for the PwC client using the Struts framework and Hibernate for database mapping.
  • Monitored and maintained Java web applications in production environments, responding to critical incidents (P1/P2), and providing L2/L3 technical support.
  • Diagnosed and resolved high-priority issues (P1–P3) in Java-based web platforms, collaborating with development and infrastructure teams to restore service and implement permanent fixes.
  • Set up secure Hadoop clusters and performance-tuned jobs on MapR and Hortonworks distributions, and supported Vertica, for the HP (Hewlett-Packard) Inc. client.
  • Resolved performance issues and node recovery challenges in Vertica clusters.

Education

Master of Computer Applications

Jawaharlal Nehru Technological University
Andhra Pradesh, India
01.2010

Bachelor of Computer Science

Government Degree College
Andhra Pradesh, India
01.2007

Skills

  • Big Data & Cloud: AWS Glue, Azure Data Factory, AWS Lambda, and Azure Synapse
  • Programming and scripting: Python, SQL, PySpark, and shell scripting
  • Data warehousing and ETL: Databricks, dbt, Snowflake, Fivetran, and Control-M
  • DevOps and CI/CD: Azure DevOps and Jenkins
  • Versioning and repository: Git, Bitbucket, GitHub, and Azure DevOps

Certification

  • Databricks Certified Data Engineer Professional
  • AWS Certified Solutions Architect Associate
  • MapR Certified Hadoop Developer (MCHD)
  • OCJP Java Certification
