JEEVAN KUMAR PERIKALA

Data Engineer
Hyderabad

Summary

Dynamic and results-oriented Data Engineer with over 7 years of experience designing and building robust data solutions across large-scale distributed systems. Expert in developing high-performance ETL/ELT pipelines, optimizing data architectures, and delivering actionable insights using technologies such as Apache Spark, Python, Hadoop, Hive, and SQL. Proven ability to manage end-to-end data workflows, from ingestion to analysis, leveraging tools like Databricks and Control-M. Skilled in object-oriented programming and adept at analyzing complex datasets to drive informed business decisions. Known for precision in data integration, strong troubleshooting instincts, and effective cross-functional collaboration with stakeholders across engineering, analytics, and operations. Passionate about building scalable, resilient data infrastructure that powers impactful decision-making.

Results-focused data professional equipped for impactful contributions. Expertise in designing, building, and optimizing complex data pipelines and ETL processes. Strong in SQL, Python, and cloud platforms, ensuring seamless data integration and robust data solutions. Known for excelling in collaborative environments, adapting swiftly to evolving needs, and driving team success.

Overview

7 years of professional experience
1 Certification

Work History

Data Engineer

Suncorp Insurance
Hyderabad
06.2025 - Current
  • Developed and maintained scalable data pipelines using Databricks and dbt, enabling efficient transformation and modeling of insurance claim data.
  • Integrated OracleDB with Databricks to support real-time data ingestion and analytics across policyholder records and transaction logs.
  • Automated deployment workflows using Bitbucket Pipelines, ensuring seamless CI/CD for dbt models and reducing manual intervention by 40%.
  • Collaborated with cross-functional teams to optimize data models for underwriting, fraud detection, and customer segmentation.
  • Implemented robust data validation and testing strategies within dbt to ensure high data quality and consistency across environments.
  • Designed modular dbt workflows to support incremental loads and historical tracking of insurance KPIs.
  • Contributed to version control and code reviews via Bitbucket, improving team collaboration and code reliability.
  • Optimized data processing by implementing efficient ETL pipelines and streamlining database design.
  • Collaborated on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.
  • Enhanced data quality by performing thorough cleaning, validation, and transformation tasks.
  • Automated routine tasks using Python scripts, increasing team productivity and reducing manual errors.
  • Migrated legacy systems to modern big-data technologies, improving performance and scalability while minimizing business disruption.
  • Fine-tuned query performance and optimized database structures for faster, more accurate data retrieval and reporting.
  • Led end-to-end implementation of multiple high-impact projects, from requirements gathering through deployment and post-launch support.
  • Reviewed project requests describing database user needs to estimate time and cost required to accomplish projects.
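The dbt validation and incremental-load patterns described above can be sketched in plain Python (dbt expresses these as SQL models and schema tests; the function names, fields, and rules below are illustrative only, not the production models):

```python
def validate_claims(rows):
    """Split claim records into valid and rejected sets, mirroring
    dbt-style not-null and accepted-range schema tests."""
    valid, rejected = [], []
    for row in rows:
        if row.get("claim_id") is None:
            rejected.append((row, "claim_id is null"))
        elif row.get("amount", 0) < 0:
            rejected.append((row, "negative amount"))
        else:
            valid.append(row)
    return valid, rejected

def incremental_merge(target, new_rows, key="claim_id"):
    """Upsert new rows into the target by key, keeping the latest
    record per key: the core of an incremental-load model."""
    merged = {r[key]: r for r in target}
    for r in new_rows:
        merged[r[key]] = r
    return list(merged.values())
```

In a real dbt project the merge would be an incremental model with a `unique_key`, and the checks would live in `schema.yml` tests rather than application code.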

Data Engineer

Grünenthal
Hyderabad
02.2022 - 06.2025
  • Delivered scalable, high-performance data infrastructure and led complex BI modernization initiatives across enterprise environments.
  • Directed end-to-end migrations from DOMO and Qlik to Azure Databricks, translating deeply layered business logic into modular Spark pipelines and reusable Delta Lake frameworks to support real-time and batch analytics at scale.
  • Developed and automated high-throughput ETL/ELT pipelines across datasets exceeding 12 TB, using Apache Spark, Hadoop (HDFS, Hive), Sqoop, Azure DevOps, and Terraform.
  • Performed data ingestion, modeling, and transformation with SQL, HiveQL, and Scala, orchestrating workflows in Control-M for streamlined, reliable data movement across ecosystems.
  • Delivered measurable impact through engineering initiatives that improved ETL throughput by 28% through Spark tuning, reduced manual configuration time by 50% using infrastructure automation, and accelerated release velocity by 35% via robust CI/CD integration.
  • Collaborated with cross-functional teams to align engineering deliverables with stakeholder objectives, contributing to agile data product delivery and continuous improvement of analytics platforms.
  • Spearheaded implementation of data quality frameworks leveraging automated validation, anomaly detection, and dashboarding solutions, ensuring reliable business insights and compliance with enterprise data governance standards.
  • Devised and introduced CI/CD monitoring dashboards by integrating Azure DevOps pipelines and custom telemetry, equipping stakeholders with real-time visibility into deployment status, error trends, and system reliability metrics.
  • Engineered automated data reconciliation processes by integrating custom Spark jobs with enterprise workflow schedulers, substantially boosting pipeline accuracy and regulatory compliance across multiple data domains.
  • Consolidated disparate data integration processes by centralizing configuration management and automating environment provisioning, markedly enhancing consistency, deployment speed, and maintainability across large-scale data engineering projects.
  • Refined release management practices by aligning environment promotion protocols with automated compliance checks, minimizing deployment errors and supporting uninterrupted delivery of robust data solutions to global clients.
  • Collaborated with cross-functional teams to drive successful completion of complex projects within deadlines.
  • Trained and supported new team members, maintaining culture of collaboration.
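The reconciliation approach mentioned above (comparing a source extract against its target after pipeline runs) can be sketched with an order-independent fingerprint; in production this ran as Spark jobs, so the pure-Python helpers and column names here are illustrative assumptions:

```python
import hashlib

def table_fingerprint(rows, columns):
    """Order-independent fingerprint of a table extract: the row count
    plus an XOR of per-row SHA-256 hashes over selected columns.
    (XOR makes row order irrelevant, but paired duplicate rows cancel,
    so the row count is checked alongside the digest.)"""
    digest = 0
    for row in rows:
        payload = "|".join(str(row[c]) for c in columns)
        digest ^= int(hashlib.sha256(payload.encode()).hexdigest(), 16)
    return len(rows), digest

def reconcile(source_rows, target_rows, columns):
    """True when source and target match on count and fingerprint."""
    return (table_fingerprint(source_rows, columns)
            == table_fingerprint(target_rows, columns))
```

A scheduler (Control-M in this case) would run the check after each load and raise an alert on mismatch.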

Senior Software Engineer

Calvin Technologies Pvt Ltd
Hyderabad
07.2018 - 02.2022
  • Spent over 3 years building data solutions in Talend Big Data and Azure Databricks environments.
  • Built and maintained ETL/ELT pipelines, performing complex data transformations using Python, SQL, and PySpark.
  • Migrated data workflows from Talend to Azure Databricks, improving performance, scalability, and maintainability.
  • Hands-on experience with Apache Spark, Hive, HDFS, Sqoop, and Control-M for workflow orchestration.
  • Integrated DevOps practices using Azure DevOps and automated deployments with Terraform.
  • Translated business logic into modular, production-grade Spark pipelines using Delta Lake and reusable code structures.
  • Collaborated with cross-functional teams to deliver clean, reliable data solutions for analytics and reporting.
  • Mentored junior developers, fostering professional growth and enhancing team productivity.
  • Developed scalable applications using agile methodologies for timely project delivery.
  • Regularly reviewed peers' code contributions, offering constructive feedback to enhance overall product quality.
  • Streamlined development workflows to increase team efficiency and reduce time spent on repetitive tasks.
  • Collaborated with management, internal and development partners regarding software application design status and project progress.
  • Developed robust, scalable, modular and API-centric infrastructures.
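The "modular, reusable code structures" mentioned above can be illustrated with a small composition pattern, where each transformation is an independent step chained into a pipeline; the step names and sample fields are hypothetical, not the production Spark code:

```python
def compose(*steps):
    """Chain independent transformation steps into one pipeline callable."""
    def pipeline(rows):
        for step in steps:
            rows = step(rows)
        return rows
    return pipeline

def drop_nulls(key):
    """Step: discard rows whose key column is null."""
    return lambda rows: [r for r in rows if r.get(key) is not None]

def rename(old, new):
    """Step: rename one column while preserving the others."""
    def step(rows):
        return [{**{k: v for k, v in r.items() if k != old}, new: r[old]}
                for r in rows]
    return step
```

The same shape carries over to PySpark, where each step is a function from DataFrame to DataFrame applied via `DataFrame.transform`.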

Education

B.Tech - Mechanical

Vignan's Lara (affiliated with JNTU)
Guntur
03-2017

Skills

Python

SQL

Apache Spark

AWS

Talend Big Data

Airflow

MySQL

HDFS

Informatica

Jenkins

PySpark

Databricks

Azure Data Factory (ADF)

Azure Data Lake Storage (ADLS)

Certification

Successfully completed training on converting Jatropha oil into biodiesel using the transesterification process.

Courses

  • PROJECT MANAGEMENT PROFESSIONAL (PMP)
  • PMI AXELOS, 06/2024 - 08/2004
  • Completed 100+ hours of training aligned with PMBOK principles, covering Agile, Scrum, and traditional project lifecycles. Gained hands-on experience with project documentation including project charters, RACI matrices, work breakdown structures (WBS), and risk management plans. Applied project planning tools and methodologies to real-world data projects, enhancing cross-functional collaboration and delivery effectiveness. Developed skills in stakeholder communication, change management, and sprint planning, supporting leadership in data migration and platform modernization projects.

Timeline

Data Engineer

Suncorp Insurance
06.2025 - Current

Data Engineer

Grünenthal
02.2022 - 06.2025

Senior Software Engineer

Calvin Technologies Pvt Ltd
07.2018 - 02.2022

B.Tech - Mechanical

Vignan's Lara (affiliated with JNTU)