8.5 years of experience in designing, developing, and implementing Data Warehousing and Business Intelligence applications using Big Data technologies like Hadoop, Hive, Spark, Azure, Databricks, and Snowflake.
Over 8 years of expertise in Python programming, with proficiency in Big Data tools and ETL technologies, including Hadoop Ecosystem, Apache Spark, HDFS, YARN, Hive, Kafka, and Palantir Foundry.
Skilled in building advanced data engineering applications in Palantir Foundry, leveraging Data Ingestion, Pipeline Builder, Ontology, Workshop, Quiver, and Code Workbooks.
Developed user-facing applications in Workshop with custom TypeScript-backed widgets, enabling write-back capabilities and materialization.
Strong knowledge of Data Modeling (Relational, Star, and Snowflake Schema), Data Analysis, and implementation of Data Warehousing solutions.
Expertise in Snowflake cloud data warehousing, focusing on architecture design, best practices, reusable frameworks, robust design, and automation of secured database utilities (SCBD).
Hands-on experience in CDC (Change Data Capture) and migrating databases to Snowflake, Databricks, and Palantir Foundry.
Proficient in SQL and Relational Databases (SQL Server, Oracle, DB2), with experience in data modeling, normalization, and database design using tools like Erwin.
Familiar with Slowly Changing Dimensions and Slowly Growing Targets methodologies.
Comprehensive understanding of the SDLC process, including requirements gathering, design, coding, testing, deployment, and documentation.
Overview
7 years of professional experience
4 languages
Work History
Lead Data Engineer
Swiss Re India Private Limited
07.2022 - Current
Experience:
Lead Data Engineer at Swiss Re, responsible for designing and architecting Big Data solutions for property treaty underwriting, processing ~40TB of property data.
Built and optimized scalable data pipelines in Palantir Foundry, ingesting ~5–7GB of property data every 30 minutes, leveraging advanced external transforms to eliminate dependency on traditional ingestion tools.
Designed and implemented incremental data processing workflows, ensuring efficient processing and minimal resource utilization.
Architected solutions to sync processed data into Ontology Objects, enabling transactional and non-transactional reporting applications used by underwriters for portfolio acceptance and renewal decisions.
Collaborated closely with underwriters to deliver tailored applications supporting both operational and analytical needs.
Actively contributed to team development by mentoring and onboarding new engineers, reviewing and designing code, and implementing best practices to improve code quality and maintainability.
Demonstrated expertise in Apache Spark, optimizing Spark jobs to handle large-scale data efficiently while minimizing costs and runtime.
Leveraged the Palantir Foundry platform extensively, including Data Ingestion, Pipeline Builder, Ontology, Workshop, and Quiver, to build robust and user-centric data engineering applications.
Ensured end-to-end solution quality by designing resilient architecture, conducting thorough testing, and implementing automated monitoring for data pipelines.
Played a key role in aligning technical solutions with business goals, fostering collaboration between engineering and business teams.
Senior Data Engineer
Airbus India Private Limited
05.2019 - 07.2022
Experience:
Migrated an in-house Big Data platform (Hadoop, Hive, and Spark managed by YARN) to a hybrid cloud solution combining Snowflake, Databricks, and Palantir Foundry.
Designed and implemented efficient data pipelines for processing large datasets (~10TB), ensuring high performance and scalability.
Leveraged Snowflake for quick reporting solutions integrated with Power BI, enabling fast and efficient data-driven decision-making.
Utilized Databricks for advanced processing of unstructured sensor data, optimizing Spark code for performance and resource efficiency.
Built structured data processing workflows and user-facing applications in Palantir Foundry, leveraging tools such as Data Ingestion, Ontology, Pipeline Builder, Quiver, and Workshop for robust and scalable solutions.
Skilled in optimizing Spark code, improving execution times, and reducing resource consumption across various Big Data workflows.
Delivered applications in Palantir Foundry with write-back capabilities and custom TypeScript-backed widgets to enhance user interactivity and data materialization.
Strong expertise in hybrid cloud architecture, enabling seamless integration of multiple technologies to meet diverse business needs.
Software Engineer
Robert Bosch
04.2018 - 05.2019
Experience:
Developed adaptive applications in C++ on the application layer of the Adaptive AUTOSAR framework, utilizing modern C++ standards (C++11/14/17) to enhance code efficiency and maintainability.
Led a Proof of Concept (POC) project to install and remove applications on RCAR using OTA (Over-The-Air) updates based on Adaptive AUTOSAR.
Designed and implemented multi-threaded algorithms to optimize application performance and resource utilization.
Configured Essential IP files for IPC communication and deployment on virtual hardware (QEMU) and RCAR hardware.
Debugged and resolved issues in the latest software releases, ensuring compliance with functional requirements.
Created software and vehicle packages, successfully transferring multiple SW packages through OTA and installing them on RCAR.
Implemented UDS services such as Data Read by Identifier (0x22) and Routine Control (0x31) in automotive ECUs.
Outsourcing Associate at Swiss Re Global Business Solutions India Private Limited