Aashish Prashant

Bengaluru

Summary

Strategic and hands-on Data Quality Developer with over 7 years of experience architecting and delivering enterprise-wide data quality frameworks and ETL pipelines across diverse domains. Specialized in metadata-driven design, centralized rule repositories, and self-service DQ adoption using tools such as Informatica IDMC (CDQ, CDI, CDGC), PowerCenter, Python, and Databricks. Proven expertise in implementing parameterized rule engines, building automated DQ wrappers, and integrating with cloud-native platforms for scalable, auditable rule execution. Recognized for enabling business ownership through intuitive UI layers and for bridging the gap between data governance and engineering with robust, API-integrated DQ ecosystems.

Overview

7+ years of professional experience

Work History

Data Quality Developer

Takeda ICC
Bengaluru
04.2024 - Current

Project Description

  • The Enterprise Data Quality Framework (EDQ) is a fully metadata-driven, scalable solution designed to streamline and automate the application of data quality rules across diverse business domains. The framework enables both technical and business users to efficiently manage data quality using a centralized rule template repository, dynamic rule expansion, and a self-service UI layer.

Role & Responsibilities

1. Framework Implementation & Metadata Design

  • Designed and delivered a metadata-driven Enterprise Data Quality (EDQ) Framework to enable scalable, reusable, and standardized rule execution across multiple domains and business units.
  • Introduced a centralized Rule Template mechanism to define reusable logic (e.g., null checks, pattern checks, threshold validations), enabling dynamic rule expansion for multiple columns and datasets without code duplication.
  • Built a flexible rule assignment model using metadata controls for full rule lifecycle management and governance tracking (a minimal expansion sketch follows below).
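
A minimal Python sketch of the rule-template expansion idea described above; the template names, metadata fields, and SQL fragments are illustrative assumptions, not the production EDQ schema:

    # Illustrative sketch of metadata-driven rule expansion; the real
    # EDQ repository schema and template set differ.
    RULE_TEMPLATES = {
        "NULL_CHECK": "{column} IS NOT NULL",
        "PATTERN_CHECK": "{column} ~ '{pattern}'",
        "THRESHOLD_CHECK": "{column} BETWEEN {low} AND {high}",
    }

    def expand_rules(assignments):
        """Expand metadata rule assignments (template id + parameters)
        into concrete rule instances, with no per-rule code."""
        expanded = []
        for a in assignments:
            expression = RULE_TEMPLATES[a["template_id"]].format(**a["params"])
            expanded.append({
                "rule_id": "{}.{}.{}".format(a["dataset"], a["params"]["column"],
                                             a["template_id"]),
                "dataset": a["dataset"],
                "expression": expression,
            })
        return expanded

    # One template serves many columns and datasets via metadata alone:
    assignments = [
        {"template_id": "NULL_CHECK", "dataset": "customer",
         "params": {"column": "email"}},
        {"template_id": "THRESHOLD_CHECK", "dataset": "orders",
         "params": {"column": "amount", "low": 0, "high": 100000}},
    ]
    for rule in expand_rules(assignments):
        print(rule["rule_id"], "->", rule["expression"])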

2. Python-Based Orchestration Wrapper

  • Developed a robust Python wrapper layer that orchestrates:
      ◦ Metadata interpretation from PostgreSQL tables.
      ◦ Dynamic payload construction using rule templates.
      ◦ Execution triggering via Informatica IDMC CDI Taskflow REST APIs.
      ◦ Monitoring and audit logging of rule execution and scoring status.
  • The wrapper handles complete lifecycle automation (rule registration → assignment → expansion → execution → scorecard generation), decoupling business logic from pipeline logic; see the sketch after this list.
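
A minimal sketch of the wrapper's execute-and-audit loop, assuming a psycopg2 connection and a pre-authenticated requests session; the endpoint path, payload fields, and table names are hypothetical, not the production ones:

    import psycopg2
    import requests

    def fetch_active_rules(conn):
        # Interpret rule metadata from PostgreSQL (illustrative table).
        with conn.cursor() as cur:
            cur.execute("SELECT rule_id, dataset, expression "
                        "FROM dq_rule_assignment WHERE status = 'ACTIVE'")
            cols = ("rule_id", "dataset", "expression")
            return [dict(zip(cols, row)) for row in cur.fetchall()]

    def trigger_taskflow(session, base_url, rule):
        # Build the payload dynamically and invoke the published IDMC
        # CDI taskflow over REST (assumed service path and field names).
        resp = session.post(
            f"{base_url}/active-bpel/rt/run_dq_rule",
            json={"ruleId": rule["rule_id"], "dataset": rule["dataset"],
                  "expression": rule["expression"]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()

    def run_cycle(conn, session, base_url):
        # Registration, assignment, and expansion happen upstream; this
        # loop executes each expanded rule and audit-logs the outcome.
        for rule in fetch_active_rules(conn):
            result = trigger_taskflow(session, base_url, rule)
            with conn.cursor() as cur:
                cur.execute("INSERT INTO dq_run_audit (rule_id, run_status) "
                            "VALUES (%s, %s)",
                            (rule["rule_id"], result.get("status")))
            conn.commit()

    # Usage (hypothetical pod URL and credentials):
    #   conn = psycopg2.connect("dbname=edq")
    #   session = requests.Session()  # authenticated per IDMC login flow
    #   run_cycle(conn, session, "https://example.dm-us.informaticacloud.com")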

3. Self-Service UI & Business User Enablement

  • Designed a self-service UI interface, backed by metadata, allowing business users to configure, assign, and reapply DQ rules without engineering dependency.

4. API-Based Execution & PDO Optimization

  • Implemented fully parameterized DQ rule execution by calling Informatica IDMC CDI Taskflows via REST APIs, supporting dynamic runtime substitution of rule logic, targets, and thresholds.
  • Optimized execution using Pushdown Optimization (PDO), offloading compute to the underlying databases for improved performance and reduced cost during validation at scale (a simplified illustration follows below).
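
A simplified illustration of the pushdown idea: rather than pulling rows into the integration layer, the expanded rule expression is compiled into a single aggregate query that the source database executes. The SQL shape and names are illustrative (PostgreSQL syntax assumed):

    # Pushdown-style validation sketch: pass/fail counts are computed
    # inside the database instead of row-by-row in the pipeline.
    def pushdown_query(dataset, expression):
        # PostgreSQL's FILTER clause counts passing rows in one scan.
        return (f"SELECT COUNT(*) AS total, "
                f"COUNT(*) FILTER (WHERE {expression}) AS passed "
                f"FROM {dataset}")

    def validate(conn, dataset, expression):
        with conn.cursor() as cur:
            cur.execute(pushdown_query(dataset, expression))
            total, passed = cur.fetchone()
        score = round(100.0 * passed / total, 2) if total else None
        return {"total": total, "passed": passed, "score": score}

    # e.g. validate(conn, "customer", "email IS NOT NULL")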

5. Monitoring, Reporting, and Operational Governance

  • Developed interactive Power BI dashboards to visualize rule-level DQ scores, adoption trends, data domain coverage, and failed record trends with drilldowns.
  • Designed reusable scorecard logic that integrates with rule execution metrics, enabling traceable, auditable, and governance-aligned data quality reporting across domains; a roll-up sketch follows below.
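
The roll-up behind such scorecards could look like the following sketch; the weighting scheme and field names are assumptions for illustration:

    # Scorecard roll-up sketch: aggregate rule-level pass rates into
    # weighted domain-level DQ scores for reporting.
    from collections import defaultdict

    def build_scorecard(rule_results):
        """rule_results: dicts with domain, passed, total, weight."""
        agg = defaultdict(lambda: [0.0, 0.0])  # domain -> [weighted sum, weight]
        for r in rule_results:
            rate = r["passed"] / r["total"] if r["total"] else 0.0
            agg[r["domain"]][0] += rate * r["weight"]
            agg[r["domain"]][1] += r["weight"]
        return {d: round(100 * s / w, 2) for d, (s, w) in agg.items() if w}

    print(build_scorecard([
        {"domain": "customer", "passed": 980, "total": 1000, "weight": 2.0},
        {"domain": "customer", "passed": 450, "total": 500, "weight": 1.0},
    ]))  # -> {'customer': 95.33}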

Tech Stack: Python, PostgreSQL, Databricks, EC2, Power BI, Informatica CDGC, IDQ, Rule Engine

ETL Developer

Deloitte India Consultant Private Limited
Bengaluru
03.2021 - 04.2024

Project Description

  • ONE IDENTITY Customer is a single MDM solution for Boehringer Ingelheim that sources data from various contributing systems to create a single golden record per customer.
  • One Customer also publishes mastered information to Boehringer Ingelheim's various consumer systems.
  • Source systems are of two types: those that provide data in batch mode and those integrated in real time. Batch-mode source systems are integrated through the ETL process.

Role & Responsibilities

  • Designed and developed complex ETL workflows using IICS, ensuring efficient data integration and validation across multiple data sources.
  • Implemented data quality rules and cleansing processes using IDQ, resulting in significant improvements in data accuracy and completeness.
  • Collaborated with business stakeholders to identify data requirements, create
    data mappings, and define data integration strategies.
  • Conducted performance tuning and optimization of ETL processes, resulting
    in improved processing times and reduced costs.
  • Created and maintained technical documentation, including ETL design specifications, data mappings, and process flows.
  • Provided technical support to end users, resolving issues related to data integration, data quality, and system performance.

Tech Stack: IDQ, IICS, Oracle SQL, SQL Server, UC4, and GitHub.

Associate Consultant

Capgemini India Private Limited
Bengaluru
08.2018 - 03.2021

Project Description

  • The WATTS application is a workflow and task-tracking system that tracks customer requests for a wide variety of services, such as Bundle-TV, Internet, and Homelife.

Role & Responsibilities

  • Analyzed the data model and prepared inputs for the ETL high-level and low-level design documents.
  • Assisted in requirements gathering by performing ad hoc analysis of requirements with the technology teams.
  • Designed and coded Informatica mappings and jobs to process data from different source systems into the data warehouse.
  • Worked with PEGA utility BIX extracts as a streaming source.
  • Created slowly changing dimension (SCD) tables in the data warehouse to load data from XML.
  • Performed unit testing and deployed code to production via Jenkins.
  • Developed a PL/SQL procedure for validation purposes.

Tech Stack: Informatica PowerCenter, Oracle SQL, SQL Server, Jenkins, and WinSCP.

Education

Bachelor of Engineering - Computer Science

Lakshmi Narain College Of Technology
Bhopal, Madhya Pradesh
07.2018

Skills

  • ETL Tools: Informatica PowerCenter (PC), Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Cloud Data Quality (CDQ), Cloud Data Integration (CDI)
  • Data Quality & Governance: Informatica Cloud Data Governance & Catalog (CDGC), Mass Ingestion & Control Center (MCC)
  • Languages & Scripting: Python, SQL, Unix, Shell Scripting
  • Databases: PostgreSQL, Oracle, SQL Server
  • Cloud & Big Data: Databricks, AWS
  • Reporting: Power BI, Tableau
  • DevOps & Utilities: Jenkins, GitHub, WinSCP, TIDAL, UC4
  • Methodologies: Agile, Waterfall

Accomplishments

  • 🏆 RISING STAR Award – Capgemini (Oct 2020)
    For outstanding performance in the Insights Data Project (Q3 FY20)
  • 🏆 APPLAUSE AWARD – Deloitte USI (Aug 2021)
    For delivering key DQ deliverables and exceptional project collaboration
  • 🏆 APPLAUSE AWARD – Deloitte USI (Oct 2022)
    For timely and high-quality delivery of complex IICS and IDQ initiatives
  • 🏆 APPLAUSE AWARD – Deloitte USI (Jan 2023)
    For design and implementation of complex mass API ingestion framework

Timeline

Data Quality Developer

Takeda ICC
04.2024 - Current

ETL Developer

Deloitte India Consultant Private Limited
03.2021 - 04.2024

Associate Consultant

Capgemini India Private Limited
08.2018 - 03.2021

Bachelor of Engineering - Computer Science

Lakshmi Narain College Of Technology
07.2018