As a dedicated Project Associate with 5 years of experience in ETL development, I am seeking to leverage my expertise in data integration, transformation, and development to drive innovative solutions. I am enthusiastic about building on my experience, advancing into new areas, and taking on new challenges.
Client - First American India
Data migration from Informatica BDM to Databricks for the title insurance project.
Roles and Responsibilities:
1. Led a data migration project, transitioning from Informatica BDM to Databricks, covering data tuning, code optimization, and performance improvement.
2. Executed the migration of large datasets, ensuring data accuracy and consistency throughout the process.
3. Converted Informatica BDM code to Databricks-compatible code (a representative conversion is sketched after this list).
4. Designed and implemented a comprehensive workflow that integrates all necessary notebooks and automates processing through triggered workflows.
5. Ensured seamless data transfer and integrity, improving overall system performance and reliability.
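To illustrate the kind of conversion involved, the sketch below re-expresses a hypothetical Informatica BDM mapping (source qualifier, expression and filter transformations, target) as PySpark in a Databricks notebook. The table and column names (staging.raw_policies, curated.title_policies, policy_status, premium) are placeholders rather than the project's actual objects, and spark is the session Databricks provides implicitly in a notebook.

    # Hypothetical re-implementation of an Informatica BDM mapping as PySpark.
    # All table and column names are placeholders; `spark` is the SparkSession
    # that Databricks provides in a notebook.
    from pyspark.sql import functions as F

    # Source qualifier -> read the raw staging table
    raw = spark.table("staging.raw_policies")

    # Expression/filter transformations -> derived columns and a row filter
    cleaned = (
        raw
        .filter(F.col("policy_status").isNotNull())
        .withColumn("load_date", F.current_date())
        .withColumn("premium_usd", F.col("premium").cast("decimal(18,2)"))
    )

    # Target -> write to a Delta table consumed by downstream notebooks
    cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.title_policies")

Notebooks converted in this way can then be chained as tasks in a Databricks workflow and run on a schedule or trigger, which is broadly how the triggered workflows described above are orchestrated.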
Client - Bank Of Ireland
The project loads the source tables and files into the staging area using incremental loading, then loads the data into the dimension tables using SCD Type 2 logic, and finally loads it into the fact tables.
Roles and Responsibilities:
1. Interacted with the business analysts to understand the business requirements.
2. Studied the design documents to develop the mappings and to define the strategy for historical data loads and incremental loading.
3. Designed and developed ETL mappings with SCD Type 1 and Type 2 logic for loading the dimension and fact tables (the SCD Type 2 logic is sketched after this list). Prepared test cases and completed unit testing of the developed mappings.
4. Analyzed ETL code and tuned the mappings to improve their load performance.
5. Migrated the code from individual user folders to project folders.
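The SCD Type 2 loading referred to above keeps history by expiring the current dimension row and inserting a new version whenever a tracked attribute changes. The minimal Python sketch below illustrates that logic only; the project implemented it as Informatica mappings, and field names such as is_current and effective_from are placeholders.

    # Illustrative SCD Type 2 logic only; the actual implementation was
    # Informatica mappings, and all field names here are placeholders.
    from datetime import date

    def apply_scd2(dim_rows, incoming, business_key, tracked_cols, today=None):
        """Expire the current dimension row and insert a new version when any
        tracked attribute of an incoming record has changed."""
        today = today or date.today()
        current = {r[business_key]: r for r in dim_rows if r["is_current"]}
        for rec in incoming:
            cur = current.get(rec[business_key])
            if cur is None or any(cur[c] != rec[c] for c in tracked_cols):
                if cur is not None:
                    cur["is_current"] = False        # expire the old version
                    cur["effective_to"] = today
                dim_rows.append({                    # insert the new version
                    **rec,
                    "is_current": True,
                    "effective_from": today,
                    "effective_to": None,
                })
        return dim_rows

A Type 1 mapping, by contrast, simply overwrites the changed attributes in place without keeping history.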
Client - Apple
The main objective of the project is to replace non-inclusive terms with inclusive terms, for example, Master to Main and Blacklist to Deny List.
Roles and Responsibilities:
1. Developed the core and stage table definitions (DDLs) as per the requirements and deployed them in the Dev and UAT environments.
2. Extracted the job and process JSONs from ETLOS, modified them as required, and promoted them through the pipeline to Dev and UAT (a simplified example of this JSON manipulation follows this list).
3. Developed views (DMLs) over the tables as required and deployed them in the Dev and UAT environments. Registered the tables using the ICM tool to move them to the Prod environment.
4. Once all the impacted components were ready and validated, moved the configs to the Prod environment through the pipeline and deployed the core tables, stage tables, and views in Prod.
5. Ran and monitored the jobs in the GBI data pipeline, and rectified failures by analyzing the logs.
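As a simplified illustration of the JSON manipulation in point 2, the Python sketch below renames non-inclusive terms in the keys and string values of an exported job config. ETLOS's actual export format, the ICM and GBI tooling, and the project's exact term mapping are internal, so the file layout, file names, and TERM_MAP here are placeholders.

    # Illustration only: the real export format and term mapping are internal,
    # so everything here is a placeholder.
    import json

    TERM_MAP = {"master": "main", "blacklist": "denylist", "whitelist": "allowlist"}

    def replace_terms(text):
        out = text
        for old, new in TERM_MAP.items():
            out = out.replace(old, new).replace(old.capitalize(), new.capitalize())
        return out

    def rename(node):
        """Recursively apply the term mapping to JSON keys and string values."""
        if isinstance(node, dict):
            return {replace_terms(k): rename(v) for k, v in node.items()}
        if isinstance(node, list):
            return [rename(v) for v in node]
        if isinstance(node, str):
            return replace_terms(node)
        return node

    with open("job_config.json") as f:               # extracted job/process JSON
        updated = rename(json.load(f))

    with open("job_config_updated.json", "w") as f:  # ready to promote to Dev/UAT
        json.dump(updated, f, indent=2)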