Data Engineering professional with 4 years of overall experience and strong technical knowledge of ETL, Python, PySpark, Databricks, and BigQuery. Experienced in development projects and actively involved in all phases of the project life cycle, including data acquisition, data cleaning, and data processing. Possess good interpersonal and relationship-building skills, with the flexibility to work with new technologies and take on new challenges.
PROJECT I:
OBU    Client: Takeda Pharmaceutical
Summary: The Data Foundation Program (DFP) is intended to provide a cloud-based solution. This required migrating data from on-premise systems to the new core solution and setting up new batch-based data ingestion for the new sources. The cloud platform is built to cater to the analytics needs of business users and the data needs of downstream applications.
Roles & Responsibilities:
PROJECT II:
Phoenix    Client: Takeda Pharmaceutical
Summary: This project involves transferring lake-table Parquet files from the lake-com EDB bucket to the inbound folder of the exchange account (ADX) bucket.
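As a minimal sketch of the transfer described above (bucket names, prefixes, and the inbound folder path are illustrative assumptions, not the actual project configuration), the copy can be done server-side with boto3:

```python
def to_inbound_key(source_key: str, source_prefix: str, inbound_prefix: str) -> str:
    """Map a lake-table object key to its destination key under the
    exchange account's inbound folder, preserving the relative path."""
    relative = source_key[len(source_prefix):].lstrip("/")
    return f"{inbound_prefix.rstrip('/')}/{relative}"


def transfer_parquet(source_bucket: str, source_prefix: str,
                     dest_bucket: str, inbound_prefix: str) -> None:
    """Server-side copy of every Parquet object under source_prefix.

    Hypothetical sketch: the real job's buckets, credentials, and
    scheduling are not part of this example.
    """
    import boto3  # imported here so the key-mapping helper stays dependency-free
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket, Prefix=source_prefix):
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith(".parquet"):
                continue
            s3.copy_object(
                Bucket=dest_bucket,
                Key=to_inbound_key(obj["Key"], source_prefix, inbound_prefix),
                CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
            )
```

`copy_object` keeps the transfer inside S3, so no file data passes through the machine running the script.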
Roles & Responsibilities:
PROJECT III: SAMLINK ACCOUNT (Domain: BFS)
Summary: In this project an EDL (Enterprise Data Lake) was created to store data and run different types of analytics. As an ETL developer, my role is to move data from the staging layer through to the data mart layer using the ETL tool Informatica, applying the required business rules and cleansing so that the final data meets business requirements for reporting purposes.
Roles & Responsibilities:
Understanding the business requirements and developing various ETL transformations to load data from source to target.
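The staging-to-mart loads described above are built as Informatica mappings; purely as an illustrative sketch (the table names and cleansing rules below are assumptions, not the project's actual design), the same trim/standardise/default/reject pattern looks like this in SQL, run here via Python's built-in sqlite3:

```python
import sqlite3

# Illustrative only: the real project implements this as Informatica
# mappings. Table names and cleansing rules are assumed for demonstration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_customer (cust_id TEXT, cust_name TEXT, country TEXT);
    CREATE TABLE dm_customer  (cust_id TEXT PRIMARY KEY, cust_name TEXT, country TEXT);
    INSERT INTO stg_customer VALUES
        (' 101 ', '  alice  ', 'IN'),
        ('102',   'BOB',       NULL),
        (NULL,    'orphan',    'US');
""")
# Cleansing on the way from staging to the mart: trim the business key,
# standardise name casing, default a missing country, and reject rows
# that lack a business key.
conn.execute("""
    INSERT INTO dm_customer (cust_id, cust_name, country)
    SELECT TRIM(cust_id),
           UPPER(TRIM(cust_name)),
           COALESCE(country, 'UNKNOWN')
    FROM stg_customer
    WHERE cust_id IS NOT NULL
""")
rows = conn.execute("SELECT * FROM dm_customer ORDER BY cust_id").fetchall()
```

Each SQL expression corresponds to a common transformation type in an ETL tool: expression (TRIM/UPPER), default-value handling (COALESCE), and filter (the WHERE clause).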
UTILITIES DEVELOPED USING PYTHON:
GCP (Google Cloud Platform):
Python