Seeking a challenging role through which I can contribute continuously to the growth of the organization by applying my potential to the fullest.
- Around 8 years of overall experience in SQL Developer and Senior Data Engineer roles across various projects.
- Ensured successful on-cloud and on-premises HCM and ERP (Ramco product) implementations.
- Good understanding of big data terminology.
- Experience in Apache Spark along with Python.
- Hands-on experience with AWS (Lambda, S3 storage, and Glue).
- Worked with Python modules such as json and libraries such as PySpark.
- Responsible for automations that reduce manual intervention and improve processes.
- Experienced with MS SQL features such as stored procedures, triggers, cursors, joins, and SQL jobs.
- Extensive experience with back-end objects in MS SQL Server.
- Good working knowledge of creating Crystal Reports using Ramco Decision Works (product tool).
Languages: English & Tamil
Project: Portland Group Electric – Data Analysis
Client: PGE, Illinois, USA
Role: Senior Data Engineer | Duration: 2 years | Team size: 10
Technologies: Hadoop, Spark, Python, AWS (S3, Lambda, Glue)
Description: The purpose of this project was to ingest the respective data from sensors and build ML models for the business.
Responsibilities:
- Built a data ingestion pipeline using AWS Lambda for API data extraction (a minimal sketch follows the project list).
- Created Python scripts with transformation logic using pandas.
- Created Python scripts for error-handling mechanisms, including sending success and failure mails to stakeholders using AWS SNS.
- Created PySpark transformations for joining two or three DataFrames and performing windowing operations on them (see the join/window sketch after the project list).
- Analyzed SQL scripts and designed solutions to implement them in PySpark.
- Performed POCs and presented them to the client for billing.

Project: Migration – Business Data
Client: Al Jazeera Support Service Company (Saudi Arabia)
Role: Data Engineer | Duration: 2.4 years | Team size: 6
Technologies: Spark, Python, AWS (S3, Lambda, Glue)
Description: The purpose of this project was to move existing historical and current business data, almost 15 years' worth, from a legacy server to a new server.
Responsibilities:
- Involved in creating the data pipeline.
- Read the input CSV files from an AWS S3 bucket using PySpark (see the migration sketch after the project list).
- Involved in creating the data model.
- Created a Spark DataFrame for each of the roughly 60 objects being migrated.
- Transformed and cleaned the data using PySpark based on business needs.
- Wrote the transformed data back to an AWS S3 bucket.

Project: Development and Post Go-Live Support
Client: Logics and Fab India
Role: SQL Developer | Duration: 3 years | Team size: 4
Technologies: MS SQL Server, Ramco ERP
Responsibilities:
- Developed and maintained new screens using the IEDK tool and wrote the business logic in MS SQL.
- Implemented workflow logic for three levels of authorization for PO approval using MS SQL.
- Developed reports for customer needs using Ramco Decision Works.
- Developed and tested mail triggers for PO renewal using MS SQL.
- Implemented new screens for inventory reports and wrote the business logic in MS SQL.
- Provided post-go-live support and resolved production issues.
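For illustration, here is a minimal sketch of the kind of AWS Lambda ingestion described in the PGE project: pull data from an API, land it in S3, and notify stakeholders through SNS on success or failure. The API URL, bucket name, topic ARN, and object key are placeholders, not values from the actual project.

```python
# Minimal Lambda sketch: API extraction to S3 with SNS status mails.
# API_URL, BUCKET, and TOPIC_ARN are illustrative placeholders.
import json
import urllib.request

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

API_URL = "https://api.example.com/sensors"                         # placeholder
BUCKET = "ingestion-landing-bucket"                                 # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ingestion-status"  # placeholder

def lambda_handler(event, context):
    try:
        # Extract: call the source API.
        with urllib.request.urlopen(API_URL, timeout=30) as resp:
            payload = resp.read()

        # Load: land the raw payload in S3 for downstream Glue/Spark jobs.
        s3.put_object(Bucket=BUCKET, Key="raw/sensors.json", Body=payload)

        # Notify stakeholders of success.
        sns.publish(TopicArn=TOPIC_ARN, Subject="Ingestion succeeded",
                    Message="Sensor extract landed in S3.")
        return {"statusCode": 200, "body": json.dumps("ok")}
    except Exception as exc:
        # Notify stakeholders of failure, then re-raise so Lambda can retry.
        sns.publish(TopicArn=TOPIC_ARN, Subject="Ingestion failed",
                    Message=f"Error during extraction: {exc}")
        raise
```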
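The PySpark transformation work in the PGE project centered on joins and window functions. Below is a minimal sketch under assumed table and column names (readings, sensors, sensor_id, event_ts): join two DataFrames, then keep the latest reading per sensor with a row_number window.

```python
# Minimal PySpark sketch: join two DataFrames, then apply a window function.
# Paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-analysis").getOrCreate()

readings = spark.read.parquet("s3://bucket/readings/")  # hypothetical path
sensors = spark.read.parquet("s3://bucket/sensors/")    # hypothetical path

# Join the two DataFrames on a shared key.
joined = readings.join(sensors, on="sensor_id", how="inner")

# Windowing: keep only the latest reading per sensor by event time.
w = Window.partitionBy("sensor_id").orderBy(F.col("event_ts").desc())
latest = (joined
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn"))
latest.show()
```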
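The migration project's core loop, reading a CSV from S3 with PySpark, cleaning it, and writing it back to S3, might look like the following sketch. The bucket paths, column names, and cleaning steps are illustrative assumptions, not the project's actual code.

```python
# Minimal PySpark sketch of the migration flow: S3 CSV in, cleaned data out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("business-data-migration").getOrCreate()

# Read the input CSV from the legacy S3 bucket.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://legacy-bucket/exports/orders.csv"))  # hypothetical path

# Example cleaning: trim strings, normalize dates, drop fully empty rows.
cleaned = (df
           .withColumn("customer_name", F.trim(F.col("customer_name")))
           .withColumn("order_date", F.to_date(F.col("order_date"), "yyyy-MM-dd"))
           .dropna(how="all"))

# Write the transformed data to the target S3 bucket.
cleaned.write.mode("overwrite").parquet("s3://target-bucket/orders/")
```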
I hereby declare that all the details articulated above are true to the best of my knowledge.