Rohit Borkar
Software Engineer
Aurangabad, MH

Summary

Data Engineer with 2+ years of experience in big data technologies and AWS services, seeking opportunities to optimize data workflows, maintain data integrity, and deliver data-driven insights. Proficient in SQL, Python, and cloud technologies. Committed to driving innovation and data excellence.

Overview

2 years of professional experience

Work History

AWS Data Engineer

Addon Systems Information Technology Services Pvt. Ltd.
07.2022 - Current
  • Designed and implemented robust data pipelines on AWS, leveraging services such as AWS Glue, AWS Lambda, and AWS Step Functions to facilitate data extraction, transformation, and loading (ETL) processes.
  • Managed and optimized data storage solutions using AWS S3, Redshift, and RDS, ensuring data accessibility, security, and scalability while minimizing costs.
  • Collaborated with cross-functional teams to gather and refine business requirements, translating them into effective data engineering solutions that supported analytics and reporting.
  • Automated routine data tasks and workflows using AWS CloudFormation templates and Infrastructure as Code (IaC) principles, reducing manual intervention and enhancing system stability.
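
The IaC approach described above can be sketched as a minimal CloudFormation template; the resource and bucket names below are illustrative assumptions, not taken from the actual deployment.

```yaml
# Minimal CloudFormation sketch: an S3 landing bucket plus an execution
# role for a Glue ETL job. All names are hypothetical placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  LandingBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-landing-bucket   # hypothetical bucket name
  GlueJobRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: glue.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
```

Keeping resources in a template like this lets the whole stack be recreated or updated with a single `aws cloudformation deploy`, which is what reduces the manual intervention noted above.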

Big Data Engineer

Addon Systems Information Technology Services Pvt. Ltd.
08.2021 - 07.2022
  • Developed and maintained end-to-end data pipelines, processing and analyzing large datasets with technologies such as Hadoop and Spark to extract valuable insights.
  • Designed and implemented data architecture solutions that accommodated structured and unstructured data from various sources, optimizing data ingestion and storage with tools like HDFS and MySQL databases.
  • Enhanced data quality and accuracy by implementing data cleansing, validation, and enrichment processes, ensuring high-quality data for analysis.
  • Built and maintained monitoring and alerting systems to proactively identify and address data processing and infrastructure issues, minimizing downtime and enhancing system reliability.
  • Kept up-to-date with emerging big data technologies and best practices, ensuring the incorporation of the latest advancements into data engineering processes to meet evolving business needs.

Education

B.Tech

Marathwada Institute of Technology, Aurangabad
Aurangabad,India
08.2021

Skills

  • Hadoop Ecosystem: HDFS, Hive, Sqoop, PySpark
  • Databases: MySQL, Oracle
  • Cloud Platform: AWS (Lambda, Glue, S3, EC2, IAM, CloudWatch)
  • Programming Language: Python
  • Other Skills: Agile methodology, Jira, Git, GitHub

Languages

English
Intermediate (B1)
Hindi
Advanced (C1)
Marathi
Advanced (C1)

Projects

Project 1: "Serverless Entity Extraction and Real-time Database Integration with S3, Lambda, and RDS" (Nov 2022 - Jul 2023)

Domain: Legal

Role: Data Engineer

Project Goal: To build an automated data-processing pipeline using Amazon Web Services.


Tools Used:

  • Amazon S3
  • AWS Lambda
  • AWS Lambda Triggers
  • Amazon RDS (Relational Database Service)
  • Amazon CloudWatch (proactive monitoring and alerting)


Designed and implemented an AWS data pipeline that used S3 for data storage, Lambda functions for data processing, Lambda triggers for automation, and RDS (Relational Database Service) for efficient data storage and retrieval. This streamlined pipeline sped up data processing and improved the accessibility and performance of critical data analytics systems.
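
The trigger-and-extract flow above can be sketched in Python. This is a hedged illustration, not the project's actual code: the entity patterns (case numbers, dates) and field names are assumptions chosen to fit the legal domain, and the S3 fetch and RDS insert are stubbed out.

```python
import re

# Hypothetical legal-entity patterns; the real project's patterns are unknown.
CASE_NO = re.compile(r"\bCase\s+No\.\s*(\d{3,6}/\d{4})\b")
DATE = re.compile(r"\b(\d{2}-\d{2}-\d{4})\b")

def object_from_event(event):
    """Return (bucket, key) for the object that fired the S3 trigger."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def extract_entities(text):
    """Pull case numbers and dates from document text; in the real
    pipeline, rows like these were written to RDS."""
    return {
        "case_numbers": CASE_NO.findall(text),
        "dates": DATE.findall(text),
    }

def handler(event, context=None):
    """Lambda entry point: locate the uploaded document from the event.
    The deployed version would fetch the body with boto3 s3.get_object
    and insert extract_entities(...) results into RDS; stubbed here."""
    bucket, key = object_from_event(event)
    return {"bucket": bucket, "key": key}
```

The S3 event shape (`Records[0].s3.bucket.name` / `object.key`) matches the standard notification payload Lambda receives from an S3 trigger.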


Roles and Responsibilities:

  • Developed and architected an end-to-end data pipeline leveraging AWS services, including S3 for data storage and AWS Lambda for data processing.
  • Implemented Lambda triggers to automate data processing, enhancing system efficiency and reducing manual intervention.
  • Engineered data transformations within Lambda functions, ensuring data quality and compatibility for downstream analytics.
  • Managed and optimized an RDS (Relational Database Service) instance, ensuring data integrity, performance, and secure storage.
  • Set up proactive monitoring and alerting systems to detect and address issues promptly, maintaining the reliability of the data pipeline.
  • Attended daily scrum calls with the team and weekly scrum calls with the client, discussing daily tasks and feature goals.


Project 2: "AWS Data Format Transformation Automation" (Dec 2021 - Aug 2022)

Role: Data Engineer

Project goal: To automate the transformation of data from CSV to Avro format within an AWS environment, enhancing data processing efficiency and improving data quality for downstream analytics.


Tools used:  

  • AWS Glue
  • AWS Lambda
  • Amazon S3
  • Amazon RDS (Relational Database Service)
  • AWS IAM (Identity and Access Management)


The pipeline automated the transformation of CSV data to Avro format using AWS services. It began with data ingestion into an S3 bucket, which triggered an AWS Lambda function; the Lambda function launched an ETL job in AWS Glue to transform the data, and the transformed output was stored in a second S3 bucket. IAM policies and permissions were managed to ensure security and access control throughout the process.
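
The trigger step of that pipeline can be sketched as the Lambda that hands off to Glue. A hedged illustration only: the job name, output bucket, and argument keys are placeholder assumptions, and the actual `start_job_run` call is left as a comment so the mapping logic stays testable.

```python
import os

# Placeholder identifiers, not the project's real names.
GLUE_JOB_NAME = os.environ.get("GLUE_JOB_NAME", "csv-to-avro-job")
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "example-avro-output")

def glue_run_args(event):
    """Map an S3 put event for a new CSV file to Glue job-run arguments."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    return {
        "JobName": GLUE_JOB_NAME,
        "Arguments": {
            "--input_path": f"s3://{bucket}/{key}",
            # Same key with the .csv extension swapped for .avro.
            "--output_path": f"s3://{OUTPUT_BUCKET}/{key.rsplit('.', 1)[0]}.avro",
        },
    }

def handler(event, context=None):
    """Lambda entry point: the deployed version would call
    boto3.client("glue").start_job_run(**glue_run_args(event))."""
    return glue_run_args(event)
```

Inside the Glue job itself, the CSV-to-Avro conversion is a read with the csv classification followed by a write with `format="avro"`, which is why the Lambda only needs to pass input and output paths.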


