Results-oriented and innovative Senior Software Engineer with [Number] years of experience. Communicates complex technical requirements clearly to non-technical stakeholders. Proven record of leading development teams on enterprise-wide projects. Detail-oriented, organized, and meticulous; works at a fast pace to meet tight deadlines. Enthusiastic team player ready to contribute to company success.
Title: Care Analytics
Technologies used: Apache Spark, Spark SQL, Scala, Hive, IntelliJ IDE
Client: CIGNA
Responsibilities:
· Reading data from the file system into Spark.
· Writing transformations and actions to process the data.
· Writing Spark SQL queries to process incoming data.
· Created Hive tables with schemas and loaded the data using Spark SQL and DataFrames.
· Developed Hive external tables to store the processed results in tabular format.
· Created Hive tables with partitions and loaded data into them.
· Participated in daily stand-up meetings.
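The Spark-to-Hive flow described above could be sketched roughly as follows. This is a minimal illustration only; the file path, table name, and column names are hypothetical, not taken from the actual project:

```scala
import org.apache.spark.sql.SparkSession

object CareAnalyticsSketch {
  def main(args: Array[String]): Unit = {
    // SparkSession with Hive support so tables land in the Hive metastore
    val spark = SparkSession.builder()
      .appName("CareAnalytics")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Read raw data from the file system into a DataFrame (path is hypothetical)
    val claims = spark.read
      .option("header", "true")
      .csv("/data/incoming/claims")

    // Transformations (filter, groupBy) are lazy; writing out is the action
    val processed = claims
      .filter($"status" === "APPROVED")
      .groupBy($"member_id")
      .count()

    // Persist the processed results as a Hive table in Parquet format
    processed.write
      .mode("overwrite")
      .format("parquet")
      .saveAsTable("analytics.member_claim_counts")

    spark.stop()
  }
}
```

Running this requires a Spark distribution with Hive support configured; it is a sketch of the pattern, not a drop-in job.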
Title: Market Impact [DataStage to Spark Migration]
Technologies Used: Apache Spark SQL, IntelliJ, GitHub, Shell Scripting, Control-M
Client: Sears Holding Corporation
Responsibilities:
· Understood the transformation logic in DataStage and reimplemented it in Hadoop, using Spark SQL for the transformations.
· Tested the migration by verifying that the many intermediate files produced by DataStage and Hadoop matched.
· Coordinated meetings with the onsite business team to report details of mismatched data.
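A migration step like the ones above could look roughly like this: a DataStage transformation re-expressed as Spark SQL, with the intermediate output written out so it can be diffed against the legacy result. The view name, query, and paths are hypothetical placeholders:

```scala
import org.apache.spark.sql.SparkSession

object MarketImpactMigrationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MarketImpactMigration")
      .getOrCreate()

    // Source data previously fed into a DataStage stage (path hypothetical)
    spark.read.parquet("/data/market/sales")
      .createOrReplaceTempView("sales")

    // The DataStage aggregation re-expressed as Spark SQL
    val result = spark.sql(
      """SELECT store_id, SUM(amount) AS total_amount
        |FROM sales
        |GROUP BY store_id""".stripMargin)

    // Write an intermediate file to compare against the DataStage output
    result.write.mode("overwrite").csv("/data/market/intermediate/spark_out")

    spark.stop()
  }
}
```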
Big Data Analysis
· Big Data Developer with 3+ years of IT experience designing and developing software using a wide variety of technologies across all phases of the development life cycle.
· Hands-on development and implementation experience in a Big Data Management Platform (BMP) using Hadoop, Apache Spark, Hive, Sqoop, and Scala.
· Basic knowledge of cloud environments such as Amazon Web Services (AWS) EMR and S3.
· Knowledge of importing and exporting data with Sqoop between relational database systems and HDFS.
· Experience handling various file formats such as Avro and Parquet.
· Scheduled various ETL processes and Hive scripts by developing Oozie workflows.
· Developed data queries using HQL and optimized the Hive queries.
· Designed and created Hive external tables using a shared metastore in MySQL instead of Derby, with partitioning, dynamic partitioning, and bucketing.
· Creative, self-motivated, and punctual, with good communication and interpersonal skills.
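The external-table work with partitioning and bucketing mentioned above could be sketched as below. The table, columns, bucket count, and paths are illustrative assumptions, not details from the actual projects:

```scala
import org.apache.spark.sql.SparkSession

object HiveExternalTableSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveExternalTables")
      // shared metastore (e.g. MySQL) is configured via hive-site.xml
      .enableHiveSupport()
      .getOrCreate()

    // External table with static partitioning and bucketing (names hypothetical)
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS sales_ext (
        |  order_id BIGINT,
        |  amount   DOUBLE
        |)
        |PARTITIONED BY (order_date STRING)
        |CLUSTERED BY (order_id) INTO 8 BUCKETS
        |STORED AS PARQUET
        |LOCATION '/warehouse/external/sales'""".stripMargin)

    // Dynamic partitioning: let Hive derive the partition value from the data
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql(
      """INSERT OVERWRITE TABLE sales_ext PARTITION (order_date)
        |SELECT order_id, amount, order_date FROM staging_sales""".stripMargin)

    spark.stop()
  }
}
```

Bucketing on `order_id` here is a design choice that speeds up joins and sampling on that key; the external `LOCATION` keeps the data alive even if the table is dropped.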