
Nitin Kumar

Senior Software Engineer Lead
Noida, Uttar Pradesh

Summary


Technically sophisticated, results-driven professional with 13 years of work experience in IT and 9+ years in Big Data technology. Presently working with Optum Global Services India. Process-oriented data analyst and developer in the financial domain, implementing banking and financial solutions.

  • Expertise in big data architecture with the Hadoop file system and its ecosystem tools: Apache Spark, Hive, Airflow, NiFi, MapReduce, Pig, Oozie, Flume, Impala, and Kibana. Proficient in Scala, Python, Java, and C#.
  • Familiar with machine learning packages and deep learning with CNTK and TensorFlow. Experienced with AWS cloud services (EMR, EC2, S3, Athena, and Glue).
  • Good knowledge of AWS services such as EMR and EC2, which provide fast, scalable, and efficient processing of Big Data.
  • Developed, deployed, and supported Spark applications handling semi-structured and unstructured data. Excellent experience building data-intensive applications and tackling challenging architecture and scalability problems in finance.
  • Currently building petabyte-scale data pipelines for banking transactions. Able to work with managers and executives to understand business objectives and deliver to business needs; a firm believer that teamwork makes the dream work.
  • Experience with unit testing/test-driven development (TDD) and load testing. Software development modeling and project planning using Microsoft Project Planner and Rally/JIRA.
  • Experience in SQL and good knowledge of PL/SQL programming; developed stored procedures and triggers. Worked with DataStage, DB2, UNIX, Hadoop, and Hive. Good knowledge of big data analytics libraries (MLlib) and of Spark SQL for data exploration.

Overview

  • 11 years of professional experience
  • 8 years of post-secondary education
  • 1 certification
  • 2 languages

Work History

Software Engineer Lead I

Optum Global Services
Noida, Uttar Pradesh
06.2020 - Current

Analytic Theme: Fraud Analysis

Team Size - 10

Description: COB, or Coordination of Benefits, is the process of determining a health insurance company's status as primary or secondary payer of medical claim benefits for a patient who holds multiple health insurance policies. With COB it is much easier to determine the responsibilities of the primary payer and settle the contribution of the secondary payer while processing medical claims. It ensures that the amount paid by the plans in dual-coverage situations does not exceed 100% of the total claim, avoiding duplicate payments. We leverage technology to mitigate this challenge: the team is applying data analytics to build an algorithm that can predict these coverage situations.
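The core COB rule described above (combined payments capped at 100% of the claim) can be sketched in a few lines. This is an illustrative simplification, not the production algorithm; the function name and figures are invented for the example.

```python
def coordinate_benefits(total_claim, primary_paid):
    """Compute the secondary payer's contribution under COB.

    The combined payment across plans must not exceed 100% of the
    total claim, which is how COB prevents duplicate payments.
    """
    # The secondary payer covers only the remainder left by the primary.
    return max(0.0, total_claim - primary_paid)

# Example: on a $1,000 claim where the primary plan paid $800,
# the secondary plan owes at most $200.
```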

Responsibilities:

  • Researched, evaluated, architected, and deployed new tools, frameworks, and patterns to build sustainable Big Data platforms for the clients
  • Created a process for reading files from the ODM servers with the JSch library and uploading them to an S3 bucket
  • Extracted business logic from SAS and converted it into Scala code, to understand the current data flow in SAS
  • Monitoring performance and advising any necessary infrastructure changes
  • Wrote Scala code for huge data frames, using predicate pushdown for significant performance improvements
  • Created a central data layer by fetching records from multiple data sources
  • Validation of data by writing code logic over business requirements
  • Proficient understanding of distributed computing principles
  • A willingness to explore new alternatives or options to solve data mining issues, and utilize a combination of industry best practices, data innovations
  • Experienced in working with high dimensional claims and data sets (250+ million rows and ~2150 columns)
  • Identified the different leading indicators/important variables based on claims, physician and demographic level data
  • Used dimension reduction based on mean decrease in accuracy and PCA, and checked for collinearity, to reduce the number of variables from ~6500 to 20
  • Functioned as a part of a team of 7 members (Onshore team)
  • Primarily engaged in requirement gathering, systems analysis to meet customer requirements, coding, and ensuring compliance with quality standards
  • Interacted effectively with executives, managers, and interdisciplinary teams globally in the orchestration of organizational projects
  • Environment and Skills used: Kubernetes, Spark, Scala, Drools, DB2, MySQL, AWS S3, Jsch Library, Agile scrum, Linux scripting, Rally, Jenkins

Technical Lead

Fiserv India Pvt. Ltd.
Noida, Uttar Pradesh
10.2016 - 06.2020

Project: WBI (Wire Batch Import)

Team Size: 13

Description: Wire Manager is a scalable, next-generation solution from Fiserv that expedites wire transfers for organizations and their customers, providing one of the quickest and most secure methods of moving funds between bank accounts. The scope of these modules is to pick up or create files and apply several processes in an automated workflow; the streamlined approval process replaces tedious, inefficient, and time-consuming tasks, so banking staff can manage wire transfers in less time. A built-in processing workflow automatically validates each transfer file to ensure that it meets Federal Reserve standards, checks for duplicate items, and runs risk-management controls. The workflow also provides staff and customers with convenient e-mail notification of completed or rejected items.
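One step of the workflow described above, the duplicate-item check, can be sketched as a content-fingerprint pass over a batch of transfer files. This is a minimal illustration under assumed names; a real WBI-style workflow would also validate each file against Federal Reserve formatting standards and run risk controls.

```python
import hashlib

def find_duplicate_files(transfer_files):
    """Return names of wire files whose content repeats an earlier file.

    transfer_files is a list of (name, payload_bytes) pairs, e.g. the
    batch picked up from an import directory.
    """
    seen = set()
    duplicates = []
    for name, payload in transfer_files:
        digest = hashlib.sha256(payload).hexdigest()  # content fingerprint
        if digest in seen:
            duplicates.append(name)
        else:
            seen.add(digest)
    return duplicates
```

Hashing the payload rather than comparing raw bytes keeps the memory footprint constant per file, which matters when batches are large.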

Responsibilities:

  • Achieved an accuracy of more than 90% for the predictive models in each of the projects and presented the results to the clients. Used ensembles across all models for monthly client delivery
  • Built proficiency in the rare-disease space and generated revenue growth of at least ~3 million dollars for each of the clients
  • Big data analytics with Hadoop, HiveQL, Spark RDD, and Spark SQL
  • ETL to convert unstructured data to structured data and import the data to Hadoop HDFS
  • Efficiently integrated and analyzed the data to increase drilling performance and interpretation quality
  • Analyzed sensor and well-log data in HDFS with HiveQL and prepared it for predictive learning models
  • Handled importing data from various data sources, performed transformations using Hive and loaded data into HDFS
  • Designed the real-time analytics and ingestion platform using Spark 2.x and Scala
  • Extracted meaningful analyses, interpreted raw data conducted quality assurance and provided meaningful conclusions and recommendations to the clients based on the data results
  • Conducted training and knowledge-sharing sessions for offshore and onsite team members and interns on various analytical, statistical-testing, and Big Data tools
  • Interaction with Business Analyst, SMEs, and other Data Architects to understand Business needs and functionality for various project solutions
  • Performed root-cause analysis using text mining of work-order descriptions to find the reason behind machine breakdowns and the failed part(s) involved
  • Sequence mining to identify pattern of machine breakdown
  • Environment and skills used: Spark, Scala, Hive, OLAP, DB2, AWS, EMR, AWS Lambda, Metadata, Sqoop, Flume, Hadoop, Agile scrum, Linux scripting, MS Excel, Rational Rose

10.2012 - 06.2016

Project: MFA (Multi-Factor Authentication)

Team Size: 5

Description: Multi-factor authentication (MFA) is a method of confirming a user's claimed identity in which the user is granted access only after successfully presenting two or more pieces of evidence (or factors) to an authentication mechanism. The goal of MFA is to create a layered defense and make it more difficult for an unauthorized person to access a target such as a physical location, computing device, network, or database. If one factor is compromised or broken, the attacker still has at least one more barrier to breach before successfully breaking into the target.
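The layered-defense rule above can be sketched as a check that at least two distinct, verified factor categories are presented. The `Factor` type and category names here are illustrative, not part of the actual project.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    category: str   # e.g. "knowledge", "possession", or "inherence"
    verified: bool  # whether this factor passed its own check

def authenticate(factors, required=2):
    """Grant access only when at least `required` distinct factor
    categories have been verified -- the layered defense MFA aims for.
    Two factors of the same category (e.g. two passwords) do not count
    as multi-factor.
    """
    categories = {f.category for f in factors if f.verified}
    return len(categories) >= required
```

Counting distinct categories, rather than raw factors, is what makes the defense layered: compromising one category still leaves another barrier.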

Responsibilities:

  • Performed data analysis, natural language processing, statistical analysis, generated reports, listings
  • Big data analytics with Hadoop, HiveQL, Spark RDD, and Spark SQL
  • Built prediction models of major subsurface properties for the underground image, geologic interpretation, and drilling decisions
  • Utilized advanced methods of big data analytics, machine learning, artificial intelligence, wave equation modeling, and statistical analysis
  • Provided an exclusive summary of oil/gas seismic data and well profiles; conducted predictive analyses and data mining to support interpretation and operations
  • Big data modeling with the incorporation of seismic, rock physics, statistical analysis, well logs, and geological information into the 'beyond image'
  • Involved in converting Hive/SQL queries into Spark transformations using Scala
  • Constantly monitor the data and models to identify the scope of improvement in the processing and business
  • Manipulated and prepared the data for data visualization and report generation
  • Performed data analysis, statistical analysis, generated reports, listings, and graphs
  • Accomplished a customer segmentation algorithm in Python, based on behavioral and demographic tendencies, for improving campaign strategies; this helped reduce marketing expenses by 10% and boosted the client's revenue
  • Built customer lifetime value prediction model using historical telecom data in SAS to better serve high priority customers through loyalty bonus, personalized services and draft customer retention plans and strategies
  • Developed PL/SQL procedures and functions to automate billing operations, customer barring, and number generation
  • Designed, prepared, documented, and tested web services using C#
  • Environment and skills used: Spark, Python, Hadoop, Hive, Java, SQL Server 2008R2/2005 Enterprise, Crystal Reports, Windows Enterprise Server 2000, SQL Profiler, and Query Analyzer

Software Engineer

Mercer
Gurugram, Haryana
12.2010 - 08.2012

Project: DocProd

Team Size 8

Description: Provide services and APIs to interact with the DocProd system's document-generation functionality. This solution defines a DocProd web service, simple APIs to consume the service, and the security approach required to interact with the service methods. It provides a lightweight .NET API that can be consumed by clients with minimal overhead and dependencies.

Responsibilities:

  • Created PL/SQL packages and database triggers, developed user procedures, and prepared user manuals for the new programs
  • Used Python to validate master data from the mainframe system
  • Performed performance improvements on the existing data warehouse applications to increase the efficiency of the existing system
  • Attended to both business and technical considerations when designing solutions to project-, team-, or company-related issues
  • Participated in knowledge transfer
  • Environment and skills used: C#, SQL Server, Python, Visual Studio, Windows Server 2000, jQuery, JavaScript, AngularJS

Education

MCA - Computers

IEC College of Engineering & Technology
UPTU
07.2006 - 06.2009

BSc(Computer Science) - Computers

Gurukul University
Haridwar
07.2003 - 06.2006

12th - Science

UP Board
U.P.
06.2002 - 06.2003

10th - Science

UP Board
U.P
06.2000 - 06.2001

Skills

    Databases - SQL Server, DB2, MySQL, NoSQL (Cassandra)


Certification

Certified Big Data Engineer, NASSCOM - 2016

Accomplishments

  • Achieved two times Star Of The Month award in Mercer.
  • Achieved Shining Star award in Fiserv.

Timeline

Software Engineer Lead I

Optum Global Services
06.2020 - Current

Certified Big Data Engineer, NASSCOM - 2016

08-2019

Technical Lead

Fiserv India Pvt. Ltd.
10.2016 - 06.2020

10.2012 - 06.2016

Software Engineer

Mercer
12.2010 - 08.2012

MCA - Computers

IEC College of Engineering & Technology
07.2006 - 06.2009

BSc(Computer Science) - Computers

Gurukul University
07.2003 - 06.2006

12th - Science

UP Board
06.2002 - 06.2003

10th - Science

UP Board
06.2000 - 06.2001