I am an IT professional with over six years of experience in the industry, specializing in Big Data technologies such as Hadoop, Hive, Spark, SQL, PySpark, and MapReduce, and I am certified in Artificial Intelligence and Machine Learning. My expertise extends to managing various data storage systems, including relational databases, data warehouses, and big data platforms. Skilled in collaborating with cross-functional teams to pinpoint business requirements, I am known for strong communication and problem-solving abilities. With a detail-oriented approach, I am deeply passionate about crafting impactful data solutions.
Project Name:- 'Equitable'
• Managed multiple portfolios spanning both Finance and Non-Finance components, each with several data products, which required a comprehensive and strategic approach.
• Under the Finance division, developed and enhanced multiple data products, continuously improving them to better serve clients' needs and keep pace with the evolving financial landscape.
• Under Non-Finance, Equitable offers products such as Individual Retirement, Group Retirement, Wealth Management, Life Claims, and Comp Audit. I have worked on data products for these areas, collaborating directly with clients and business stakeholders to deliver enhancements and logic changes.
• I manage all month-end, quarter-end, and week-end jobs, verifying source data and running queries to ensure the latest data meets requirements. When jobs fail, I perform root cause analysis, make the necessary code adjustments, and ensure smooth operations post-deployment.
• Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into data warehouses. This involves writing code or using ETL tools to cleanse, validate, and transform data for further analysis.
• Developed and maintained end-to-end big data pipelines, using PySpark, Hadoop, MapReduce, and Sqoop to efficiently process and transform massive volumes of data from diverse sources while ensuring data integrity and high performance (a representative PySpark sketch follows this list).
• Working closely with data scientists, analysts, and other stakeholders to understand their data requirements.
• Working on Databricks for analytics and advanced data processing.
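A minimal PySpark sketch of the kind of ETL step described above; the paths, table names, columns, and validation rules are illustrative assumptions, not actual project details.

# Minimal PySpark ETL sketch: extract raw data, cleanse and validate it,
# then load it into a warehouse table. All names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("finance_data_product_etl").getOrCreate()

# Extract: read raw records from a (hypothetical) landing zone
raw = spark.read.parquet("/data/raw/finance/transactions")

# Transform: basic cleansing and validation before loading
cleaned = (
    raw.dropDuplicates(["transaction_id"])         # drop duplicate records
       .filter(F.col("amount").isNotNull())        # simple validation rule
       .withColumn("load_date", F.current_date())  # audit column for month-end checks
)

# Load: write to a (hypothetical) warehouse table
cleaned.write.mode("overwrite").saveAsTable("warehouse.finance_transactions")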
Project Name:- 'Cisco'
• I used SAP Ariba (a cloud solution for comprehensive supplier management, offering fast implementation, existing interfaces to other Ariba solutions, and continuous further development that keeps the technology state of the art) to manage Cisco's sourcing and procurement activities. I followed the registration and onboarding process to set up suppliers in Ariba, after which all orders and payments were processed smoothly. I also promptly addressed ad-hoc supplier requests and used Salesforce for other project activities.
Project Name:- 'Google Maps'
• Processed and analyzed Google Maps data using Google in-house tools such as Pushpin, Crowd Computes & Super Mario. As part of the quality check team, I verified the final data; once it met the client's expectations and requirements, it went live on Google Maps. I also used SQL scripts to fetch data and numbers for production use (a minimal query sketch follows).
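A minimal sketch of the kind of SQL used to pull production numbers, run here through Python's standard sqlite3 module as a stand-in for the internal database client; the table, columns, and status values are illustrative assumptions.

# Hypothetical query counting approved map features per region for reporting.
import sqlite3  # stand-in for the actual internal database client

query = """
SELECT region, COUNT(*) AS features_ready
FROM map_features
WHERE qc_status = 'approved'
GROUP BY region;
"""

with sqlite3.connect("maps_qc.db") as conn:  # hypothetical local database
    for region, count in conn.execute(query):
        print(region, count)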
Received the Excellence Award at Infosys for outstanding performance in a quarter, along with two Spot Awards for commendable achievements. Additionally, recognized multiple times as a top team player at TCS.
In my free time, I enjoy watching cricket and football matches and exploring the latest in technology.