HEMANTH RATAKONDA

Trier

Summary

With over a decade of experience across the full software development lifecycle, I specialize in Data Engineering, MLOps, and Artificial Intelligence. Driven by a passion for data and AI, I design innovative, scalable solutions through creative and strategic problem-solving. My track record includes the successful design and implementation of large-scale systems built on modern Big Data technologies. As an entrepreneur, I have overcome significant challenges that strengthened my resilience and sharpened my ability to deliver effective solutions under pressure. These experiences have been instrumental in my professional development, reinforcing my dedication to excellence and my trajectory toward long-term success in tech.

Overview

12 years of professional experience
1 Certification

Work History

TEAM LEAD, AI

EIB
Luxembourg
09.2024 - Current
  • Led and architected AI solutions for the engineering team, overseeing end-to-end design, development, and deployment of scalable, secure, production-ready systems.
  • Developed and deployed AI-driven chatbots using OpenAI's ChatGPT and AI Foundry, automating support workflows and enhancing customer engagement through intelligent dialogue systems.
  • Built enterprise-grade conversational AI with Microsoft Copilot Studio, integrating internal knowledge bases and business logic to automate tasks and streamline data retrieval.
  • Designed and implemented advanced NLP pipelines, including custom models for summarization, entity recognition, and intent classification, enabling real-time understanding and response.
  • Leveraged Microsoft Azure services to deploy low-latency, cloud-native AI applications, optimizing performance, reducing costs, and ensuring system reliability.
  • Architected and maintained enterprise data lakes and pipelines, using ETL, ELT, and CDC for both real-time and batch data processing.
  • Led AI team's data engineering initiatives, focusing on pipeline orchestration, schema management, data quality, and storage optimization for AI and analytics use cases.
  • Collaborated with product managers, stakeholders, and cross-functional teams to identify and prioritize AI/ML solutions aligned with business goals.
  • Directed agile ceremonies, sprint planning, and code reviews, mentoring junior engineers and ensuring adherence to best practices, secure coding, and model governance.
  • Applied OCR and machine learning techniques to extract structured data from unstructured sources like PDFs and images, enabling automated document intelligence workflows.

SENIOR DATA ENGINEER - CONSULTANT

COOP
Stockholm
06.2023 - 09.2024
  • Worked closely with data scientists to build predictive models using PySpark.
  • Extensive experience with Delta Lake, Azure Databricks, Azure Data Factory, Blob Storage, ADLS, Synapse, and Kafka.
  • Implemented MLOps practices, continuously monitoring and improving data engineering processes and methodologies to enhance efficiency and productivity.
  • Led and mentored junior data engineers, providing guidance on best practices and development methodologies.
  • Managed life cycle elements of ETL development, from robust testing to final deployment.
  • Effectively integrated various information systems, including CRMs, ERPs and accounting systems.
  • Created ETL processes to extract images from the CRM and executed AI algorithms to detect and prevent fraudulent activity.

CONSULTANT

Ericsson
Stockholm
12.2021 - 03.2023
  • Worked closely with data scientists to build predictive models using PySpark.
  • Extensive experience with Azure Delta Lake, Azure Databricks, Azure Data Factory, Azure HD Insights.
  • Developed ETL pipelines using Azure Data Factory activities and Databricks; transformations included lookups, data casting, data cleansing, data transformation, and aggregation.
  • Flattened and transformed large volumes of nested data in Parquet and Delta formats using Spark SQL and the latest join optimization methods, then loaded the results into Hive, Delta Lake, and Snowflake tables.
  • Supported operation team in Hadoop cluster maintenance activities including commissioning and decommissioning nodes and upgrades.
  • Responsible for defining, developing, and communicating key metrics and business trends to partner and management teams.
  • Designed a prototype for a product recommendation system using Python machine learning libraries and the Azure SDK, including ADLS, Azure Identity, and Azure Key Vault.
  • Experienced with GitLab CI and Jenkins for end-to-end CI/CD automation of all builds and deployments.
  • Implemented automated alerting and monitoring for our applications using tools like Grafana and Kibana, so production failures automatically alerted the team's on-call engineer.

MACHINE LEARNING ENGINEER-CONSULTANT

SEB
Stockholm
05.2020 - 11.2021
  • Transformed raw data to conform to the assumptions of machine learning algorithms.
  • Developed advanced graphic visualization concepts to map and simplify analysis of heavily numeric data and reports.
  • Researched, designed and implemented machine learning applications to solve business problems affecting millions of users.
  • Designed and developed ModelOps workflows using open-source MLflow, OpenShift, and GCP.
  • Designed and developed pipelines using Spark, Scala, Python, NiFi, and Presto on the CDP platform.
  • Prototyped machine learning applications and quickly determined application viability.
  • Tuned systems to boost performance. Authored code and enhancements for inclusion in future code releases and patches.
  • Optimized PySpark jobs to run on a Kubernetes cluster for faster data processing.
  • Developed analytical components using PySpark and Spark Streaming.
  • Designed and coordinated with the Data Science team in implementing Advanced Analytical Models in Hadoop Cluster over large Datasets.
  • Wrote Hive SQL scripts to create complex tables with performance features such as partitioning.

DATA ENGINEER- CONSULTANT

Qliro
09.2019 - 03.2020
  • Contributed to internal initiatives for process improvement, efficiency, and innovation.
  • Monitored incoming data analytics requests, executed analyses, and efficiently distributed results to support business strategies.
  • Prepared written summaries to accompany results and maintain documentation.
  • Addressed ad hoc analytics requests and facilitated data acquisitions to support internal projects, special projects and investigations.
  • Wrote ETL jobs and developed applications using Spark and Scala; handled AWS cloud migration, setting up MSK, EMR, and Databricks.
  • Wrote data pipelines to pull data from APIs (PaymentIQ, Zendesk, and Google Analytics) and ingest it into the data lake.
  • Involved in CDC design and implemented data pipelines that make use of CDC; designed and developed analytical data structures.
  • Collaborated on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.

MACHINE LEARNING ENGINEER - CONSULTANT

H&M
Stockholm
03.2019 - 08.2019
  • Identified new problem areas and researched technical details to build innovative products and solutions.
  • Developed a recommendation engine.
  • Collaborated with multi-disciplinary product development teams to identify performance improvement opportunities and integrate trained models.
  • Developed ETL tasks along with models and deployed them in Azure.
  • Productionized models and owned MLOps.
  • Created customized applications to make critical predictions, automate reasoning and decisions and calculate optimization algorithms.
  • Transformed raw data to conform to the assumptions of machine learning algorithms.
  • Composed production-grade code to convert machine learning models into services and pipelines to be consumed at web-scale.

BIG DATA DEVELOPER

Kindred Group Plc
Stockholm
03.2017 - 02.2019
  • Acted as expert technical resource to programming staff in program development, testing and implementation process.
  • Partnered with infrastructure engineers and system administrators in designing big-data infrastructures.
  • Worked in hybrid environment where legacy and data warehouse applications and new big-data applications co-existed.
  • Engaged with business representatives, business analysts and developers and delivered comprehensive business-facing analytics solutions.
  • Brought data from Solace/JMS sources into Hadoop using Flume.
  • Wrote custom interceptors to convert Protobuf to Avro.
  • Developed applications that comply with GDPR guidelines; acted as a bridge between the data engineering and data science teams.
  • Compiled, cleaned and manipulated data for proper handling.
  • Wrote software that scaled to petabytes of data and supported millions of transactions per second.

BIG DATA ENGINEER

Kogentix
Hyderabad
07.2016 - 02.2017
  • Converted PL/SQL functionality to Scala code.
  • Worked closely with the Cloudera team to learn Apache Kudu and implemented it in the dev environment.
  • Ran a POC on Apache Kudu as an alternative to Hive.
  • Involved in end-to-end testing of applications.
  • Created Hive tables and loaded them with data from Oracle.
  • Developed, implemented and maintained data analytics protocols, standards and documentation.
  • Analyzed complex data and identified anomalies, trends and risks to provide useful insights to improve internal controls.
  • Designed and developed analytical data structures.

CONSULTANT

Akamai
Bengaluru
02.2016 - 06.2016
  • Collaborated with cross functional team on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.
  • Authored specifications for data processing tools and technologies.
  • Processed logs and extracted the data required for business analysis.
  • Analyzed available live data and created datasets for clients.
  • Downloaded data from HDFS to a local server, extracted it per requirements, and visualized it for the business.
  • Part of the NEA (Network Experience Analytics) product development team, building a tool for network performance analysis.
  • Analyzed problematic areas to provide recommendations and solutions.
  • Collaborated with teams to define, strategize and implement marketing and web strategies.
  • Delivered outstanding service to clients to maintain and extend relationship for future business opportunities.

APPLICATION DEVELOPER

IBM
Bengaluru
01.2014 - 01.2016
  • Participated with clients in discussion meetings.
  • Resolved system test and validation problems to provide normal program functioning.
  • Participated in design and planning exercises for future software rollouts.
  • Wrote code for database-driven applications.
  • Processed big data on a Hadoop cluster of 638 nodes.
  • Developed Pig scripts for data processing.
  • Extensively used Hive for querying large datasets.
  • Created and worked with Hive tables.
  • Wrote Sqoop jobs to move data between relational databases and HDFS.
  • Performed unit and system integration testing for Hadoop jobs and prepared test cases.
  • Worked in a 24/7 environment to fix production issues and identify root causes.

Education

Bachelor of Science - Instrumentation Engineering

JNTUA
08.2013

Skills

  • Python
  • Scala
  • Spark
  • Databricks
  • AWS EMR, S3, AWS Glue, Athena, Databricks, EC2, DocumentDB, MongoDB, Snowflake
  • Airflow
  • GCP, Docker, Databricks, BigQuery, Dataproc, Pub/Sub, Snowflake
  • Cloudera/HDP - Spark, NiFi, Kudu, Hive, Sqoop, Oozie, HBase, SQL
  • ML-Flow
  • CI/CD, GitOps, Jenkins, Azure DevOps, Ansible, GoCD
  • Docker
  • Kubernetes
  • Collibra
  • Data Lake, Data Mesh, Lakehouse, Data Vault
  • Azure, ADF, Databricks, SQL, Synapse, ADL, OpenAI, AI Foundry, Copilot, Copilot Studio

Certification

  • AWS Certified Cloud Practitioner
  • Microsoft Certified: Azure Fundamentals
  • Microsoft Certified: Azure Data Engineer Associate
  • Azure Databricks Platform Architect
  • Microsoft Certified: Azure AI Fundamentals

Languages

  • Telugu
  • English
  • Hindi

Personal Information

  • Date of Birth: 05/11/91
  • Nationality: Swedish
  • Marital Status: Single

Driving License - Category

B
