Tirumala Jalakam

Bangalore

Summary

Software Developer and Data Engineer with expertise in building scalable backend systems and data pipelines. Skilled in API development, ETL, workflow automation, and CI/CD integration. Adept at optimizing data processing, automating security workflows, and improving system reliability. A problem solver and collaborative team player, focused on performance optimization and efficient project execution.

Overview

6 years of professional experience
1 Certification

Work History

Software Developer

Tata Consultancy Services
Bangalore
11.2024 - Current

Responsibilities:

  • Backend API Enhancement: Enhanced and maintained Django & DRF-based APIs integrating with Black Duck for software composition analysis and security vulnerability tracking.
  • Scalable Data Pipeline Optimization: Improved data processing efficiency in a Redis queue-based pipeline, ensuring scalable and reliable data flow for real-time security analysis.
  • Script Development & Data Processing: Developed and optimized multiple Python scripts to automate Black Duck data extraction, transformation, and enrichment processes, improving maintainability and performance.
  • Large Dataset Management: Implemented pagination, filtering, and structured data handling to process large datasets (thousands of records per run) without performance degradation; a minimal sketch follows this list.
  • Workflow Automation: Automated Jira ticket workflows using Groovy scripts, streamlining issue tracking, ticket reopening, and comment logging based on Black Duck data changes.
  • Collaboration & System Optimization: Worked closely with cross-functional teams to optimize API performance and strengthen compliance monitoring processes through efficient integration strategies.
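
The large-dataset bullet above refers to offset-based pagination. As a minimal, hypothetical sketch in Python (the endpoint path, parameter names, and response fields below are illustrative assumptions, not the actual Black Duck API contract):

    # Paginated extraction from a Black Duck-style REST API (illustrative).
    import requests

    BASE_URL = "https://blackduck.example.com/api"  # assumed host
    PAGE_SIZE = 100

    def fetch_all(session: requests.Session, path: str) -> list:
        """Walk offset/limit pages until the server returns an empty batch."""
        items, offset = [], 0
        while True:
            resp = session.get(
                f"{BASE_URL}/{path}",                        # assumed endpoint
                params={"offset": offset, "limit": PAGE_SIZE},
                timeout=30,
            )
            resp.raise_for_status()
            batch = resp.json().get("items", [])             # assumed field
            if not batch:
                break
            items.extend(batch)
            offset += PAGE_SIZE
        return items

    with requests.Session() as s:
        s.headers["Authorization"] = "Bearer <token>"        # placeholder
        records = fetch_all(s, "vulnerabilities")
        print(f"fetched {len(records)} records")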

Data Engineer

Tata Consultancy Services
Bangalore
04.2023 - 11.2024

Responsibilities:

  • Data Collection and Integration: Identified and extracted the required data from client databases, then transformed it to fit organizational and business needs.
  • ETL Pipeline Development: Built and maintained ETL pipelines to extract and transform data from multiple source systems and storage types.
  • Data Storage: Validated and sanitized data before loading it into GCS buckets.
  • CI/CD Pipeline Development: Implemented a CI/CD process around Apache Airflow to keep data pipelines maintained and deployments verified.
  • Technology Utilization: Used PySpark for distributed computations and Python with Pandas for data cleaning and preparation; a minimal ETL sketch follows this list.
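
As a minimal PySpark ETL sketch in the spirit of the role above (the paths, bucket names, and column names are illustrative assumptions, not client specifics):

    # Extract-transform-load with PySpark: JSON source to a GCS Parquet sink.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("client-etl").getOrCreate()

    # Extract: read raw records (source path is an assumption).
    raw = spark.read.json("gs://example-raw-bucket/clients/*.json")

    # Transform: sanitize -- drop rows missing the key, dedupe, derive a date.
    clean = (
        raw.dropna(subset=["client_id"])                   # assumed key column
           .dropDuplicates(["client_id"])
           .withColumn("ingested_at", F.to_timestamp("ingested_at"))
           .withColumn("ingest_date", F.to_date("ingested_at"))
    )

    # Load: write validated data to a GCS bucket, partitioned by date.
    clean.write.mode("overwrite").partitionBy("ingest_date").parquet(
        "gs://example-clean-bucket/clients/"
    )
    spark.stop()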

Software Engineer

MSYS Technologies Private Limited
Chennai
02.2021 - 03.2023

Responsibilities:

  • Automation Development and Bug Resolution: Developed and improved product features, built automation scripts, and resolved defects to make the scripts more efficient and dependable.
  • Technical Expertise: Applied strong proficiency in Python, Elasticsearch, REST APIs, and Git for integration, maintenance, and optimization work.
  • Storage Array Testing: Ran functional tests on high-end and low-end storage arrays, measuring each model against its expected functionality standards.
  • Automation Testing: Modified and integrated unit tests across numerous storage array models, extending automation scripts to cover a wide range of test scenarios; see the sketch after this list.
  • Debugging and Quality Assurance: Fixed errors observed during storage reliability tests to create a more dependable testing environment, and established and enhanced general QA automation to improve software quality.
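
A hypothetical illustration of parameterized tests across storage array models (the ArrayClient helper, model names, and latency budgets are stand-ins, not MSYS internals):

    # Parameterized pytest suite running the same checks on every array model.
    import pytest

    class ArrayClient:
        """Hypothetical wrapper around a storage array's management API."""
        def __init__(self, model: str):
            self.model = model

        def health(self) -> str:
            return "OK"        # stub; a real client would query the array

        def read_latency_ms(self) -> float:
            return 1.5         # stub measurement

    MODELS = ["low-end-a", "low-end-b", "high-end-x"]  # assumed model list

    @pytest.fixture(params=MODELS)
    def client(request):
        return ArrayClient(request.param)

    def test_array_reports_healthy(client):
        assert client.health() == "OK"

    def test_read_latency_within_budget(client):
        # Tighter budget for the high-end tier; thresholds are illustrative.
        budget = 2.0 if client.model.startswith("high") else 5.0
        assert client.read_latency_ms() <= budget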

Software Engineering Intern

Glenwood Systems
Chennai
09.2019 - 10.2019

Responsibilities:

  • Cross-Departmental Collaboration: Worked effectively with members of the software development team and personnel from various departments to ensure cohesive project development and successful outcomes.
  • Task Prioritization: Prioritized and managed tasks efficiently, ensuring completion in order of importance to meet project deadlines and objectives.

Education

Bachelor of Technology - Computer Science and Engineering

JNT University
Ananthpur
12.2020

Skills

Technical Skills:

Programming Languages: Python, Java, Groovy

Data Processing & Automation: Pandas, PySpark, Kafka, Redis

Big Data & Cloud Platforms: Google Cloud Platform (GCP), Amazon EC2

Database Technologies: NoSQL (Elasticsearch), SQL (PostgreSQL, MySQL)

Web Development: Django, Django REST Framework (DRF), React.js

API Development & Integration: REST APIs, Black Duck API, Jira API

Workflow Automation: Groovy (Jira automation), Apache Airflow

Containerization & Orchestration: Docker, GCP Cloud Run

Version Control & CI/CD: Git, Bitbucket, Jenkins, GCP Cloud Build, GCP Cloud Repo

Collaboration Tools: Jira, Confluence

Soft Skills:

Problem Solving: Designing scalable solutions, optimizing data pipelines, handling large datasets efficiently

Task & Project Management: Prioritization, time management, CI/CD pipeline development, ETL pipeline development

Collaboration & Teamwork: Cross-functional collaboration, effective communication, coordinating with security/compliance teams

Languages

Telugu: First Language
English: Advanced (C1)
Hindi: Beginner (A1)

Certification

  • HackerRank certifications in Problem Solving, Java, and Python.
  • NPTEL certification in Programming in Java.

Accomplishments

  • Personal Project: Web App Development
    Built a complete web application using Django REST Framework (DRF) for the backend and React.js for the front end.
    Implemented authentication, data visualization, and RESTful API integration.
    Deployed the application with Docker and configured CI/CD using GCP Cloud Build and Cloud Run.
  • Personal Project: Real-Time Weather Data Pipeline
    Designed and implemented an end-to-end data pipeline using Docker, Kafka, PySpark, and Airflow to ingest, process, and store weather data.
    Used Kafka for real-time data streaming, with producers generating weather data and consumers reading raw data from Kafka topics.
    Leveraged PySpark for batch processing of weather data, performing data transformations (flattening, timestamp conversion) and aggregating key metrics.
    Managed the entire ETL workflow using Airflow for task scheduling and orchestration, including Kafka consumer and PySpark data processing jobs.
    Stored raw data in Google Cloud Storage (GCS) and processed data in BigQuery, enabling efficient querying and analysis; a minimal producer sketch appears after this list.
  • Personal Project: Discord Bot Development
    Developed a Discord bot using Python with Elasticsearch.
    Integrated an open-source API to fetch and display data to users in real time.
    Used Elasticsearch for fast data read and search operations.
    Hosted the bot on Amazon EC2 for scalability and availability.
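
A minimal sketch of the producer leg of the weather pipeline described above (the topic name, broker address, and fetch_weather helper are illustrative assumptions; library: kafka-python):

    # Kafka producer streaming weather readings as JSON into a raw topic.
    import json
    import time
    from kafka import KafkaProducer

    TOPIC = "weather-raw"                        # assumed topic name
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",      # assumed broker
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def fetch_weather() -> dict:
        """Hypothetical stand-in for a call to a weather API."""
        return {"city": "Bangalore", "temp_c": 27.0, "ts": time.time()}

    # Publish one reading per minute; a downstream consumer (not shown)
    # hands these records to the PySpark batch job for transformation.
    while True:
        producer.send(TOPIC, fetch_weather())
        producer.flush()
        time.sleep(60)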
