Highly motivated engineer with a desire to take on new challenges. Strong work ethic, adaptability, and exceptional interpersonal skills. Adept at working effectively unsupervised and at quickly mastering new skills. Accustomed to working with data architects to develop comprehensive data pipelines based on business requirements.
Overview
9 years of professional experience
1 Certification
Work History
GenAI Engineer
VOIS
Pune
10.2024 - Current
Develop and implement LLMOps solutions for deploying, monitoring, and managing large language models and generative AI systems.
Collaborate with data scientists, ML engineers, and product teams to build scalable pipelines for LLM fine-tuning, evaluation, and inference.
Ensure responsible AI practices by integrating data anonymization, bias detection, and compliance monitoring into LLM workflows.
Develop and maintain monitoring and management tools to ensure the reliability and performance of LLM pipelines.
Work with stakeholders across the organization to understand their generative AI needs and deliver LLMOps solutions that drive business impact.
Stay up to date with the latest trends and technologies in LLMOps, generative AI, and responsible AI, and share knowledge to keep the team at the cutting edge.
Troubleshoot and resolve issues related to LLM deployment, scaling, and performance.
Experience with LLMOps platforms and tools (e.g., Hugging Face, LangChain, Ray, MLflow, Kubeflow).
Familiarity with prompt engineering, retrieval-augmented generation (RAG), and LLM evaluation frameworks.
Experience with data privacy, PII anonymization, and responsible AI practices.
Strong knowledge of containerization and orchestration (Docker, Kubernetes).
Experience with DevOps, CI/CD, and scalable API development for LLM serving (see the sketch after this list). Passion for the engineering challenges of generative AI and scaling LLM solutions.
Experience designing and building LLMOps infrastructure, including data pipelines, model management, and monitoring tools.
Deep understanding of LLM architectures (e.g., GPT, Llama) and experience with model fine-tuning, evaluation, and deployment.
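As a minimal illustration of the serving-plus-anonymization work described above, the sketch below wires a regex-based PII scrubber into a FastAPI endpoint. FastAPI, the PII patterns, and the model call are assumptions for illustration, not the actual production stack.

```python
# Minimal sketch: an LLM-serving API with a PII-anonymization step.
# The regex patterns are deliberately small and illustrative; the
# completion below is a hypothetical placeholder for a real model client.
import re
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Illustrative PII patterns (emails, phone-like numbers).
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "<PHONE>"),
]

def anonymize(text: str) -> str:
    """Replace PII matches with placeholder tokens before the prompt reaches the model."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    clean = anonymize(prompt.text)
    # Hypothetical model call; a real deployment would invoke the
    # inference client behind the containerized serving layer.
    completion = f"[model output for: {clean}]"
    return {"completion": completion}
```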
Senior Data Engineer
VOIS
Pune
09.2022 - 10.2024
Leveraged Google Cloud to build and fine-tune robust ETL pipelines, delivering high-quality, actionable, business-driven data.
Built Dataflow jobs to ingest diverse, large-scale data from GCS into BigQuery, enforcing schemas and scrubbing data to ensure data integrity.
Crafted and executed complex SQL queries and stored procedures in BigQuery, transforming and enriching data across layers into the final business dataset.
Orchestrated data pipelines using Apache Airflow (see the DAG sketch after this list) and leveraged Bamboo for CI/CD, ensuring seamless deployments to higher environments.
Expertly managed and version-controlled code repositories in Bitbucket, facilitating streamlined development within Agile methodologies.
Optimized data processing jobs for performance and cost.
Worked closely with data architects to understand data requirements and deliver solutions that met business needs.
Communicated technical concepts effectively to both technical and non-technical stakeholders.
Participated in agile development processes, including sprint planning, stand-ups, and retrospectives.
Debugged and resolved technical issues related to data pipelines and infrastructure.
Provided support and maintenance for existing data solutions.
Continuously monitored and improved data pipeline performance and reliability.
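A minimal Airflow DAG sketch of the GCS-to-BigQuery orchestration described above. The bucket, dataset, table, and stored-procedure names are hypothetical stand-ins, not the actual production objects.

```python
# Sketch of an Airflow DAG that loads GCS files into BigQuery and then
# runs a transform, as in the pipelines described above. All resource
# names below are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="gcs_to_bq_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Stage raw files from GCS into a landing table, enforcing the schema.
    load = GCSToBigQueryOperator(
        task_id="load_gcs_to_bq",
        bucket="example-landing-bucket",  # hypothetical bucket
        source_objects=["sales/*.csv"],
        destination_project_dataset_table="analytics.raw_sales",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Enrich the staged data into the business layer.
    transform = BigQueryInsertJobOperator(
        task_id="transform_raw_sales",
        configuration={
            "query": {
                "query": "CALL analytics.sp_build_sales_mart()",  # placeholder procedure
                "useLegacySql": False,
            }
        },
    )

    load >> transform
```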
Data Engineer
Tata Consultancy Services
Pune
03.2016 - 09.2022
Developed, tested, and maintained DataMart objects using ETL techniques.
Developed ETL pipelines using Informatica, stored procedures, and views on a DB2-based data warehouse.
Performed data profiling and analysis on source-system data to ensure it could be integrated and represented properly in our models.
Worked directly with architects and DBAs on data modeling, schema design, troubleshooting, debugging, and issue resolution in the production environment.
Communicated effectively with end users, peers, and management regarding work assignments and solutions.
Delivered projects within deadlines with a high level of data quality and integrity.
Provided support for the resolution of all reporting issues escalated by the support operations team post-implementation.
Used JIRA to maintain a productive, high-quality development environment.
Efficiently deployed tasks to production using a CI/CD approach through Jenkins.
Built an understanding of the requirements of large enterprise applications.
Actively contributed to and participated in Scrum and daily stand-ups.
Gained sound knowledge of batch processing systems.
Tuned the performance of database SQL queries and Informatica mappings and workflows.
Education
B.E. - Computer Science & Engineering
RGPV
Bhopal
01.2015
Skills
RAG
Artifact Registry
Cloud Build
Vertex AI
Docker
Kubeflow
GCP
Cloud Run
BigQuery
Cloud Storage
Dataflow
Cloud Functions
Data Fusion
Cloud Composer
Pub/Sub
ETL
ELT
Python
Git
Jira
Agile
Data Warehousing
Shell Scripting
SQL
Machine Learning
GenAI
SAFe Agile
Bitbucket
Confluence
Putty
Autosys
Unix
CI/CD
DBArtisan
Large language models
Version control
Languages
English, Proficient
Hindi, Native
Online Profiles - LinkedIn
www.linkedin.com/in/akankshatripathi-18888ba7
Certification
SQL (Basic) Certificate, 2022, HackerRank
SQL (Intermediate) Certificate, 2022, HackerRank
Oracle Cloud Infrastructure AI Foundations Associate, 2024, Oracle
Accomplishments
Received the 'On the Spot Award' for excellent performance on a project.
Received the 'Vodafone Star Award' for designing and implementing a Cloud Functions pipeline.
Designed multiple fact tables with aggregated monthly data to enable efficient reporting.
Optimized complex BigQuery scripts for daily, weekly, and monthly pipelines for faster execution.
Created multiple table functions for reporting so that data is not persisted in BigQuery storage; reporting tables are built on top of these functions, saving a significant amount of storage cost (see the sketch after this list).
Created BigQuery external tables for a use case where data was stored in a GCS bucket and refreshed once a week, so users could run SQL queries without the burden of storing the data.
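A sketch of the table-function and external-table patterns behind the two storage-cost accomplishments above, issued through the BigQuery Python client. Dataset, table, column, and bucket names are hypothetical assumptions.

```python
# Sketch of the storage-saving patterns described above, run via the
# BigQuery Python client. All object names are illustrative, not the
# actual production objects.
from google.cloud import bigquery

client = bigquery.Client()

# Table function: computes the reporting result on demand instead of
# persisting it, so no extra BigQuery storage is consumed.
table_function_ddl = """
CREATE OR REPLACE TABLE FUNCTION reporting.monthly_sales(report_month DATE)
AS (
  SELECT region, SUM(amount) AS total_amount
  FROM analytics.sales_fact
  WHERE DATE_TRUNC(sale_date, MONTH) = report_month
  GROUP BY region
);
"""
client.query(table_function_ddl).result()

# External table: queries the weekly files in GCS directly, without
# loading them into BigQuery storage.
external_table_ddl = """
CREATE OR REPLACE EXTERNAL TABLE reporting.weekly_feed (
  region STRING,
  amount NUMERIC
)
OPTIONS (
  format = 'CSV',
  uris = ['gs://example-feed-bucket/weekly/*.csv'],
  skip_leading_rows = 1
);
"""
client.query(external_table_ddl).result()
```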
Training
Preparing for Your Professional Data Engineer Journey