Gen AI, RAG, LLMs, Artificial Intelligence


Professional with high standards and a strong background in Machine Learning, Artificial Intelligence, and Generative AI. Expertise in programming languages, data preprocessing, and model evaluation, combined with a results-driven mindset. Developed critical thinking and problem-solving skills in a fast-paced, technology-driven environment. Seeking to transition into a new field where these transferable skills can drive innovative solutions and growth.
Python
Python, E-Cell IIT Hyderabad (Jan–Feb 2022)
1. "Multi-Agent LLM-Based Automated Report Generation System with Human Feedback and Statistical Evaluation."
Built an automated, structured report-generation system using parallel LLM agents and real-time web research. The system integrates human feedback, statistical validation, and full execution traceability to ensure high-quality, reproducible outputs. Parallel orchestration reduced generation latency by up to 40%, and human evaluations showed significant improvements in coherence and usefulness over single-LLM baselines.
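The parallel orchestration pattern behind this project can be sketched minimally as follows. This is a toy illustration, not the project's code: the agent functions, section topics, and return values are hypothetical stand-ins for real LLM API calls and live web research.

```python
from concurrent.futures import ThreadPoolExecutor

def research_agent(topic):
    # Hypothetical stand-in for an agent doing real-time web research.
    return f"findings on {topic}"

def writer_agent(topic, findings):
    # Hypothetical stand-in for an LLM agent drafting one report section.
    return f"## {topic}\n{findings}"

def generate_report(topics):
    # Run one research+write pipeline per section concurrently; map()
    # returns results in input order, which keeps assembly reproducible.
    def section(topic):
        return writer_agent(topic, research_agent(topic))
    with ThreadPoolExecutor() as pool:
        sections = list(pool.map(section, topics))
    return "\n\n".join(sections)

report = generate_report(["Methods", "Results"])
print(report)
```

Running the section pipelines concurrently rather than sequentially is what drives the latency reduction claimed above; ordered assembly preserves traceability of which agent produced which section.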
2. "Human-Aligned Prompt Optimization for Newsletter Generation Using Preference-Based RLHF and Graph-Structured Learning"
Used AI21 Maestro (frozen LLM), offline RLHF, graph-based prompt optimization, contextual bandits, and annotator reliability weighting. Achieved up to an 81% human-preference win rate with statistically significant improvements (p < 0.01), interpretable prompt graphs, and CPU-only deployment.
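The bandit-with-reliability-weighting idea can be illustrated with a minimal epsilon-greedy sketch, assuming prompt variants are the arms and each annotator vote is weighted by a reliability score. The class, prompt labels, and scores here are hypothetical, not the project's implementation.

```python
import random

class PromptBandit:
    """Toy epsilon-greedy bandit over prompt variants, with each
    preference vote weighted by annotator reliability (hypothetical sketch)."""
    def __init__(self, prompts, epsilon=0.1, seed=0):
        self.prompts = prompts
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.wins = {p: 0.0 for p in prompts}
        self.trials = {p: 1e-9 for p in prompts}  # tiny init avoids div-by-zero

    def select(self):
        # Explore a random prompt with probability epsilon,
        # otherwise exploit the highest weighted win rate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.prompts)
        return max(self.prompts, key=lambda p: self.wins[p] / self.trials[p])

    def update(self, prompt, preferred, reliability):
        # A reliable annotator's vote moves the estimate more than
        # an unreliable one's.
        self.wins[prompt] += reliability * (1.0 if preferred else 0.0)
        self.trials[prompt] += reliability
```

In this framing, "human-preference win rate" corresponds to the weighted win/trial ratio the bandit exploits; the graph-structured optimization described above would additionally relate prompts so feedback on one variant informs its neighbors.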
3. "Human-in-the-Loop Image Generation with CLIP-Conditioned Latent Diffusion Models"
Designed a Stable Diffusion-based image-generation system using token-level CLIP cross-attention and classifier-free guidance for precise prompt control. Implemented iterative human feedback with LoRA fine-tuning of the UNet cross-attention layers to improve visual alignment and realism. Achieved faster convergence and higher prompt fidelity while reducing trainable parameters by over 90% compared to full fine-tuning.
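The parameter-reduction claim above follows from how LoRA works: the frozen base weight W is augmented with a low-rank update B·A, and only A and B are trained. A minimal NumPy sketch, with toy dimensions chosen for illustration rather than the actual UNet cross-attention sizes:

```python
import numpy as np

class LoRALinear:
    """Toy LoRA adapter on a frozen linear layer:
    y = W x + scale * B (A x), training only A and B (hypothetical sketch)."""
    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))       # frozen base weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small init
        self.B = np.zeros((d_out, rank))                   # trainable, zero init
        self.scale = alpha / rank

    def forward(self, x):
        # Zero-initialized B means the adapter starts as an identity delta,
        # so initial outputs match the frozen model exactly.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        return self.A.size + self.B.size

    def full_params(self):
        return self.W.size

layer = LoRALinear(d_in=768, d_out=768, rank=4)
reduction = 1 - layer.trainable_params() / layer.full_params()
print(f"trainable fraction reduced by {reduction:.1%}")
```

At rank 4 on a 768x768 projection, the trainable matrices hold 2 * 4 * 768 = 6,144 parameters versus 589,824 in the full weight, a reduction of about 99%, consistent with the over-90% figure above.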