When should you retrieve, and when should you fine-tune? This talk demystifies when to use RAG versus fine-tuning for GenAI applications. We'll then go hands-on with LoRA and MCP, fine-tuning a model live to show how quickly you can build domain-specific LLMs on your own terms.
Join us for an in-depth exploration of the critical decision between RAG (Retrieval-Augmented Generation) and fine-tuning for customizing Large Language Models. This session will provide clear guidance on when to use each approach, along with practical demonstrations using LoRA (Low-Rank Adaptation) and MCP (Model Context Protocol) to fine-tune models efficiently.
AI/ML Specialist
Kanishka Mohaia is a specialist in machine learning and artificial intelligence applications. Details to be updated.