RAG or Fine-Tune? Customizing LLMs with LoRA and MCP

30 min · Intermediate · Sydney

GenAI · LLM · RAG · Fine-tuning · LoRA · MCP · Machine Learning · AI

Description

When should you retrieve, and when should you fine-tune? This talk demystifies the choice between RAG and fine-tuning for GenAI applications. We'll then go hands-on with LoRA and MCP to fine-tune a model live, showing how quickly you can build domain-specific LLMs on your own terms.

Abstract

Join us for an in-depth exploration of the critical decision between RAG (Retrieval-Augmented Generation) and fine-tuning for customizing Large Language Models. This session will provide clear guidance on when to use each approach, along with practical demonstrations using LoRA (Low-Rank Adaptation) and MCP (Model Context Protocol) to fine-tune models efficiently.

Key Takeaways

  • Understand when to use RAG vs fine-tuning for different use cases
  • Learn practical implementation of LoRA for efficient model fine-tuning
  • Explore MCP for enhanced model customization and control
  • See live demonstrations of domain-specific LLM development
  • Gain insights into building custom LLMs on your own terms
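To give a flavour of the LoRA technique covered in the takeaways above, here is a minimal sketch of a LoRA forward pass for a single linear layer. The pretrained weight W stays frozen; only the low-rank matrices A and B are trained, and their product (scaled by alpha / r) is added to the base output. All names and dimensions below are illustrative, not from the session materials.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 4, 2, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init (standard LoRA)

def lora_forward(x):
    # y = x W^T + (alpha / r) * x (B A)^T
    return x @ W.T + (alpha / r) * (x @ (B @ A).T)

x = rng.standard_normal((3, d_in))
# With B initialised to zero, the adapted layer reproduces the frozen base
# output exactly, so training starts from the pretrained behaviour.
assert np.allclose(lora_forward(x), x @ W.T)
```

The efficiency win is in the parameter count: instead of updating all d_out × d_in base weights, LoRA trains only r × (d_in + d_out) parameters, which is why rank r is the key knob when fine-tuning large models.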

Speaker

Kanishka Mohaia

AI/ML Specialist

Kanishka Mohaia is an AI/ML specialist focused on applied machine learning and artificial intelligence. Details to be updated.