Welcome to the Workshop

Welcome to the Local LLM Workshop!

Thank you for joining us for this hands-on exploration of local large language models. This workshop is designed to give you practical, actionable skills for running and using LLMs on your own hardware.

Why Local LLMs?

In 2025, running LLMs locally has become increasingly accessible and practical. Here’s why you might choose local LLMs:

Privacy First

  • Your data never leaves your computer
  • No cloud providers tracking your interactions
  • Perfect for sensitive or confidential information

Cost-Effective

  • No subscription fees or per-token charges
  • One-time setup, then run as many queries as you want

Full Control

  • Choose your models and configurations
  • Customize behavior for your specific needs
  • No rate limits or service interruptions

Offline Capability

  • Works without an internet connection (after the initial model download)
  • Reliable even in low-connectivity environments

What Makes 2025 Different?

Recent developments have made local LLMs significantly more accessible:

  1. Efficient Models: Llama 3.2 and similar models run well on consumer hardware
  2. Easy Installation: Tools like Ollama provide one-click setup
  3. Native GUIs: No more terminal commands for basic usage
  4. Improved Quantization: Quantized models use far less memory with minimal quality loss
  5. Better Integration: Python libraries and APIs make automation simple
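To make that last point concrete, here is a minimal sketch of one chat turn with a local model from Python. It assumes you have installed the official ollama package (pip install ollama), that the Ollama server is running, and that the llama3.2 model has already been pulled; we walk through all of this setup in Part 1 and Part 2.

```python
# Minimal sketch: one chat turn with a local model via the official
# ollama Python package. Assumes `pip install ollama`, a running
# Ollama server, and `ollama pull llama3.2` done beforehand.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain local LLMs in one sentence."}],
)

# The generated text is in the message content of the response.
print(response["message"]["content"])
```

If this doesn't run on your machine yet, don't worry: getting it working is exactly what today's sessions are for.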

Workshop Philosophy

This workshop is hands-on and practical:

  • We focus on doing, not just learning theory
  • All examples use real tools and models you can run today
  • We provide complete working code and notebooks
  • Questions are encouraged throughout

Who This Workshop Is For

This workshop is designed for:

  • Beginners: No prior LLM experience required
  • Librarians and Information Professionals: Interested in local AI for privacy-sensitive work
  • Developers: Want to integrate LLMs into applications
  • Researchers: Need to process data locally
  • Privacy Advocates: Prefer local-first tools

What We’ll Cover Today

Part 1: Fundamentals (2 hours)

  • Understanding how LLMs work
  • Installing and configuring Ollama
  • Writing effective prompts
  • Hands-on experimentation

Part 2: Intermediate Topics (2 hours)

  • Automating with Python
  • Vector embeddings and semantic search (a short preview follows this list)
  • Building RAG (Retrieval-Augmented Generation) systems
  • Working with your own data
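As a small preview of the embeddings topic, here is a hedged sketch of semantic similarity using local embeddings. It assumes the same ollama package plus a pulled embedding model such as nomic-embed-text; the model name is just an example, and we will choose models together in the session.

```python
# Preview sketch: comparing two sentences with local embeddings.
# Assumes `pip install ollama` and `ollama pull nomic-embed-text`
# (the embedding model named here is an example, not a requirement).
import math
import ollama

def embed(text: str) -> list[float]:
    # Ask the local embedding model to turn text into a vector.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Vectors for similar meanings point in similar directions,
    # so their cosine similarity is close to 1.0.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

v1 = embed("Libraries protect patron privacy.")
v2 = embed("Librarians safeguard reader confidentiality.")
print(cosine_similarity(v1, v2))  # higher means more semantically similar
```

This same idea, finding text by meaning rather than keywords, is the foundation of the RAG systems we build in Part 2.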

Getting Help During the Workshop

If you encounter issues:

  1. Ask Questions: Use the chat or raise your hand on Zoom
  2. Check the Docs: This website has troubleshooting guides
  3. Help Each Other: We encourage peer support
  4. Don’t Worry: Setup issues are common and we’ll work through them

Workshop Materials

All materials are available on this website.

Let’s Get Started!

We’re excited to have you here. By the end of today, you’ll have a fully functional local LLM environment and the knowledge to use it effectively.

Pro Tip: Keep this website open in a browser tab throughout the workshop. You can refer back to it at any time, and it will remain available after the workshop.

Ready? Let’s dive into LLM Concepts!