🔨 Workshop: Getting started with local LLMs

Facilitator(s)

Nishad Thalhath (@nishad)

Abstract

Local large language models (LLMs) are becoming popular due to their privacy-first approach, security, and cost-effectiveness. This workshop introduces participants to the basics of setting up and running local LLMs using open-source tools.

The workshop will cover both basic and intermediate topics. In Part 1, participants will learn the basics of LLMs and related concepts, how to set up local LLMs, how to select the right tools and models, and the fundamentals of prompting. In Part 2, the focus will shift to intermediate topics such as using local LLMs programmatically and automating tasks, vector embeddings and semantic search, and retrieval-augmented generation (RAG) basics.
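To give a flavor of the Part 2 topics, here is a minimal, dependency-free sketch (not workshop code) of how vector embeddings and cosine similarity underpin semantic search. The documents and vectors below are toy values; in practice the vectors come from a learned embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice these come from an embedding model.
documents = {
    "cats are small pets": [0.9, 0.1, 0.0],
    "dogs are loyal pets": [0.8, 0.2, 0.1],
    "the stock market fell": [0.0, 0.1, 0.9],
}

query_vector = [0.85, 0.15, 0.05]  # hypothetical embedding of "tell me about pets"

# Semantic search: rank documents by similarity to the query vector.
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(query_vector, documents[d]),
    reverse=True,
)
print(ranked[0])  # the most "pet-like" document ranks first
```

The same ranking step, run over real embeddings of real documents, is the retrieval half of the RAG pipelines covered later in the workshop.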

This hands-on workshop allows participants to set up their local LLMs during the session or follow up later with the provided materials. It is designed for beginners and those seeking intermediate-level knowledge, making it suitable for participants at various stages of their journey with local LLMs.

Since LLMs are resource-intensive, participants are expected to have a computer with at least 8GB of RAM and 20GB of free disk space to follow along. Although the instructor will use Macs, the workshop is platform-agnostic, accommodating macOS, Linux, and Windows users. A stable and fast internet connection is recommended. For Part 2, participants should have a basic understanding of Python programming and a recent version of Python installed on their computers with Jupyter Notebook support.

:information_source: To register your participation in this workshop, click on the “Going” button above. You will then receive an email notification as soon as facilitators post an update. Please take care not to register for two parallel workshops.

Hello, LLM Workshop Participants! :wave:

Thank you so much for signing up for the workshop :slightly_smiling_face:. I’m looking forward to meeting all of you on November 25th! I hope you’ll find the session both exciting and insightful.

The workshop will take place on Zoom. Here’s everything you need to join:

:desktop_computer: Zoom Meeting Details:

If you’re new to LLMs and generative AI, check out this 20-minute introductory video to get a quick head start:
:arrow_forward: Generative AI in a Nutshell

For those of you who enjoy a deeper dive, I recommend this as a weekend read:
:open_book: What Is ChatGPT Doing and Why Does It Work?

Feeling adventurous?
You can try installing and experimenting with Ollama in advance! We’ll primarily use Ollama in the workshop, working with Llama 3.2. Don’t worry if this is all new—we’ll cover everything step by step during the session.
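If you do install Ollama in advance, you can already experiment with its local REST API. The sketch below only builds the request for Ollama’s documented `POST /api/generate` endpoint and shows (commented out) how it would be sent; nothing here contacts a server, so it runs even before Ollama is installed.

```python
import json
import urllib.request

# Payload shape for Ollama's /api/generate endpoint (per its REST API docs).
payload = {
    "model": "llama3.2",        # or "llama3.2:1b" on low-memory machines
    "prompt": "Why is the sky blue?",
    "stream": False,            # request a single JSON response, not a stream
}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local port
    data=body,
    headers={"Content-Type": "application/json"},
)

# With Ollama running, this would print the model's answer:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["response"])

print(request.full_url, payload["model"])
```

We’ll walk through the live version of this, and friendlier client libraries, during the session.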

Preparation Tips for the workshop:

  • For the final part of the workshop, where we’ll cover more advanced use cases, please ensure you have recent versions of Python and Jupyter Notebook installed on your computer if you’d like to follow along.

  • If your internet connection is slow, it’s a good idea to install Ollama with Llama 3.2 beforehand. (If your computer has limited memory, it’s better to install Llama 3.2:1b instead of the default Llama 3.2:3b.)

:alarm_clock: We’ll open the Zoom room 30 minutes early for anyone who needs help with installations or setup. Feel free to join early if you have questions or need assistance!

Finally, if you have any specific questions, use cases, or topics you’d like us to address, please share them in this thread. I’ll do my best to include them in the workshop.

Enjoy your weekend, and see you on Monday!

Thanks for a very informative workshop Nishad! Glad I attended.


Thanks for this great workshop. Unfortunately I think I fell asleep before the good part (around 2 a.m. my time). I wanted to learn about RAG and how it could be used with linked data. … Was this workshop by chance recorded?

Workshop Follow-up: Local LLMs - Updates and Resources

Thank you all for participating in the Local LLMs workshop! Despite the challenging timing across different time zones, I was truly impressed by the level of engagement and participation throughout the four-hour session.

Workshop Materials Update

I’ve taken some time during the holidays to revise and enhance the workshop materials:

  • The advanced notebooks covering Ollama and RAG implementation have been thoroughly updated with improved documentation and clearer code structure. You’ll find these materials in our workshop repository: GitHub - nishad/llm-workshop-notebooks: Getting Started with Local LLMs - Workshop Notebooks

  • The RAG notebook now includes expanded content on topics we briefly touched on during the workshop, such as using Jinja2 templates for prompt creation and enhanced prompting techniques for generating outputs with proper citation references.

  • The complete workshop notes are available at: LLMS Workshop – LLM Workshop

Support and Resources

If you’d like to revisit the advanced concepts we covered or explore the new additions:

  • The notebooks are organized into numbered subsections for easy reference
  • You can report issues or ask questions through:
  • When reporting issues, please reference the specific subsection number to help me provide better support

While my response time might be slightly delayed, I am committed to addressing all your questions and helping you implement these concepts in your projects. Please don’t hesitate to reach out if you need clarification or assistance with any aspect of the workshop materials.

Closing Notes

Thank you again for your enthusiastic participation and support throughout the workshop. I appreciate your patience and engagement during the extended four-hour session. I wish you all a wonderful holiday season and a bright new year ahead!


Feel free to continue our discussions here or through the provided channels. I look forward to seeing how you apply these concepts in your work!
