πŸ”¨ Workshop: Getting started with local LLMs

Facilitator(s)

@nishad

Abstract

Local large language models (LLMs) are becoming popular due to their privacy-first approach, security, and cost-effectiveness. This workshop introduces participants to the basics of setting up and running local LLMs using open-source tools.

The workshop will cover both basic and intermediate topics. In Part 1, participants will learn the basics of LLMs and related concepts, setting up local LLMs, selecting the right tools and models, and the fundamentals of prompting. In Part 2, the focus will shift to intermediate topics such as using local LLMs programmatically and automating tasks, vector embeddings and semantic search, and the basics of retrieval-augmented generation (RAG).
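As a small taste of the Part 2 material: semantic search typically ranks documents by the cosine similarity of their embedding vectors. The sketch below uses tiny hand-written 3-dimensional vectors purely for illustration; a real embedding model would produce vectors with hundreds or thousands of dimensions from the text itself.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (in practice an embedding model generates these from text)
query = [0.9, 0.1, 0.0]
docs = {
    "doc about cats": [0.8, 0.2, 0.1],
    "doc about cars": [0.1, 0.9, 0.3],
}

# Rank documents by similarity to the query vector
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # the document whose embedding points closest to the query
```

This nearest-vector lookup is also the retrieval step in RAG: the top-ranked documents are passed to the LLM as context for answering the query.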

This hands-on workshop allows participants to set up their local LLMs during the session or follow up later with the provided materials. It is designed for beginners and those seeking intermediate-level knowledge, making it suitable for participants at various stages of their journey with local LLMs.

Since LLMs are resource-intensive, participants are expected to have a computer with at least 8GB of RAM and 20GB of free disk space to follow along. Although the instructor will use Macs, the workshop is platform-agnostic and accommodates macOS, Linux, and Windows users. A stable, fast internet connection is recommended. For Part 2, participants should have a basic understanding of Python programming and a recent version of Python installed on their computers with Jupyter Notebook support.

:information_source: To register for this workshop, click the β€œGoing” button above. Please take care not to register for two parallel workshops.