## Articles
This section lists recommended reading and community links.
### Model Selection
- 50 Open-Source Options for Running LLMs Locally by Vince Lam: An extensive list of 50 open-source tools and projects for running LLMs locally, offering various options and their use cases.
- Code Llama: Llama 2 Learns to Code by Hugging Face: An introduction to Code Llama, highlighting its variants—Base, Python, and Instruct—tailored for coding tasks. The article explores how these models, optimized from Meta’s Llama 2, enhance code generation by bridging natural language and programming, making them powerful tools for developers.
### Installation and Configuration
- Mac for Large Language Models by Allan Witt: A comprehensive guide on configuring Macs, especially with Apple Silicon, for running large language models efficiently.
- Local LLMs on Apple Silicon by Aaditya Bhat: Insights and techniques for running local LLMs on Apple Silicon, highlighting performance optimizations and practical applications.
- Local LLM Fine-Tuning on Mac M1 16GB by Shaw Talebi: This article explores fine-tuning local LLMs on a Mac M1 with 16GB RAM, offering insights and steps for effective model training and optimization.
- A Simple, Practical Guide to Running Large-Language Models on Your Laptop by Ryan Stewart: This guide shows how to run large language models locally on your laptop using `llama-cpp-python` and GGUF models. It highlights benefits like cost savings, privacy, and faster iteration, providing step-by-step instructions for setup on both CPU and GPU.
- How Much GPU Memory is Needed to Serve a Large Language Model (LLM)? by Mastering LLM: This article provides a formula to estimate the GPU memory needed for LLM deployment, covering key factors like model size, precision, and overhead, with practical tips for scaling models like GPT and LLaMA.
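The kind of memory estimate the GPU-memory article describes can be sketched as a quick back-of-the-envelope calculation. The exact formula and overhead factor are in the article; the 1.2× multiplier here (covering KV cache and activations) and the function name are assumptions for illustration:

```python
def estimate_gpu_memory_gb(params_billions: float, precision_bits: int = 16,
                           overhead: float = 1.2) -> float:
    """Rough GPU memory estimate for serving an LLM.

    params_billions: parameter count in billions (e.g. 70 for a 70B model)
    precision_bits:  bits per weight (32 = fp32, 16 = fp16, 8/4 = quantized)
    overhead:        multiplier for KV cache and activations (1.2 is an assumption)
    """
    bytes_per_param = precision_bits / 8
    return params_billions * bytes_per_param * overhead

# A 7B model in fp16: 7 * 2 bytes * 1.2 ≈ 16.8 GB
print(round(estimate_gpu_memory_gb(7, 16), 1))   # 16.8
# The same model quantized to 4-bit: 7 * 0.5 * 1.2 ≈ 4.2 GB
print(round(estimate_gpu_memory_gb(7, 4), 1))    # 4.2
```

Lowering the precision is the main lever for fitting a model on consumer hardware, which is why GGUF quantizations are so common for local inference.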
### Advanced Usage
- Using a Local LLM as a Free Coding Copilot in VS Code by Simon Fraser: A guide on setting up and using a local LLM as a coding assistant in VS Code, detailing the setup process and benefits.
- Integrating Large Language Models with Apple’s Core ML by Pedro Cuenca: This article discusses the integration of large language models with Apple’s Core ML, providing a detailed guide on leveraging these models in Swift applications.
### Local Servers
- How to Set Up and Use a Windows NAS: This article guides you through setting up a Windows NAS, covering hardware selection, drive configuration, and network sharing. It also includes tips for optimizing performance and securing your NAS for a home or small office network.