Have you ever wanted to try running different open source LLM models locally? Have private chats with LLMs? This setup could be fun for you.

Using some different open source tools you can run a local, private LLM model on your computer. I originally saw this setup on NetworkChuck’s YouTube channel, but the macOS version didn’t work well for me. Since some of my family have been interested in AI, I thought I would try to make this setup as easy as possible for others. This post will be more detailed than usual for that reason.

Here is the end product. A web page that lets you interact with different LLM models you can download. It has a bunch of features you might expect from other paid products.

What this setup will install:

  • Homebrew (this makes installing other apps easier)
  • Ollama (this hosts the LLM models we’ll use)
  • Llama3 (LLM model we’ll start with, you can download others)
  • Docker (this allows us to create a container that will host the web front end for LLM interaction). We’ll be using Docker Desktop, as I think it helps non-technical people see the results a bit better, and there are some gotchas with running the CLI on macOS
  • OpenWebUI (this is the web front end for LLM interaction)

You’ll need:

  • Administrative access to your Mac
  • At least 6GB of free space.

This setup runs best on a Mac with at least an M1 chip. You can install it on other Macs, but your mileage may vary.
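If you’re not sure what chip your Mac has or how much free space is available, you can check from Terminal before starting (these are standard macOS commands):

```shell
# Print the CPU architecture: "arm64" means Apple Silicon (M1 or newer)
uname -m

# Show free space on the startup volume
df -h /
```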

How to complete the setup: First, copy the text from the script below.

#!/bin/bash

# Install Homebrew (you'll need to authenticate as an admin and press Enter when prompted)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Add brew paths to profile
(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

# Install Ollama through brew
echo "Installing Ollama..."
brew install --cask ollama

# Start Ollama so the CLI can reach it, then pull the llama3 model. Ollama's welcome window will pop up; just ignore it
echo "Pulling down the llama3 model..."
open -a Ollama
sleep 5
ollama pull llama3

# Download and install Docker
brew install --cask docker
sleep 5

# Start Docker and wait until the daemon responds
echo "Starting Docker... Please accept the terms and choose the recommended installation"
open -a Docker
while ! docker info > /dev/null 2>&1; do sleep 1; done

# Create the Docker container
echo "Creating Docker container..."
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
sleep 10 # Wait for container to start

# Open the web browser to the specified URL
echo "Opening web browser"
open "http://localhost:3000"

echo "Setup complete!"

Open TextEdit and paste in the contents.

Save the file with a “.sh” file extension in a familiar location (in this example “Downloads”).

In my experience, new Macs will try to save the file as .rtf. If you run into this issue, change TextEdit’s default format so it uses plain text:

  • Go to TextEdit - Settings - Under “Format” choose “Plain text” and exit. Then restart TextEdit.

Open the Terminal app.

In the Terminal window, change directory to the location where you saved the script by entering this command and hitting Enter.

cd Downloads/


Enter the following command to make the script executable (you can hit Tab partway through the file name to auto-complete it), then hit Enter.

chmod +x ollama_install.sh


Run the script with this command and hit Enter:

./ollama_install.sh

During the install you’ll need to do the following:

  • Enter your password in the terminal. This is for the Homebrew install.
  • Hit “Enter” in the terminal to confirm the Homebrew install.
  • You’ll soon notice the Ollama icon in the menu bar and a pop-up window. You can ignore these and focus back on the terminal window.

  • Downloading the llama3 model will take some time.
  • If you have a Mac with Apple Silicon and don’t already have Rosetta installed, you may be prompted to install it at this point. Go ahead and follow those prompts.
  • Enter your password in the terminal to allow the Docker install.
    • You might need to click “Open” to allow Docker to run (confirming the app came from the internet).
      • If Docker does not open on its own, just start “Docker” from your Applications folder.
    • Accept the terms for Docker
  • Choose “Finish” (Use recommended settings).

  • Enter your password to configure Docker

  • The script will now continue. On the Docker window, click through the rest of Docker’s prompts by doing the following and then go back to the Terminal window.
    • Choose “continue without signing in”
    • Choose “Skip survey” at the bottom.
    • You’ll see the Docker Desktop application (and will eventually see a container created), but for now go back to the Terminal window to follow the progress.
  • When the script is finished, your default browser will open to the OpenWebUI login page.
  • Choose “Sign up” at the bottom to create the first user account. The first account will be the Admin user that can manage other users.

Once you create your account, first choose the model you wish to use. We’ve only downloaded one right now.
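If the page doesn’t load or the model doesn’t appear, you can sanity-check both pieces from Terminal (these assume the install script finished):

```shell
# List the models Ollama has downloaded (llama3 should appear)
ollama list

# Confirm the OpenWebUI container is up
docker ps --filter name=open-webui
```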

You can download other models either through the settings or through the command line. Note that some models need significant hard drive space. In my experience it’s best to stick to the smaller versions of models: the larger ones require far more resources to run and will likely fail to work unless you have a massively resourced machine.
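As a sketch, managing models from the command line looks like this (the model name here is just an example; check Ollama’s model library for what’s available and how big each one is):

```shell
# See what's already installed and how much space each model uses
ollama list

# Pull another small model (example name; a few GB of download)
ollama pull phi3

# Remove a model you no longer want, freeing its disk space
ollama rm phi3
```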

Once you choose your model you can select a sample prompt from the main page or write your own. The speed at which you get responses back will depend on your machine’s resources. If you are used to ChatGPT, these models are not as polished. The usual caveats apply: check the outputs before believing them.

Play around with all the settings, upload files (everything stays local), and invite other users if they are on the same network. Just replace “localhost” in the URL with the IP address of the machine running the setup.
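To find the IP address to share, you can run this on the host Mac (en0 is usually the Wi-Fi interface; this command is macOS-specific):

```shell
# Print this Mac's local IP address on the Wi-Fi interface
ipconfig getifaddr en0
```

Others on the same network would then browse to that address on port 3000, e.g. http://192.168.1.50:3000 (address shown here is just an example).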

If you want to uninstall all these tools after playing with them, below is a script you can use to get rid of everything. This assumes you don’t use any of these apps already for other purposes. Make it executable as in the first steps of this process. Just make sure you quit Docker and Ollama before running this. Enjoy!

# Uninstall all tools (make sure they are not running in your system tray)

# Uninstall Docker
brew uninstall --cask docker
rm -rf ~/Library/Group\ Containers/group.com.docker
rm -rf ~/Library/Containers/com.docker.docker
rm -rf ~/.docker

# Uninstall Ollama
brew uninstall --cask ollama
rm -rf ~/.ollama
rm /usr/local/bin/ollama

# Uninstall Homebrew
NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
rm -rf /opt/homebrew/etc/
rm -rf /opt/homebrew/share/
rm -rf /opt/homebrew/var/