
Ollama

Installation

Full, system-wide

# requires root; the tarball extracts bin/ and lib/ under /usr/local
cd /usr/local
wget -O- https://ollama.com/download/ollama-linux-amd64.tgz | tar -xzf -

# bash-completion
wget -O /etc/bash_completion.d/ollama https://github.com/ehrlz/ollama-bash-completion-plugin/raw/refs/heads/main/plugin.sh
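For the system-wide install, a service manager can keep `ollama serve` running at boot. The unit file below is a minimal sketch, not part of the tarball; the dedicated `ollama` user is an assumption and must be created separately:

```ini
# /etc/systemd/system/ollama.service — hypothetical unit, adjust to taste
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=always
# assumption: a dedicated service user named "ollama" exists
User=ollama

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now ollama`.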

Simplified, per-user

# extracts only the ollama binary into ~/.local/bin
mkdir -p ~/.local/bin && cd ~/.local/bin
wget -O- https://ollama.com/download/ollama-linux-amd64.tgz | tar -xzf - bin/ollama --strip-components=1
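The per-user install only works if ~/.local/bin is on PATH. A quick way to ensure that for the current bash session (persist it by appending the same line to ~/.bashrc):

```shell
# Prepend ~/.local/bin to PATH so the ollama binary is found
export PATH="$HOME/.local/bin:$PATH"
```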

# bash-completion
mkdir -p ~/.local/share/bash-completion/completions
wget -O ~/.local/share/bash-completion/completions/ollama https://github.com/ehrlz/ollama-bash-completion-plugin/raw/refs/heads/main/plugin.sh

Running

Start the service:

ollama serve

Download an LLM (pull only fetches the model; run would also start a session):

ollama pull <name>:<tag>

Run an LLM in an interactive terminal session:

ollama run <name>:<tag>
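Besides the interactive session, `ollama run` also accepts a one-shot prompt as an argument and prints the completion to stdout; the model name below is only an example:

```shell
# Non-interactive: pass the prompt on the command line (requires the
# service to be running and the example model to have been pulled)
ollama run llama3.2 "Explain what a context window is in one sentence."
```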

LLM

Open WebUI

Run ollama listening on the network (172.17.0.1 is Docker's default bridge gateway, so containers can reach it):

OLLAMA_HOST=172.17.0.1:11434 ollama serve
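With the server listening on that address, its REST API can be checked from the host or from a container; the endpoint and port come from the line above, and this assumes the serve command is running:

```shell
# List locally available models via Ollama's REST API
curl http://172.17.0.1:11434/api/tags
```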

Run Open WebUI:

docker run -it --rm -p 3000:8080 --add-host=host.docker.internal:host-gateway ghcr.io/open-webui/open-webui:main-slim
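If Open WebUI does not find the Ollama server automatically, its OLLAMA_BASE_URL environment variable can point it at the host explicitly. This variant of the command above is a sketch, not a required step; host.docker.internal resolves to the host gateway thanks to the --add-host flag:

```shell
# Same container, with the Ollama endpoint set explicitly
docker run -it --rm -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  ghcr.io/open-webui/open-webui:main-slim
```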

docker-compose

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main-slim
    volumes:
      - open-webui:/app/backend/data
    ports:
      - 3000:8080
    extra_hosts:
      - host.docker.internal:host-gateway
volumes:
  open-webui: