
Open WebUI

System Administration
Open WebUI is a self-hosted, web-based interface for running AI models entirely offline. It integrates with LLM runners such as Ollama and OpenAI-compatible APIs, and supports features like Markdown and LaTeX rendering, model management, and voice/video calls. It also offers multilingual support and can generate images through APIs such as DALL-E or ComfyUI.

Commands:

sudo systemctl reset-failed openwebui.service
sudo systemctl restart openwebui.service
sudo systemctl status openwebui.service
sudo journalctl -u openwebui.service -n 120 --no-pager
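
The commands above assume the unit is already known to systemd. After creating or editing the unit file shown in the next section, reload systemd and enable the service so it also starts at boot:

sudo systemctl daemon-reload
sudo systemctl enable --now openwebui.service

Once it is up, the UI should answer on the host port published by the unit (3000, per the -p 3000:8080 mapping); a quick reachability check:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/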

File name: /etc/systemd/system/openwebui.service
[Unit]
Description=OpenWebUI Docker Container
After=docker.service
Requires=docker.service

[Service]
Type=simple
# This user must be able to run docker commands (e.g. a member of the docker group)
User=sysadmin
ExecStartPre=-/usr/bin/docker stop openwebui
ExecStartPre=-/usr/bin/docker rm openwebui
ExecStart=/usr/bin/docker run --name openwebui \
   -p 3000:8080 \
   -v openwebui:/app/backend/data \
   ghcr.io/open-webui/open-webui:main
ExecStop=/usr/bin/docker stop openwebui
ExecStopPost=/usr/bin/docker rm openwebui
# Restart automatically if it crashes
Restart=always
RestartSec=10
# Logging
StandardOutput=append:/var/log/openwebui/openwebui.log
StandardError=append:/var/log/openwebui/openwebui.log
# Harden the service
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=full
ProtectHome=false

[Install]
WantedBy=multi-user.target
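
The StandardOutput/StandardError lines above append to /var/log/openwebui/, but systemd does not create that directory for an absolute append: path, so the service cannot start until it exists. One way to create it at every boot is a tmpfiles.d fragment (the sysadmin owner below is an assumption, chosen to match the User= line in the unit):

File name: /etc/tmpfiles.d/openwebui.conf

d /var/log/openwebui 0750 sysadmin sysadmin -

Apply it immediately with sudo systemd-tmpfiles --create, or simply mkdir the directory once by hand. On newer systemd versions, adding LogsDirectory=openwebui to the [Service] section achieves the same result.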