What can I use for an offline, self-hosted LLM client, preferably with images, charts, and Python code execution?
-
Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation with ComfyUI, I believe.
It's less of a hassle to use Docker for Open WebUI, but Ollama works as a regular CLI tool.
This is what I do; it's excellent.
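For what it's worth, here's a rough sketch of what hitting Ollama's HTTP API from your own code looks like once the server is running. This assumes the default port 11434 and a model you've already pulled; "llama3" is just an example name.

```python
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes Ollama is listening on the default port 11434 and that the
# model named below has already been pulled with `ollama pull`.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Explain what a reverse proxy does in one sentence."))
```

Open WebUI talks to the same API under the hood, so anything you pull for the CLI shows up there too.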
-
You should try https://cherry-ai.com/. It's the most advanced client out there. I personally use Ollama for running the models and the Mistral API for advanced tasks.
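Not sure how Cherry wires this up internally, but the usual trick behind a local-plus-hosted setup like that is that both a local Ollama and Mistral's hosted API expose OpenAI-compatible chat endpoints, so the same client code can point at either one. Rough sketch only; the model names, the `USE_LOCAL` flag, and the `MISTRAL_API_KEY` env var are just illustrative:

```python
# Sketch: switch between a local Ollama model and Mistral's hosted API
# by changing the base URL of an OpenAI-compatible client.
import os
from openai import OpenAI

USE_LOCAL = True  # flip to False to route "advanced" tasks to Mistral

if USE_LOCAL:
    # Local Ollama: no real key needed, but the client wants a non-empty string.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    model = "llama3"                    # any model you've pulled locally
else:
    client = OpenAI(base_url="https://api.mistral.ai/v1",
                    api_key=os.environ["MISTRAL_API_KEY"])
    model = "mistral-large-latest"      # example hosted model name

reply = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Summarize this repo's README."}],
)
print(reply.choices[0].message.content)
```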
But its website is in Chinese. Also, what's the GitHub?
-
Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation with ComfyUI, I believe.
It's less of a hassle to use Docker for Open WebUI, but Ollama works as a regular CLI tool.
But won't this be a mish-mash of different Docker containers and projects, creating an installation, dependency, and upgrade nightmare?
-
But won't this be a mish-mash of different Docker containers and projects, creating an installation, dependency, and upgrade nightmare?
All the ones I mentioned can be installed with pip or uv, if I am not mistaken. It would probably be more finicky than containers that you can put behind a reverse proxy, but it is possible if you wish to go that route. Ollama also runs system-wide, so any project can use its API without you having to create a separate environment or download the same model twice.
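To illustrate the system-wide point: any pip/uv-installed project can reuse the same Ollama daemon through its client library, so the model only gets downloaded once. Rough sketch, assuming `pip install ollama` and an already-pulled model ("llama3" is just an example):

```python
# Sketch: reuse the system-wide Ollama daemon from any Python project
# via the `ollama` client library, instead of bundling a model per project.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "List three uses of a reverse proxy."}],
)
print(response["message"]["content"])
```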
-
But its website is in Chinese. Also, what's the GitHub?