What can I use for an offline, selfhosted LLM client, pref with images, charts, python code execution

Selfhosted · 25 Posts · 14 Posters
  • Guest wrote:

    Took ages to produce an answer, and it only worked once on one model, then crashed every time since.

    [email protected] replied (#15):

    Try the beta on the GitHub repo, and use a smaller model!
    • H [email protected]

      Maybe LocalAI? It doesn't do python code execution, but pretty much all of the rest.

      C This user is from outside of this forum
      C This user is from outside of this forum
      [email protected]
      wrote last edited by
      #16

      This looks interesting - do you have experience of it? How reliable / efficient is it?

      mitexleo@buddyverse.oneM 1 Reply Last reply
      2
      • C [email protected]

        I was looking back at some old lemmee posts and came across GPT4All. Didn't get much sleep last night as it's awesome, even on my old (10yo) laptop with a Compute 5.0 NVidia card.

        Still, I'm after more, I'd like to be able to get image creation and view it in the conversation, if it generates python code, to be able to run it (I'm using Debian, and have a default python env set up). Local file analysis also useful. CUDA Compute 5.0 / vulkan compatibility needed too with the option to use some of the smaller models (1-3B for example). Also a local API would be nice for my own python experiments.

        Is there anything that can tick the boxes? Even if I have to scoot across models for some of the features? I'd prefer more of a desktop client application than a docker container running in the background.

        mitexleo@buddyverse.oneM This user is from outside of this forum
        mitexleo@buddyverse.oneM This user is from outside of this forum
        [email protected]
        wrote last edited by
        #17

        You should try https://cherry-ai.com/ .. It's the most advanced client out there. I personally use Ollama for running the models and Mistral API for advnaced tasks.

        1 Reply Last reply
        1
        • C [email protected]

          I was looking back at some old lemmee posts and came across GPT4All. Didn't get much sleep last night as it's awesome, even on my old (10yo) laptop with a Compute 5.0 NVidia card.

          Still, I'm after more, I'd like to be able to get image creation and view it in the conversation, if it generates python code, to be able to run it (I'm using Debian, and have a default python env set up). Local file analysis also useful. CUDA Compute 5.0 / vulkan compatibility needed too with the option to use some of the smaller models (1-3B for example). Also a local API would be nice for my own python experiments.

          Is there anything that can tick the boxes? Even if I have to scoot across models for some of the features? I'd prefer more of a desktop client application than a docker container running in the background.

          mitexleo@buddyverse.oneM This user is from outside of this forum
          mitexleo@buddyverse.oneM This user is from outside of this forum
          [email protected]
          wrote last edited by
          #18

          You should try https://cherry-ai.com/ .. It's the most advanced client out there. I personally use Ollama for running the models and Mistral API for advnaced tasks.

          mitexleo@buddyverse.oneM C 2 Replies Last reply
          3
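On the "local API would also be nice for my own Python experiments" part of the original post: GPT4All also ships Python bindings, so the same models can be scripted directly. A minimal sketch, assuming the gpt4all package is installed (pip install gpt4all) and that the model name below is swapped for a .gguf model you actually have (it is only an example; the library can also download it on first use):

    # Minimal sketch: drive a local GPT4All model from Python.
    # Assumes `pip install gpt4all`; the model name is only an example --
    # replace it with a .gguf model you have (or let the library fetch it).
    from gpt4all import GPT4All

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # loads the model locally

    with model.chat_session():
        reply = model.generate(
            "Write a short Python snippet that plots a sine wave with matplotlib.",
            max_tokens=512,
        )
        print(reply)

Everything here stays on the local machine, so it keeps working offline once the model file is present.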
  • [email protected] wrote:

    You should try https://cherry-ai.com/. It's the most advanced client out there. I personally use Ollama for running the models and the Mistral API for advanced tasks.

    [email protected] replied (#19):

    It's fully open source and free (as in beer).
            • C [email protected]

              This looks interesting - do you have experience of it? How reliable / efficient is it?

              mitexleo@buddyverse.oneM This user is from outside of this forum
              mitexleo@buddyverse.oneM This user is from outside of this forum
              [email protected]
              wrote last edited by
              #20

              LocalAI is pretty good but resource-intensive. I ran it on a vps in the past.

              1 Reply Last reply
              0
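For context on how LocalAI would fit the "local API" requirement: it exposes an OpenAI-compatible HTTP API, so a local Python script can query it with a plain POST. A rough sketch, assuming a LocalAI instance on its default port 8080, the requests package installed, and "my-local-model" replaced by whatever model name you configured in LocalAI:

    # Rough sketch: query a local LocalAI instance via its OpenAI-compatible API.
    # Assumptions: LocalAI on localhost:8080 (its default), `requests` installed,
    # and "my-local-model" replaced with a model name configured in LocalAI.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "my-local-model",
            "messages": [
                {"role": "user", "content": "Summarise the pros and cons of running an LLM on a VPS."},
            ],
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])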
  • [email protected] wrote:

    Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation with ComfyUI, I believe.

    It's less of a hassle to use Docker for Open WebUI, but Ollama works as a regular CLI tool.

    [email protected] replied (#21):

    This is what I do, and it's excellent.
  • [email protected] wrote:

    You should try https://cherry-ai.com/. It's the most advanced client out there. I personally use Ollama for running the models and the Mistral API for advanced tasks.

    [email protected] replied (#22):

    But its website is in Chinese. Also, what's the GitHub?
  • [email protected] wrote:

    Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation with ComfyUI, I believe.

    It's less of a hassle to use Docker for Open WebUI, but Ollama works as a regular CLI tool.

    [email protected] replied (#23):

    But won't this be a mish-mash of different Docker containers and projects, creating an installation, dependency, and upgrade nightmare?
                    • C [email protected]

                      But won't this be a mish-mash of different docker containers and projects creating an installation, dependency, upgrade nightmare?

                      andrew0@lemmy.dbzer0.comA This user is from outside of this forum
                      andrew0@lemmy.dbzer0.comA This user is from outside of this forum
                      [email protected]
                      wrote last edited by
                      #24

                      All the ones I mentioned can be installed with pip or uv if I am not mistaken. It would probably be more finicky than containers that you can put behind a reverse proxy, but it is possible if you wish to go that route. Ollama will also run system-wide, so any project will be able to use its API without you having to create a separate environment and download the same model twice in order to use it.

                      1 Reply Last reply
                      1
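To illustrate the "any project will be able to use its API" point: a system-wide Ollama install listens on localhost:11434 by default, and any Python script can call it without creating its own environment or model copy. A minimal sketch, assuming Ollama is running, the requests package is installed, and the model named below (only an example) has already been pulled with ollama pull:

    # Minimal sketch: call the system-wide Ollama server from a Python project.
    # Assumptions: Ollama on its default port 11434, `requests` installed,
    # and the model already pulled (e.g. `ollama pull llama3.2`).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",   # example name; use whichever model you pulled
            "prompt": "Explain what a reverse proxy does, in one paragraph.",
            "stream": False,       # return a single JSON object instead of a stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])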
                      • C [email protected]

                        But its website is Chinese. Also what's the github?

                        H This user is from outside of this forum
                        H This user is from outside of this forum
                        [email protected]
                        wrote last edited by
                        #25

                        https://github.com/CherryHQ/cherry-studio

                        1 Reply Last reply
                        2