"Self-host a local AI stack and access it from anywhere" | Tailscale Blog

Posted in Selfhosted · tagged selfhosted · 3 Posts, 3 Posters
#1 · [email protected] wrote:
    NOTE: I got to this article by clicking on a banner ad that Tailscale bought in the release notes on Open WebUI's GitHub. However, I thought it was still neat and worth sharing. Please discuss and share any alternatives for the paid/proprietary/problematic pieces of this setup:

    Proxmox for virtualization, NixOS for repeatability, Docker for packaging, and Tailscale for secure access from anywhere. We’ll wire up an NVIDIA A4000 GPU to a NixOS VM via PCIe passthrough, set up Ollama and Open WebUI, and get chatting with local large language models all entirely within your Tailnet.
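
    To make the "chatting within your Tailnet" part concrete, here is a rough sketch (not from the article) of querying the Ollama HTTP API from another machine on the same tailnet. It assumes a hypothetical MagicDNS hostname "ollama-vm" for the VM, Ollama listening on its default port 11434, and a llama3.1 model that has already been pulled:

    ```python
    # Sketch: query a self-hosted Ollama instance over a Tailscale tailnet.
    # Assumptions: the VM is reachable as "ollama-vm" via MagicDNS (hypothetical name),
    # Ollama listens on its default port 11434, and the "llama3.1" model is pulled.
    import json
    import urllib.request

    OLLAMA_URL = "http://ollama-vm:11434/api/generate"  # tailnet hostname, not a public address

    payload = {
        "model": "llama3.1",
        "prompt": "Explain PCIe passthrough in one paragraph.",
        "stream": False,  # return one JSON object instead of a token stream
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    print(result["response"])
    ```

    Open WebUI sits in front of the same API with a browser UI; the point of the setup is that none of it has to be exposed to the public internet, only to the tailnet.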

#2 · [email protected], in reply to [email protected]:

      I was going to click, until I saw the computer system it is running on.

#3 · [email protected], in reply to [email protected]:

        You mean an A4000? My GTX 1070 can run Llama 3.1 comfortably (the low-B-parameter versions). My 7800 XT answers instantly.
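
        For the low-B variants, a similar sketch (again an assumption-laden example, not from this thread): pull the 8B tag of Llama 3.1 and list what is installed locally, using Ollama's pull and tags endpoints on the default local port:

        ```python
        # Sketch: pull a small Llama 3.1 tag and list local models via Ollama's HTTP API.
        # Assumes Ollama is running locally on its default port 11434.
        import json
        import urllib.request

        BASE = "http://localhost:11434"

        # Pull the 8B variant (the kind of low-B model an 8 GB card can run comfortably).
        # Current API docs use "model"; older Ollama versions used "name" for this field.
        pull_req = urllib.request.Request(
            f"{BASE}/api/pull",
            data=json.dumps({"model": "llama3.1:8b", "stream": False}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(pull_req) as resp:
            print(json.load(resp))  # expected: {"status": "success"} once the pull completes

        # List the models now available locally.
        with urllib.request.urlopen(f"{BASE}/api/tags") as resp:
            for model in json.load(resp)["models"]:
                print(model["name"], model.get("size"))
        ```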
