What CLIP Machine Learning Model can I use for Immich?

Posted in Selfhosted. Tags: docker, selfhost, immich, machine learning
#1 [email protected] wrote:

I'm currently running my Immich server on a mini PC with Proxmox.

It's got 3 N97 CPU cores available to it and 7 GB of RAM.
It's using the default ViT-B-32__openai model. I was wondering if I can use a more powerful model, but I'm not sure which one to pick, or whether I should enable hardware acceleration.

This is my YAML file:

      immich-machine-learning:  
        container_name: immich_machine_learning  
        # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.  
        # Example tag: ${IMMICH_VERSION:-release}-cuda  
        image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}  
        # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration  
        #   file: hwaccel.ml.yml  
        #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable  
        volumes:  
          - immich-model-cache:/cache  
        env_file:  
          - stack.env  
        restart: always  
        healthcheck:  
          disable: false  
    

I looked at the docs, but they're a bit confusing, so that's why I'm here.

#2 [email protected] wrote, in reply to #1:

According to this paste, you're not using hardware-accelerated inference at all; it's running on the CPU.

Change the image tag and the "cpu" service to openvino and see if that performs any better.
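
If it helps, this is roughly what that change looks like against your snippet above. It's a sketch based on the comments already in your compose file, so double-check the tag and service name against Immich's hardware acceleration docs:

      immich-machine-learning:
        container_name: immich_machine_learning
        # the -openvino suffix pulls the OpenVINO-enabled image
        image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
        extends:
          file: hwaccel.ml.yml
          service: openvino # was commented out as "cpu"
        volumes:
          - immich-model-cache:/cache
        env_file:
          - stack.env
        restart: always
        healthcheck:
          disable: false

Then recreate the container so it pulls the new image.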

#3 [email protected] wrote, in reply to #1:

OpenVINO is about your only option here. It's not super efficient, and it will increase system load while those jobs run.

#4 [email protected] wrote, in reply to #1:

They have a list of the models with performance and RAM usage data here: https://immich.app/docs/features/searching/

You kind of just have to pick one, try it, and see if it crashes from low memory.

Also enable OpenVINO hardware acceleration, because it will be extremely slow otherwise.
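
For reference, the openvino service that the extends line pulls in from Immich's hwaccel.ml.yml mostly just passes the iGPU through to the container. It looks roughly like this, but check the current hwaccel.ml.yml in the Immich repo since the exact contents can change between releases:

      openvino:
        device_cgroup_rules:
          - "c 189:* rmw"
        devices:
          - /dev/dri:/dev/dri # Intel iGPU render device
        volumes:
          - /dev/bus/usb:/dev/bus/usb

And to actually switch models, if I remember right, that's set in the Immich admin panel (Machine Learning settings, Smart Search), not in the compose file.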

#5 [email protected] wrote, in reply to #2:

Thanks! I managed to enable OpenVINO and pick a stronger model, and it's working quite well. I also bumped the RAM to 10 GB.
