What CLIP Machine Learning Model can I use for Immich?
-
I'm currently running my Immich server on a mini PC with Proxmox. It's got 3x N97 CPU cores available to it and 7 GB of RAM.
It's using the default ViT-B-32__openai model. I was wondering if I can use a more powerful model, but I'm not sure which one, or whether I should enable hardware acceleration, etc. This is my YAML file:
```yaml
immich-machine-learning:
  container_name: immich_machine_learning
  # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
  # Example tag: ${IMMICH_VERSION:-release}-cuda
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
  # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
  #   file: hwaccel.ml.yml
  #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
  volumes:
    - immich-model-cache:/cache
  env_file:
    - stack.env
  restart: always
  healthcheck:
    disable: false
```
I looked at the docs but it's a bit confusing so that's why I'm here.
-
According to this paste, you're not using hardware-accelerated inference at all; it's running on the CPU.
Change the release tag and the "cpu" service to openvino and see if that performs any better.
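For reference, a minimal sketch of what that change might look like, assuming the standard Immich compose layout where hwaccel.ml.yml sits next to the compose file (adjust paths to your setup):

```yaml
immich-machine-learning:
  container_name: immich_machine_learning
  # the -openvino tag pulls the image with the OpenVINO execution provider included
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
  extends:
    file: hwaccel.ml.yml
    service: openvino # was "cpu"; this service passes the iGPU through to the container
  volumes:
    - immich-model-cache:/cache
  env_file:
    - stack.env
  restart: always
  healthcheck:
    disable: false
```

Recreate the container afterwards (docker compose up -d) so the new image and device mappings take effect.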
-
OpenVINO is about your only option here. It is not super efficient and will increase system load during those jobs.
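If the extra load is a concern, there are a couple of environment variables that can rein the ML container in; a hedged sketch, assuming the variable names currently listed at https://immich.app/docs/install/environment-variables (in your setup they could go in stack.env instead):

```yaml
immich-machine-learning:
  # ...rest of the service as before...
  environment:
    MACHINE_LEARNING_WORKERS: 1     # a single worker process caps peak CPU and RAM use
    MACHINE_LEARNING_MODEL_TTL: 300 # unload idle models after 5 minutes to free RAM
```

Lowering the Smart Search job concurrency in the admin UI is another way to keep things manageable.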
-
They have a list here of the models with performance and RAM usage data: https://immich.app/docs/features/searching/
You kind of just have to pick one, try it, and see if it crashes from low memory.
Also enable OpenVINO HWaccel, because it will be extremely slow otherwise.
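One Proxmox-specific caveat, since this runs in a VM: the openvino service in hwaccel.ml.yml works by mapping the host's render device into the container, roughly like the excerpt below (an approximation; your copy of the upstream file is the source of truth). That means the N97's iGPU (/dev/dri) has to be passed through to the VM or LXC first, or OpenVINO will have nothing to use:

```yaml
# approximate excerpt of the openvino service from hwaccel.ml.yml
openvino:
  device_cgroup_rules:
    - "c 189:* rmw"
  devices:
    - /dev/dri:/dev/dri
  volumes:
    - /dev/bus/usb:/dev/bus/usb
```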
-
Thanks, I managed to enable OpenVINO and pick a stronger model, and it's working quite well. I also bumped the RAM to 10 GB.