New server for the family, Proxmox or TrueNAS, LXC or Docker?
-
cross-posted from: https://sh.itjust.works/post/39436154
Hello everyone, I'm building a new server for the house. It will act as a NAS for everyone and host a few services like Paperless, Immich, Baikal, Jellyfin, Syncthing, probably Navidrome, etc. The main reason I'm building a new one is that my current one, an HP prebuilt with a 3rd-gen i5 and 8GB of RAM, is slowly kicking the bucket: my 4TB HDD is completely full and there are no more SATA ports or space in the case. I am fully psychologically prepared to be 24/7 tech support; after all, I already am, and this way I'll be supporting services whose inner workings I understand (and that I trust!) rather than some strange Big Tech service whose UI and internals change every other day.
For reference, my new build is:
- CPU: Ryzen 5 PRO 4560G + stock cooler. It has integrated graphics, which I can use for Jellyfin transcoding.
- RAM: Corsair Vengeance 2x8GB (from my desktop before I upgraded to 64GB. If needed I'll upgrade the capacity in the future and probably switch to ECC; I chose this CPU because it supports it.)
- NVMe SSD (boot + VM storage): Verbatim VI3000 512GB
- Storage (SATA): 4x12TB Seagate Enterprise (white label) for ZFS in RAID-Z1, plus 1x512GB Samsung SSD as cache.
I'm planning on running Proxmox on bare metal and spinning up VMs/containers as needed, for which I'm wondering:
-
I know Proxmox can manage ZFS arrays. Is it better to create the array via Proxmox and then share it as needed via something like OpenMediaVault in a VM/container, OR to create a TrueNAS VM, pass through the SATA controller to it, and manage everything via TrueNAS? I've done the latter in the past on another server, and it's holding strong.
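If you go the Proxmox-managed route, creating the pool on the host is only a few commands. A hedged sketch (the pool name, dataset name, and by-id device paths are placeholders for your actual disks):

```shell
# 4-disk RAID-Z1 pool; by-id paths survive device reordering, unlike /dev/sdX
zpool create -o ashift=12 tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Optional: the 512GB Samsung SSD as L2ARC read cache
# (often not worth it for a media workload; more RAM helps first)
zpool add tank cache /dev/disk/by-id/ata-SSD1

# A dataset to export as the NAS share
zfs create -o compression=lz4 tank/media
```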
-
I don't know if exposing the server to the open internet is a good idea (with fail2ban and a properly configured firewall, of course), or whether to just keep a VPN connection to the server always open. I think the latter would be more secure, but also less user-friendly for parts of the family. I currently use WireGuard to remote into my server when needed, and some networks, like eduroam at my university, block it completely.
- Self-signed SSL certificates might also be a problem in the latter case.
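For the always-on VPN option, a minimal WireGuard client config sketch (keys, addresses, and the endpoint are placeholders; the split-tunnel AllowedIPs keeps only NAS traffic on the VPN):

```ini
# /etc/wireguard/wg0.conf on a family member's device (illustrative values)
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24        # only the server subnet, not all traffic
PersistentKeepalive = 25        # helps keep the tunnel up behind NAT
```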
-
Since I will experiment with this server a little bit, I was thinking of keeping:
- One VM for services for the family (exposed to internet or VPN)
- One VM for services I still want to expose (I currently expose a couple of websites for friends, with data archived on my NAS)
- One VM for me to experiment with before going in "production" for the family
Each VM would host its services using Docker+Portainer.
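For each VM's stack, a minimal Docker Compose sketch of what a couple of those services could look like (image tags, paths, and ports are illustrative):

```yaml
# docker-compose.yml for the "family services" VM (sketch)
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri        # iGPU passed in for hardware transcoding
    volumes:
      - /mnt/media:/media:ro
    ports:
      - "8096:8096"
    restart: unless-stopped

  syncthing:
    image: syncthing/syncthing:latest
    volumes:
      - /mnt/sync:/var/syncthing
    ports:
      - "8384:8384"
    restart: unless-stopped
```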
My question is: is this too convoluted? Should I just use Proxmox's LXC containers (which I have no experience with) and host services there? I was also thinking of spinning up a pfSense/OPNsense box and putting the server on a separate VLAN from the domestic LAN, but that will be a project for later. Unfortunately, the way Ethernet is wired in my house and the physical space I have available prevent me from separating the networks with a second physical router.
Thanks!
Hey breh. My setup is bare metal Proxmox, a TrueNAS VM, and a couple of Debian LXCs. The main LXC is running Docker just fine with zero issues.
I'd honestly suggest skipping the TrueNAS VM and just managing your disks in Proxmox, but my setup has been holding steady for about 5 years now.
You definitely don't have to choose between LXC and Docker.
-
Hey breh. My setup is bare metal Proxmox, a TrueNAS VM, and a couple of Debian LXCs. The main LXC is running Docker just fine with zero issues.
I'd honestly suggest skipping the TrueNAS VM and just managing your disks in Proxmox, but my setup has been holding steady for about 5 years now.
You definitely don't have to choose between LXC and Docker.
This is me. I originally had TrueNAS Core on bare metal but wanted to be able to do more, so TrueNAS is now a VM running the exact same pools as before.
Proxmox is so goddamn slick!
LXCs for Docker Compose stacks, and another Proxmox setup with a backup TrueNAS, rsynced weekly!
-
I used to run Proxmox, but I wasn't using most of its functions. I've now migrated to a couple of low-power Debian machines on ZFS with LXC. I use Incus and Ansible to manage everything, including backups.
-
How is that different from a VM running Docker inside it? Any specific advantages/disadvantages to each approach?
In Proxmox, LXCs let you easily share resources between containers: your iGPU, for example, can be shared by a Jellyfin container and a separate Immich container. From my understanding, a VM binds whatever resource is passed to it, which then can't easily be used by other VMs or containers.
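For what it's worth, the classic way to expose the iGPU's render devices to an LXC on Proxmox is a couple of lines in the container config. A sketch (the container ID is a placeholder, and newer Proxmox versions also offer device passthrough in the GUI):

```conf
# /etc/pve/lxc/101.conf (illustrative container ID)
# Allow DRM devices (major 226) and bind-mount /dev/dri into the container
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Because it's a bind mount rather than exclusive PCI passthrough, several containers can share the same /dev/dri this way.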
-
I run Proxmox, and Proxmox manages the ZFS pool. There are VMs for important and convenience services, where the important ones only hold things needed for the machine to work (so networking-related) and metrics. I also have a desktop VM for occasional use, and you can install OPNsense later if you want an advanced firewall for VLANs and maybe the internet too.
The storage is made accessible through virtiofs shares, but the setup is quite hacky, and some things don't like it (it can't store any kind of database, for example) because virtiofs technically works like a network filesystem and doesn't support some consistency features (yet?). Maybe Ceph would be a solution; it's natively supported by Proxmox. If I were to build a new one, I would try out TrueNAS, its newer Linux-based version. I hear it can run VMs too if needed. I suspect it may be more user-friendly, but I've never used its web interface.
-
I know it's a different option from what OP wanted, but I've had the same battle in my mind and ended up with Fedora Server.
-
I used to run Proxmox, but I wasn't using most of its functions. I've now migrated to a couple of low-power Debian machines on ZFS with LXC. I use Incus and Ansible to manage everything, including backups.
Aha! I was considering moving from Proxmox to Incus too, but Incus seemed quite new, and there wasn't much documentation (at the time).
How do you find it now?
-
In Proxmox, LXCs let you easily share resources between containers: your iGPU, for example, can be shared by a Jellyfin container and a separate Immich container. From my understanding, a VM binds whatever resource is passed to it, which then can't easily be used by other VMs or containers.
This is really interesting, might be the way to go for me
-
My setup is TrueNAS SCALE on bare metal with VMs for Proxmox and Jellyfin. I pass an Arc A380 to Jellyfin for transcoding and it works great. I also leverage LXC containers a lot for small services. I keep everything behind a VPN. It's pretty easy to distribute WireGuard configs and import them on most OSes, but it's been a mixed bag getting family members to use them.
I'm a fan of having dedicated network hardware and VLANs on one router. I generally go for MikroTik. I used to run pfSense in a VM, and when the server went down so did everything else, which caused the house to erupt into chaos.
Also, if you're already considering new hardware, I really recommend looking into surplus enterprise gear. I run my whole lab on an R730XD. It holds a ton of drives, has iDRAC (I can't live without it now) and ECC for extra peace of mind during ZFS scrubs, and these servers hold an insane amount of inexpensive RAM. They're fairly cheap on eBay or from refurbishment companies. Bring your own drives with warranties though; used drives are a headache. Servers like this can be really noisy, so I keep mine in the basement.
I'll also suggest a second drive to mirror your boot drive. You can and should back up your configs, but a mirror saves headache and downtime if the boot SSD fails. That's probably even more important if you're planning on using this pool for VM storage.
Have fun!
Also, if you're already considering new hardware, I really recommend looking into surplus enterprise gear. I run my whole lab on an R730XD. It holds a ton of drives, has iDRAC (I can't live without it now) and ECC for extra peace of mind during ZFS scrubs, and these servers hold an insane amount of inexpensive RAM. They're fairly cheap on eBay or from refurbishment companies. Bring your own drives with warranties though; used drives are a headache. Servers like this can be really noisy, so I keep mine in the basement.
I've briefly considered it but it is out of the question for me. Not enough space in the house and enterprise gear is way too noisy. This setup will probably sit next to the TV in the living room so it has to be as silent as possible.
-
I'd not recommend Synology anymore, as they're starting to implement vendor lock-in on their drives and NAS boxes. As in, you'll have to use their drives for the NAS to work.
I've seen this floating around, but is it solid info? I mean, a big percentage of users doesn't have Synology HDDs; what would happen to them if Synology implements this? Maybe it will only apply to business uses of some of their apps?
-
I run Proxmox, and Proxmox manages the ZFS pool. There are VMs for important and convenience services, where the important ones only hold things needed for the machine to work (so networking-related) and metrics. I also have a desktop VM for occasional use, and you can install OPNsense later if you want an advanced firewall for VLANs and maybe the internet too.
The storage is made accessible through virtiofs shares, but the setup is quite hacky, and some things don't like it (it can't store any kind of database, for example) because virtiofs technically works like a network filesystem and doesn't support some consistency features (yet?). Maybe Ceph would be a solution; it's natively supported by Proxmox. If I were to build a new one, I would try out TrueNAS, its newer Linux-based version. I hear it can run VMs too if needed. I suspect it may be more user-friendly, but I've never used its web interface.
I have good news: I just read the Proxmox 8.4 changelog, and they added support for using virtiofs with VMs, so using it no longer seems to require hacks! The limitation with databases probably still applies, though.
@[email protected] unsure if you have read it already so tagging.
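For anyone trying it: once the host-side share is configured, mounting it inside the VM is a one-liner. A sketch (the tag `media` and mount point are placeholders for whatever you named the share on the Proxmox side):

```shell
# Inside the guest: mount a virtiofs share by its tag
mount -t virtiofs media /mnt/media

# Or persist it in /etc/fstab:
# media  /mnt/media  virtiofs  defaults  0  0
```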
-
Proxmox with VMs for Docker, per your original plan (don't use Portainer; use Dockge instead). You can also use small LXCs for services that aren't set up for Docker, and Proxmox offers TurnKey LXC images to make it that much easier.
-
This is me. I originally had TrueNAS Core on bare metal but wanted to be able to do more, so TrueNAS is now a VM running the exact same pools as before.
Proxmox is so goddamn slick!
LXCs for Docker Compose stacks, and another Proxmox setup with a backup TrueNAS, rsynced weekly!
Why are we running Docker inside LXC? That's not a wise decision, and it's specifically called out as a big "no-no" by both the Docker and Proxmox devs.
VMs don't use as many resources as you might think. I've got multiple VMs full of Docker stacks (along with other VMs running various game servers, and several LXCs for various "not set up for Docker" services) spread across three i7-7700T servers; none of them are even close to being taxed.
-
What do you use your virtualized Proxmox for?
Pretty much everything else, virtualization-wise. I have a few small LXC containers running AdGuard and the UniFi Controller, and VMs for a GitLab instance, a GitLab runner, and some game servers. I could host all that in TrueNAS directly, but I like Proxmox's UI.
-
Also, if you're already considering new hardware, I really recommend looking into surplus enterprise gear. I run my whole lab on an R730XD. It holds a ton of drives, has iDRAC (I can't live without it now) and ECC for extra peace of mind during ZFS scrubs, and these servers hold an insane amount of inexpensive RAM. They're fairly cheap on eBay or from refurbishment companies. Bring your own drives with warranties though; used drives are a headache. Servers like this can be really noisy, so I keep mine in the basement.
I've briefly considered it but it is out of the question for me. Not enough space in the house and enterprise gear is way too noisy. This setup will probably sit next to the TV in the living room so it has to be as silent as possible.
Oh ya, makes sense. Anything in a rack form factor would be much too loud to live with. In that case, I think you've made great hardware choices!
-
I've seen this floating around. But is this solid info? I mean, a big percentage of users does not have Synology HDDs, what would happen if they implement this? Maybe this will be the case for business uses of some of their apps?
Yeah, it's started to roll out on their new hardware:
https://www.theverge.com/news/652364/synology-nas-third-party-hard-drive-restrictions
-
What taxon said.
Most of my services have their own LXC with Docker. A few that need it are VMs.
It works so well I often forget how I set things up; it's very set-and-forget, it just keeps working.
-
So you're running one LXC with one Docker container in it?
-
So you're running one LXC with one Docker container in it?
I am running a few LXCs, each running a single Docker container.
-
Aha! I was considering moving from Proxmox to Incus too, but Incus seemed quite new, and there wasn't much documentation (at the time).
How do you find it now?
Really great. Passing through hardware is a lot easier, settings can be defined in profiles (which containers should start at boot, which should have uid/gid mapping, privileged or not, etc.), and overall system memory usage is way lower.
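A sketch of what that profile-based workflow looks like (the remote, image alias, and profile/container names are illustrative):

```shell
# A profile for containers that should start at boot
incus profile create autostart
incus profile set autostart boot.autostart=true

# Launch a container with the default profile plus the autostart profile
incus launch images:debian/12 mycontainer -p default -p autostart

# Inspect the effective config, with all profiles merged in
incus config show mycontainer --expanded
```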
-
Really great. Passing through hardware is a lot easier, settings can be defined in profiles (which containers should start at boot, which should have uid/gid mapping, privileged or not, etc.), and overall system memory usage is way lower.
Nice. I'll go that way the next time I brave the dust and cobwebs where the server's currently located.