New server for the family, Proxmox or TrueNAS, LXC or Docker?
-
cross-posted from: https://sh.itjust.works/post/39436154
Hello everyone, I'm building a new server for the house. It will act as a NAS for everyone and host a few services like Paperless, Immich, Baikal, Jellyfin, Syncthing, probably Navidrome, etc. The main reason I'm building a new one is that my current one is an HP prebuilt with a 3rd-gen i5 and 8GB of RAM that is slowly kicking the bucket; my 4TB HDD is completely full and there are no more SATA ports nor space in the case.
I am fully psychologically prepared to be 24/7 tech support, but after all I already am, and this way I get to support services whose inner workings I know (and that I trust!) and not some strange Big Tech service whose UI and inner workings change every other day.
For reference my new build is:
- CPU: Ryzen 5 PRO 4650G + stock cooler. It has integrated graphics, which I can use for Jellyfin transcoding.
- RAM: Corsair Vengeance 2x8GB (from my desktop before I upgraded to 64GB. If needed I will upgrade the capacity in the future and probably switch to ECC; I chose this CPU because it supports it)
- NVMe SSD (boot + VM storage): Verbatim Vi3000 512GB
- Storage (SATA): 4x12TB Seagate Enterprise (white label) for ZFS in RAID-Z1, plus 1x512GB Samsung SSD as cache.
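For reference, the pool layout above could be created with something along these lines (sketch only, not a tested recipe; the device names are placeholders for the actual disks):

```shell
# RAID-Z1 across the four 12 TB disks (placeholder device names):
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Add the 512 GB SSD as an L2ARC read cache
# (L2ARC mainly helps reads and uses some RAM for its index):
zpool add tank cache /dev/sdf
```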
I'm planning on using Proxmox on bare metal and spinning up VMs/containers as needed, for which I'm wondering:
-
I know Proxmox can manage ZFS arrays. Is it better to create the array in Proxmox and then share it as needed via something like OpenMediaVault in a VM/container, OR to create a TrueNAS VM, pass through the SATA controller to it, and manage everything via TrueNAS? I've done the latter in the past on another server, and it's holding strong.
-
I don't know if exposing the server to the open internet is a good idea (with fail2ban and a properly configured firewall, of course), or whether to just keep a VPN connection to the server always open. I think the latter would be more secure, but also less user-friendly for parts of the family. I currently use WireGuard to remote into my server when needed, and some networks, like eduroam at my university, block it completely.
- Self-signed SSL certificates might also be a problem in the latter case.
-
Since I will experiment with this server a little bit, I was thinking of keeping:
- One VM for services for the family (exposed to internet or VPN)
- One VM for services I still want to expose (I currently expose a couple websites for friends with data archived in my NAS)
- One VM for me to experiment with before going in "production" for the family
Each VM would host its services using Docker+Portainer.
My question is: is this too convoluted? Should I just use Proxmox's LXC containers (which I have no experience with) and host services there?
I was also thinking of spinning up a pfSense/OPNsense box and putting the server on a separate VLAN from the domestic LAN, but that will be a project for later. Unfortunately, the way Ethernet is wired in my house and the physical space I have available prevent me from separating the networks with a physically separate router.
Thanks!
A commonly cited rule of thumb for ZFS is 1GB of RAM per 1TB of storage. Not sure how relevant it actually is, since I have a lot more RAM than storage on my server, but it's something to note.
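Applied to the build above (4x12TB in RAID-Z1, so roughly one disk's worth of parity), the rule of thumb works out to something like:

```shell
# Rough rule-of-thumb math only: ~1 GB RAM per 1 TB of usable storage.
disks=4; size_tb=12; parity=1
usable_tb=$(( (disks - parity) * size_tb ))
echo "~${usable_tb} TB usable, so ~${usable_tb} GB RAM suggested"
# prints: ~36 TB usable, so ~36 GB RAM suggested
```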
Virtualising ZFS on top of ZFS is generally not supported and can cause issues. I run TrueNAS on bare metal and run a lot of Docker services in TrueNAS itself; you can also run VMs on TrueNAS. I know some people virtualise TrueNAS in Proxmox, but the only use I see is to host something in a VM on Proxmox that you cannot really run inside a TrueNAS VM.
The thing with TrueNAS is that everything is stored either in the datasets or in the config file, and the latter can be backed up pretty easily.
I have both a TrueNAS server and a Proxmox box running OPNsense. Proxmox is very nice; I just don't see a reason to run TrueNAS inside Proxmox.
-
Have you given Synology any thought? It can run containers with no issues. As for streaming media, you might get a separate TV box (I use an Nvidia Shield TV with LineageOS) and you won't have any transcoding issues!
I'd not recommend Synology anymore, as they're starting to implement vendor lock-in on their drives and NAS boxes. As in, you'll have to use their drives for the NAS to work.
-
To expose your services easily and securely, look up Tailscale. It's completely free and is set up per device: e.g., install it on your mom's phone and you can manage that phone's access.
Tailscale uses WireGuard and some weird-ass NAT magic to give every device a direct connection to every other one, creating a "tailnet".
It's a zero-trust architecture, so you have to whitelist every device on it. What that means practically is that it's very difficult to compromise, by its very nature. You don't have to have a high technical level to be very secure using Tailscale.
There is also Twingate, which I think is similar, but I'm not as familiar with it.
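If it helps, the per-device whitelisting happens in the tailnet policy file (ACLs). A rough, hypothetical sketch (the user, hostname, and port are placeholders):

```json
{
  "acls": [
    // let mom's login reach only Jellyfin on the server
    {"action": "accept", "src": ["mom@example.com"], "dst": ["home-server:8096"]}
  ]
}
```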
-
Have you given Synology any thought? It can run containers with no issues. As for streaming media, you might get a separate TV box (I use an Nvidia Shield TV with LineageOS) and you won't have any transcoding issues!
No, I wanted something that I could upgrade in the future if I wanted, especially for disks. I still have 4x 3.5" slots available in the case, and as for SATA ports on the mobo, I can always buy a controller to plug into the PCIe slots.
-
Proxmox all the way, can't recommend it highly enough. I was very scared to try it in the beginning, but it's the best server choice I ever made.
You don't need to choose between LXC and Docker. You can just run Docker inside an LXC container; many helper scripts set you up like that by default.
Technically they don't recommend doing so, but I know others who do this and I have never encountered a problem with it.
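For what it's worth, on the Proxmox side, enabling Docker inside an LXC container is mostly a matter of turning on nesting for that container (container ID 101 here is just an example):

```shell
# Run on the Proxmox host; 101 is a placeholder container ID.
# nesting lets containers run inside the container; keyctl is
# needed by Docker in unprivileged containers.
pct set 101 --features nesting=1,keyctl=1
pct stop 101 && pct start 101
```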
How is that different from a VM with Docker inside it? Any specific advantages/disadvantages to each approach?
-
To expose your services easily and securely, look up Tailscale. It's completely free and is set up per device: e.g., install it on your mom's phone and you can manage that phone's access.
Tailscale uses WireGuard and some weird-ass NAT magic to give every device a direct connection to every other one, creating a "tailnet".
It's a zero-trust architecture, so you have to whitelist every device on it. What that means practically is that it's very difficult to compromise, by its very nature. You don't have to have a high technical level to be very secure using Tailscale.
There is also Twingate, which I think is similar, but I'm not as familiar with it.
I had thought of that, but I didn't really like the idea of using a third-party service to access my machines.
Also, I didn't mention this in the post, but while my ISP gives me a public IP, I only use port forwarding to WireGuard into my home network. My services are exposed via a VPS hosted on Oracle Cloud's free tier, which forwards public traffic to my server over another WireGuard connection.
-
How is that different from a VM with Docker inside it? Any specific advantages/disadvantages to each approach?
Lower overhead
-
How is that different from a VM with Docker inside it? Any specific advantages/disadvantages to each approach?
What taxon said.
Most of my services have their own LXC container with Docker.
A few that need it are VMs.
It works so well I often forget how I set things up, because it's very leave-it-and-forget-it. It just keeps working.
-
I had thought of that, but I didn't really like the idea of using a third-party service to access my machines.
Also, I didn't mention this in the post, but while my ISP gives me a public IP, I only use port forwarding to WireGuard into my home network. My services are exposed via a VPS hosted on Oracle Cloud's free tier, which forwards public traffic to my server over another WireGuard connection.
Headscale is a self-hosted version of the Tailscale control server. You can also just do WireGuard manually; Tailscale is essentially a management tool for setting up an overlay network on top of WireGuard.
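For the manual WireGuard route, a minimal server-side sketch looks roughly like this (keys, addresses, and port are placeholders; generate real keys with `wg genkey`):

```shell
# Sketch only: write a minimal server config, then bring the tunnel up.
# Replace the placeholder keys/IPs with your own.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per family device
PublicKey = <phone-public-key>
AllowedIPs = 10.8.0.2/32
EOF
wg-quick up wg0
```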
-
My vote is for Proxmox because you can run all those other options (and many more) INSIDE Proxmox.
-
TrueNAS will remove VMs in the next release. It still supports containers directly.
Edit: apparently I misremembered that and it's untrue.
Do you have a source for that? I can't find anything related to TrueNAS deprecating VMs.
-
I stopped self-hosting stuff that's for the family.
In case something happens to me, there's no way my wife is going to keep this stuff running, and the kids are too young, so they would lose everything. Family stuff goes in managed solutions (like Proton).
Personal and public stuff is self-hosted. Just something to consider.
-
My setup is TrueNAS SCALE on bare metal with VMs for Proxmox and Jellyfin. I pass an Arc A380 through to Jellyfin for transcoding and it works great. I also lean on LXC containers a lot for small services. I keep everything behind a VPN; it's pretty easy to distribute WireGuard configs and import them on most OSes, but it's been a mixed bag getting family members to use it.
I'm a fan of having dedicated network hardware and VLANs on one router; I generally go for MikroTik. I used to run pfSense in a VM, and when the server went down so did everything else, which caused the house to erupt into chaos.
Also, if you're considering new hardware already, I really recommend looking into surplus enterprise gear. I run my whole lab on an R730XD. It holds a ton of drives, has iDRAC (I can't live without it now) and ECC for extra peace of mind during ZFS scrubs, and it takes an insane amount of inexpensive RAM. They're fairly cheap on eBay or from refurbishment companies. Bring your own drives with warranties, though; used drives are a headache. Servers like this can be really noisy, so I keep mine in the basement.
I'll also suggest a second drive to mirror your boot drive. You can and should back up your configs, but a mirror saves headache and downtime if the boot SSD fails. Probably even more important if you're planning to use this pool for VM storage.
Have fun!
-
My setup is TrueNAS SCALE on bare metal with VMs for Proxmox and Jellyfin. I pass an Arc A380 through to Jellyfin for transcoding and it works great. I also lean on LXC containers a lot for small services. I keep everything behind a VPN; it's pretty easy to distribute WireGuard configs and import them on most OSes, but it's been a mixed bag getting family members to use it.
I'm a fan of having dedicated network hardware and VLANs on one router; I generally go for MikroTik. I used to run pfSense in a VM, and when the server went down so did everything else, which caused the house to erupt into chaos.
Also, if you're considering new hardware already, I really recommend looking into surplus enterprise gear. I run my whole lab on an R730XD. It holds a ton of drives, has iDRAC (I can't live without it now) and ECC for extra peace of mind during ZFS scrubs, and it takes an insane amount of inexpensive RAM. They're fairly cheap on eBay or from refurbishment companies. Bring your own drives with warranties, though; used drives are a headache. Servers like this can be really noisy, so I keep mine in the basement.
I'll also suggest a second drive to mirror your boot drive. You can and should back up your configs, but a mirror saves headache and downtime if the boot SSD fails. Probably even more important if you're planning to use this pool for VM storage.
Have fun!
What do you use your virtualized Proxmox for?
-
Might want to check out Unraid.
-
IMHO, it really depends on the specific services you want to run. I guess you are most familiar with Docker and everything that you want to run has a first-class-citizen Docker container for it. It also depends on whether the services you want to run are suitable for Internet exposure or not (and how comfortable you are with the convenience tradeoff).
LXC is very different. Although you can run Docker nested within LXC, you've got to be careful because, IIRC, there are setups that used to not work so well (it may be better now, but Docker nested within LXC on a ZFS filesystem used to be a problem).
I like that Proxmox + LXC + ZFS means it's ZFS filesystems all the way down, which gives you a ton of flexibility: with VMs and volumes you need to assign sizes to them, resize them if needed, etc., whereas with ZFS filesystems you can set quotas, and changing them is much less fuss. But that route would likely require more effort from you. It's what I use, but I think it's not for everyone.
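As an aside, when the ZFS-backed Docker-in-LXC issue does bite, one workaround people reach for is pointing Docker at a different storage driver, e.g. in `/etc/docker/daemon.json` (illustrative only; `fuse-overlayfs` has to be installed in the container):

```json
{
  "storage-driver": "fuse-overlayfs"
}
```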
-
- Self-signed SSL certificates might also be a problem in the latter case
-
Since I will experiment with this server a little bit, I was thinking of keeping:
- One VM for services for the family (exposed to internet or VPN)
- One VM for services I still want to expose (I currently expose a couple websites for friends with data archived in my NAS)
- One VM for me to experiment with before going in "production" for the family
Each VM would host its services using Docker+Portainer.
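The Docker+Portainer layer inside each VM is only a couple of commands; a minimal sketch using the official portainer-ce image:

```shell
# Create a named volume for Portainer's own data.
docker volume create portainer_data

# Run Portainer CE: HTTPS UI on port 9443, with access to the local
# Docker socket so it can manage this host's containers.
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```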
Hey breh. My setup is bare metal proxmox, a truenas VM, and a couple Debian LXCs. The main LXC is running docker just fine with zero issues.
I'd honestly suggest skipping the truenas VM and just managing your disks in proxmox, though my setup has been holding steady for about 5 years now.
Definitely don't have to choose between lxc or docker.
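If you do manage the disks in Proxmox directly, the pool described in the post is one command plus an optional cache device; a sketch with placeholder device IDs (use your real /dev/disk/by-id/ paths):

```shell
# RAIDZ1 across the four 12TB drives; ashift=12 for 4K-sector disks.
# The ata-DISK* names below are placeholders, not real device IDs.
zpool create -o ashift=12 tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Optional: add the 512GB SSD as an L2ARC read cache.
zpool add tank cache /dev/disk/by-id/ata-SSD1
```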
-
this is me. i originally had truenas core on bare metal but wanted to be able to do more, so truenas is now a vm running the exact same pools as before.
proxmox is so goddamn slick!
lxcs for docker compose stacks.
another proxmox setup with a backup truenas, rsync weekly!
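A weekly rsync to the second box can be as simple as one cron entry; paths and hostname below are hypothetical (and zfs send/receive would preserve snapshots better, but rsync is the simpler starting point):

```shell
# /etc/cron.d/nas-backup (hypothetical): every Sunday at 03:00,
# mirror the data dataset to the backup host over SSH.
# -a archive, -H hard links, -X xattrs, --delete mirrors removals.
0 3 * * 0 root rsync -aHX --delete /tank/data/ backuphost:/tank/backup/data/
```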
-
I used to run proxmox, but I wasn't using most of its functions. I've now migrated to a couple of low-power Debian machines on zfs with lxc. I use incus and ansible to manage everything, including backups.
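For reference, the day-to-day incus workflow is just a few commands; a sketch assuming the default public `images:` remote and a hypothetical container named "web":

```shell
# Launch a Debian 12 container named "web" from the public image server.
incus launch images:debian/12 web

# Get a shell inside it.
incus exec web -- bash

# Snapshot it, and export the instance to a tarball for backups.
incus snapshot create web weekly
incus export web /backups/web.tar.gz
```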
-
How is that different from a VM and using docker inside it? Any specific advantages/disadvantages to both approaches?
In Proxmox, LXCs let you easily share host resources between containers; for example, your iGPU can be used by a Jellyfin container and a separate Immich container at the same time. From my understanding, a VM takes exclusive ownership of whatever resource you pass through to it, so that resource can't easily be used by other VMs or containers.
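The iGPU sharing works by bind-mounting the host's /dev/dri render nodes into each container; a sketch of the commonly used lines added to /etc/pve/lxc/&lt;vmid&gt;.conf (container ID hypothetical; newer Proxmox versions also offer a device passthrough option in the UI that does this for you):

```
# Allow the DRI character devices (major number 226) inside the container
lxc.cgroup2.devices.allow: c 226:* rwm
# Bind-mount the host's /dev/dri into the container
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```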