Scoping Virtual CPUs and RAM for Proxmox

One of the more interesting problems of virtualizing your infrastructure is figuring out how beefy your servers must be to provide good, reliable performance. Over the years, different “rules of thumb” have been used; for example, a single-core processor running at one gigahertz was said to provide horsepower roughly equivalent to four 233 megahertz processors for virtualization purposes. If you weren’t running a graphical user interface (GUI) operating system, running four or more virtual machines on that single-core, single-threaded processor could still work just fine. But then came “multi-socket” motherboards, “multi-core” central processing units, and “multi-threading” cores…

For the TL;DR crowd, the modern “vCPU” calculation is “threads x cores = vCPUs,” so an ancient 4-core, 4-thread Intel i5 would be “4 cores times 4 threads = 16 virtual CPUs” for the purpose of deciding how many VMs you can run on that machine under an ESXi, Xen, or Proxmox hypervisor. For Random Access Memory (RAM), total up the RAM assigned to the VMs, then add 2 GB for the hypervisor to keep track of everything and handle incoming traffic to the VMs.
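The rule of thumb above boils down to two one-liners. Here is a small Python sketch of it; the function names are my own, and the 2 GB hypervisor overhead is the same estimate used throughout this post:

```python
def vcpu_budget(cores: int, threads: int) -> int:
    """Rule-of-thumb vCPU count: threads x cores."""
    return cores * threads

def ram_budget_gb(total_ram_gb: float, hypervisor_overhead_gb: float = 2.0) -> float:
    """RAM left over for guests after reserving ~2 GB for the hypervisor."""
    return total_ram_gb - hypervisor_overhead_gb

# The 4-core, 4-thread i5 from the example above:
print(vcpu_budget(cores=4, threads=4))  # 16
# A hypothetical 16 GB host leaves 14 GB for guests:
print(ram_budget_gb(16))
```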

Why is this? Well, generally speaking, your home network is running gigabit or 2.5-gigabit Ethernet at best, or some sort of WiFi with roughly equivalent speeds, and that network traffic is “bursty” rather than “constant.” With “bursty” traffic, the hypervisor can idle actual CPU usage down to a minimum until a network input arrives, then ramp up the cores assigned to that VM until the data is processed. But in order to do this, all that VM data needs to be in RAM or on a rapidly accessible SSD, or else you get slow performance.

For example, I am running an Ubuntu Server 22.04 machine as a Pi-hole on Proxmox (inspired by Craft Computing: https://www.youtube.com/watch?v=FnFtWsZ8IP0&t=238s ). The base system is a dual-core Celeron, so 2 cores and 2 threads, for a “vCPU scoping” of 4 vCPUs. However, even while answering hundreds of DNS queries, the VM rarely uses more than 5% of the total host CPU capacity. So this is a very lightweight VM on a very lightweight system. I also assigned a measly 512 MB of RAM to that VM, but that happens to be plenty for running a Pi-hole.

So what next? Well, I created a Minecraft server, gave it a single vCPU, and let my two sons have at it. Even under full load from two teenagers, it never popped more than 30% of the host CPU resources. I only assigned 2 GB of RAM to the Minecraft server, but that seems to be enough for a relatively “vanilla” Minecraft instance. Of the 8 GB of RAM on that system, 2 GB is for the hypervisor, leaving 6 GB for VMs, of which 2.5 GB is designated already. That leaves me with two more vCPUs and 3.5 GB of RAM to play with before performance starts noticeably dropping. I may add a Turnkey Linux Active Directory VM just to start playing around with it…
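The bookkeeping in the paragraph above can be written out in a few lines of Python. The VM list and numbers are just my own lab’s; the variable names are mine:

```python
HOST_RAM_GB = 8.0
HYPERVISOR_GB = 2.0  # reserved for the hypervisor itself

# RAM designated to each guest, in GB
vms = {
    "pihole": 0.5,      # 512 MB
    "minecraft": 2.0,
}

used_ram = sum(vms.values())
free_ram = HOST_RAM_GB - HYPERVISOR_GB - used_ram
print(f"Designated: {used_ram} GB, free for new VMs: {free_ram} GB")
```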

Now, with a slightly stronger CPU, such as a hyperthreading Intel Core i3, we get 2 cores and 4 threads, so 8 vCPUs. If we use that as the base for Proxmox, we could assign more vCPUs to each VM (although the Pi-hole doesn’t need more), or we could run more VMs. More VMs are great if you are building a “hacking lab,” where you spin up a target VM from vulnhub.com and then compete with your friends to capture the flag first.

A somewhat more challenging resource is Random Access Memory (RAM). Each VM needs a set amount of it, and at least in Proxmox, once you designate that memory, it is reserved for that VM. So the 8 GB of RAM in my dual-core Celeron system won’t go very far if I throw multiple game servers at it that each need more than 2 GB of RAM to function properly, no matter how many vCPUs I might have left, since Proxmox uses “designated memory” for VMs, or “guest operating systems.”
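Because designated memory is reserved up front, a new VM simply won’t fit once the budget is spent. A minimal sketch of that check, using my 8 GB host as the example (the function name and numbers are my own, not anything Proxmox provides):

```python
def can_fit(new_vm_gb: float, allocated_gb: float,
            host_gb: float = 8.0, hypervisor_gb: float = 2.0) -> bool:
    """True if a new VM's designated RAM still fits in the host budget."""
    return allocated_gb + new_vm_gb + hypervisor_gb <= host_gb

# With 2.5 GB already designated, one more 2 GB game server fits...
print(can_fit(2.0, allocated_gb=2.5))  # True
# ...but with 4.5 GB designated, the next 2 GB server does not.
print(can_fit(2.0, allocated_gb=4.5))  # False
```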

The last thing: all your VMs should be mapped to a solid-state drive, and for this, SATA III is just fine. With a 6 Gb/s transfer rate and massively higher input/output operations per second (IOPS) than a hard disk, even a cheap SSD is going to give you great VM performance. Even a tiny 120 GB SSD has plenty of space for Proxmox plus four to ten VMs. If you need “bulk storage,” spinning rust drives are still the best option in terms of terabytes per buck, but they are weak in the IOPS department; in a virtual environment, reserve them for network-attached storage roles, where reads and writes are mostly sequential rather than random. Still, 1 TB SSDs keep dropping in price, so snapping one up when cheap would be a good thing.

And this is why I maintain that a cheap used corporate drone box is the best buy for a homelab: it may use more electricity than a Raspberry Pi or ZimaBoard, but it uses way less than a Dell PowerEdge or HP DL380. Heck, if you can find a cheap 1st- or 2nd-generation Ryzen 5 with 6 cores/12 threads, you’ll likely never run out of virtual CPU power for any homelab project (6 × 12 = 72 vCPUs). Even a cheap i7 with 4c/8t gives 32 vCPUs; that’s more than enough for network services, game servers, media servers, or storage.

Caveat: my brother-in-law runs a generative AI system in his homelab, which means he has to use a GPU (a GeForce RTX 2060 in this case), and that is not a cheap project. But running a VM with GPU passthrough for AI/ML workloads isn’t something I plan to do in the near to mid term.

There is one downside to going with an old corporate drone box, and that is heat. My Celeron system sips power; even at full load, the passively cooled CPU draws a measly 10 watts. A Core i7-2600S has a TDP of 65 watts, which means 550% more heat at maximum load, but a homelab generally isn’t running at 100% for very long, as opposed to datacenter servers, where idle is the abnormal condition.
