I hope this is a signal that a third cloud option, BYOC (build your own cloud), is finally becoming practical. Yes, the physical management of racks is a massive part of running a cloud, but the software stack is honestly why AWS and the like are winning much of the time, at least for the small use cases I have been a part of. I priced out some medium servers, and the cost of buying enough for the load plus extras for failover, and hosting them, was -way- under AWS and other cloud vendors (these were GPU loads), but managing them was the issue. 'Just spin up an instance...' is such a massive enabler for ideas. Something that gives me a viable software stack to build my own cloud on easily is a huge win for abandoning the major cloud vendors. Keep it coming!
I think the main selling point for SMEs (with a small IT team) is that Proxmox is very easy to set up (download the ISO, install the Debian-based system, ready to go). CloudStack seems to require a lot of work just to get it running: https://docs.cloudstack.apache.org/en/latest/quickinstallati...
Maybe I'm wrong - but where I am from, companies with less than 500 employees are like 95% of the workforce of the country. That's big enough for a small cluster (in-house/colocation), but too small for something bigger.
Yeah. The keys here are 'easy' and 'I can play with it at home first'. Let's be honest, being able to throw together a bunch of old dead boxes and put Proxmox on them in a weekend is a game changer for the learning curve.
The main reason I never tried OpenStack was that the official requirements were more than I had in my home VM host, and I couldn't figure out if the hardware requirements were real or suggested.
Proxmox has very little overhead. I've since moved to Incus. There are some really decent options out there, although Incus still has some gaps in the functionality Proxmox fills out of the box.
PLEASE DON'T DOWNVOTE ME TO HELL, THIS IS A DISCLAIMER: I AM JUST SHARING WHAT I'VE READ, I AM NOT CLAIMING IT AS FACT.
...ahem...
When I was researching this a few years ago I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.
> When I was researching this a few years ago I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.
And according to every ex-Amazonian I've met, the core of AWS is a bunch of Perl scripts glued together.
I think you know as well as I do that it very much does matter. Even if you have an army of engineers around to fix things when they break, things still break.
I think the point is that for Amazon it's their own code, and they pay full-time staff to be familiar with the codebase, make improvements, and fix bugs. OpenStack is a product. The people deploying it are expected to be knowledgeable about it as users / "system integrators", but not as developers. So when the abstraction leaks, and for OpenStack the pipe has all but burst, it becomes a mess. They're not expected to be digging around in the internals, and they have 5 other projects to work on.
The reason there were so many commercial distributions of OpenStack was that setting it up reliably end to end was nearly impossible for most mere mortals.
Companies like Metacloud or Mirantis made a ton of money with little more than OpenStack installers, a good out-of-the-box default config, and some solid monitoring and management tooling.
CERN is the biggest scientific facility in the world, with a huge IT group and their own IXP. Most places are not like that.
Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
> Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer research with HPC, and we used OpenStack across several dozen hypervisors to run a lot of infra/services instances/VMs.
I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues with Proxmox; I personally wouldn't want to do it (though I'm sure many have). Next, if you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this (especially if there are chargebacks, or just general tracking/accounting; there's a rough sketch of that below).
I'm happily running some Proxmox now, and wouldn't want to go beyond a dozen hypervisors or so. At least not in one cluster: that's partially what PDM 1.0 is probably about.
I have run OpenStack with many dozens of hypervisors (plus dedicated, non-hyperconverged Ceph servers) though.
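For the tenancy/chargeback angle, this is roughly what per-project quota management looks like with openstacksdk. A minimal sketch only: the cloud entry, project name, and limits are all made up for illustration.

    # Sketch: carve out a quota-limited project for one internal group.
    # Assumes a clouds.yaml entry named "internal-cloud" with admin creds;
    # every name and number here is illustrative.
    import openstack

    conn = openstack.connect(cloud="internal-cloud")

    # Create (or reuse) a project for the group.
    project = conn.get_project("ml-team") or conn.create_project(
        name="ml-team",
        domain_id="default",
        description="ML group, tracked for chargeback",
    )

    # Cap what the group can consume (RAM quota is in MB).
    conn.set_compute_quotas(project.id, cores=256, ram=1024 * 1024, instances=64)
    conn.set_volume_quotas(project.id, gigabytes=20000, volumes=200)

    # Later, pull the quota numbers back out for accounting.
    print(conn.get_compute_quotas(project.id))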
I love Proxmox as a virtual server manager - I can't imagine running anything else as a base for a homelab. Free, powerful, VMs or CTs operating quickly, graphical shell for administration, well documented and used, ZFS is a first class citizen.
I've kind of wanted to build a three node cluster with some low end stuff to expand my knowledge of it. Now they have a datacenter controller. I'd need to build twice as many nodes.
Question: Does anyone know large businesses that utilize proxmox for datacenter operations?
Yes!
In this great video from Level1Techs, Wendell walks around a brand new AI GPU datacenter, and an engineer explains what they use for all the normal stuff :-)
The company I work for is migrating a few hundred VMware hosts to Proxmox due to licensing and cost considerations. In our case, since most of the hosts are not clustered, the migration process is quite straightforward. The built-in migration tool proves to be exceptionally effective.
Both my current org and previous org (large) have mentioned it many times as an option, but both ended up choosing other commercial alternatives: Hyper-V and XenServer.
I think the missing datacenter manager was causing a lot of hesitation for those that don't manage via automation.
I run roughly 30 PVE hosts across several customers (all ex-VMware). A few more to migrate.
You can migrate a three node cluster from VMware to PVE using the same hardware if you have a proper n+1 cluster.
iSCSI SANs don't (yet) do snapshots on PVE. I did take a three node Dell + flash SAN setup, added a temporary box with rather a lot of RAM and disc (ZFS), pulled the SSDs out of the SAN, and whistled up a Ceph cluster on the hosts.
Another customer, I simply migrated their elderly VMware based cluster (a bit of a mess with an Equallogic) to a smart new set of HPEs with flash on board - Ceph cluster. That was about two years ago. I patched it today, as it turns out. Zero downtime.
PVE's high availability will auto evacuate a box when you put it into maintenance mode, so you get something akin to VMware's DRS out of the box, for free.
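If you'd rather script that sort of evacuation than click through the UI, the API makes it straightforward. A minimal sketch with the proxmoxer Python library; the hostname, token, and node names are placeholders, not anything from a real cluster:

    # Sketch: live-migrate all running VMs off one PVE node onto another.
    # Assumes an API token with enough privileges; all names are placeholders.
    from proxmoxer import ProxmoxAPI

    px = ProxmoxAPI(
        "pve1.example.com",
        user="root@pam",
        token_name="drain-script",
        token_value="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        verify_ssl=True,
    )

    source, target = "pve1", "pve2"

    for vm in px.nodes(source).qemu.get():
        if vm["status"] != "running":
            continue
        # online=1 asks for a live migration; the call returns a task ID to poll.
        upid = px.nodes(source).qemu(vm["vmid"]).migrate.post(target=target, online=1)
        print(f"migrating VM {vm['vmid']}: {upid}")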
PDM is rather handy for the likes of me that have loads of disparate systems down the end of VPNs. You do have to take security rather seriously and it has things like MFA built in out of the box, as does PVE itself.
PVE and PDM support ACME too and have done for years. VMware ... doesn't.
I could go on somewhat about what I think of "Enterprise" with a capital E software. I won't but I was a VMware fanboi for over 20 years. I put up with it now. I also look after quite a bit of Hyper-V (I was clearly a very bad boy in a former life).
We run Proxmox on a bunch of hardware servers, but for "homelab" we use Ubuntu on ZFS + an Incus cluster. What I'm looking at is IncusOS: a radically new approach to the base cluster OS: no SSH, no configuration. So far it looks too radical, but eventually I see that as the only way to go for somebody who has a "zoo" of servers behind Tailscale: just a base OS which upgrades safely, immutable and encrypted, without any unique configuration. The vision looks beautiful.
A vCenter runs one or more datacentres, but only for one organisation or org umbrella. A PDM can connect to and control multiple "trusting" parties.
I (we) have several customers with PVE deployments and VPNs etc to manage them. PDM allows me to use a single pane of glass to manage the lot, with no loss of security. My PDM does need to be properly secured and I need to ensure that each customer is properly separated from each other (minimal IPSEC P2s and also firewall ingress and egress rules at all ends for good measure).
I should also point out that a vCenter is a Linux box with two Tomcat deployments and 15 virty discs. One Tomcat is the management and monitoring system for the actual vCenter effort. Each one is a monster. Then you slap on all the other bits and pieces - their SDN efforts have probably improved since I laughed at them 10+ years ago. VMware encourages you to run a separate management cluster, which is a bit crap for any org sub say 5000 users.
PDM is just a controller of controllers and that's all you need. Small, fast and a bit lovely.
PVE can be a cluster of nodes that you can still manage via the same UI. ESXi can't do that: the ESXi UI is a single node, and doesn't even cover everything a single node can do once vCenter is added.
Just migrated from xcp-ng 7 to Proxmox 9.1 for a client this week.
Honestly the whole process was incredibly smooth, loving the web management, native ZFS. Wouldn't consider anything else as a type 1 hypervisor at this stage - and really unless I needed live VM migrations I can't see a future where I'd need anything else.
Managed to get rid of a few docker cloud VPS servers and my TrueNAS box at the same time.
I'd prefer if it was BSD based, but I'm just getting picky now.
Budget-sensitive client that didn't want to pay for the XCP-ng tools needed in version 8, and the server needed a hardware upgrade from SSDs to NVMe drives anyway, so we just ripped the bandaid off at the same time.
It does three things: it adds a viewport meta tag for proper mobile scaling, prevents long words/URLs from breaking the page layout, and disables automatic font size adjustment on Safari in landscape mode.
Ex-XCP-ng user here. The web management portal requires Xen Orchestra and needs to be installed as a separate VM, which can be irritating, with a separate paid license. Proxmox has a web GUI natively on install, which is super convenient and pretty much free for 90% of use cases.
Yup, I have two Xen Orchestras running on different VM clusters in different DCs, managing about 8 pools (some on all the time, some in vehicles which are sometimes on, sometimes off), all open source, works well enough.
I don't change the pools enough to make it worth automating the management.
I've heard good things about XCP-ng as well and tried it out at home, and Proxmox seems much easier to use out of the box. Not saying XCP-ng is bad, just that it wasn't as intuitive to me as Proxmox was when we were moving away from VMware.
K8S doesn't scale nearly as well due to etcd and latency sensitivity. Multi-site K8S is messy. The whole K8S model is overly-complex for what almost any org actually needs. Proxmox, Incus, and Nomad are much better designed for ease of use and large scale.
That said, I still run K8S in my homelab. It's an (unfortunately) important skill to maintain, and operators for Ceph and databases are worth the up-front trouble for ease of management and consumption.
Multi-site k8s is also very "interesting" if you encounter anything like variable latency in your network paths. etcd is definitely not designed for use across large distances (more than a 10km single-mode fiber path).
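If you want to put a rough number on that, timing plain writes is enough to see it: every etcd write has to be acknowledged by a quorum of members, so inter-member latency lands directly on each request. A crude sketch with the python etcd3 client; the endpoint is a placeholder:

    # Crude check: time a batch of etcd writes and print p50/p99.
    # Quorum acknowledgement means WAN latency between members shows up here.
    import time
    import etcd3

    client = etcd3.client(host="etcd-test.example.com", port=2379)

    samples = []
    for i in range(50):
        start = time.perf_counter()
        client.put(f"/latency-test/{i}", "x")
        samples.append((time.perf_counter() - start) * 1000)

    samples.sort()
    print(f"p50={samples[25]:.1f} ms  p99={samples[49]:.1f} ms")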
Maintaining an external database as a replacement takes you off the blessed path, is its own hassle to keep highly available, and tarnishes the shiny hyperconvergence story. I'd be a lot more interested if Kine offered an embedded HA database like YugabyteDB, CockroachDB, TiDB, etc.