Join 2M+ Professionals Getting Ahead on AI
Keeping up with AI shouldn't feel like a second job.
But between the new tools, viral posts, and endless hot takes, most people spend hours every week trying to figure out what actually matters.
The Rundown AI fixes that.
It's a free newsletter that gives you the AI news, tools, and tutorials you actually need to know. All in just 5 minutes a day.
Over 2M professionals at companies like Apple, Google, and NASA already read it every morning to stay ahead.
Plus, if you complete the quiz after signing up, they'll recommend the best tools, guides, and courses for your specific job and needs.
Hey {{first name | there}},
Divine here. I decided to switch things up this week because I came across a post on Reddit that reminded me a lot of Jubril.
I felt it was important that someone without a 7-node Proxmox cluster write about it. That person is me. I have a laptop and strong opinions.
Housekeeping:
To make sure you don’t miss future emails, here are two quick GIFs showing how to move this email to your Primary tab and add this address to your contacts.


Here’s the post.

See the thread
The setup is called “Pfannkuchen” — German for pancakes — and it’s clearly the result of years of work.
Seven nodes, enterprise Xeon hardware, a Dell PowerStore SAN, WireGuard tunnels, Grafana, Kubernetes, self-hosted Matrix, Forgejo as a source of truth for compose files. On paper, it reads like a small production environment.
The dashboard screenshot shows 1% CPU utilisation across 300 threads. Node 6 is dedicated entirely to Emby with 512GB of RAM. Node 1, running Grafana, Prometheus, and a wiki, has 768GB of RAM and 48 threads.
One commenter estimated the whole thing draws around 3kW continuously. Another pointed out the workload would likely fit on a single machine, pulling a fraction of that.
It’s a straightforward observation, but it comes up a lot in threads like this.
There’s a version of homelab culture where building and collecting start to look the same from the outside. The setups get more elaborate, the specs get higher, and it becomes harder to tell what is being exercised versus what is just available.
What stood out more here was the Butler API.
It takes an API call with node, IP, hostname, cores, memory, and disk specs, builds a bootable ISO with cloud-init config, creates and starts a Proxmox VM, waits for SSH, then runs an Ansible playbook to configure Docker, services, and backups.
End-to-end VM provisioning in about ten minutes.
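The post doesn't share the Butler code itself, so here's a hypothetical sketch of what the request body for that kind of provisioning endpoint might look like. The field names are assumptions based on the post's description (node, IP, hostname, cores, memory, disk), not the actual API.

```python
import json

def build_provision_request(node, ip, hostname, cores, memory_gb, disk_gb):
    """Build the JSON body for a hypothetical Butler-style provisioning call.

    Field names are guesses from the post's description, not the real API.
    """
    return json.dumps({
        "node": node,            # Proxmox node the VM is created on
        "ip": ip,                # static IP baked into the cloud-init config
        "hostname": hostname,
        "cores": cores,
        "memory_gb": memory_gb,
        "disk_gb": disk_gb,
    })

# Example: request an 8-core, 32GB VM on a node called "pve1"
payload = build_provision_request("pve1", "10.0.0.42", "media01", 8, 32, 200)
```

From there, the service described in the post would build the bootable ISO, create and start the VM, wait for SSH, and hand off to Ansible.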
That kind of automation usually comes from repetition: doing the same setup enough times that scripting it becomes the easier option. The “What I Learned” section reflects that as well.
Git as the source of truth for Docker compose files. VMs over LXC for portability. An external reverse proxy on a VPS instead of dealing with home NAT. Backup monitoring as a first-class concern.
These are the kinds of decisions that tend to come from things breaking at inconvenient times.
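For context on the external reverse proxy pattern: a minimal nginx sketch for a VPS that proxies a public hostname back to a home service over a WireGuard tunnel. The hostname, upstream address, and certificate paths are placeholders, not details from the post.

```nginx
server {
    listen 443 ssl;
    server_name media.example.com;  # public hostname terminating on the VPS

    ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

    location / {
        # 10.10.0.2 is the homelab end of the WireGuard tunnel
        proxy_pass http://10.10.0.2:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The appeal is that nothing at home needs a port forward or a stable public IP; the VPS is the only thing exposed to the internet.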
The hardware is what draws attention, but it’s not the only story here. There’s a fully automated provisioning pipeline, a consistent way of managing services, and a clear attempt to treat the setup as a system rather than a collection of containers.
You sometimes see similar patterns at work. Internal tooling grows, deployment pipelines become more layered, and over time, it becomes less obvious whether the complexity maps cleanly to the problem or to how the system evolved.
This setup sits somewhere in between, which is probably where most of them end up.
Meanwhile, Jubril reviewed this draft. He didn't say much. He just asked me three separate times what the RAM spec on Node 6 was. I'm choosing not to read into that.
If this sounds familiar, share this link with that colleague or fellow DevOps engineer with suspiciously large electricity bills. If they’re pricing enterprise Xeon chips for a home setup, share it twice.
Stop Drowning In AI Information Overload
Your inbox is flooded with newsletters. Your feed is chaos. Somewhere in that noise are the insights that could transform your work—but who has time to find them?
The Deep View solves this. We read everything, analyze what matters, and deliver only the intelligence you need. No duplicate stories, no filler content, no wasted time. Just the essential AI developments that impact your industry, explained clearly and concisely.
Replace hours of scattered reading with five focused minutes. While others scramble to keep up, you'll stay ahead of developments that matter. 600,000+ professionals at top companies have already made this switch.
Until next time,
Divine Odazie
CEO, EverythingDevOps