
MESA!

~16 min read by naphtha, 2026-04-29

It's here. It's finally here. Mesa is here. After countless delays, it's here.

What's Mesa?

Mesa is THE best way to host any software*. Full stop.

*that can run in Linux containers

It's pretty good. Keep reading. But more about Mesa later. First,

The Boring Stuff

The Special Maintenance Operation

This update included major changes to our internal hypervisor networking, storage and general VM-related code. We tested them in staging, I tested the migration against a VM running PVE, and I looked at the code up, down, and sideways. This should've been a smooth migration, MAYBE a few hours of fixing whatever breaks. It was anything but. To top it all off, I got a really bad flu in the middle of it.

The migration took about 5 days from start to finish, during which the VM networks and dashboard were intermittently unreachable for a large subset of users, plus a few more days of fixing smaller issues with Bricks and guests left in weird states. I did not expect the migration to be this difficult, but it's done now, and we thank you for your patience. Extra special thanks to all the good people who actually helped me with troubleshooting (instead of spamming me with "bro just use claude instead of codex" and "when my vm up eta pls?").

For the people thinking "why no rollback?": I did take backups of all relevant configs in case something went horribly wrong, but everything that did go wrong was (eventually) fixable with hotfix patches, so at first I figured I'd rather get this over with now than deal with many small but annoying service issues down the line. On about the second day, the sunk cost fallacy kicked in: "just one more fix until it's really done". One more fix turned into 500, each one slightly moving the needle, until I said fuck it and took everything offline to rebuild clean on a shared stable base, which was what eventually let me fix everything.

For the people that review bombed us, spammed us and reported us to the cops (lol), I don't know how many more times we have to say this: do NOT use Kyun for anything that earns you money without a verified, up-to-date backup that you can roll over to at a moment's notice. If you choose to ignore our advice, you have no right to abuse and harass me and my team.

For the people that understand what Kyun is but are still disappointed by how I handled this, my sincere apologies. Lessons learned:

Bye Bye PVE

We've completely moved Kyun off Proxmox VE and onto a custom solution built on our own QEMU wrapper library.

This was a necessary step in the right direction. PVE has served us well, but let's be real, it's a mess of Perl scripts with a UI framework from 2006 on top, too many features, too many moving parts, too much attack surface. It works for an amateur homelab, not so much for a supposedly serious enterprise. We can also now manage our internal network and storage however we want, not how PVE wants.

Since we use TypeScript, Elysia and Eden end-to-end in Kyun across the entire 200k+ LOC codebase, and PVE couldn't be further from that, it has always been the weak link and the source of many, many bugs. Bugs will, of course, still exist, but they will be much easier to catch early, debug and fix.

Bricks

Bricks are now 5 EUR/TB/mo, down from 7 EUR.

Underlying improvements:

Etc

We've (almost) completely rewritten most of Kyun, especially the backend, in order to create a stable foundation that allows us to expand in the future. Please bear with us while we continue to squash the remaining bugs. Word of the day: "make it exist first, you can make it perfect later"

The Stock Situation

I know that Kyun has been low on stock for a long time, and launching such a big new service is stupid if most people aren't going to be able to try it out because there's no stock, but my hand was forced. I tried, honest.

About 2 months ago, I bought a new server, all in, and delivered it myself, personally, to a provider at a datacenter near Amsterdam. Drove 20 hours. Dropped it off and left it for them, thinking "this is a reputable company, they must be capable of plugging in 4 cables and doing some basic network configuration they've done a million times before, right?" 6 weeks later, despite me asking them and pressuring them many times, I still had nothing.

So I'd had it with them: I asked for my money and my server back, and went with the second-best option I could find near that datacenter. Paid them a shit ton of money just for the guarantee that they'd personally pick the server up from the first datacenter and have it set up in theirs within 7 days.

About 4 more weeks later, I'm still waiting.

I have other plans to bring in more stock at a steady pace, but it's hard not to feel defeated.

Back 2 Mesa

Maybe you're just a regular, everyday, normal motherfucker. You've watched Linus Tech Tips and Mental Outlaw a few too many times and you think it would be a good idea to stop giving Google and Apple control over all of your personal data, so you want to self-host your own services and take back control over said data, but you don't know what SSH is and you have more important things to do with your time (like having sex, or going outside).

Or maybe you actually know what you're doing. Your walls are covered with tin foil, you run Qubes behind 20 different proxies, connected to McDonald's WiFi 20km away via a Yagi, you write kernels and compilers for fun, you COULD deploy anything you want manually, but you just want a nice pretty (yet powerful) centralized dashboard where everything lives under one roof instead of spread across 50 terminals.

Mesa aims to appeal to both of these groups in equal measure.

"So it's like Coolify / Portainer / CapRover?"

No. Those tools give you a panel bolted onto a VM you still have to provision, secure, update, and babysit. You're still renting a full Linux box with 900 packages you don't need and a huge attack surface. Plus, they don't have The Kyun Touch(TM).

Mesa stacks run on a custom ultra-minimal Alpine build that has no SSH, no package manager, no nothing. For you, there's no "underlying server" to worry about, kitten. You don't even pay for the OS disk - the storage you buy is only for your containers to use, not the OS. It's not "a VM with a panel." It's just your stack, running, with nothing else.

"OK but what does it actually do?"

You pick a recipe (or build your own stack from scratch), fill in the blanks in a GUI - your domain, recipe parameters, your environment variables - and hit deploy. That's it. Mesa handles:

Ready-to-go recipes

We have 45 official recipes ready to deploy right now, including:

Every recipe is fully customizable. And if you want to launch something that isn't in the recipe list, Mesa lets you build a stack from scratch.

Mesa is currently in open beta. Expect things to not work perfectly; that's OK. Please report issues to us and we'll fix them.

Technical Details

Docker Hub Cache

Docker Hub pulls are proxied through a per-datacenter global cache, so instead of going out over the Internet to download commonly pulled images, the request never leaves the LAN.
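How Mesa wires this up internally isn't documented, but conceptually it's the standard pull-through mirror pattern. As a sketch, here's how a podman host could redirect Hub pulls to an in-datacenter cache using `registries.conf`; the cache hostname is made up:

```toml
# /etc/containers/registries.conf (hypothetical host config)
# Pulls for docker.io images try the local in-datacenter mirror first,
# falling back to Docker Hub if the mirror is unavailable.
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "hub-cache.dc.internal"
```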

Recipes

A recipe is a declarative stack definition that describes everything needed to run a multi-container application: container images, environment variables, storage bindings, healthchecks, dependencies between containers, and setup scripts.

Parameters are typed (string, email, url, password, domain, number, boolean, select, list) with validation rules, so the GUI can generate the right input fields and catch mistakes before anything gets deployed. Environment variables can reference other values (managed secrets, parameters, IP, etc.).
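The actual Mesa schema isn't public, so the type and function names below are illustrative, but a typed parameter system along these lines is straightforward to sketch with a discriminated union:

```typescript
// Hypothetical sketch of typed recipe parameters (a subset of the
// kinds listed above); not Mesa's real schema.
type Param =
  | { kind: "string"; minLength?: number }
  | { kind: "number"; min?: number; max?: number }
  | { kind: "email" }
  | { kind: "select"; options: string[] };

// Validate a raw GUI input value against its declared parameter type,
// so mistakes are caught before anything gets deployed.
function validate(param: Param, value: string): boolean {
  switch (param.kind) {
    case "string":
      return value.length >= (param.minLength ?? 0);
    case "number": {
      const n = Number(value);
      return (
        !Number.isNaN(n) &&
        n >= (param.min ?? -Infinity) &&
        n <= (param.max ?? Infinity)
      );
    }
    case "email":
      return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
    case "select":
      return param.options.includes(value);
  }
}
```

Because the union is discriminated on `kind`, the GUI can also switch on the same field to decide which input widget to render.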

Storage

Stacks can declare storage bindings, and any binding can be flagged as brickable, meaning you can attach a Brick to it for persistent, detachable storage.

Mesa Recipes vs Compose

The recipe format is similar to Compose in spirit, but it's typed and built for hosted deployments rather than local dev environments. Some key differences:

Open Source

Both core components of Mesa are published as separate repositories under the PolyForm Noncommercial License (meaning you can leave Kyun and take your toys with you, but you can't make money off it):

The recipe format itself is open and documented. If you've written a compose.yaml before, you already know most of what you need. A recipe is a TypeScript file that declares containers, parameters, mounts, and setup scripts, and compiles down to a stack.json. The plan is to eventually let you write your own recipes and share them with other Kyun users (or keep them for yourself), with capability to convert your existing Compose files to Mesa recipes.
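To make the "TypeScript file that compiles down to a stack.json" idea concrete, here's a minimal sketch. The `Recipe` shape, `compile` function, and `brickable` mount flag are assumptions based on the features described in this post, not the published format:

```typescript
// Illustrative only: the real recipe API surface may differ.
interface Recipe {
  name: string;
  containers: {
    name: string;
    image: string;
    env?: Record<string, string>;
    // Mounts flagged brickable can have a Brick attached for
    // persistent, detachable storage.
    mounts?: { path: string; brickable?: boolean }[];
  }[];
}

// "Compile" a recipe object down to the stack.json payload that the
// hypervisor side would consume.
function compile(recipe: Recipe): string {
  return JSON.stringify({ version: 1, ...recipe }, null, 2);
}

const redis: Recipe = {
  name: "redis",
  containers: [
    {
      name: "redis",
      image: "docker.io/library/redis:7",
      mounts: [{ path: "/data", brickable: true }],
    },
  ],
};

const stackJson = compile(redis);
```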

Testing

Before deploying to production, both the base image and all recipes are heavily tested in CI. None of this is mocked. The tests boot real VMs, with real QEMU, running real containers.

mesa-base runs a full integration test suite against the built QCOW2 image. The CI pipeline builds the image, boots it in a QEMU VM, connects over the Mesa Control Protocol, and runs a bunch of tests that confirm the image is ready for production (podman works, network works, known good recipe provisions successfully, etc).

mesa-recipes goes even further. Every recipe is provisioned from scratch inside a real VM, with the exact recommended specs. Containers are pulled, configure scripts run, and healthchecks are validated.

On top of provisioning, recipes with web UIs get automated screenshot capture. A headless Chromium instance connects to the running service, and an LLM navigates the UI, clicking through pages, filling in forms, exploring the application, while capturing screenshots at each step. These screenshots are used for the recipe gallery on the website. This also doubles as a functional E2E test: if the LLM can't navigate the UI, something is broken.

Recipes are SHA-hashed so unchanged recipes are skipped on re-runs. The daily update pipeline only re-tests recipes whose container images actually changed.

Special Thanks

Many thanks to:

What's Next

From a software standpoint, Mesa will keep being iterated on heavily until the first non-beta release. Danbo will take a backseat for now, since I feel we've got it in a near-perfect state, aside from any bugs that still need to be fixed.

Here is a list of ideas that I'd like to get into Kyun in the future. (also, some free game for the competition):

Pre-Orders

If a location is currently out of stock, you can pay in advance to reserve a spot, either:

With the second option, you help us speed up the purchase of new nodes and accelerate the growth of Kyun. In exchange, you'll receive a flat discount for however long you commit, on top of the regular commitment discount.

Say the new-node pre-order discount is 10% and the discount for committing to a year is 20%. You're pre-ordering a Danbo with 4 cores, 50 GB SSD and 8 GiB RAM, normally 20 EUR/mo. The discounts stack multiplicatively, so pre-ordering on a future node and committing for a year takes the yearly price of that Danbo from 240 EUR down to 240 × 0.90 × 0.80 = 172.80 EUR, netting you total savings of 67.20 EUR (siiix seeveennnnn).
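The worked example, as code. The assumption here (consistent with the 172.80 EUR figure) is that the discounts apply multiplicatively, not as a flat 30% off:

```typescript
// Price after applying a chain of multiplicative discounts,
// rounded to cents. E.g. [0.10, 0.20] means 10% off, then 20% off.
function stackedPrice(
  monthly: number,
  months: number,
  discounts: number[],
): number {
  const base = monthly * months;
  const total = discounts.reduce((price, d) => price * (1 - d), base);
  return Math.round(total * 100) / 100;
}

const price = stackedPrice(20, 12, [0.1, 0.2]); // 172.8
const savings = Math.round((240 - price) * 100) / 100; // 67.2
```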

You can back out of the pre-order at any point before fulfillment and get a refund to your balance.

Live Migrate

Live migrate across datacenters, or migrate to another node of your choosing in the same datacenter, while keeping your service up and accessible during the migration. (of course, cross-datacenter migration will involve an IP change when the cutover happens)

Workspaces

Workspaces: node-based system for grouping and managing resources together logically in a GUI. Example:

Perks

Achievement-style system, where you receive stacked discounts and special support priority based on your account history. Examples:

Kyun-Managed Firewall

Port knocking, restrict access to ports based on source IP, block rudimentary resource exhaustion DoS attempts, generic internet noise (scanners, bruteforce bots), Shodan/Censys probes, and much more, all managed from your dashboard and handled by the hypervisor instead of the VM.

AI Focus

Get Kyoko to set up and/or maintain your Danbo or a custom Mesa stack, completely autonomously, for a flat price.

A "Kyun AI Gateway" type service is also an idea we'll explore, where you can use your Kyun balance to pay for access to various models, either through API or with our own GUI.