Building this Website on Git Push¶
Introduction¶
In the spirit of my recent fascination with self-sovereignty and decentralization, I will replace the fancy, friendly, mature, reliable GitHub Pages with a hack.
As a side note, I like GitHub Pages: the service democratized static website hosting and made it easily approachable for many developers. GitHub Actions allow flexible builds outside of the default Jekyll system. For example, I'm using a Makefile with SphinxDocs, and I'm quite happy with it. And all of it is completely free. But this article is not about GitHub Pages; this is about an alternative.
The hack is to run a Debian VM on my home server with a git repo and post-receive hook that builds a static website. The built artifact is then committed to another repo, which is pushed to a Cloud VPS. I could’ve simplified the setup by building the website directly on the VPS, the problem is that the VPS is so tiny, I doubt it can handle the build process.
The website build process has gotten pretty involved over the years, since I'm hoarding all my petty experiments for no good reason. The builder needs Python, graphviz, NodeJS, and a customary imperial tonne of npm packages. On the bright side, the full build process is just a single make command.
I’ll keep the VM running at all times and preserve the build files so it can run incrementally. I’ll also ship the build artifact through git, which should be much faster than uploading the complete artifact every time.
And I’ll have a mirror VPS to open a can of HTTPS certificate synchronization worms.
Virtual Machine¶
My favorite way of running virtual machines is with Vagrant and KVM.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "debian/trixie64"
  config.vm.synced_folder "./", "/vagrant", type: "virtiofs"
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "kvm"
    libvirt.uri = 'qemu:///system'
    libvirt.cpus = 2
    libvirt.memory = "2048"
    libvirt.memorybacking :access, :mode => "shared"
  end
  config.vm.provision "shell", path: "provision_builder.sh"
end
Upon creation (vagrant up) it immediately runs the provisioning script that sets everything up.
The script is reentrant, because I had to iterate a bit to iron out all the kinks.
provision_builder.sh:
#!/bin/bash
set -euo pipefail
apt-get update
apt-get install -y \
    curl \
    build-essential \
    git \
    python3-venv \
    python-is-python3 \
    graphviz \
    nodejs \
    npm \
    rsync
tailscale status || ( \
    mkdir -p /etc/apt/keyrings \
    && curl -fsSL https://pkgs.tailscale.com/stable/debian/trixie.noarmor.gpg > /usr/share/keyrings/tailscale-archive-keyring.gpg \
    && curl -fsSL https://pkgs.tailscale.com/stable/debian/trixie.tailscale-keyring.list > /etc/apt/sources.list.d/tailscale.list \
    && apt-get update \
    && apt-get install -y tailscale \
    && tailscale login \
    && tailscale up \
)
id builder || useradd -rms /usr/bin/git-shell builder
install -o builder -g builder -m 0700 -d ~builder/.ssh
install -o builder -g builder -m 0600 /dev/stdin ~builder/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYxOnUHnt2KZ8kdjYjO/xWflaFKxXXJLv6V8/TiXgow8L+QdFmcEJ/NRdR6/LVLEwiJ5h9l26mY8XxlpVAIY43NqbhPUdBp6SoeX2tpHFQa4R1i7coO3bO1sjAVqeTmTby4iROtWZ89OEsqYnWyYco4py+sn6X+h8TDRIbrl2zYQI9IwK8O2UJTV9qT2Vy4s4fitLTeO6AI7935OsrLzXV+iaGGmhoUfpZcHZ5I9puaaTOyxuJ3q4nA0PNiZ9Lw7+TYOo73eXPA+qRrsvEy6b6x3+izyj4WX31YSklksw5CX+jjc23d7muV8cHFaoO1GkueVYyve8ncqy0dGn9CiDQudVqUyhqkF49MvWO1Hjg9SeidaKGqalh0Pv8RJquTJ8aUXcVS9GwCmYu+/JfBVcCGYKEpcwrLOt/iYa9iHCsImb/wlO08n3R+HBIF4At0Jxgd4wWM8ZhSXoA2UjCBojZwcWLPuS+S/zplFgi3stv+mkfEf9WDQo1g5bueFJ+gK8= peterdemin@MBA
EOF
test -d ~builder/repo.git || sudo -u builder -s /bin/bash -c "git init --bare ~builder/repo.git"
test -d ~builder/pages.git || sudo -u builder -s /bin/bash -c "git init --bare ~builder/pages.git"
test -d ~builder/infra.git || sudo -u builder -s /bin/bash -c "git init --bare ~builder/infra.git"
test -d ~builder/venv || sudo -u builder -s /bin/bash -c "python3 -m venv ~builder/venv"
test -f ~builder/.ssh/id_ed25519 || sudo -u builder -s /bin/bash -c 'ssh-keygen -t ed25519 -f ~builder/.ssh/id_ed25519 -N ""'
echo "Copy public key to serving host:"
sudo -u builder -s /bin/bash -c 'ssh-keygen -yf ~builder/.ssh/id_ed25519'
echo
sudo -u builder /bin/bash -c 'git config --global user.email "builder@demin.dev"'
sudo -u builder /bin/bash -c 'git config --global user.name "Builder Bot"'
sudo -u builder /bin/bash -c 'git config --global init.defaultBranch master'
sudo -u builder /bin/bash -c 'ssh-keyscan -t ed25519 demin-dev.tail13c89.ts.net > ~/.ssh/known_hosts'
sudo -u builder /bin/bash -c 'ssh-keyscan -t ed25519 mirror.tail13c89.ts.net >> ~/.ssh/known_hosts'
install -o builder -g builder -m 0700 -d ~builder/worktree
install -m 0755 /dev/stdin ~builder/repo.git/hooks/post-receive <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
HOME=/home/builder
WORK_TREE="$HOME/worktree"
BRANCH="master"
BRANCH_REF="refs/heads/$BRANCH"
read oldrev newrev REFNAME
if [[ "$REFNAME" != "$BRANCH_REF" ]]; then
    echo "Ignoring push to $REFNAME (only deploys $BRANCH_REF)"
    exit 0
fi
git --git-dir="$HOME/repo.git" --work-tree="$WORK_TREE" checkout -f $BRANCH
cd "$WORK_TREE"
. $HOME/venv/bin/activate
make install lightweight compress
python3 infra/cli.py builder-publish build/html
EOF
And that’s how I cut my website publish time from 5 minutes down to 10 seconds.
Setting up the serving host is similar, except that instead of building, it just needs to check out.
provision_pages.sh:
#!/bin/bash
set -euo pipefail
apt-mark hold google-cloud-cli google-cloud-cli-anthoscli google-guest-agent google-osconfig-agent
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.tailscale.com/stable/debian/trixie.noarmor.gpg > /usr/share/keyrings/tailscale-archive-keyring.gpg
curl -fsSL https://pkgs.tailscale.com/stable/debian/trixie.tailscale-keyring.list > /etc/apt/sources.list.d/tailscale.list
apt-get update
apt-get install -y \
    ca-certificates \
    screen \
    lsof \
    python3-venv \
    python-is-python3 \
    nginx \
    git \
    age \
    tailscale
tailscale status || tailscale login
if [ ! -e /usr/bin/certbot ]; then
    python3 -m venv /opt/certbot/
    /opt/certbot/bin/python3 -m pip install certbot certbot-nginx
    ln -s /opt/certbot/bin/certbot /usr/bin/certbot
fi
id pages || useradd -rms /usr/bin/git-shell pages
install -o pages -g www-data -m 0750 -d /var/www/pages
install -o pages -g pages -m 0755 -d /var/lib/infra
install -o pages -g pages -m 0700 -d ~pages/.ssh
install -o pages -g pages -m 0600 /dev/stdin ~pages/.ssh/authorized_keys <<'EOF'
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG64GMcBIxl4rGuRum2n07Kf7dE9CUlzLl84e/TWvTM builder@trixie
EOF
test -f ~pages/.ssh/id_ed25519 || sudo -u pages /bin/bash -c 'ssh-keygen -t ed25519 -f ~pages/.ssh/id_ed25519 -N ""'
echo | install -o pages -g pages -m 0600 /dev/stdin ~pages/.ssh/known_hosts
echo "Copy public key to infra/keys/ directory:"
sudo -u pages /bin/bash -c 'ssh-keygen -yf ~pages/.ssh/id_ed25519'
echo
test -d ~pages/pages.git || sudo -u pages /bin/bash -c "git init --bare ~pages/pages.git"
install -m 0755 /dev/stdin ~pages/pages.git/hooks/post-receive <<'EOF'
#!/bin/bash
exec git --git-dir="/home/pages/pages.git" --work-tree="/var/www/pages" checkout -f master
EOF
test -d ~pages/infra.git || sudo -u pages /bin/bash -c "git init --bare ~pages/infra.git"
install -m 0755 /dev/stdin ~pages/infra.git/hooks/post-receive <<'EOF'
#!/bin/bash
exec git --git-dir="/home/pages/infra.git" --work-tree="/var/lib/infra" checkout -f master
EOF
install -m 0644 /dev/stdin /etc/systemd/system/infra-apply.path <<'EOF'
[Unit]
Description=Watch infra checkout for changes
[Path]
PathChanged=/var/lib/infra/keys/primary.pub
PathChanged=/var/lib/infra/keys
PathChanged=/var/lib/infra/challenges
PathChanged=/var/lib/infra/certs
[Install]
WantedBy=multi-user.target
EOF
install -m 0644 /dev/stdin /etc/systemd/system/infra-apply.service <<'EOF'
[Unit]
Description=Apply infra config
[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /var/lib/infra/cli.py apply
EOF
systemctl daemon-reload
systemctl enable --now infra-apply.path
# 8< - - - - - Abort if nginx config already exists - - - - -
test -f /etc/nginx/sites-available/pages && exit 0
rm -f /etc/nginx/sites-enabled/default
cat > /etc/nginx/sites-available/pages <<'EOF'
server {
    server_name peter.demin.dev;
    root /var/www/pages;
    index index.html index.htm;
    location / {
        gzip_static on;
        try_files $uri $uri/ =404;
    }
}
EOF
ln -fs /etc/nginx/sites-available/pages /etc/nginx/sites-enabled/pages
certbot --agree-tos --nginx -m peter@demin.dev --non-interactive -d peter.demin.dev
systemctl restart nginx.service
screen -dm /bin/sh -c "apt remove --allow-change-held-packages -y google-cloud-cli google-cloud-cli-anthoscli google-guest-agent google-osconfig-agent"
I enabled the gzip_static directive in nginx to serve precompressed gzip files and save CPU on the serving side.
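For gzip_static to have anything to serve, the build has to leave a .gz sibling next to each text asset (that's what the compress make target does). A minimal sketch of that precompression step; the build/html path and the file list are assumptions for illustration:

```shell
# Stand-in for the real build output directory.
mkdir -p build/html
printf '<html><body>hello</body></html>\n' > build/html/index.html

# Pre-compress text assets, keeping the originals (-k) so nginx can
# still serve uncompressed files to clients without gzip support.
find build/html -type f \
    \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.svg' \) \
    -exec gzip -kf9 {} +
```

nginx then picks index.html.gz over index.html whenever the client sends Accept-Encoding: gzip.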
All the heavy lifting is handled by a Python script:
infra/cli.py:
#!/usr/bin/env python3
import argparse
import os
import shlex
import shutil
import subprocess
import sys
import tarfile
import tempfile
from pathlib import Path
HERE = Path(__file__).parent
INFRA_DIR = Path("/var/lib/infra")
KEYS_DIR = INFRA_DIR / "keys"
PRIMARY_KEY = KEYS_DIR / "primary.pub"
BUILDER_KEY = KEYS_DIR / "builder.pub"
PAGES_HOME = Path("/home/pages")
AUTHORIZED_KEYS = PAGES_HOME / ".ssh" / "authorized_keys"
MIRRORS_LIST = Path("infra/mirrors.txt")
FORWARD_LIST = Path("infra/forward.txt")
CHALLENGES_DIR = INFRA_DIR / "challenges"
WEBROOT_CHALLENGES = Path("/var/www/pages/.well-known/acme-challenge")
CERTS_DIR = INFRA_DIR / "certs"
GIT_DIR = PAGES_HOME / "repo.git"
KNOWN_HOSTS = Path.home() / ".ssh/known_hosts"
def _ensure_root():
    if os.geteuid() != 0:
        raise SystemExit("Must run as root.")
class ApplyCommand:
    _LOCAL_PUB = PAGES_HOME / ".ssh" / "id_ed25519.pub"

    def add_subparser(self, sub):
        p = sub.add_parser("apply", help="Apply infra config")
        p.set_defaults(handle=self.handle)

    def handle(self, args):
        del args
        _ensure_root()
        if not PRIMARY_KEY.exists():
            print(f"Missing {PRIMARY_KEY}; skipping primary selection.")
            return 0
        if not self._LOCAL_PUB.exists():
            print(f"Missing {self._LOCAL_PUB}; cannot evaluate primary.")
            return 1
        # primary_fp = self._fingerprint(PRIMARY_KEY)
        # local_fp = self._fingerprint(self._LOCAL_PUB)
        # if primary_fp == local_fp:
        #     subprocess.check_call(
        #         ["systemctl", "enable", "--now", "certbot.timer"]
        #     )
        # else:
        #     subprocess.check_call(
        #         ["systemctl", "disable", "--now", "certbot.timer"]
        #     )
        if KEYS_DIR.exists():
            keys = []
            for path in (BUILDER_KEY, PRIMARY_KEY):
                if path.exists():
                    keys.append(path.read_text(encoding="utf-8").strip())
            if not keys:
                print(
                    "No builder/primary keys found in infra/keys; "
                    "authorized_keys not updated."
                )
            else:
                self._write_authorized_keys(keys)
        self._sync_challenges()
        self._sync_certs()
        return 0

    def _fingerprint(self, path: Path) -> str:
        return subprocess.check_output(
            ["ssh-keygen", "-lf", str(path)],
            text=True,
        ).split()[1]

    def _write_authorized_keys(self, keys):
        AUTHORIZED_KEYS.write_text(
            "".join(f"{key}\n" for key in keys), encoding="utf-8"
        )
        os.chmod(AUTHORIZED_KEYS, 0o600)
        subprocess.check_call(["chown", "pages:pages", str(AUTHORIZED_KEYS)])

    def _sync_challenges(self) -> None:
        if not CHALLENGES_DIR.exists():
            return
        WEBROOT_CHALLENGES.mkdir(parents=True, exist_ok=True)
        for path in WEBROOT_CHALLENGES.glob("*"):
            if path.is_file():
                path.unlink()
        for src in CHALLENGES_DIR.glob("*"):
            if not src.is_file():
                continue
            shutil.copyfile(src, WEBROOT_CHALLENGES / src.name)

    def _sync_certs(self) -> None:
        if not CERTS_DIR.exists():
            return
        updated = False
        key_path = Path("/home/pages/.ssh/id_ed25519")
        if not key_path.exists():
            print(f"Missing decryption key: {key_path}")
            return
        for enc in CERTS_DIR.glob("*.tar.age"):
            with tempfile.TemporaryDirectory() as tmpdir:
                out = Path(tmpdir) / enc.name.replace(".age", "")
                try:
                    subprocess.run(
                        [
                            "age",
                            "--decrypt",
                            "-i",
                            str(key_path),
                            "-o",
                            str(out),
                            str(enc),
                        ],
                        check=True,
                    )
                    self._install_certs_from_tar(out)
                    updated = True
                except subprocess.CalledProcessError:
                    print(f"Failed to decrypt cert bundle: {enc}")
                except (OSError, RuntimeError) as exc:
                    print(f"Failed to install certs from {enc}: {exc}")
        if updated:
            subprocess.run(["systemctl", "reload", "nginx"], check=True)

    def _install_certs_from_tar(self, tar_path: Path) -> None:
        with tempfile.TemporaryDirectory() as tmpdir:
            with tarfile.open(tar_path, "r:gz") as tar:
                tar.extractall(tmpdir)
            tmp = Path(tmpdir)
            Command("ls", "-lah")(tmp)
            domain = tar_path.stem.replace(".tar", "")
            live_dir = Path("/etc/letsencrypt/live") / domain
            live_dir.mkdir(parents=True, exist_ok=True)
            fullchain = tmp / "fullchain.pem"
            privkey = tmp / "privkey.pem"
            if not fullchain.exists() or not privkey.exists():
                raise RuntimeError("Missing cert files in archive.")
            shutil.copyfile(fullchain, live_dir / "fullchain.pem")
            shutil.copyfile(privkey, live_dir / "privkey.pem")
            os.chmod(live_dir / "fullchain.pem", 0o644)
            os.chmod(live_dir / "privkey.pem", 0o600)
class Command:
    def __init__(self, *c: str | Path, verbose: bool = False, **kwargs) -> None:
        self._prefix = c
        self._verbose = verbose
        self._kwargs = kwargs

    def runuser(self, user: str) -> "Command":
        return self.__class__(
            *(("runuser", "-u", user, "--") + self._prefix),
            verbose=self._verbose,
            **self._kwargs,
        )

    def subcommand(self, *c: str | Path, **kwargs) -> "Command":
        return self.__class__(
            *(self._prefix + c),
            verbose=self._verbose,
            **(self._kwargs | kwargs),
        )

    def call(self, *c: str | Path, **kwargs) -> int:
        self._print(c)
        return subprocess.call(self._prefix + c, **(self._kwargs | kwargs))

    def __call__(self, *c: str | Path, **kwargs) -> None:
        self._print(c)
        subprocess.check_call(self._prefix + c, **(self._kwargs | kwargs))

    def check_output(self, *c: str | Path, **kwargs) -> str:
        self._print(c)
        return subprocess.check_output(
            self._prefix + c,
            **(self._kwargs | kwargs | {"text": True}),
        )

    def _print(self, c: tuple[str | Path, ...]) -> None:
        if self._verbose:
            print(shlex.join(map(str, self._prefix + c)), file=sys.stderr)
class Mirror:
    def __init__(self, infra: Path = HERE, home: Path = Path.home()) -> None:
        self._infra = infra
        self._home = home
        self.known_hosts = self._home / ".ssh/known_hosts"

    def all_mirrors(self) -> list[tuple[str, str]]:
        return self._load_remotes(self._infra / "mirrors.txt")

    def primary(self) -> tuple[str, str]:
        primary = self._primary_comment()
        mirrors = self.all_mirrors()
        for remote, branch in mirrors:
            if remote.startswith(primary):
                return remote, branch
        return mirrors[0]

    def non_primary_infra(self) -> list[tuple[str, str]]:
        return self._convert_to_infra(self._filter_non_primary(self.all_mirrors()))

    def add_known_host(self, remote: str) -> None:
        host = remote.partition("@")[2].partition(":")[0]
        for line in self.known_hosts.open():
            if line.startswith(host):
                return
        keyscan = Command("ssh-keyscan", "-t", "ed25519")
        key = keyscan.check_output(host) + "\n"
        with self.known_hosts.open("at") as fobj:
            fobj.write(key)

    def forwards(self) -> list[tuple[str, str]]:
        return self._load_remotes(self._infra / "forward.txt")

    def _filter_non_primary(
        self, mirrors: list[tuple[str, str]]
    ) -> list[tuple[str, str]]:
        primary = self._primary_comment()
        primary_idx = 0
        for i, (remote, _) in enumerate(mirrors):
            if remote.startswith(primary):
                primary_idx = i
                break
        return [m for i, m in enumerate(mirrors) if i != primary_idx]

    def _convert_to_infra(
        self, mirrors: list[tuple[str, str]]
    ) -> list[tuple[str, str]]:
        return [
            (r.replace(":pages.git", ":infra.git"), b)
            for r, b in mirrors
            if r.endswith(":pages.git") and b == "master"
        ]

    def _primary_comment(self) -> str:
        return Path(self._infra / "keys/primary.pub").read_text().strip().split()[-1]

    def _load_remotes(self, path: Path) -> list[tuple[str, str]]:
        result: list[tuple[str, str]] = []
        for line in path.open(encoding="utf-8"):
            line = line.strip()
            if line and not line.startswith("#"):
                remote, _, branch = line.partition(" ")
                result.append((remote, branch or "master"))
        return result
class BuilderPublishCommand:
    def __init__(self) -> None:
        bare_git = Command("git", "--git-dir")
        self._pages_git = bare_git.subcommand(Path.home() / "pages.git")
        self._infra_git = bare_git.subcommand(Path.home() / "infra.git")
        self._source_git = bare_git.subcommand(Path.home() / "repo.git")

    def add_subparser(self, sub):
        p = sub.add_parser("builder-publish", help="Push to mirrors")
        p.add_argument("content", help="Directory with content to publish")
        p.set_defaults(handle=self.handle)

    def handle(self, args):
        """
        Publishes content and infra changes to all mirrors.
        Executed from the worktree directory of source repo.
        """
        mirror = Mirror()
        mirrors = mirror.all_mirrors()
        for remote, _ in mirrors:
            mirror.add_known_host(remote)
        self._push_content(args.content, mirrors)
        infra_mirrors = [
            (r.replace(":pages.git", ":infra.git"), b)
            for r, b in mirrors
            if r.endswith(":pages.git") and b == "master"
        ]
        self._push_infra(infra_mirrors)
        self._push_source(mirror.forwards())
        return 0

    def _push_content(self, content: str, mirrors: list[tuple[str, str]]) -> None:
        git = self._pages_git.subcommand("-C", content, "--work-tree", ".")
        git("add", "-A", ".")
        git.call("commit", "-m", "build pages")
        for remote, branch in mirrors:
            self._pages_git.call("push", remote, f"+master:{branch}")

    def _push_infra(self, mirrors: list[tuple[str, str]]) -> None:
        git = self._infra_git.subcommand("-C", "infra", "--work-tree", ".")
        git("add", "-A", ".")
        git.call("commit", "-m", "infra")
        p_remote, p_branch = self._pick_primary(mirrors)
        git.call("pull", "--rebase", p_remote, p_branch)
        for remote, branch in mirrors:
            self._infra_git.call("push", "-f", remote, branch)

    def _push_source(self, mirrors: list[tuple[str, str]]) -> None:
        for remote, branch in mirrors:
            self._source_git.call("push", remote, f"+master:{branch}")

    def _pick_primary(self, mirrors: list[tuple[str, str]]) -> tuple[str, str]:
        comment = Path("infra/keys/primary.pub").read_text().strip().split()[-1]
        for remote, branch in mirrors:
            if remote.startswith(comment):
                return remote, branch
        return mirrors[0]
class DistributeChallengeCommand:
    def add_subparser(self, sub):
        p = sub.add_parser(
            "distribute-challenge",
            help="Write ACME challenge and push infra branch to mirrors",
        )
        p.add_argument("--token", required=True)
        p.add_argument("--validation", required=True)
        p.set_defaults(handle=self.handle)

    def handle(self, args):
        _ensure_root()
        token = args.token.strip()
        validation = args.validation.strip()
        if not token or not validation:
            print("Token and validation must be non-empty.")
            return 1
        CHALLENGES_DIR.mkdir(parents=True, exist_ok=True)
        WEBROOT_CHALLENGES.mkdir(parents=True, exist_ok=True)
        (CHALLENGES_DIR / token).write_text(validation + "\n", encoding="utf-8")
        (WEBROOT_CHALLENGES / token).write_text(validation + "\n", encoding="utf-8")
        git = ["git", "--git-dir", GIT_DIR, "--work-tree", INFRA_DIR]
        subprocess.check_call(git + ["checkout", "-f", "master"])
        subprocess.check_call(git + ["add", "infra/challenges"])
        if subprocess.call(git + ["diff", "--cached", "--quiet"]) == 0:
            return 0
        subprocess.check_call(git + ["commit", "-m", "Add challenge"])
        if MIRRORS_LIST.exists():
            for line in MIRRORS_LIST.read_text(encoding="utf-8").splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                subprocess.check_call(
                    [
                        "runuser",
                        "-u",
                        "pages",
                        "--",
                        "git",
                        "--git-dir",
                        GIT_DIR,
                        "push",
                        line,
                        "master",
                    ],
                )
        return 0
class CleanupChallengeCommand:
    def add_subparser(self, sub):
        p = sub.add_parser(
            "cleanup-challenge",
            help="Remove ACME challenge and push infra branch to mirrors",
        )
        p.add_argument("--token", required=True)
        p.set_defaults(handle=self.handle)

    def handle(self, args):
        _ensure_root()
        token = args.token.strip()
        if not token:
            print("Token must be non-empty.")
            return 1
        challenge = CHALLENGES_DIR / token
        web_challenge = WEBROOT_CHALLENGES / token
        if challenge.exists():
            challenge.unlink()
        if web_challenge.exists():
            web_challenge.unlink()
        subprocess.run(
            [
                "git",
                "--git-dir",
                GIT_DIR,
                "--work-tree",
                INFRA_DIR,
                "checkout",
                "-f",
                "master",
            ],
            check=True,
        )
        subprocess.run(
            [
                "git",
                "--git-dir",
                GIT_DIR,
                "--work-tree",
                INFRA_DIR,
                "add",
                "infra/challenges",
            ],
            check=True,
        )
        diff = subprocess.run(
            [
                "git",
                "--git-dir",
                GIT_DIR,
                "--work-tree",
                INFRA_DIR,
                "diff",
                "--cached",
                "--quiet",
            ]
        )
        if diff.returncode == 0:
            return 0
        subprocess.run(
            [
                "git",
                "--git-dir",
                GIT_DIR,
                "--work-tree",
                INFRA_DIR,
                "commit",
                "-m",
                f"Remove ACME challenge {token}",
            ],
            check=True,
        )
        if MIRRORS_LIST.exists():
            for line in MIRRORS_LIST.read_text(encoding="utf-8").splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                subprocess.run(
                    [
                        "runuser",
                        "-u",
                        "pages",
                        "--",
                        "git",
                        "--git-dir",
                        GIT_DIR,
                        "push",
                        line,
                        "master",
                    ],
                    check=True,
                )
        return 0
class DistributeCertsCommand:
    def __init__(self) -> None:
        self._live = Path("/etc/letsencrypt/live")

    def add_subparser(self, sub):
        p = sub.add_parser(
            "distribute-certs",
            help="Encrypt certs to mirrors and push infra branch",
        )
        p.add_argument("--domain", required=True)
        p.set_defaults(handle=self.handle)

    def handle(self, args):
        _ensure_root()
        infra_git = Command(
            "git", "--git-dir", PAGES_HOME / "infra.git", verbose=True
        ).runuser("pages")
        git = infra_git.subcommand("--work-tree", ".", cwd=INFRA_DIR)
        git("checkout", "-f", "master")
        self._pack_certs(args.domain, CERTS_DIR / f"{args.domain}.tar.age")
        git("add", "-A", CERTS_DIR)
        if git.call("diff", "--cached", "--quiet") == 0:
            return 0
        self._setup_git_user(Command("git", verbose=True).runuser("pages"))
        git("commit", "-m", "Update certs")
        push_infra = infra_git.subcommand("push")
        mirror = Mirror(home=PAGES_HOME)
        for remote, _ in mirror.non_primary_infra():
            mirror.add_known_host(remote)
            push_infra(remote, "master")
        return 0

    def _pack_certs(self, domain: str, out_file: Path) -> None:
        domain = domain.strip()
        fullchain = self._live / domain / "fullchain.pem"
        privkey = self._live / domain / "privkey.pem"
        assert fullchain.exists()
        assert privkey.exists()
        recipients = []
        for pub in sorted(KEYS_DIR.glob("*.pub")):
            if pub.name not in ("primary.pub", "builder.pub"):
                recipients.append(pub.read_text(encoding="utf-8").strip())
        if not recipients:
            raise ValueError(f"No recipients found in {KEYS_DIR}")
        chown = Command("chown", "pages:pages")
        out_file.parent.mkdir(parents=True, exist_ok=True)
        chown(out_file.parent)
        age = Command("age", "--encrypt", "-o", out_file, verbose=True)
        for recipient in recipients:
            age = age.subcommand("-r", recipient)
        with tempfile.TemporaryDirectory() as tmpdir:
            tar_path = Path(tmpdir) / f"{domain}.tar.gz"
            with tarfile.open(tar_path, "w:gz") as tar:
                tar.add(fullchain.resolve(), arcname=fullchain.name)
                tar.add(privkey.resolve(), arcname=privkey.name)
            age(tar_path)
        chown(out_file)

    def _setup_git_user(self, git: Command) -> None:
        conf = git.subcommand("config", "--global")
        if conf.call("user.email") != 0:
            conf("user.email", "pages@demin.dev")
        if conf.call("user.name") != 0:
            conf("user.name", "Mr Pages")
def main(argv=None):
    parser = argparse.ArgumentParser(prog="infra")
    sub = parser.add_subparsers(dest="cmd", required=True)
    ApplyCommand().add_subparser(sub)
    BuilderPublishCommand().add_subparser(sub)
    DistributeChallengeCommand().add_subparser(sub)
    CleanupChallengeCommand().add_subparser(sub)
    DistributeCertsCommand().add_subparser(sub)
    args = parser.parse_args(argv)
    return args.handle(args)


if __name__ == "__main__":
    raise SystemExit(main())
Certificate Hell¶
Then I deployed another mirror to https://mirror.demin.dev under a separate Google Cloud Platform account.
Infra Repo¶
I keep infra state in a separate bare repo on each mirror (infra.git).
It carries operational config and runtime data:
Builder and primary public keys (infra/keys/builder.pub, infra/keys/primary.pub).
Mirror public keys for encryption (infra/keys/*.pub).
Mirror list (infra/mirrors.txt).
ACME challenges (infra/challenges/*).
Encrypted cert bundles (infra/certs/*.tar.age).
Mirrors check out master from pages.git into /var/www/pages and
master from infra.git into /var/lib/infra.
A systemd .path unit watches /var/lib/infra and runs infra apply
(a tiny Python CLI in infra/cli.py) as root on every change.
The builder VM pushes content to all mirrors listed in infra/mirrors.txt
as part of the publish step. Infra data is handled by the primary mirror.
Separately, the source repo is forwarded to the destinations listed in
infra/forward.txt.
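Both mirrors.txt and forward.txt use the same line format: one git remote per line, optionally followed by a branch name (defaulting to master), with # comments ignored. A hypothetical mirrors.txt, using the Tailscale hostnames from the provisioning script:

```
# infra/mirrors.txt (example entries)
pages@demin-dev.tail13c89.ts.net:pages.git master
pages@mirror.tail13c89.ts.net:pages.git master
```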
Certificates and Challenges¶
DNS-01 is not available to me, so I do HTTP-01 with challenge distribution. The primary mirror runs certbot with manual hooks:
infra distribute-challenge writes the token under infra/challenges/ and pushes the infra repo to mirrors.
Mirrors apply the change and copy the token into /var/www/pages/.well-known/acme-challenge/.
infra cleanup-challenge removes the token and pushes again.
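The exact certbot invocation isn't shown in this post, but the two subcommands above map onto certbot's manual hooks, which export the token and validation to the hook's environment. A sketch of how the wiring could look:

```shell
# Hypothetical invocation on the primary mirror; certbot sets
# CERTBOT_TOKEN and CERTBOT_VALIDATION for the hook commands.
certbot certonly --manual --preferred-challenges http \
    -d peter.demin.dev \
    --manual-auth-hook \
        'python3 /var/lib/infra/cli.py distribute-challenge --token "$CERTBOT_TOKEN" --validation "$CERTBOT_VALIDATION"' \
    --manual-cleanup-hook \
        'python3 /var/lib/infra/cli.py cleanup-challenge --token "$CERTBOT_TOKEN"'
```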
Certificates are distributed via the same infra repo, but encrypted.
The primary packs fullchain.pem and privkey.pem into a tarball,
encrypts it with age for all mirror SSH public keys,
commits to infra/certs/, and pushes to mirrors.
Each mirror decrypts and installs the certs during infra apply
and reloads nginx.
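Underneath, this leans on age's native support for SSH keys: an ssh-ed25519 public key works directly as an encryption recipient, and the matching private key decrypts. Schematically (file paths are illustrative):

```shell
# On the primary: encrypt for one mirror's SSH public key.
age --encrypt -r "$(cat infra/keys/mirror.pub)" -o certs.tar.age certs.tar

# On that mirror: decrypt with the matching private key.
age --decrypt -i ~/.ssh/id_ed25519 -o certs.tar certs.tar.age
```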