Self-Hosted Security: Own Your Digital Legacy Infrastructure

A self-hosted dead man's switch is a liveness-monitoring system that you deploy on your own server, giving you complete control over the encryption keys, the database, the notification channels, and the code that decides when to trigger your digital will. No third party holds your data, no SaaS company can go bankrupt and take your estate plan with it, and no terms of service can change under your feet.

Self-hosting matters most when the data in question is your most sensitive: instructions for accessing financial accounts, cryptocurrency seed phrases, family documents, and credentials that could cause real harm in the wrong hands. Trusting a third-party service with this information requires trusting that company's security practices, employee access controls, business continuity, and legal jurisdiction, all at the same time, for years or decades into the future.

This guide explains the trust problem with cloud-based estate services, compares the self-hosted tools available today, walks through a complete deployment of Burning Ash Protocol on your own server, and covers the security hardening you need to do after deployment.

The Trust Problem with Cloud Estate Services

When you store your digital will with a cloud service, you are making a bet. You are betting that the company will still exist when you die, that their security will hold for the entire duration, that their business model will not change, and that their employees will never access your data.

That bet looks reasonable on any given Tuesday. Over a 10, 20, or 30-year horizon, it becomes much less certain.

Companies Disappear

The digital estate planning space is littered with services that launched, operated for a few years, and shut down. When they close, users have a limited window to export their data, assuming they are still alive to do so. If the account holder has already died or become incapacitated, the data is simply gone.

This is not a hypothetical risk. The average lifespan of a startup is 3-5 years. Even well-funded companies pivot, get acquired, or sunset product lines. A digital will platform that relies on a specific company's continued operation has a structural fragility that no amount of marketing can compensate for.

Security Posture Is Opaque

When a company says "bank-level encryption" or "your data is secure," you are trusting their claim without the ability to verify it. You cannot audit their code, inspect their infrastructure, review their access logs, or confirm that encryption keys are managed properly. You are trusting a press release.

Self-hosted open-source software inverts this dynamic. You can read every line of code. You can verify that encryption is implemented correctly. You can ensure that no telemetry is phoning home. You can audit the entire system because you control the entire system.

Jurisdiction and Legal Compulsion

A cloud service is subject to the laws of its incorporation, its server locations, and any jurisdiction where it does business. A US-based estate planning service can be compelled by court order to turn over your data. A service operating in a jurisdiction with weak privacy protections may not even need a court order.

When you self-host, the data lives where you put it. You choose the jurisdiction. You control physical access to the server. Legal compulsion must target you directly, not a third party that may have incentives to comply quickly.

The Encryption Key Problem

The fundamental question for any encrypted estate service is: who holds the encryption keys?

If the service holds the keys, they can decrypt your data. This means their employees can access it, their security breaches expose it, and legal compulsion can reveal it.

If you hold the keys and the service only stores encrypted blobs, the service is more trustworthy, but now key management is your problem. If you lose the key, the service cannot help.

Self-hosting with an open-source tool solves this cleanly. You hold the master key, the encryption happens on your server, and the code is auditable to confirm that the key is never transmitted. There is no third party in the trust chain.
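
The resulting key hierarchy is simple envelope encryption: a random per-will Data Encryption Key (DEK) encrypts the documents, and your master key wraps the DEK. The sketch below illustrates the idea with the openssl CLI. It uses AES-256-CBC purely because the openssl enc command does not expose a GCM mode; BAP itself uses AES-256-GCM in code, and the file paths here are throwaway.

```shell
# Illustration only: AES-256-CBC stands in for BAP's AES-256-GCM, because
# `openssl enc` has no GCM mode. Paths under /tmp are throwaway.
MASTER_KEY=$(openssl rand -hex 32)   # stays on your server, backed up on paper
DEK=$(openssl rand -hex 32)          # random per-will Data Encryption Key

# Encrypt the document with the DEK, then wrap the DEK with the master key.
echo "my will contents" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$DEK" -out /tmp/will.enc
echo "$DEK" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$MASTER_KEY" -out /tmp/dek.enc

# Recovery: unwrap the DEK with the master key, then decrypt the document.
RECOVERED_DEK=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$MASTER_KEY" -in /tmp/dek.enc)
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$RECOVERED_DEK" -in /tmp/will.enc
# → my will contents
```

Note what never appears in the ciphertext files: the master key. Anyone holding only /tmp/will.enc and /tmp/dek.enc learns nothing.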

Comparison of Self-Hosted Digital Will Tools

Several open-source and self-hostable tools address the dead man's switch and digital will space. Here is an honest comparison of the options available as of early 2026.

| Feature | Burning Ash Protocol | Dead Man's Snitch | Seppuku | Posthumous | EmergencyWP |
|---|---|---|---|---|---|
| License | AGPL-3.0 | Proprietary (SaaS only) | MIT | MIT | GPL-2.0 |
| Self-hostable | Yes (Docker) | No | Yes | Yes | Yes (WordPress) |
| Encryption | AES-256-GCM + Shamir's Secret Sharing | None (monitoring only) | Basic file encryption | GPG-based | None |
| Threshold access | Yes (K-of-N survivors) | N/A | No | No | No |
| Notification channels | Email, SMS, WhatsApp, Telegram | Email, Slack, webhooks | Email only | Email only | Email only |
| Liveness checks | Configurable interval, response window, and escalation count | Configurable interval | Cron-based | Cron-based | WordPress cron |
| Web UI | Full dashboard (Next.js) | SaaS dashboard | CLI only | CLI only | WordPress admin |
| Storage integrations | Google Drive, Dropbox, OneDrive, S3, SFTP | N/A | Local filesystem | Local filesystem | WordPress media |
| Active development | Yes | Yes | Minimal | Minimal | Minimal |
| Database | SQLite (dev) / PostgreSQL (prod) | N/A | Filesystem | SQLite | MySQL (WordPress) |
| Language | Go + TypeScript | Ruby | Python | Python | PHP |

Burning Ash Protocol

BAP is the most full-featured self-hosted option. It provides a complete web interface for managing wills, survivors, notification connectors, and storage backends. The encryption model uses per-will Data Encryption Keys with AES-256-GCM, and the keys are split among survivors using Shamir's Secret Sharing with configurable thresholds. Notifications are delivered over four channels: email, SMS, WhatsApp, and Telegram. Liveness checks are fully configurable through three parameters: check-in interval (HCIT), response time window (HCRT), and missed-check count before triggering (HCRAC).

The tradeoff is complexity. It is a full-stack application (Go API, Next.js frontend, database) that requires more infrastructure than simpler tools. For users who want a turnkey solution, BAP also offers a managed SaaS option with a free tier and a Pro plan at $4/month.

Dead Man's Snitch

Despite the similar name, Dead Man's Snitch is a monitoring service, not a digital will tool. It tracks whether cron jobs and scheduled tasks run on time. It is not self-hostable and has no encryption or document delivery features. It appears in searches for "dead man's switch" but solves a different problem.

Seppuku

A Python-based dead man's switch that runs on a server and sends email notifications if a check-in is missed. It is simple and lightweight but lacks encryption, threshold access, a web UI, and multi-channel notifications. Best suited for users who want a minimal, script-level solution and are comfortable with CLI configuration.

Posthumous

Another Python tool that allows scheduling encrypted messages for delivery after death. It uses GPG for encryption but does not implement threshold cryptography, meaning a single recipient gets the full key. No web UI, email-only notifications, and minimal active development.

EmergencyWP

A WordPress plugin that adds dead man's switch functionality to a WordPress site. If the admin does not check in, the plugin can publish a pre-written post or send an email. It is limited to WordPress's capabilities, has no encryption, and relies on WordPress's cron system, which is not true cron and only fires on page visits.

Which Should You Choose?

If you want comprehensive encryption with threshold-based access, multi-channel notifications, and a proper web interface, BAP is the only current option that provides all three. If you want something minimal and do not need encryption, Seppuku or Posthumous work as lightweight alternatives. If you are already running WordPress and want basic check-in monitoring, EmergencyWP is a quick addition.

Deploying Burning Ash Protocol with Docker Compose

This section walks through deploying BAP on your own server using Docker Compose. The process takes about 15 minutes and results in a fully functional self-hosted digital will system.

Prerequisites

You need a Linux server (Ubuntu 22.04+, Debian 12+, or similar) with:

  • Docker Engine 24+ and Docker Compose v2+ installed
  • At least 1 GB of RAM and 10 GB of storage
  • A domain name pointing to your server (for HTTPS)
  • SSH access to the server

If you do not have Docker installed, the official installation guide covers all major distributions: docs.docker.com/engine/install.

Step 1: Clone the Repository

git clone https://github.com/baprotocol/bap.git
cd bap

Step 2: Generate Configuration

BAP includes a Makefile target that creates your .env file from the example template:

make env

This copies .env.example to .env. Next, generate the required secret keys:

make generate-key

This populates JWT_SECRET and MASTER_KEY in your .env file with cryptographically random 256-bit keys. These are the two most critical values in your configuration:

  • JWT_SECRET signs authentication tokens. If compromised, an attacker can impersonate any user.
  • MASTER_KEY encrypts the Data Encryption Keys that protect your wills. If lost, your encrypted data is permanently inaccessible. Back this up securely.
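
The make target handles this for you, but if you ever need to generate or rotate a value by hand, openssl produces equivalent keys (assuming, as the output of make generate-key suggests, hex-encoded 256-bit values):

```shell
# 32 random bytes, hex-encoded: 64 characters each, one per .env entry.
JWT_SECRET=$(openssl rand -hex 32)
MASTER_KEY=$(openssl rand -hex 32)
echo "JWT_SECRET=$JWT_SECRET"
echo "MASTER_KEY=$MASTER_KEY"
```

Treat MASTER_KEY as permanent once wills exist: since it wraps the Data Encryption Keys, replacing it would leave previously encrypted data unreadable. JWT_SECRET, by contrast, can be rotated at the cost of logging everyone out.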

Step 3: Configure Environment Variables

Open .env and verify the following settings:

# Database: SQLite is the default and works well for single-user self-hosted deployments
DB_TYPE=sqlite
DATABASE_PATH=./data/bap.db

# Deploy mode: selfhosted bypasses all billing and SaaS features
DEPLOY_MODE=selfhosted

# Frontend URL: set this to your actual domain
FRONTEND_URL=https://will.yourdomain.com
CORS_ORIGINS=https://will.yourdomain.com

# API URL for the frontend to reach the backend
NEXT_PUBLIC_API_URL=https://will.yourdomain.com/api

# Admin bootstrap secret: used once to create the first admin account
# Generate with: openssl rand -hex 32
ADMIN_BOOTSTRAP_SECRET=your_generated_secret_here

For a single-user self-hosted deployment, SQLite is the right choice. It stores everything in a single file, requires no separate database server, and handles the load of a personal deployment with no issues. Switch to PostgreSQL only if you expect multiple concurrent users.

Step 4: Start the Services

docker compose up -d

Docker Compose builds and starts the following services:

  • api on port 8080: the Go backend handling authentication, encryption, liveness checks, and notifications
  • web on port 3000: the Next.js frontend dashboard
  • docs (optional, in the docs profile): the documentation site

Verify everything is running:

docker compose ps

You should see the api and web services in a healthy state. The API health check endpoint confirms the backend is responding:

curl http://localhost:8080/api/health

Step 5: Bootstrap the Admin Account

BAP requires a super admin account for initial setup. Create one using the bootstrap endpoint:

curl -X POST http://localhost:8080/api/admin/bootstrap \
  -H "Content-Type: application/json" \
  -d '{
    "email": "you@yourdomain.com",
    "password": "a-strong-password-here",
    "name": "Your Name",
    "secret": "your_admin_bootstrap_secret"
  }'

The secret field must match the ADMIN_BOOTSTRAP_SECRET in your .env. This endpoint only works once, when no admin account exists.

Step 6: Access the Dashboard

Open your browser to http://localhost:3000 (or your configured domain). Log in with the admin credentials you just created. From the dashboard, you can:

  • Create your digital will with encrypted documents
  • Add survivors and configure the threshold (e.g., 3 of 5 must cooperate)
  • Set up notification connectors (email, SMS, WhatsApp, Telegram)
  • Connect storage providers (Google Drive, Dropbox, OneDrive, S3, SFTP)
  • Configure liveness check timing (how often you check in, response window, escalation count)
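
The survivor threshold is standard K-of-N secret sharing: any K of the N key shares reconstruct the will key, while fewer than K reveal nothing. For the 3-of-5 example above, a quick count of the viable quorums (the binomial coefficient C(5,3)):

```shell
# Count the distinct 3-of-5 survivor groups that can jointly unlock the will.
n=5; k=3
fact() { local r=1 i; for ((i = 2; i <= $1; i++)); do r=$((r * i)); done; echo "$r"; }
echo $(( $(fact $n) / ($(fact $k) * $(fact $((n - k)))) ))
# → 10
```

Ten different subsets of your survivors can cooperate to unlock the will, so losing touch with one or two people does not lock your estate.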

Security Hardening

A default Docker deployment is functional but not production-hardened. Work through these items before storing real sensitive data.

HTTPS with a Reverse Proxy

Never expose BAP over plain HTTP. Use a reverse proxy with automatic TLS. Caddy is the simplest option:

# Install Caddy
sudo apt install -y caddy

Create /etc/caddy/Caddyfile:

will.yourdomain.com {
    handle /api/* {
        reverse_proxy localhost:8080
    }
    handle {
        reverse_proxy localhost:3000
    }
}

Then restart Caddy to pick up the configuration:

sudo systemctl restart caddy

Caddy automatically obtains and renews Let's Encrypt certificates. No manual certificate management required.

Alternatively, use nginx with certbot or Traefik if you prefer those tools. The key requirement is that all traffic between the browser and your server is encrypted in transit.

Firewall Configuration

Restrict inbound traffic to only the ports you need:

# Allow SSH, HTTP (for ACME challenges), and HTTPS
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Do not rely on UFW to block the application ports: Docker publishes
# container ports through its own iptables rules, which take effect ahead
# of UFW's filtering. Bind the ports to localhost in docker-compose.yml instead:
#   ports:
#     - "127.0.0.1:8080:8080"
#     - "127.0.0.1:3000:3000"

sudo ufw enable

The application ports (8080 and 3000) should only be reachable from localhost. Binding them to 127.0.0.1 in the Compose file enforces that at the Docker level; ufw deny rules alone do not, because Docker's published ports bypass UFW. The reverse proxy handles external traffic on ports 80 and 443.

Database Backups

Your SQLite database file and your MASTER_KEY are the two things you cannot afford to lose. Set up automated backups:

#!/bin/bash
# /opt/bap-backup.sh
BACKUP_DIR="/opt/backups/bap"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# docker compose must run from the directory containing docker-compose.yml
cd /path/to/bap || exit 1

mkdir -p "$BACKUP_DIR"

# SQLite online backup (safe while the database is in use)
docker compose exec -T api sqlite3 /data/bap.db ".backup '/data/backup.db'"
docker compose cp api:/data/backup.db "$BACKUP_DIR/bap_$TIMESTAMP.db"

# Compress and retain last 30 backups
gzip "$BACKUP_DIR/bap_$TIMESTAMP.db"
ls -tp "$BACKUP_DIR"/*.gz | tail -n +31 | xargs -I {} rm -- {}

Schedule this with cron:

# Run daily at 3 AM (append to the existing crontab instead of replacing it)
( sudo crontab -l 2>/dev/null; echo "0 3 * * * /opt/bap-backup.sh" ) | sudo crontab -

Store backups off-server. Use rsync, rclone, or any backup tool to copy the compressed database files to a separate location. If your server fails, you need the backup file plus your MASTER_KEY to recover everything.
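
Before trusting those off-site copies, verify them. A small helper (bash is assumed for shopt; the path matches the backup script above) flags any archive that fails gzip's built-in integrity check:

```shell
# Prints one CORRUPT line per damaged archive; silent when everything passes.
check_backups() {
    local f status=0
    shopt -s nullglob   # an empty directory yields zero iterations, not a literal glob
    for f in "$1"/*.gz; do
        gzip -t "$f" 2>/dev/null || { echo "CORRUPT: $f"; status=1; }
    done
    return $status
}

check_backups /opt/backups/bap
```

Run this after each off-site sync, or fold it into the backup script so a failed verification alerts you immediately.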

Back up your MASTER_KEY separately. Write it down on paper and store it in a fireproof safe or bank safety deposit box. If you lose this key, your encrypted wills are permanently inaccessible, by design.

Docker Security

Harden the Docker deployment:

# Run containers as non-root (BAP's Dockerfiles already do this)
# Verify with:
docker compose exec api whoami

# Limit container resources
# Add to docker-compose.yml under each service:
#   deploy:
#     resources:
#       limits:
#         memory: 512M
#         cpus: '0.5'

# Keep images updated
docker compose pull
docker compose up -d

Automatic Updates

BAP releases updates through new Docker images. Set up a lightweight update check:

#!/bin/bash
# /opt/bap-update.sh
cd /path/to/bap

# Pull latest images
docker compose pull

# Check whether any service would be recreated with the new images
if docker compose up -d --dry-run 2>&1 | grep -q "Recreate"; then
    # Backup before updating
    /opt/bap-backup.sh
    docker compose up -d
    echo "BAP updated at $(date)" >> /var/log/bap-updates.log
fi

Always back up before updating. Database migrations run automatically at API startup, so upgrades are generally seamless, but a backup ensures you can roll back if needed.

Maintenance and Long-Term Operation

Self-hosting means you are responsible for keeping the system running. Here is what ongoing maintenance looks like.

Monitoring

At minimum, monitor two things:

  1. Is the service up? A simple HTTP check against the health endpoint is sufficient. Use any monitoring tool you prefer: Uptime Kuma (self-hosted), Healthchecks.io (free tier), or a cron job that curls the endpoint and alerts on failure.

  2. Is the database growing normally? Check the SQLite file size periodically. A personal deployment should stay small (under 100 MB) indefinitely. Unexpected growth could indicate a problem.

# Quick health check script (the mail command assumes a configured local MTA)
curl -sf http://localhost:8080/api/health > /dev/null || echo "BAP is DOWN" | mail -s "BAP Alert" you@yourdomain.com
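
The second check can be scripted the same way (the 100 MB ceiling follows the guideline above; the database path assumes the default .env from the deployment section):

```shell
# Warn when the SQLite file grows past the expected ceiling for a personal deployment.
DB_FILE=./data/bap.db
MAX_BYTES=$((100 * 1024 * 1024))
SIZE=$(stat -c %s "$DB_FILE" 2>/dev/null || echo 0)
if [ "$SIZE" -gt "$MAX_BYTES" ]; then
    echo "BAP database is $SIZE bytes; investigate unexpected growth"
fi
```

Pipe the output into mail, as with the health check, if you want growth alerts delivered rather than logged.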

Log Management

Docker captures container logs. Review them periodically for errors:

# View recent API logs
docker compose logs --tail 100 api

# View recent web logs
docker compose logs --tail 100 web

For long-term log management, configure Docker's logging driver to rotate logs and prevent disk fill:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Add this to /etc/docker/daemon.json and restart Docker.

Responding to Liveness Checks

This is the most important maintenance task. BAP sends liveness checks at the interval you configured (HCIT). You must respond within the response window (HCRT). If you miss enough consecutive checks (HCRAC), the system triggers the will transfer protocol.

Make sure you:

  • Configure notifications on a channel you actually check (email to your primary inbox, or a Telegram message)
  • Set intervals that match your actual availability (if you travel without internet for two weeks, your check-in interval should be longer than two weeks)
  • Test the entire flow, including a missed check-in, before relying on it for real
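
As a rough worked example of how the three parameters compound (the exact trigger semantics are BAP's; these numbers are illustrative): with a 7-day check-in interval, a 48-hour response window, and 3 allowed misses, the upper bound on time between your last check-in and the trigger is:

```shell
# Illustrative upper bound: each missed cycle costs one interval plus one
# response window, and HCRAC consecutive cycles must be missed to trigger.
HCIT_DAYS=7; HCRT_HOURS=48; HCRAC=3
TOTAL_HOURS=$(( HCRAC * (HCIT_DAYS * 24 + HCRT_HOURS) ))
echo "$TOTAL_HOURS hours (~$(( TOTAL_HOURS / 24 )) days)"
# → 648 hours (~27 days)
```

Working this arithmetic for your own values tells you how long your survivors would wait in the worst case, which is the number to weigh against how quickly your estate needs to be accessible.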

PostgreSQL Migration

If you outgrow SQLite (unlikely for personal use, but possible if you share your instance with family), BAP supports PostgreSQL. Enable it with the postgres Docker Compose profile:

# Update .env (replace "changeme" with a strong generated password)
DB_TYPE=postgres
DATABASE_URL=postgres://bap:changeme@postgres:5432/bap

# Start with PostgreSQL
docker compose --profile postgres up -d

Migrations for PostgreSQL are maintained alongside SQLite migrations and run automatically at startup.

When SaaS Makes More Sense

Self-hosting is not the right choice for everyone. Here is an honest assessment of when the managed SaaS option at baprotocol.com is the better path.

You Should Use SaaS If:

  • You do not have a server. Renting a VPS, keeping it patched, and maintaining backups is real work. If you are not already running a server for other purposes, the operational overhead of self-hosting a single application may not be worth it.
  • You are not comfortable with Docker or Linux administration. BAP's deployment is straightforward, but troubleshooting when things go wrong requires basic sysadmin skills. If "fix a broken Docker container" sounds intimidating, SaaS removes that burden.
  • You want platform-managed notification connectors. The SaaS version includes BAP-managed email and Telegram connectors on the free tier, plus SMS and WhatsApp on Pro. Self-hosted deployments require you to bring your own SMTP server, Twilio account, or Telegram bot.
  • You want zero maintenance. The SaaS version handles updates, backups, monitoring, and uptime. You just configure your will and respond to check-ins.

You Should Self-Host If:

  • Trust minimization is a priority. You do not want any third party to hold your encrypted data, regardless of their security claims.
  • You want full auditability. You need to verify the encryption implementation, inspect the code, and confirm that no data leaves your server.
  • You already run a homelab or VPS. Adding BAP to an existing server is trivial. The resource requirements are minimal (under 512 MB RAM, negligible CPU).
  • You want maximum configuration control. Self-hosted mode gives you direct database access, custom notification integrations, and the ability to modify the application itself (it is AGPL-licensed, so you have that right).
  • Long-term reliability matters more than convenience. Your self-hosted instance runs as long as your server does. It has no dependency on BAP's company continuing to operate.

The Hybrid Approach

Some users run both. They self-host BAP as their primary system and maintain a free SaaS account as a backup notification channel. If the self-hosted instance goes down, the SaaS account provides a fallback liveness check. This layered approach maximizes reliability without fully trusting either deployment.

Frequently Asked Questions

What are the minimum server requirements?

BAP runs comfortably on 1 GB of RAM and a single CPU core. A $5/month VPS from any major provider (Hetzner, DigitalOcean, Linode, Vultr) is more than sufficient. Disk usage stays minimal for personal deployments: the SQLite database, Docker images, and application code total under 1 GB.

Can I run BAP on a Raspberry Pi?

Yes. BAP's Go backend and Next.js frontend both run on ARM64. A Raspberry Pi 4 or 5 with 2 GB+ of RAM handles the workload. Ensure you have reliable power (a UPS is recommended) and off-site backups, because an SD card is not a durable storage medium. Use an external SSD for the data directory.

What happens if my server goes down?

If your server is down, liveness checks cannot be sent, and you cannot check in. This does not immediately trigger the will transfer. The system only triggers after you miss the configured number of consecutive checks (HCRAC), and those checks cannot be sent if the server is offline. When the server comes back up, the scheduler resumes and the check-in cycle resets.

However, prolonged downtime is a problem. If your server is down for months and you are also incapacitated, the dead man's switch has failed silently. This is why monitoring and off-site backups matter. Consider the hybrid approach (SaaS backup) if server reliability is a concern.

How do I migrate from SaaS to self-hosted?

BAP does not currently offer an automated migration path between SaaS and self-hosted deployments. You would need to recreate your will, survivors, and connector configurations on the self-hosted instance. The encrypted will documents themselves need to be re-uploaded and re-encrypted with the new instance's keys.

Can multiple family members share one self-hosted instance?

Yes. BAP supports multiple user accounts on a single instance. Each user (Host) has their own will, survivors, connectors, and encryption keys. The accounts are fully isolated at the application level. This is a practical setup for families who want to share infrastructure costs.

Is the AGPL license a problem for self-hosting?

No. The AGPL requires that if you modify BAP and offer it as a network service to others, you must make your modifications available as source code. For personal self-hosting, this has no practical impact. You can run, modify, and customize BAP for your own use without any obligation. The AGPL only activates if you host a modified version for other people.

How do I update BAP?

Pull the latest images and restart:

cd /path/to/bap
docker compose pull
docker compose up -d

Database migrations run automatically at startup. Always back up your database before updating. If an update causes problems, restore the backup and pin the previous image version in your docker-compose.yml until the issue is resolved.

What if I lose my MASTER_KEY?

Your encrypted wills are permanently inaccessible. This is by design: the encryption has no backdoor, no recovery mechanism, and no master override. The MASTER_KEY is the single most important piece of information in your self-hosted deployment. Write it down on paper. Store the paper in a fireproof safe or bank safety deposit box. Consider giving a sealed copy to your estate attorney.

Getting Started

If you have read this far and self-hosting sounds right for you, the deployment takes about 15 minutes. Clone the repository, generate your keys, start Docker Compose, and bootstrap your admin account. The steps are in the deployment section above.

For the broader context of why digital estate planning matters and how a dead man's switch fits into a complete plan, read our digital estate planning guide.

For a technical deep dive into the cryptography behind threshold-based access control, see our Shamir's Secret Sharing guide.