# Deploy MarlinSpike as a shared, reverse-proxied Docker workbench

MarlinSpike runs as a Docker Compose stack: the app container plus PostgreSQL.
The project install docs are clear about the preferred operating model: Docker Compose for the app and database, a reverse proxy at the edge, and a private app port behind it. Keep the app on `127.0.0.1:5001` internally and terminate TLS at the reverse proxy. User content and the database live in persistent Docker volumes, so rebuilds do not wipe them.
## Local Docker deployment
The checked-in install flow is the normal starting point for local, lab, or field-host deployment:
- Copy `.env.example` to `.env`.
- Set strong values for `DB_PASSWORD`, `SECRET_KEY`, and `ADMIN_PASSWORD`.
- Build and start the stack.
- Check the logs and then open the app on the internal port or through your proxy.
```shell
cp .env.example .env
docker compose up -d --build
docker compose logs -f app
```
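After copying the template, the resulting `.env` might look like the sketch below. The three variable names come from the steps above; the values are placeholders, and any other settings your `.env.example` ships with should be kept as provided.

```shell
# .env — values are placeholders; generate your own secrets
DB_PASSWORD=change-me-strong-db-password
SECRET_KEY=change-me-long-random-hex
ADMIN_PASSWORD=change-me-admin-password
```

One common way to generate a strong `SECRET_KEY` is `openssl rand -hex 32`.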
If `ADMIN_PASSWORD` is left blank, the first boot generates a random admin password and prints it to the container logs.
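If you rely on that behavior, one way to fish the generated credential back out is to grep the app logs. The exact wording of the log line is an assumption here, so adjust the pattern to whatever your version actually prints:

```shell
# Search the app logs for the generated admin credential
# (matching on "password" is an assumption about the log wording)
docker compose logs app | grep -i password
```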
## Common commands
The install guide keeps the day-two operations set minimal:
```shell
docker compose ps
docker compose logs -f app
docker compose down
docker compose restart app
```
## Persistent data
MarlinSpike stores runtime state in named Docker volumes so a rebuild does not remove user uploads, reports, or the database.
| Volume or path | Purpose |
|---|---|
| `marlinspike-data` | Uploads, reports, presets, and archived submissions. |
| `marlinspike-pgdata` | PostgreSQL data. |
| `/app/data/reports` | Generated report artifacts inside the app container. |
| `/app/data/uploads` | Uploaded capture files. |
| `/app/data/submissions` | Archived submissions. |
| `/app/data/presets` | Preset capture storage. |
## Reverse proxy guidance
The project docs recommend keeping the app bound to `127.0.0.1:5001` and placing nginx, Caddy, or Traefik in front of it for TLS termination and public ingress.
- Terminate TLS at the proxy.
- Forward only the internal app port.
- Keep the Flask app bound privately unless you have a deliberate reason to expose it directly.
- Treat the deployment as a shared team surface rather than a general public internet app.
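As one concrete example, a minimal nginx server block for this pattern might look like the following sketch; the server name and certificate paths are placeholders to replace with your own:

```nginx
# Minimal sketch — TLS terminates here, and only the private app port is forwarded
server {
    listen 443 ssl;
    server_name marlinspike.example.com;

    ssl_certificate     /etc/ssl/certs/marlinspike.pem;
    ssl_certificate_key /etc/ssl/private/marlinspike.key;

    location / {
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Caddy and Traefik achieve the same result with their own configuration formats; the invariant is that only the proxy is reachable from outside the host.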
## Upgrades and backups
For a normal code update, pull the latest changes and rebuild the containers:
```shell
git pull
docker compose up -d --build
```
Before major upgrades, back up both the database and the data volume:
```shell
docker compose exec db pg_dump -U marlinspike marlinspike > marlinspike.sql
```
Also archive the contents of the data volume or whatever mounted data directory your deployment uses.
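One common way to archive a named volume is to mount it read-only into a throwaway container and tar it out to the host. The volume name below matches the table in this guide, but verify it with `docker volume ls` first:

```shell
# Archive the marlinspike-data volume to a dated tarball in the current directory
docker run --rm \
  -v marlinspike-data:/data:ro \
  -v "$PWD:/backup" \
  alpine tar czf "/backup/marlinspike-data-$(date +%F).tar.gz" -C /data .
```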
## Remote deployment and live capture
The repository includes a generic `deploy.sh` script for remote deployment. The documented pattern is:
```shell
REMOTE=deploy@example-host ./deploy.sh
```
For staging, the project docs also call out `deploy-dev.sh`.
Live capture is available, but the install notes frame it carefully: it depends on `tshark` inside the container and should be enabled only on an authorized physical interface for controlled local use, not exposed broadly to the public internet.
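If you do enable live capture, containerized packet capture generally needs extra privileges and visibility of the host's interfaces. The override below is a hedged sketch, not project-documented configuration; the service name, the use of host networking, and the capability set are all assumptions to validate against your setup:

```yaml
# docker-compose.override.yml — hypothetical; review before use
services:
  app:
    network_mode: host   # see the host's physical interfaces directly
    cap_add:
      - NET_ADMIN        # capabilities packet capture tools typically need
      - NET_RAW
```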
## Need the bigger product model behind the install path?
The architecture page explains why the deployment looks this way: shared workbench, portable report artifacts, passive-only analysis, and a clean separation between packet handling and downstream review.