# Backend Container Deployment
The backend still builds as one image with two runtime processes:

- `api`: Fastify HTTP server
- `worker`: forecast sync and route compilation background worker
Local development continues to use Docker Compose from this repo. Hosted staging and production now deploy through repo-owned Ophelia source manifests, GHCR images, and the shared Ophelia Postgres addon on the VPS.
## Local
- Copy or fill in `/Users/kyle/Developer/projects/multiplatform/aspectavy/aspectavy-platform/backend/.env.local`
- Start the stack:
./bin/setup-local-env
docker compose -f ops/docker/docker-compose.yml up -d --build
Useful checks:
docker compose -f ops/docker/docker-compose.yml ps
docker compose -f ops/docker/docker-compose.yml logs -f api
docker compose -f ops/docker/docker-compose.yml logs -f worker
curl http://127.0.0.1:3001/health
curl http://127.0.0.1:3001/livez
curl http://127.0.0.1:3001/readyz
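Right after `up -d --build`, the containers need a moment to warm up, so the readiness check above can be wrapped in a small polling helper. This is a convenience sketch, not part of the repo scripts; the default URL assumes the local `api` service on 127.0.0.1:3001.

```shell
# Poll an endpoint until it answers 2xx or the attempt budget runs out.
# Defaults assume the local compose stack's api service.
wait_for_ready() {
  url="${1:-http://127.0.0.1:3001/readyz}"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "ready: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}
```

For example: `wait_for_ready && docker compose -f ops/docker/docker-compose.yml logs -f api`.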
Backup local bundled Postgres:
./ops/scripts/backup-postgres.sh local
## Hosted Staging And Production
The canonical hosted deploy sources now live under:
- ops/deploy/ophelia/aspectavy-staging.ophelia.yml
- ops/deploy/ophelia/aspectavy-production.ophelia.yml
- ops/deploy/ophelia/README.md
Those manifests define the live bagels.top rehearsal layout:
- staging:
  - staging-app.bagels.top
  - staging-api.bagels.top
  - staging-docs.bagels.top
  - staging-admin.bagels.top
- production:
  - app.bagels.top
  - api.bagels.top
  - docs.bagels.top
  - admin.bagels.top
Each Ophelia app deploys:
- one GHCR image: `ghcr.io/mrbagels/aspectavy-backend`
- one `api` service
- one `worker` service
- one shared Postgres database provisioned through the Ophelia addon system
dev.bagels.top remains platform-owned static hosting in the Ophelia repo.
## One-Time VPS Bootstrap
Do this once per environment before the first CI deploy:
- Copy the checked-in env example into the runtime app env path:
  - ops/deploy/ophelia/aspectavy-staging.env.example -> ~/ophelia-runtime/apps/aspectavy-staging/env
  - ops/deploy/ophelia/aspectavy-production.env.example -> ~/ophelia-runtime/apps/aspectavy-production/env
- Fill in the secrets and provider credentials.
- Leave the file in place. CI deploys fail fast if it is missing.
The shared Postgres addon rewrites DATABASE_URL during the first successful deploy, so the placeholder value in the example file is expected.
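A quick way to see whether that rewrite has happened yet is to grep the runtime env file. This is an illustrative sketch: the `placeholder` marker string and the helper name are assumptions, not part of the Ophelia contract.

```shell
# Illustrative check: has the Ophelia Postgres addon rewritten
# DATABASE_URL yet? "placeholder" is an assumed marker string; adjust
# it to match whatever the checked-in env example actually contains.
db_url_status() {
  env_file="$1"
  if grep -q '^DATABASE_URL=.*placeholder' "$env_file"; then
    echo "placeholder (expected before the first successful deploy)"
  else
    echo "rewritten"
  fi
}
```

For example: `db_url_status ~/ophelia-runtime/apps/aspectavy-staging/env`.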
## CI Deployment Flow
Staging workflow:
- branch: `next`
- workflow: `deploy-staging.yml`
- required repository secrets:
  - DEPLOY_STAGING_SSH_HOST
  - DEPLOY_STAGING_SSH_PORT
  - DEPLOY_STAGING_SSH_USER
  - DEPLOY_STAGING_SSH_PRIVATE_KEY

Production workflow:
- branch: `master`
- workflow: `deploy-production.yml`
- required repository secrets:
  - DEPLOY_PRODUCTION_SSH_HOST
  - DEPLOY_PRODUCTION_SSH_PORT
  - DEPLOY_PRODUCTION_SSH_USER
  - DEPLOY_PRODUCTION_SSH_PRIVATE_KEY
Both workflows now:
- run backend `npm run typecheck` and `npm test`
- build and push the backend image to GHCR
- sync the source manifest into `~/ophelia-runtime/apps/<app>/source-manifest.yml`
- run `~/ophelia/cli/ship deploy ... --apply` on the VPS
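Those steps can be sketched as a dry-run script. Everything here is illustrative: the `vps` SSH alias, the default `APP`, and the elided `ship deploy` arguments are assumptions, and `DRY_RUN=1` (the default) only prints each command instead of running it.

```shell
APP="${APP:-aspectavy-staging}"
IMAGE="${IMAGE:-ghcr.io/mrbagels/aspectavy-backend}"

# With DRY_RUN=1 (the default), print each command instead of running it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run npm run typecheck
run npm test
run docker build -t "$IMAGE" backend
run docker push "$IMAGE"
run scp "ops/deploy/ophelia/${APP}.ophelia.yml" \
    "vps:ophelia-runtime/apps/${APP}/source-manifest.yml"
run ssh vps "~/ophelia/cli/ship deploy ... --apply"
```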
Branch policy:
- `next` is the default integration branch for upcoming work and staging deploys
- `master` is the stable promotion branch and production deploy source
- short-lived work branches should branch from `next` and merge back into `next`
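The branch policy, sketched end to end in a throwaway repository (branch names, commit messages, and identity values here are illustrative):

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email ci@example.invalid
git config user.name ci
git checkout -q -b master
git commit -q --allow-empty -m "stable baseline"          # master: promotion branch
git branch next                                           # next: integration branch
git checkout -q -b feature/forecast-fix next              # short-lived branch from next
git commit -q --allow-empty -m "work"
git checkout -q next
git merge -q --no-ff -m "merge feature/forecast-fix" feature/forecast-fix  # back into next
git log --oneline -n 2
```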
## Runtime Validation
Useful hosted checks after a deploy:
curl -I https://staging-app.bagels.top/login
curl -I https://staging-api.bagels.top/readyz
curl -I https://staging-docs.bagels.top/
curl -I https://staging-admin.bagels.top/
curl -I https://app.bagels.top/login
curl -I https://api.bagels.top/readyz
curl -I https://docs.bagels.top/
curl -I https://admin.bagels.top/
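The eight checks above can be driven from one loop. The per-host path rule (app hosts get `/login`, api hosts get `/readyz`, everything else `/`) mirrors the individual commands; the loop itself is gated behind an assumed `RUN_HOSTED_CHECKS=1` flag so it only touches the network when asked.

```shell
# Map a host to the path its smoke check should hit.
path_for_host() {
  case "$1" in
    app.*|*-app.*) echo /login ;;
    api.*|*-api.*) echo /readyz ;;
    *)             echo / ;;
  esac
}

hosts="staging-app.bagels.top staging-api.bagels.top staging-docs.bagels.top staging-admin.bagels.top
app.bagels.top api.bagels.top docs.bagels.top admin.bagels.top"

# Opt in explicitly; each probe is capped at 10 seconds.
if [ "${RUN_HOSTED_CHECKS:-0}" = "1" ]; then
  for host in $hosts; do
    if curl -fsSI --max-time 10 "https://$host$(path_for_host "$host")" >/dev/null; then
      echo "ok   $host"
    else
      echo "FAIL $host"
    fi
  done
fi
```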
Useful VPS-local checks:
ssh -p 22022 kyle@209.74.71.165 'docker compose -f ~/ophelia-runtime/apps/aspectavy-staging/compose.yml ps'
ssh -p 22022 kyle@209.74.71.165 'docker compose -f ~/ophelia-runtime/apps/aspectavy-production/compose.yml ps'
ssh -p 22022 kyle@209.74.71.165 'curl -f http://127.0.0.1:3401/readyz'
ssh -p 22022 kyle@209.74.71.165 'curl -f http://127.0.0.1:3501/readyz'
## Legacy Compose Files
These files remain in the repo for local development, backup/restore helpers, and temporary rollback use only:
- ops/docker/docker-compose.production.yml
- ops/docker/docker-compose.production.edge.yml
- ops/docker/docker-compose.production.bundled-db.yml
- ops/docker/docker-compose.staging.edge.yml
- ops/docker/docker-compose.staging.bundled-db.yml
- ops/scripts/deploy-stack.sh
They are no longer the primary hosted deployment path.
Backup and restore helpers remain available:
./ops/scripts/backup-postgres.sh production-bundled
DATABASE_URL=postgres://... ./ops/scripts/backup-postgres.sh production-external
FORCE=1 ./ops/scripts/restore-postgres.sh production-bundled ./backups/production-bundled-YYYYMMDD-HHMMSS.dump
FORCE=1 DATABASE_URL=postgres://... ./ops/scripts/restore-postgres.sh production-external ./backups/production-external-YYYYMMDD-HHMMSS.dump
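The dump filenames follow a `<target>-YYYYMMDD-HHMMSS.dump` pattern, so the newest backup for a target sorts last lexicographically. A small helper (a convenience sketch, not part of the repo scripts) can pick it for the restore commands above:

```shell
# Print the newest ./backups/<target>-YYYYMMDD-HHMMSS.dump, if any.
# Timestamped names of this shape sort correctly as plain strings.
latest_dump() {
  target="$1"
  ls "./backups/${target}"-*.dump 2>/dev/null | sort | tail -n 1
}
```

For example: `FORCE=1 ./ops/scripts/restore-postgres.sh production-bundled "$(latest_dump production-bundled)"`.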
## Notes
- The backend image is still built from `backend/Dockerfile`.
- `api` uses `/readyz` for hosted health checks.
- `worker` uses the persisted heartbeat check from `worker-healthcheck.ts`.
- `SESSION_COOKIE_DOMAIN=bagels.top` is required for the split `app`/`api`/`docs`/`admin` rehearsal hosts to share browser session state.
- `API_BASE_URL`, `DOCS_PORTAL_BASE_URL`, and `OPERATOR_PORTAL_BASE_URL` are now manifest-owned in the Ophelia source manifests.
- The canonical future mirror remains:
  - app.aspectavy.com
  - api.aspectavy.com
  - docs.aspectavy.com
  - admin.aspectavy.com
- `APPLE_APP_SITE_ASSOCIATION_APP_IDS` must keep the current team prefix `YDZN22WL89`.