This is the quick-reference HTML runbook for the recurring processes we use to keep the Time On Tasks contributor app current in development and staging. It is intentionally separate from the chronological setup log so routine work is easy to find.
Keep development, AWS staging, and the public runtime aligned on Python version, SQLite version, schema level, refreshed database content, backend code, frontend code, and service configuration. If those drift, report or workflow behavior can look broken even when the application logic itself is correct.
Key paths: /var/www/laneaward-staging (contributor app web root), /var/www/laneaward-staging-console (console web root), /var/lib/laneaward-staging/workforce.db (staging database).

| App | Route |
|---|---|
| Contributor App | / = Time On Tasks |
| Console | /console/ = combined Admin + Reports frontend |
| API | /api/ = shared LaneAward backend |
| Legacy Route | /workforce/ redirects to / |
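For orientation, the route table above corresponds roughly to an nginx layout like the following. This is a hedged sketch, not the VM's actual config: the upstream port 9193 is taken from the health-check commands later in this runbook, and the console is actually served from its own staging.console.laneaward.com vhost.

```nginx
# Sketch only -- the real nginx config on the VM is authoritative.
server {
    server_name staging.timeontasks.laneaward.com;

    location /api/ {
        proxy_pass http://127.0.0.1:9193;    # shared LaneAward backend
    }

    location /workforce/ {
        return 301 /;                        # legacy route, redirect only
    }

    location / {
        root /var/www/laneaward-staging;     # contributor app (Time On Tasks)
        try_files $uri $uri/ /index.html;
    }
}
```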
Use this section when a new asidta_file_* folder arrives from production and you need to refresh
the shared SQLite database with the latest customer numbers, customer names, order numbers, and order descriptions.
Never copy or rsync the local Mac Mini workforce.db to staging or production.
The local database is a dev artifact. Staging is the live operational database and contains contributor
work sessions, teams, task history, and team configurations that do not exist locally.
Pushing the local file overwrites all of that data, and doing so has caused data loss multiple times.
The correct process uses push_reference_to_staging.sh, which copies only the ProfitMaker
DBF files to the VM and runs the import directly against the staging database on the VM.
The local database is never involved.
refresh_workforce_reference_snapshot.sh — promotes changed DBF/FPT/CDX files from the newest asidta_file_* folder into pm_database and updates the local db as a side effect (irrelevant — never pushed).
push_reference_to_staging.sh — copies only the required DBF files to the VM and runs import_profitmaker_reference.py directly against /var/lib/laneaward-staging/workforce.db on the VM. The local database file is never touched.
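A rough sketch of the flow push_reference_to_staging.sh implements. This is illustrative, not the script itself: the remote temp path, the DRY_RUN wrapper, and the exact importer flags are assumptions; the real script is the source of truth. The key property is that only ProfitMaker DBF files ever cross the wire.

```shell
# Hypothetical sketch of the push flow. Set DRY_RUN=1 to print the
# steps instead of executing them.
push_reference() {
  vm="$1"                      # e.g. ubuntu@3.130.69.109
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
  # 1. Copy only the changed ProfitMaker DBF files -- never workforce.db.
  run scp pm_database/CUSTOMER.DBF pm_database/COSTORD.DBF "$vm:/tmp/pm/"
  # 2. Run the importer on the VM, directly against the staging database.
  run ssh "$vm" sudo python3 import_profitmaker_reference.py \
      --db /var/lib/laneaward-staging/workforce.db --src /tmp/pm
}
```

The safety property worth keeping in mind: the local workforce.db never appears in any copy step, only as the importer's target path on the VM.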
Run the refresh script locally. This promotes changed ProfitMaker files into the canonical
pm_database folder. It also updates the local workforce.db as a
side effect — that local file is irrelevant and is never used further.
Run with FORCE_REFRESH=1 on weekends.
FORCE_REFRESH=1 /Users/donaldscott/Project-Code/laneaward/repo/scripts/refresh_workforce_reference_snapshot.sh
This script copies only the required DBF files to the VM and runs the importer against
the live staging database on the VM. No local workforce.db is involved.
Run with FORCE_REFRESH=1 on weekends.
FORCE_REFRESH=1 SSH_KEY=~/.ssh/lane_webserver.pem \
  /Users/donaldscott/Project-Code/laneaward/repo/scripts/push_reference_to_staging.sh
Confirm the API is healthy and that order search results reflect the latest ProfitMaker data.
curl -sS https://staging.timeontasks.laneaward.com/api/health
curl -sS "https://staging.timeontasks.laneaward.com/api/orders/search?q=107923&limit=3"
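When scripting these checks, a small helper can turn the curl output into a pass/fail signal. This is a sketch: the exact fields in the /api/health payload are an assumption, so adjust the pattern to the real response.

```shell
# Hypothetical helper: returns 0 when the health JSON reports "ok".
health_ok() {
  case "$1" in
    *'"status"'*'"ok"'*) return 0 ;;
    *) return 1 ;;
  esac
}
# usage:
# health_ok "$(curl -sS https://staging.timeontasks.laneaward.com/api/health)" \
#   || echo "staging API unhealthy"
```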
Current verified result from asidta_file_22 (2026-04-14):
latest order date 2026-04-13 — 3,257 customers and 4,910 orders imported.
Key sources that changed: CUSTOMER.DBF, COSTORD.DBF,
ATXDET.DBF, and APKGDTL.DBF.
Use this after changing the contributor frontend in timeontasks. The staging host uses the contributor app at the root route.
Copy the contributor app files from the Mac into the AWS staging upload area.
rsync -avh -e "ssh -i ~/.ssh/lane_webserver.pem" \
  /Users/donaldscott/Project-Code/laneaward/repo/timeontasks/ \
  ubuntu@3.130.69.109:~/laneaward-staging-upload/timeontasks/
Promote the uploaded files into the live AWS staging web root.
sudo rsync -avh ~/laneaward-staging-upload/timeontasks/ /var/www/laneaward-staging/
Publish only to the canonical root app folder. The legacy /workforce/ path
is now just a redirect and should not be maintained as a second live frontend.
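To make the canonical-path rule hard to violate in wrapper scripts, a guard like this can refuse any publish target other than the root app folder. The helper name is hypothetical and not part of the existing deploy tooling.

```shell
# Hypothetical guard: only the canonical contributor web root is a
# valid publish target; /workforce/ and anything else is rejected.
publish_path_ok() {
  case "$1" in
    /var/www/laneaward-staging|/var/www/laneaward-staging/) return 0 ;;
    *) return 1 ;;
  esac
}
# usage: publish_path_ok "$DEST" || { echo "refusing to publish to $DEST"; exit 1; }
```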
Use the VM status helper when you want one quick read on service health, HTTP routes, and database row counts.
SERVICE_NAME=laneaward-workforce-api-staging.service \
DB_DIR=/var/lib/laneaward-staging \
DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
ROOT_URL=https://staging.timeontasks.laneaward.com/ \
CONSOLE_URL=https://staging.console.laneaward.com/ \
LEGACY_WORKFORCE_URL=https://staging.laneaward.com/workforce/ \
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh
This is the verified safe reset for staging activity only. It preserves users, roles, teams, customer references, sales orders, and ProfitMaker import metadata.
vm_workforce_reset_activity.sh
sudo env \
  SERVICE_NAME=laneaward-workforce-api-staging.service \
  LIVE_DB=/var/lib/laneaward-staging/workforce.db \
  DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
  NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
  STATUS_SCRIPT=/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh \
  /opt/laneaward-staging/workforce_app/deploy/vm_workforce_reset_activity.sh --yes
The reset:
- Deletes rows from order_task, work_session, and material_usage, and also clears work_session_correction_audit through the work_session delete cascade.
- Preserves app_user and current PINs.
- Takes a backup first unless --no-backup is used.
- Does not remove seeded users. User cleanup still needs either the admin tool or a dedicated user-maintenance process.
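The work_session delete cascade mentioned above is ordinary SQLite foreign-key behavior. The toy schema below illustrates the mechanism only; the table and column names here are simplified stand-ins, not the real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite cascades are opt-in per connection
conn.executescript("""
CREATE TABLE work_session (id INTEGER PRIMARY KEY);
CREATE TABLE work_session_correction_audit (
    id INTEGER PRIMARY KEY,
    work_session_id INTEGER
        REFERENCES work_session(id) ON DELETE CASCADE
);
INSERT INTO work_session VALUES (1);
INSERT INTO work_session_correction_audit VALUES (10, 1);
""")
conn.execute("DELETE FROM work_session")   # the reset's activity wipe
remaining = conn.execute(
    "SELECT COUNT(*) FROM work_session_correction_audit").fetchone()[0]
print(remaining)  # 0 -- audit rows follow their sessions
```

This is why the reset script never needs to delete from work_session_correction_audit explicitly.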
Use this when you need to verify or hand out current staging tablet PINs.
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_pin_report.sh /var/lib/laneaward-staging/workforce.db
Use this only when backend code changed and you do not need to replace the SQLite database itself.
sudo env \
  SERVICE_NAME=laneaward-workforce-api-staging.service \
  DB_DIR=/var/lib/laneaward-staging \
  DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
  NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
  ROOT_URL=https://staging.timeontasks.laneaward.com/ \
  CONSOLE_URL=https://staging.console.laneaward.com/ \
  LEGACY_WORKFORCE_URL=https://staging.laneaward.com/workforce/ \
  STATUS_SCRIPT=/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh \
  /opt/laneaward-staging/workforce_app/deploy/vm_workforce_cutover.sh --restart-only
Phase 1 durability hardening is now part of the backend and should be preserved whenever the API is updated. The Time On Tasks API now opens SQLite in Write-Ahead Logging (WAL) mode, waits up to 10 seconds for short lock contention, uses synchronous = FULL for safer commits, and wraps each mutating route in a short BEGIN IMMEDIATE write transaction.
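Those settings can be sketched as follows. This is illustrative Python, not the backend's actual code: the function names are hypothetical, but the PRAGMA values mirror this runbook (WAL, 10 s busy timeout, FULL synchronous, BEGIN IMMEDIATE for writes).

```python
import sqlite3

def open_db(path):
    """Open the workforce DB with the Phase 1 durability settings (sketch)."""
    # isolation_level=None -> autocommit; transactions are managed explicitly.
    conn = sqlite3.connect(path, timeout=10, isolation_level=None)
    conn.execute("PRAGMA journal_mode = WAL")    # readers no longer block the writer
    conn.execute("PRAGMA busy_timeout = 10000")  # wait up to 10,000 ms on a short lock
    conn.execute("PRAGMA synchronous = FULL")    # fsync on commit, favoring safety
    return conn

def mutate(conn, sql, params=()):
    """Wrap one mutating statement in a short BEGIN IMMEDIATE transaction.
    On a busy/locked error the API layer maps the exception to HTTP 503."""
    conn.execute("BEGIN IMMEDIATE")              # claim the write lock up front
    try:
        conn.execute(sql, params)
        conn.execute("COMMIT")
    except sqlite3.OperationalError:
        if conn.in_transaction:
            conn.execute("ROLLBACK")
        raise
```

BEGIN IMMEDIATE takes the write lock at the start of the transaction instead of at first write, so contention surfaces immediately as a busy error rather than mid-transaction.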
- PRAGMA journal_mode = WAL, which enables Write-Ahead Logging
- PRAGMA busy_timeout = 10000, which gives SQLite up to 10,000 milliseconds to wait on a short lock
- PRAGMA synchronous = FULL, which favors safer disk writes over speed
- Busy/locked contention is surfaced to clients as HTTP 503 Service Unavailable with retryable: true

Iteration test checklist:
The current backend reliability work is best understood as safe but incomplete, not partially conflicting. The first five database-reliability features are fully implemented and should remain in place together:
- Write-Ahead Logging (WAL)
- busy_timeout = 10000
- synchronous = FULL
- BEGIN IMMEDIATE write transactions
- HTTP 503 responses for SQLite busy/locked contention

In practical terms, these five changes make the shared SQLite backend safer under short write collisions, safer during commit, and clearer when contention happens. They do not depend on the unfinished Phase 2 client work in order to remain valid.
Phase 2 is still open for client-side resilience:
The current risk is not incompatibility between Phase 1 and Phase 2. The current risk is simply that weak network conditions can still produce a manual retry moment or a stale-looking user interface because the client-side resilience layer is not finished yet.
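The intended Phase 2 behavior can still be sketched against the contract the backend already exposes (503 plus retryable: true). The real client is the browser frontend; the Python below is illustrative only, and the function names and backoff schedule are assumptions.

```python
import time

def call_with_retry(request, attempts=3, base_delay=0.05):
    """Sketch of the unfinished Phase 2 client resilience: retry a
    request when the backend answers 503 with retryable: true.
    `request` is any callable returning (status_code, json_body)."""
    status, body = request()
    for i in range(attempts - 1):
        if status != 503 or not body.get("retryable"):
            break                                # success or non-retryable error
        time.sleep(base_delay * (2 ** i))        # exponential backoff
        status, body = request()
    return status, body
```

Retrying automatically is what removes the "manual retry moment" described above; the stale-UI half of the problem would need a separate refresh-on-reconnect pass.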
All three LaneAward environments are protected by the 🚀 PWA BDR service — a shared macOS menu bar LaunchAgent on the development Mac. Nothing needs to be installed on the VM. Look for the 🚀 icon in the top menu bar to open the separate Lensboard and LaneAward dropdowns, trigger manual runs, adjust schedules, or mark jobs as Include or Skip.
| Job | What it protects | Default schedule |
|---|---|---|
| VM·STAGING — Database | /var/lib/laneaward-staging/workforce.db on AWS | Every 12 h |
| VM·PROD — Database | /var/lib/laneaward/workforce.db on AWS | Every 12 h |
| LOCAL — Database | workforce_app/data/workforce.db on Mac | Every 12 h |
| LOCAL — Source Code | Full project on Mac (archives excluded), hardlink snapshots | Weekly |
- ~/projectbackups/laneaward/staging-database/
- ~/projectbackups/laneaward/production-database/
- ~/projectbackups/laneaward/local-database/
- ~/projectbackups/laneaward/source/
- ~/projectbackups/backup_logs/backup.log
Each VM backup is a safe online copy: the service SSHes into the VM, runs sqlite3 .backup (no downtime, no locking), then copies the result to the Mac with scp and cleans up the temp file.
Every copy is independently restorable.
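The online-copy primitive behind these jobs is SQLite's backup API, which the CLI exposes as .backup. A minimal Python demonstration of the same mechanism, using in-memory databases for brevity (the real jobs target file paths):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

dst = sqlite3.connect(":memory:")   # in a real job this is a file path
with dst:
    src.backup(dst)                 # page-by-page online copy; src stays live

copied = dst.execute("SELECT x FROM t").fetchone()[0]
print(copied)  # 42
```

Because the copy is taken page by page under SQLite's own locking, readers and writers on the source keep working, which is why the runbook can promise "no downtime, no locking."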
Pick a snapshot and push it to the VM, then restart the service:
scp ~/projectbackups/laneaward/staging-database/workforce_vm·staging_TIMESTAMP.db \
laneaward-vm:/tmp/restore.db
sudo cp /tmp/restore.db /var/lib/laneaward-staging/workforce.db
sudo chown www-data:www-data /var/lib/laneaward-staging/workforce.db
sudo systemctl restart laneaward-workforce-api-staging
rm /tmp/restore.db
scp ~/projectbackups/laneaward/production-database/workforce_vm·prod_TIMESTAMP.db \
laneaward-vm:/tmp/restore.db
sudo cp /tmp/restore.db /var/lib/laneaward/workforce.db
sudo chown www-data:www-data /var/lib/laneaward/workforce.db
sudo systemctl restart laneaward-workforce-api
rm /tmp/restore.db
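Before a restored snapshot overwrites the live file, a quick integrity check on the copy is cheap insurance. On the VM this is just sqlite3 FILE "PRAGMA integrity_check"; the helper below is a hypothetical Python equivalent, not part of the deploy tooling.

```python
import sqlite3

def snapshot_ok(path):
    """Return True when SQLite reports the snapshot file as intact."""
    conn = sqlite3.connect(path)
    try:
        return conn.execute("PRAGMA integrity_check").fetchone() == ("ok",)
    finally:
        conn.close()
```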
sudo cp /var/lib/laneaward-staging/workforce.db \
/var/lib/laneaward-staging/workforce-$(date +%F-%H%M%S).db
sudo cp /var/lib/laneaward/workforce.db \
/var/lib/laneaward/workforce-$(date +%F-%H%M%S).db