LaneAward Operations

Time On Tasks Operations Runbook

This is the quick-reference HTML runbook for the recurring processes we use to keep the Time On Tasks contributor app current in development and staging. It is intentionally separate from the chronological setup log so routine work is easy to find.

Environment Parity Rule

Keep development, AWS staging, and the public runtime aligned on Python version, SQLite version, schema level, refreshed database content, backend code, frontend code, and service configuration. If those drift, report or workflow behavior can look broken even when the application logic itself is correct.

Environments

  • Development
  • Staging
  • Public

Canonical Routes

  • Contributor App: / (Time On Tasks)
  • Console: /console/ (combined Admin + Reports frontend)
  • API: /api/ (shared LaneAward backend)
  • Legacy Route: /workforce/ (redirects to /)

Process 1: Update The SQLite Database From New ProfitMaker Files

Use this section when a new asidta_file_* folder arrives from production and you need to refresh the shared SQLite database with the latest customer numbers, customer names, order numbers, and order descriptions.

⚠ Critical Rule — Read Before Every Run

Never copy or rsync the local Mac Mini workforce.db to staging or production. The local database is a dev artifact. Staging is the live operational database and contains contributor work sessions, teams, task history, and team configurations that do not exist locally. Pushing the local file overwrites all of that data and it has caused data loss multiple times.

The correct process uses push_reference_to_staging.sh, which copies only the ProfitMaker DBF files to the VM and runs the import directly against the staging database on the VM. The local database is never involved.

How It Works

refresh_workforce_reference_snapshot.sh — promotes changed DBF/FPT/CDX files from the newest asidta_file_* folder into pm_database and updates the local db as a side effect (irrelevant — never pushed).

push_reference_to_staging.sh — copies only the required DBF files to the VM and runs import_profitmaker_reference.py directly against /var/lib/laneaward-staging/workforce.db on the VM. The local database file is never touched.

Step 1: Promote DBF Files Into pm_database (Mac local)

Run the refresh script locally. This promotes changed ProfitMaker files into the canonical pm_database folder. It also updates the local workforce.db as a side effect — that local file is irrelevant and is never used further. Run with FORCE_REFRESH=1 on weekends.

[Mac-local]
FORCE_REFRESH=1 /Users/donaldscott/Project-Code/laneaward/repo/scripts/refresh_workforce_reference_snapshot.sh
Step 2: Push ProfitMaker Reference Data Directly To Staging (Mac local → VM)

This script copies only the required DBF files to the VM and runs the importer against the live staging database on the VM. No local workforce.db is involved. Run with FORCE_REFRESH=1 on weekends.

[Mac-local]
FORCE_REFRESH=1 SSH_KEY=~/.ssh/lane_webserver.pem \
/Users/donaldscott/Project-Code/laneaward/repo/scripts/push_reference_to_staging.sh
Step 3: Verify The Updated Data

Confirm the API is healthy and that order search results reflect the latest ProfitMaker data.

[Mac-local]
curl -sS https://staging.timeontasks.laneaward.com/api/health
[Mac-local]
curl -sS "https://staging.timeontasks.laneaward.com/api/orders/search?q=107923&limit=3"

Current verified result from asidta_file_22 (2026-04-14): latest order date 2026-04-13 — 3,257 customers and 4,910 orders imported. Key sources that changed: CUSTOMER.DBF, COSTORD.DBF, ATXDET.DBF, and APKGDTL.DBF.
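Beyond the HTTP checks, row counts can be confirmed straight from the database file. The sketch below is a minimal illustration, not the real importer's verification step: the table names `customers` and `orders` are assumptions, so confirm the actual schema with `.tables` in the sqlite3 shell before relying on it.

```python
import os
import sqlite3
import tempfile

def reference_counts(db_path):
    # "customers" and "orders" are illustrative table names, not
    # confirmed schema; adjust to whatever `.tables` actually reports.
    con = sqlite3.connect(db_path)
    try:
        return {t: con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in ("customers", "orders")}
    finally:
        con.close()

# Demo against a throwaway database file, never the live one.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE customers (id INTEGER)")
con.execute("CREATE TABLE orders (id INTEGER)")
con.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (3,)])
con.commit()
con.close()

counts = reference_counts(path)
print(counts)  # {'customers': 0, 'orders': 3}
```

After a real import you would expect the counts to match the figures reported by the import run (for asidta_file_22: 3,257 customers and 4,910 orders).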

Process 2: Local Contributor-App Publish To AWS Staging

Use this after changing the contributor frontend in timeontasks. The staging host uses the contributor app at the root route.

Step 1: Upload The Refreshed Frontend Files

Copy the contributor app files from the Mac into the AWS staging upload area.

[Mac-local]
rsync -avh -e "ssh -i ~/.ssh/lane_webserver.pem" \
/Users/donaldscott/Project-Code/laneaward/repo/timeontasks/ \
ubuntu@3.130.69.109:~/laneaward-staging-upload/timeontasks/
Step 2: Publish The Contributor App Live On AWS Staging

Promote the uploaded files into the live AWS staging web root.

[VM]
sudo rsync -avh ~/laneaward-staging-upload/timeontasks/ /var/www/laneaward-staging/

Publish only to the canonical root app folder. The legacy /workforce/ path is now just a redirect and should not be maintained as a second live frontend.
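For reference, the legacy redirect can be as small as a single nginx location block. This is a hypothetical sketch only, not the actual staging config; the real server block and any trailing-path handling may differ.

```nginx
# Hypothetical sketch — compare against the real staging server block.
# Sends the legacy path (and anything under it) to the canonical root app.
location /workforce/ {
    return 301 /;
}
```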

Process 3: Staging Status Check

Use the VM status helper when you want one quick read on service health, HTTP routes, and database row counts.

vm_workforce_status.sh

[VM]
SERVICE_NAME=laneaward-workforce-api-staging.service \
DB_DIR=/var/lib/laneaward-staging \
DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
ROOT_URL=https://staging.timeontasks.laneaward.com/ \
CONSOLE_URL=https://staging.console.laneaward.com/ \
LEGACY_WORKFORCE_URL=https://staging.laneaward.com/workforce/ \
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh

Process 4: Activity Reset for Staging

This is the verified safe reset for staging activity only. It preserves users, roles, teams, customer references, sales orders, and ProfitMaker import metadata.

vm_workforce_reset_activity.sh

[VM]
sudo env \
SERVICE_NAME=laneaward-workforce-api-staging.service \
LIVE_DB=/var/lib/laneaward-staging/workforce.db \
DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
STATUS_SCRIPT=/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh \
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_reset_activity.sh --yes

This does not remove seeded users. User cleanup still needs either the admin tool or a dedicated user-maintenance process.
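Conceptually, the reset is a transactional delete of activity tables only. The sketch below is a minimal illustration of that shape, not the script's real contents; the table names `work_sessions` and `task_history` (and the preserved list) are hypothetical stand-ins for the actual schema.

```python
import os
import sqlite3
import tempfile

ACTIVITY_TABLES = ["work_sessions", "task_history"]  # hypothetical names
# users, teams, customers, sales orders, import metadata are never touched

def reset_activity(db_path):
    con = sqlite3.connect(db_path)
    try:
        con.execute("BEGIN IMMEDIATE")      # take the write lock up front
        for table in ACTIVITY_TABLES:
            con.execute(f"DELETE FROM {table}")
        con.commit()                        # all-or-nothing
    except Exception:
        con.rollback()
        raise
    finally:
        con.close()

# Demo on a throwaway database: activity rows go, user rows stay.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE work_sessions (id INTEGER)")
con.execute("CREATE TABLE task_history (id INTEGER)")
con.execute("CREATE TABLE users (id INTEGER)")
con.execute("INSERT INTO work_sessions VALUES (1)")
con.execute("INSERT INTO users VALUES (7)")
con.commit()
con.close()

reset_activity(path)
con = sqlite3.connect(path)
remaining_sessions = con.execute("SELECT COUNT(*) FROM work_sessions").fetchone()[0]
remaining_users = con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
con.close()
print(remaining_sessions, remaining_users)  # 0 1
```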

Process 5: PIN Report

Use this when you need to verify or hand out current staging tablet PINs.

vm_workforce_pin_report.sh

[VM]
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_pin_report.sh /var/lib/laneaward-staging/workforce.db

Process 6: Backend Restart Only

Use this only when backend code changed and you do not need to replace the SQLite database itself.

vm_workforce_cutover.sh

[VM]
sudo env \
SERVICE_NAME=laneaward-workforce-api-staging.service \
DB_DIR=/var/lib/laneaward-staging \
DIRECT_HEALTH_URL=http://127.0.0.1:9193/api/health \
NGINX_HEALTH_URL=https://staging.timeontasks.laneaward.com/api/health \
ROOT_URL=https://staging.timeontasks.laneaward.com/ \
CONSOLE_URL=https://staging.console.laneaward.com/ \
LEGACY_WORKFORCE_URL=https://staging.laneaward.com/workforce/ \
STATUS_SCRIPT=/opt/laneaward-staging/workforce_app/deploy/vm_workforce_status.sh \
/opt/laneaward-staging/workforce_app/deploy/vm_workforce_cutover.sh --restart-only

Process 7: Reliability Verification For Task Writes

Phase 1 durability hardening is now part of the backend and must be preserved whenever the API is updated. The Time On Tasks API opens SQLite in Write-Ahead Logging (WAL) mode, waits up to 10 seconds on short lock contention, uses synchronous = FULL for safer commits, and wraps each mutating route in a short BEGIN IMMEDIATE write transaction.

Connection Settings
  • PRAGMA journal_mode = WAL, which enables Write-Ahead Logging
  • PRAGMA busy_timeout = 10000, which gives SQLite up to 10,000 milliseconds to wait on a short lock
  • PRAGMA synchronous = FULL, which favors safer disk writes over speed
Protected Writes
  • add task
  • start, pause, complete, cancel
  • material usage logging
  • admin user maintenance routes
Contention Response
  • lock contention now returns HTTP 503 Service Unavailable
  • response includes retryable: true
  • treat this as a transient retry condition, not a data-loss event
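The settings and behaviors above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, assuming the standard `sqlite3` module; it is not the backend's actual code, and the table name `tasks` is invented for the demo.

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.mkdtemp(), "demo.db")  # stand-in path

def open_connection(db_path):
    # The three connection settings listed above.
    con = sqlite3.connect(db_path, timeout=10)
    con.execute("PRAGMA journal_mode = WAL")
    con.execute("PRAGMA busy_timeout = 10000")
    con.execute("PRAGMA synchronous = FULL")
    return con

def protected_write(con, sql, params=()):
    # Short BEGIN IMMEDIATE write transaction; on lock contention the
    # real API maps the error to HTTP 503 with retryable: true.
    try:
        con.execute("BEGIN IMMEDIATE")
        con.execute(sql, params)
        con.commit()
        return 200
    except sqlite3.OperationalError as exc:
        con.rollback()
        if "locked" in str(exc) or "busy" in str(exc):
            return 503  # transient retry condition, not data loss
        raise

con = open_connection(DB_PATH)
con.execute("CREATE TABLE tasks (name TEXT)")
con.commit()
status = protected_write(con, "INSERT INTO tasks VALUES (?)", ("demo",))
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(status, mode)  # 200 wal
con.close()
```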

Current Reliability Status

The current backend reliability work is best understood as safe but incomplete, not partially conflicting. The first five database-reliability features are fully implemented and should remain in place together:

  • Write-Ahead Logging (WAL)
  • busy_timeout = 10000
  • synchronous = FULL
  • short BEGIN IMMEDIATE write transactions
  • retryable HTTP 503 responses for SQLite busy/locked contention

In practical terms, these five changes make the shared SQLite backend safer under short write collisions, safer during commit, and clearer when contention happens. They do not depend on the unfinished Phase 2 client work in order to remain valid.

Phase 2 is still open for client-side resilience:

  • client retry and backoff
  • temporary local or offline storage
  • idempotent write keys

The current risk is not incompatibility between Phase 1 and Phase 2. It is simply that weak network conditions can still force a manual retry or leave a stale-looking user interface, because the client-side resilience layer is not finished yet.
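The Phase 2 client behavior sketched above (retry with backoff plus an idempotent write key) could look roughly like this. This is a design sketch under assumptions, not the planned implementation; `do_request` is a stand-in for the real HTTP call, and the key handling on the server side does not exist yet.

```python
import itertools
import time
import uuid

def send_with_retry(do_request, max_attempts=4, base_delay=0.01):
    # One idempotency key per logical write, reused across retries so a
    # repeated request cannot double-apply once the server honors keys.
    key = str(uuid.uuid4())
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        status = do_request(key)
        if status != 503:          # 503 is the retryable contention signal
            return status, attempt
        time.sleep(delay)          # back off before the next attempt
        delay *= 2                 # exponential backoff
    return 503, max_attempts

# Simulated server: busy twice, then accepts the write.
responses = itertools.chain([503, 503], itertools.repeat(200))
status, attempts = send_with_retry(lambda key: next(responses))
print(status, attempts)  # 200 3
```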

Process 8: Backup and Recovery

All three LaneAward environments are protected by the 🚀 PWA BDR service — a shared macOS menu bar LaunchAgent on the development Mac. Nothing needs to be installed on the VM. Look for the 🚀 icon in the top menu bar to open the separate Lensboard and LaneAward dropdowns, trigger manual runs, adjust schedules, or mark jobs as Include or Skip.

Active Backup Jobs
  • VM·STAGING — Database: /var/lib/laneaward-staging/workforce.db on AWS (every 12 h)
  • VM·PROD — Database: /var/lib/laneaward/workforce.db on AWS (every 12 h)
  • LOCAL — Database: workforce_app/data/workforce.db on Mac (every 12 h)
  • LOCAL — Source Code: full project on Mac (archives excluded), hardlink snapshots (weekly)
Backup Locations
  • Staging DB → ~/projectbackups/laneaward/staging-database/
  • Production DB → ~/projectbackups/laneaward/production-database/
  • Local DB → ~/projectbackups/laneaward/local-database/
  • Source snapshots → ~/projectbackups/laneaward/source/
  • Unified log → ~/projectbackups/backup_logs/backup.log

Each VM backup is a safe online copy: the service SSHes into the VM, runs sqlite3 .backup (no downtime, no locking), copies the result to the Mac with scp, and removes the temp file. Every copy is independently restorable.
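The online-copy guarantee of sqlite3 .backup can be demonstrated from Python, whose standard library exposes the same backup API. A minimal sketch using throwaway paths (the file names here are stand-ins, not the real backup naming scheme):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
live_path = os.path.join(tmp, "workforce.db")  # stand-in for the live DB
snap_path = os.path.join(tmp, "snapshot.db")

# Populate a stand-in "live" database.
live = sqlite3.connect(live_path)
live.execute("CREATE TABLE sessions (id INTEGER)")
live.executemany("INSERT INTO sessions VALUES (?)", [(i,) for i in range(5)])
live.commit()

# Online copy: pages are copied while the source stays usable, the same
# guarantee the sqlite3 CLI `.backup` command gives.
snap = sqlite3.connect(snap_path)
live.backup(snap)
live.close()

rows = snap.execute("SELECT COUNT(*) FROM sessions").fetchone()[0]
snap.close()
print(rows)  # 5
```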

Restoring Staging

Pick a snapshot and push it to the VM, then restart the service:

[Mac-local]
scp ~/projectbackups/laneaward/staging-database/workforce_vm·staging_TIMESTAMP.db \
    laneaward-vm:/tmp/restore.db
[VM]
sudo cp /tmp/restore.db /var/lib/laneaward-staging/workforce.db
sudo chown www-data:www-data /var/lib/laneaward-staging/workforce.db
sudo systemctl restart laneaward-workforce-api-staging
rm /tmp/restore.db
Restoring Production
[Mac-local]
scp ~/projectbackups/laneaward/production-database/workforce_vm·prod_TIMESTAMP.db \
    laneaward-vm:/tmp/restore.db
[VM]
sudo cp /tmp/restore.db /var/lib/laneaward/workforce.db
sudo chown www-data:www-data /var/lib/laneaward/workforce.db
sudo systemctl restart laneaward-workforce-api
rm /tmp/restore.db
Manual VM-Side Snapshot (one-off, before a risky change)
[VM — Staging]
sudo cp /var/lib/laneaward-staging/workforce.db \
        /var/lib/laneaward-staging/workforce-$(date +%F-%H%M%S).db
[VM — Production]
sudo cp /var/lib/laneaward/workforce.db \
        /var/lib/laneaward/workforce-$(date +%F-%H%M%S).db

Known Gaps To Close Next

Source References