Covers downloading from S3, decryption, data inspection,
restoring entities via the BC API with correct dependency order,
point-in-time restore with incrementals, GL entry restoration
via journal posting, and a full entity reference table.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The broad "locked" pattern was matching non-lock errors, causing
false retries. It now matches only the specific BC lock messages, and
the actual error text is surfaced in WARN log lines for diagnostics.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The config file sets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as
plain shell variables, but the AWS CLI needs them as environment variables.
Added explicit export statements in both bc-backup.sh and bc-cleanup.sh.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add incremental backups that use the BC API's lastModifiedDateTime
filter to export only records changed since the last successful run.
Runs every 15 minutes via cron, with a daily full backup for complete
snapshots.
bc-export.ps1:
- Add -SinceDateTime parameter for incremental filtering
- Append $filter=lastModifiedDateTime gt {timestamp} to all entity URLs
- Exit code 2 when no records changed (skip archive/upload)
- Record mode and sinceDateTime in export-metadata.json
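The incremental filtering above can be sketched in shell (the real logic
lives in bc-export.ps1; the helper name, base URL, and ID are illustrative):

```shell
# Append an OData lastModifiedDateTime filter when a "since" timestamp is
# known; otherwise return the plain entity URL (full export).
# A real request would percent-encode the spaces in the filter expression.
build_url() {
  base="$1"; since="$2"
  if [ -n "$since" ]; then
    printf '%s?$filter=lastModifiedDateTime gt %s\n' "$base" "$since"
  else
    printf '%s\n' "$base"
  fi
}

build_url "https://api.example/api/v2.0/companies({id})/customers" \
  "2024-01-01T00:00:00Z"
```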
bc-backup.sh:
- Accept --mode full|incremental flag (default: incremental)
- State file (last-run-state.json) tracks last successful run timestamp
- Auto-fallback to full when no state file exists
- Skip archive/encrypt/upload when incremental finds 0 changes
- Lock file (.backup.lock) prevents overlapping cron runs
- S3 keys organized by mode: backups/full/ vs backups/incremental/
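The full-fallback decision could look like this minimal sketch (function
name is hypothetical and the state-file check is simplified; the real
script presumably parses last-run-state.json as JSON):

```shell
# Decide the effective backup mode: an incremental run without a state
# file has no "since" timestamp to filter on, so fall back to full.
pick_mode() {
  requested="$1"; state_file="$2"
  if [ "$requested" = "incremental" ] && [ ! -f "$state_file" ]; then
    echo full
  else
    echo "$requested"
  fi
}

pick_mode incremental last-run-state.json
```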
bc-cleanup.sh (new):
- Lists all S3 objects under backups/ prefix
- Deletes objects older than RETENTION_DAYS (default 30)
- Handles pagination for large buckets
- Gracefully handles COMPLIANCE-locked objects
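The retention check might be sketched as follows, assuming GNU date; the
function name is illustrative and the actual `aws s3api` listing call is
omitted:

```shell
# Compare an object's LastModified timestamp (as reported by S3 listings)
# against a cutoff computed from RETENTION_DAYS.
RETENTION_DAYS="${RETENTION_DAYS:-30}"

is_expired() {
  last_modified="$1"                                # e.g. 2024-01-01T12:00:00Z
  cutoff=$(date -d "$RETENTION_DAYS days ago" +%s)  # epoch seconds
  obj=$(date -d "$last_modified" +%s)
  [ "$obj" -lt "$cutoff" ]
}

if is_expired "2000-01-01T00:00:00Z"; then echo delete; else echo keep; fi
```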
bc-backup.conf.template:
- Add BACKUP_MODE_DEFAULT option
cron-examples.txt:
- Recommended setup: 15-min incremental + daily full + daily cleanup
- Alternative schedules (30-min, hourly)
- Systemd timer examples
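A crontab for the recommended setup might look like this (paths and exact
times are illustrative, not taken from cron-examples.txt):

```
*/15 * * * *  /opt/bc-backup/bc-backup.sh --mode incremental
15   2 * * *  /opt/bc-backup/bc-backup.sh --mode full
45   3 * * *  /opt/bc-backup/bc-cleanup.sh
```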
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BC returns 500 with "being updated in a transaction done by another
session" when tables are locked by other users - this is normal during
business hours. Previously the script gave up after 3 quick retries.
Changes:
- Invoke-BCApi: increased retries to 10, table lock errors get longer
waits (30s + 15s per attempt, up to 120s) vs generic errors (10s)
- Export-EntityData: retries the whole entity up to 5 times on table
lock with 60s/120s/180s/240s waits between attempts
- Export-DocumentWithLines: same entity-level retry, restarts the
full document+lines export on table lock rather than giving up
- Also handles 429 rate limiting with longer backoff (30s per attempt)
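One reading of that table-lock backoff schedule (30s base plus 15s per
attempt, capped at 120s), as a quick shell sketch; the function name is
hypothetical and the real logic is in Invoke-BCApi:

```shell
# Wait time in seconds before retry N of a table-locked request.
lock_backoff() {
  attempt="$1"
  wait=$(( 30 + 15 * attempt ))
  [ "$wait" -gt 120 ] && wait=120
  echo "$wait"
}

# attempts 1..7 -> 45, 60, 75, 90, 105, 120, 120 (capped)
for i in 1 2 3 4 5 6 7; do lock_backoff "$i"; done
```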
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The $expand approach had two fatal problems:
1. $top=50 with $expand made BC treat it as a hard limit with no
@odata.nextLink, so only 50 docs were exported total
2. salesOrders with $expand timed out even at 50 docs when orders
have many lines
New approach: fetch document headers normally (BC paginates fine on
its own), then for each document fetch its lines separately via
/salesInvoices({id})/salesInvoiceLines. This means more API calls, but
each one is small, fast, and reliable.
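The per-document line URL pattern reduces each request to something like
this (helper name and base path are illustrative):

```shell
# Build the child-entity URL for one document's lines,
# e.g. .../salesInvoices({id})/salesInvoiceLines
lines_url() {
  base="$1"; entity="$2"; doc_id="$3"; line_entity="$4"
  printf '%s/%s(%s)/%s\n' "$base" "$entity" "$doc_id" "$line_entity"
}

lines_url "https://api.example/companies({companyId})" \
  salesInvoices "{id}" salesInvoiceLines
```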
Also added:
- Invoke-BCApi with retry logic (backoff on 429/5xx/timeout)
- Separate output files: headers in {entity}.jsonl, lines in
{lineEntity}.jsonl
- Partial data is preserved if export fails mid-way
- Progress logged every 100 documents
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous $expand approach tried to load all documents with lines
into memory at once, causing hangs/OOM on companies with hundreds of
thousands of records.
Changes:
- Fetch documents with $expand in small pages ($top=50) instead of
loading everything into memory
- Stream each document to disk immediately as JSONL (one JSON object
per line) instead of accumulating in an array
- Add automatic token refresh for long-running exports (tokens expire
after ~60 min)
- Add 300s timeout per API request to detect stalls
- Log progress after each batch so you can see it's working
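The JSONL streaming idea, stripped to its core as a sketch (file and
records are illustrative):

```shell
# Append each document to disk as one JSON object per line instead of
# accumulating an array in memory; a crash mid-export leaves the lines
# written so far intact.
OUT=$(mktemp)
for doc in '{"id":1}' '{"id":2}' '{"id":3}'; do
  printf '%s\n' "$doc" >> "$OUT"
done
wc -l < "$OUT"
```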
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The BC API v2.0 requires line entities (salesInvoiceLines, etc.) to be
accessed through their parent document - they cannot be queried directly.
Use OData $expand on parent documents to include lines inline, e.g.
salesInvoices?$expand=salesInvoiceLines
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Admin Center export API requires an Azure Storage SAS URI, which in
turn requires an Azure subscription - defeating the purpose of an
independent backup. Instead, use the BC API v2.0 to extract critical
business data
(customers, vendors, items, GL entries, invoices, etc.) as JSON files.
- bc-export.ps1: rewritten to use BC API v2.0 endpoints, extracts 23
entity types per company with OData pagination support
- bc-backup.sh: handles JSON export directory, creates tar.gz archive
before encrypting and uploading to S3
- bc-backup.conf.template: removed Azure Storage SAS config, added
optional BC_COMPANY_NAME filter
- decrypt-backup.sh: updated for tar.gz.gpg format, shows extracted
entity files and metadata after decryption
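The OData pagination mentioned above follows @odata.nextLink until it is
absent. This network-free sketch simulates it with canned pages:
`fetch_page` stands in for the real HTTPS call, and the sed extraction is
deliberately naive (a real script would use a JSON parser):

```shell
# Serve two canned response bodies; only the first carries a nextLink.
fetch_page() {
  case "$1" in
    page1) echo '{"value":[1,2],"@odata.nextLink":"page2"}' ;;
    page2) echo '{"value":[3]}' ;;
  esac
}

# Loop until no @odata.nextLink is returned.
paginate() {
  url=page1
  while [ -n "$url" ]; do
    body=$(fetch_page "$url")
    echo "$body"
    url=$(printf '%s' "$body" \
      | sed -n 's/.*"@odata\.nextLink":"\([^"]*\)".*/\1/p')
  done
}

paginate
```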
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix endpoint paths from /applications/businesscentral/environments/{env}/databaseExports
to /exports/applications/BusinessCentral/environments/{env}
- Add Azure Storage SAS URI support (required by current export API)
- Update default API version from v2.15 to v2.21
- Add export metrics check before initiating export
- Use export history endpoint for status polling
- Download BACPAC from Azure Storage instead of expecting blobUri response
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>