The config file sets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as
shell variables, but the AWS CLI needs them as environment variables.
Added explicit export statements in both bc-backup.sh and bc-cleanup.sh.
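The distinction matters because sourcing the config only assigns shell-local variables, which child processes such as the AWS CLI never see. A minimal sketch of the fix (variable values are illustrative):

```shell
AWS_ACCESS_KEY_ID="example"            # as assigned by the sourced config
AWS_SECRET_ACCESS_KEY="example-secret"

# Without these exports, `aws` (a child process) sees empty credentials.
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# Child processes now inherit the variables:
sh -c 'printf "%s\n" "$AWS_ACCESS_KEY_ID"'   # prints "example"
```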
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Incremental backups use the BC API's lastModifiedDateTime filter to
export only records changed since the last successful run. They run
every 15 minutes via cron, with a daily full backup for complete
snapshots.
bc-export.ps1:
- Add -SinceDateTime parameter for incremental filtering
- Append $filter=lastModifiedDateTime gt {timestamp} to all entity URLs
- Exit code 2 when no records changed (skip archive/upload)
- Record mode and sinceDateTime in export-metadata.json
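The filter construction could be sketched as follows (function and variable names are illustrative; the real logic lives in bc-export.ps1, and in a live request the spaces in the filter expression would be percent-encoded):

```shell
# Build an entity URL, appending the OData incremental filter when a
# reference timestamp from the last successful run is available.
build_entity_url() {
  base_url=$1
  since=$2            # empty for a full export
  if [ -n "$since" ]; then
    # Only records modified after the last successful run.
    printf '%s?$filter=lastModifiedDateTime gt %s\n' "$base_url" "$since"
  else
    printf '%s\n' "$base_url"
  fi
}

build_entity_url "https://api.example.com/api/v2.0/companies(1)/customers" \
  "2024-06-01T00:00:00Z"
```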
bc-backup.sh:
- Accept --mode full|incremental flag (default: incremental)
- State file (last-run-state.json) tracks last successful run timestamp
- Auto-fallback to full when no state file exists
- Skip archive/encrypt/upload when incremental finds 0 changes
- Lock file (.backup.lock) prevents overlapping cron runs
- S3 keys organized by mode: backups/full/ vs backups/incremental/
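The overlap guard, state tracking, and full-mode fallback could look roughly like this (paths and the lock-directory trick are illustrative; the actual script uses a .backup.lock file):

```shell
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"
LOCK_DIR="$BACKUP_DIR/.backup.lock"
STATE_FILE="$BACKUP_DIR/last-run-state.json"

# mkdir is atomic, so a second cron invocation fails here and bails out.
if ! mkdir "$LOCK_DIR" 2>/dev/null; then
  echo "another backup is already running" >&2
  exit 1
fi
trap 'rmdir "$LOCK_DIR"' EXIT

# No state file means no reference timestamp: fall back to a full backup.
if [ -f "$STATE_FILE" ]; then
  MODE=incremental
else
  MODE=full
fi

# ... export, archive, encrypt, upload ...

# On success, record the timestamp the next incremental run filters on.
printf '{"last_success":"%s"}\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$STATE_FILE"
```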
bc-cleanup.sh (new):
- Lists all S3 objects under backups/ prefix
- Deletes objects older than RETENTION_DAYS (default 30)
- Handles pagination for large buckets
- Gracefully handles COMPLIANCE-locked objects
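A sketch of the retention pass (the helper and bucket handling are illustrative; assumes AWS CLI v2, which paginates list-objects-v2 automatically, and GNU date for the relative cutoff):

```shell
RETENTION_DAYS="${RETENTION_DAYS:-30}"
cutoff=$(date -u -d "${RETENTION_DAYS} days ago" +%Y-%m-%dT%H:%M:%SZ)

purge_old_backups() {
  bucket=$1
  # JMESPath filter keeps only objects last modified before the cutoff.
  aws s3api list-objects-v2 --bucket "$bucket" --prefix "backups/" \
      --query "Contents[?LastModified<'${cutoff}'].Key" --output text |
    tr '\t' '\n' | while IFS= read -r key; do
      [ -n "$key" ] || continue
      # delete-object fails on COMPLIANCE-locked objects; log and move on.
      aws s3api delete-object --bucket "$bucket" --key "$key" ||
        echo "skipping locked object: $key" >&2
    done
}
```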
bc-backup.conf.template:
- Add BACKUP_MODE_DEFAULT option
cron-examples.txt:
- Recommended setup: 15-min incremental + daily full + daily cleanup
- Alternative schedules (30-min, hourly)
- Systemd timer examples
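For reference, the recommended setup could look like this as a crontab (install paths are assumptions):

```
# every 15 minutes: incremental export
*/15 * * * *  /opt/bc-backup/bc-backup.sh --mode incremental
# daily at 01:00: full snapshot
0 1 * * *     /opt/bc-backup/bc-backup.sh --mode full
# daily at 02:30: prune backups older than RETENTION_DAYS
30 2 * * *    /opt/bc-backup/bc-cleanup.sh
```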
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Admin Center export API requires an Azure Storage SAS URI, which in
turn requires an Azure subscription, defeating the purpose of an
independent backup. Instead, use the BC API v2.0 to extract critical
business data (customers, vendors, items, GL entries, invoices, etc.)
as JSON files.
- bc-export.ps1: rewritten to use BC API v2.0 endpoints, extracts 23
entity types per company with OData pagination support
- bc-backup.sh: handles JSON export directory, creates tar.gz archive
before encrypting and uploading to S3
- bc-backup.conf.template: removed Azure Storage SAS config, added
optional BC_COMPANY_NAME filter
- decrypt-backup.sh: updated for tar.gz.gpg format, shows extracted
entity files and metadata after decryption
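The pagination loop could be sketched as below (endpoint shape follows the BC API v2.0 pattern; variable names are placeholders, and the sketch requires curl and jq). The service links pages via "@odata.nextLink" until the result set is exhausted:

```shell
# Fetch every page of one entity list, writing each page's records to disk.
export_entity() {
  url=$1; name=$2; page=0
  while [ -n "$url" ]; do
    body=$(curl -sf -H "Authorization: Bearer $ACCESS_TOKEN" "$url") || return 1
    printf '%s\n' "$body" | jq '.value' > "${name}-page-${page}.json"
    # "@odata.nextLink" is present only while more pages remain.
    url=$(printf '%s\n' "$body" | jq -r '."@odata.nextLink" // empty')
    page=$((page + 1))
  done
}

# e.g. export_entity \
#   "https://api.businesscentral.dynamics.com/v2.0/$TENANT_ID/$ENVIRONMENT/api/v2.0/companies($COMPANY_ID)/customers" \
#   customers
```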
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix endpoint paths from /applications/businesscentral/environments/{env}/databaseExports
to /exports/applications/BusinessCentral/environments/{env}
- Add Azure Storage SAS URI support (required by current export API)
- Update default API version from v2.15 to v2.21
- Add export metrics check before initiating export
- Use export history endpoint for status polling
- Download BACPAC from Azure Storage instead of expecting blobUri response
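A hedged sketch of initiating an export against the corrected endpoint. The admin API base URL follows the pattern implied above; the JSON body's field name for the SAS URI is an assumption, not confirmed by this change:

```shell
API_BASE="https://api.businesscentral.dynamics.com/admin/v2.21"
ENVIRONMENT="${ENVIRONMENT:-production}"
EXPORT_URL="$API_BASE/exports/applications/BusinessCentral/environments/$ENVIRONMENT"

# Only fire the request when a token is available.
if [ -n "${ADMIN_TOKEN:-}" ]; then
  curl -sf -X POST \
    -H "Authorization: Bearer $ADMIN_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"storageAccountSasUri\": \"$SAS_URI\"}" \
    "$EXPORT_URL"
fi
```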
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>