12 Commits

Author SHA1 Message Date
025eb3896c docs: add full restore process documentation
Covers downloading from S3, decryption, data inspection,
restoring entities via BC API with correct dependency order,
point-in-time restore with incrementals, GL entry restoration
via journal posting, and a full entity reference table.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 09:21:48 +01:00
f40e57f576 feat: export all available BC API v2.0 entities
Added 31 missing entities across three categories:

Standalone (16 new): companyInformation, itemCategories,
shipmentMethods, taxAreas, taxGroups, unitsOfMeasure,
timeRegistrationEntries, contacts, generalProductPostingGroups,
inventoryPostingGroups, itemLedgerEntries, opportunities,
locations, projects, journalLines, irs1099

Financial reports (10 new, always full export): agedAccountsPayable,
agedAccountsReceivable, balanceSheet, cashFlowStatement,
incomeStatement, retainedEarningsStatement, trialBalance,
customerFinancialDetails, customerSales, vendorPurchases

Document+lines (5 new): salesQuotes, salesShipments,
purchaseReceipts, customerPaymentJournals, vendorPaymentJournals

Total entities: 19 → 50

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 09:18:24 +01:00
1ea3127a1d fix: tighten table lock detection regex and log actual errors
The broad "locked" pattern was matching non-lock errors, causing
false retries. Now only matches specific BC lock messages. Also
surfaces the actual error text in WARN log lines for diagnostics.
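A minimal sketch of the tightened check: the first message below is the BC lock phrasing quoted in the retry commit further down; the second contains "locked" (which the old broad pattern would have matched) but is not a table lock. The exact regex in the script is an assumption.

```shell
# Match only the specific BC table-lock message, not any string with "locked"
is_table_lock() {
  printf '%s' "$1" | grep -q "being updated in a transaction done by another session"
}

lock_msg="The Item table is being updated in a transaction done by another session."
other_msg="The blob is locked by a lease."   # would have matched the old broad pattern

if is_table_lock "$lock_msg";    then echo "table lock: retry with backoff"; fi
if ! is_table_lock "$other_msg"; then echo "not a table lock: log actual error"; fi
```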

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 20:17:28 +01:00
c81f4c51fb fix: export AWS credentials so aws cli can locate them
The config file sets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as
shell variables, but aws cli needs them as environment variables.
Added explicit export statements in both bc-backup.sh and bc-cleanup.sh.
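A minimal reproduction of the bug: variables sourced from a config file are plain shell variables, invisible to child processes such as the aws CLI until they are exported. Paths and values here are placeholders.

```shell
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# Stand-in for bc-backup.conf (placeholder values)
cat > /tmp/demo-bc-backup.conf <<'EOF'
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
AWS_SECRET_ACCESS_KEY=example-secret
EOF
. /tmp/demo-bc-backup.conf

# Before the fix: a child process (like aws) sees nothing
sh -c 'echo "child sees: ${AWS_ACCESS_KEY_ID:-<unset>}"'

# The fix added to bc-backup.sh and bc-cleanup.sh
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# After the fix: the child now inherits the credentials
sh -c 'echo "child sees: ${AWS_ACCESS_KEY_ID:-<unset>}"'
```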

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 19:41:07 +01:00
3bad3ad171 feat: add incremental backups, S3 cleanup, and cron scheduling
Incremental backups using BC API's lastModifiedDateTime filter to only
export records changed since the last successful run. Runs every 15
minutes via cron, with a daily full backup for complete snapshots.

bc-export.ps1:
- Add -SinceDateTime parameter for incremental filtering
- Append $filter=lastModifiedDateTime gt {timestamp} to all entity URLs
- Exit code 2 when no records changed (skip archive/upload)
- Record mode and sinceDateTime in export-metadata.json

bc-backup.sh:
- Accept --mode full|incremental flag (default: incremental)
- State file (last-run-state.json) tracks last successful run timestamp
- Auto-fallback to full when no state file exists
- Skip archive/encrypt/upload when incremental finds 0 changes
- Lock file (.backup.lock) prevents overlapping cron runs
- S3 keys organized by mode: backups/full/ vs backups/incremental/
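The state-file handling above can be sketched as follows. File names follow the commit message; the JSON parsing is simplified with sed (the real script may use a proper JSON tool), and the lock-file logic is omitted.

```shell
STATE_FILE=/tmp/demo-last-run-state.json
rm -f "$STATE_FILE"

choose_mode() {
  # Auto-fallback to full when no state file exists
  if [ -f "$STATE_FILE" ]; then echo incremental; else echo full; fi
}

echo "first run:  $(choose_mode)"

# A successful run records its timestamp for the next incremental
printf '{"lastSuccessfulRun":"2026-02-16T10:22:08Z"}\n' > "$STATE_FILE"
echo "second run: $(choose_mode)"

# Timestamp that would be passed to bc-export.ps1 as -SinceDateTime
since=$(sed -n 's/.*"lastSuccessfulRun":"\([^"]*\)".*/\1/p' "$STATE_FILE")
echo "since: $since"
```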

bc-cleanup.sh (new):
- Lists all S3 objects under backups/ prefix
- Deletes objects older than RETENTION_DAYS (default 30)
- Handles pagination for large buckets
- Gracefully handles COMPLIANCE-locked objects
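A sketch of the age check only; the S3 listing, pagination, and deletion are omitted. GNU `date` is assumed for the relative-date arithmetic.

```shell
RETENTION_DAYS=30

is_expired() {
  # $1 = the object's LastModified date (YYYY-MM-DD)
  cutoff=$(date -u -d "${RETENTION_DAYS} days ago" +%s)
  stamp=$(date -u -d "$1" +%s)
  [ "$stamp" -lt "$cutoff" ]
}

if is_expired "2020-01-01"; then echo "2020-01-01: delete"; fi
if ! is_expired "$(date -u +%Y-%m-%d)"; then echo "today: keep"; fi
```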

bc-backup.conf.template:
- Add BACKUP_MODE_DEFAULT option

cron-examples.txt:
- Recommended setup: 15-min incremental + daily full + daily cleanup
- Alternative schedules (30-min, hourly)
- Systemd timer examples
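The recommended schedule could look like this as a crontab; paths and exact times are placeholders, and the actual cron-examples.txt may differ.

```
*/15 * * * *  /opt/bc-backup/bc-backup.sh --mode incremental
0 1 * * *     /opt/bc-backup/bc-backup.sh --mode full
0 4 * * *     /opt/bc-backup/bc-cleanup.sh
```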

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 10:22:08 +01:00
b407e2aeb7 fix: handle BC table locks with proper retry and backoff
BC returns 500 with "being updated in a transaction done by another
session" when tables are locked by other users - this is normal during
business hours. Previously the script gave up after 3 quick retries.

Changes:
- Invoke-BCApi: increased retries to 10, table lock errors get longer
  waits (30s + 15s per attempt, up to 120s) vs generic errors (10s)
- Export-EntityData: retries the whole entity up to 5 times on table
  lock with 60s/120s/180s/240s waits between attempts
- Export-DocumentWithLines: same entity-level retry, restarts the
  full document+lines export on table lock rather than giving up
- Also handles 429 rate limiting with longer backoff (30s per attempt)
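One plausible reading of the lock backoff ("30s + 15s per attempt, up to 120s"), sketched in shell; whether the first attempt counts as 0 or 1 is an assumption here.

```shell
lock_wait() {
  # 30s base + 15s per attempt, capped at 120s
  wait=$(( 30 + 15 * $1 ))
  if [ "$wait" -gt 120 ]; then wait=120; fi
  echo "$wait"
}

for attempt in 1 2 3 6 10; do
  echo "attempt $attempt: wait $(lock_wait "$attempt")s"
done
```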

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 09:41:17 +01:00
d08250a479 fix: drop $expand, fetch document lines per-document instead
The $expand approach had two fatal problems:
1. $top=50 with $expand made BC treat it as a hard limit with no
   @odata.nextLink, so only 50 docs were exported total
2. salesOrders with $expand timed out even at 50 docs when orders
   have many lines

New approach: fetch document headers normally (BC paginates fine on
its own), then for each document fetch its lines separately via
/salesInvoices({id})/salesInvoiceLines. More API calls but each is
small, fast, and reliable.
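The shape of the per-document fetch can be sketched as below. The URL pattern follows the commit message; the tenant and company segments are placeholders, and the HTTP call is replaced by echo so only the loop structure is visible.

```shell
# Placeholder tenant/company - real values come from the backup config
BASE="https://api.businesscentral.dynamics.com/v2.0/TENANT/production/api/v2.0/companies(COMPANY)"

lines_url() {
  echo "${BASE}/salesInvoices($1)/salesInvoiceLines"
}

# One small request per document id instead of one giant $expand query
for id in 0001 0002 0003; do
  lines_url "$id"
done
```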

Also added:
- Invoke-BCApi with retry logic (backoff on 429/5xx/timeout)
- Separate output files: headers in {entity}.jsonl, lines in
  {lineEntity}.jsonl
- Partial data is preserved if export fails mid-way
- Progress logged every 100 documents

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 09:20:52 +01:00
5ebfc3f443 fix: stream document+lines export to disk in small batches
The previous $expand approach tried to load all documents with lines
into memory at once, causing hangs/OOM on companies with hundreds of
thousands of records.

Changes:
- Fetch documents with $expand in small pages ($top=50) instead of
  loading everything into memory
- Stream each document to disk immediately as JSONL (one JSON object
  per line) instead of accumulating in an array
- Add automatic token refresh for long-running exports (tokens expire
  after ~60 min)
- Add 300s timeout per API request to detect stalls
- Log progress after each batch so you can see it's working
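A minimal JSONL-streaming sketch: each document is appended as one line the moment it is fetched, so memory use stays flat and partial output survives a mid-export failure. The file name and records are placeholders.

```shell
OUT=/tmp/demo-salesInvoices.jsonl
: > "$OUT"

emit_record() {
  printf '%s\n' "$1" >> "$OUT"   # one JSON object per line, flushed immediately
}

emit_record '{"id":"a1","number":"INV-1001"}'
emit_record '{"id":"a2","number":"INV-1002"}'
echo "records written: $(wc -l < "$OUT")"
```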

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 09:09:47 +01:00
4af9cd7f11 fix: fetch document lines via $expand instead of standalone queries
The BC API v2.0 requires line entities (salesInvoiceLines, etc.) to be
accessed through their parent document - they cannot be queried directly.
Use OData $expand on parent documents to include lines inline, e.g.
salesInvoices?$expand=salesInvoiceLines

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 07:57:46 +01:00
77f48f326b feat: switch from Admin Center database export to BC API v2.0 data extraction
The Admin Center export API requires an Azure Storage SAS URI, which in
turn requires an Azure subscription - defeating the purpose of an
independent backup. Instead, use BC API v2.0 to extract critical business data
(customers, vendors, items, GL entries, invoices, etc.) as JSON files.

- bc-export.ps1: rewritten to use BC API v2.0 endpoints, extracts 23
  entity types per company with OData pagination support
- bc-backup.sh: handles JSON export directory, creates tar.gz archive
  before encrypting and uploading to S3
- bc-backup.conf.template: removed Azure Storage SAS config, added
  optional BC_COMPANY_NAME filter
- decrypt-backup.sh: updated for tar.gz.gpg format, shows extracted
  entity files and metadata after decryption
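The OData pagination mentioned above follows @odata.nextLink until the service stops returning one. Here the HTTP call is stubbed with two canned pages and the JSON is parsed with sed; real code would curl each URL and parse properly.

```shell
fake_get() {
  case "$1" in
    page1) echo '{"value":[1,2],"@odata.nextLink":"page2"}' ;;
    page2) echo '{"value":[3]}' ;;
  esac
}

url=page1
pages=0
while [ -n "$url" ]; do
  body=$(fake_get "$url")
  pages=$((pages + 1))
  # Follow nextLink until the service omits it
  url=$(printf '%s' "$body" | sed -n 's/.*"@odata.nextLink":"\([^"]*\)".*/\1/p')
done
echo "fetched $pages pages"
```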

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 07:33:32 +01:00
96237787da fix: correct BC Admin Center API endpoints for database export
- Fix endpoint paths from /applications/businesscentral/environments/{env}/databaseExports
  to /exports/applications/BusinessCentral/environments/{env}
- Add Azure Storage SAS URI support (required by current export API)
- Update default API version from v2.15 to v2.21
- Add export metrics check before initiating export
- Use export history endpoint for status polling
- Download BACPAC from Azure Storage instead of expecting blobUri response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 19:21:06 +01:00
d35806b8e1 Initial commit: BC backup project 2026-02-09 18:57:39 +01:00