Commit Graph

6 Commits

Author SHA1 Message Date
d08250a479 fix: drop $expand, fetch document lines per-document instead
The $expand approach had two fatal problems:
1. $top=50 with $expand made BC treat it as a hard limit with no
   @odata.nextLink, so only 50 docs were exported total
2. salesOrders with $expand timed out even at 50 docs when orders
   have many lines

New approach: fetch document headers normally (BC paginates fine on
its own), then for each document fetch its lines separately via
/salesInvoices({id})/salesInvoiceLines. More API calls but each is
small, fast, and reliable.
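The headers-then-lines strategy can be sketched roughly as follows. This is an illustrative Python stand-in for the PowerShell logic in bc-export.ps1 (not the actual code): `fetch_json` is an assumed authenticated GET returning parsed JSON, and `write_line` is an assumed per-record sink.

```python
# Sketch of the headers-then-lines export strategy (illustrative only;
# the real script is PowerShell). fetch_json(url) -> parsed JSON dict,
# write_line(entity, record) -> persists one record.

def export_documents(fetch_json, base_url, entity, line_entity, write_line):
    """Fetch document headers page by page, then fetch each document's
    lines with its own small request, e.g.
    /salesInvoices({id})/salesInvoiceLines."""
    url = f"{base_url}/{entity}"
    while url:
        page = fetch_json(url)
        for doc in page["value"]:
            write_line(entity, doc)  # header record
            lines = fetch_json(
                f"{base_url}/{entity}({doc['id']})/{line_entity}"
            )
            for ln in lines["value"]:
                write_line(line_entity, ln)  # line record
        # BC paginates the header collection on its own
        url = page.get("@odata.nextLink")
```

Each line request is small and bounded by one document's line count, which is what makes this slower in call count but far more reliable than $expand.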

Also added:
- Invoke-BCApi with retry logic (backoff on 429/5xx/timeout)
- Separate output files: headers in {entity}.jsonl, lines in
  {lineEntity}.jsonl
- Partial data is preserved if export fails mid-way
- Progress logged every 100 documents
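The retry behaviour described for Invoke-BCApi — backoff on 429, 5xx, and timeouts — looks roughly like this in Python (the real helper is PowerShell; the attempt count and delays here are assumptions, only the retryable conditions come from the commit message):

```python
import time

# Throttling and transient server errors worth retrying
RETRYABLE = {429, 500, 502, 503, 504}

def invoke_with_retry(request, max_attempts=5, base_delay=1.0):
    """Call request() and retry on 429/5xx/timeout with exponential
    backoff. request() returns (status, body); a TimeoutError is
    treated as retryable."""
    for attempt in range(max_attempts):
        try:
            status, body = request()
        except TimeoutError:
            status, body = None, None
        if status is not None and status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError(f"giving up after {max_attempts} attempts")
```

A production version would also honor a Retry-After header on 429 responses when the server sends one.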

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 09:20:52 +01:00
5ebfc3f443 fix: stream document+lines export to disk in small batches
The previous $expand approach tried to load all documents with lines
into memory at once, causing hangs/OOM on companies with hundreds of
thousands of records.

Changes:
- Fetch documents with $expand in small pages ($top=50) instead of
  loading everything into memory
- Stream each document to disk immediately as JSONL (one JSON object
  per line) instead of accumulating in an array
- Add automatic token refresh for long-running exports (tokens expire
  after ~60 min)
- Add 300s timeout per API request to detect stalls
- Log progress after each batch so you can see it's working
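The JSONL streaming idea — write each record to disk the moment it arrives instead of accumulating an array — is small enough to show in full. A minimal Python illustration (the actual export is PowerShell; the file name in the usage note is a made-up example):

```python
import json

def stream_jsonl(records, path):
    """Append records to a .jsonl file immediately: one JSON object
    per line, nothing held in memory beyond the current batch. If the
    export dies mid-way, everything already written survives."""
    with open(path, "a", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
            f.flush()  # make partial data durable on a crash
```

Opening in append mode means successive batches (e.g. each $top=50 page) land in the same file, such as customers.jsonl, without rereading or rewriting what is already there.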

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 09:09:47 +01:00
4af9cd7f11 fix: fetch document lines via $expand instead of standalone queries
The BC API v2.0 requires line entities (salesInvoiceLines, etc.) to be
accessed through their parent document - they cannot be queried directly.
Use OData $expand on parent documents to include lines inline, e.g.
salesInvoices?$expand=salesInvoiceLines

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 07:57:46 +01:00
77f48f326b feat: switch from Admin Center database export to BC API v2.0 data extraction
The Admin Center export API requires an Azure Storage SAS URI, which
in turn requires an Azure subscription - defeating the purpose of an
independent backup. Instead, use BC API v2.0 to extract critical business data
(customers, vendors, items, GL entries, invoices, etc.) as JSON files.

- bc-export.ps1: rewritten to use BC API v2.0 endpoints, extracts 23
  entity types per company with OData pagination support
- bc-backup.sh: handles JSON export directory, creates tar.gz archive
  before encrypting and uploading to S3
- bc-backup.conf.template: removed Azure Storage SAS config, added
  optional BC_COMPANY_NAME filter
- decrypt-backup.sh: updated for tar.gz.gpg format, shows extracted
  entity files and metadata after decryption
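The OData pagination mentioned above works by following the server-driven @odata.nextLink until it disappears. A Python sketch of that extraction loop (illustrative only — the real script is PowerShell; `fetch_json` and `write_record` are assumed helpers):

```python
def extract_all(fetch_json, base_url, entities, write_record):
    """For each entity collection, follow @odata.nextLink pages
    until the server stops returning one."""
    for entity in entities:
        url = f"{base_url}/{entity}"
        while url:
            page = fetch_json(url)
            for rec in page.get("value", []):
                write_record(entity, rec)
            # absent on the last page -> loop ends
            url = page.get("@odata.nextLink")
```

The same loop covers all 23 entity types; only the entity list differs per run.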

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 07:33:32 +01:00
96237787da fix: correct BC Admin Center API endpoints for database export
- Fix endpoint paths from /applications/businesscentral/environments/{env}/databaseExports
  to /exports/applications/BusinessCentral/environments/{env}
- Add Azure Storage SAS URI support (required by current export API)
- Update default API version from v2.15 to v2.21
- Add export metrics check before initiating export
- Use export history endpoint for status polling
- Download BACPAC from Azure Storage instead of expecting blobUri response

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 19:21:06 +01:00
d35806b8e1 Initial commit: BC backup project 2026-02-09 18:57:39 +01:00