BC returns 500 with "being updated in a transaction done by another
session" when tables are locked by other users - this is normal during
business hours. Previously the script gave up after 3 quick retries.
Changes:
- Invoke-BCApi: increased retries to 10, table lock errors get longer
waits (30s + 15s per attempt, up to 120s) vs generic errors (10s)
- Export-EntityData: retries the whole entity up to 5 times on table
lock with 60s/120s/180s/240s waits between attempts
- Export-DocumentWithLines: same entity-level retry, restarts the
full document+lines export on table lock rather than giving up
- Also handles 429 rate limiting with longer backoff (30s per attempt)
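One plausible reading of the backoff schedule above, as a sketch (the function and error-label names are hypothetical, not the script's actual identifiers):

```python
def wait_seconds(kind, attempt):
    """Backoff per retry attempt (1-based), mirroring the schedule above."""
    if kind == "table_lock":         # per-request: 30s + 15s per attempt, capped at 120s
        return min(30 + 15 * attempt, 120)
    if kind == "rate_limited":       # HTTP 429: longer linear backoff, 30s per attempt
        return 30 * attempt
    if kind == "entity_table_lock":  # entity-level retry: 60s/120s/180s/240s
        return 60 * attempt
    return 10                        # generic transient error
```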
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The $expand approach had two fatal problems:
1. $top=50 with $expand made BC treat $top as a hard limit and return
   no @odata.nextLink, so only 50 docs were exported in total

2. salesOrders with $expand timed out even at 50 docs when orders
have many lines
New approach: fetch document headers normally (BC paginates fine on
its own), then for each document fetch its lines separately via
/salesInvoices({id})/salesInvoiceLines. More API calls but each is
small, fast, and reliable.
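A minimal sketch of that header-then-lines pattern (the fetch_json helper and export_lines/doc_lines_url names are assumptions for illustration):

```python
def doc_lines_url(base, entity, doc_id, line_entity):
    """Build the per-document lines URL, e.g.
    {base}/salesInvoices({id})/salesInvoiceLines."""
    return f"{base}/{entity}({doc_id})/{line_entity}"

def export_lines(fetch_json, base, entity, line_entity, doc_ids):
    """Fetch each document's lines with one small, fast request."""
    for doc_id in doc_ids:
        page = fetch_json(doc_lines_url(base, entity, doc_id, line_entity))
        yield doc_id, page.get("value", [])
```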
Also added:
- Invoke-BCApi with retry logic (backoff on 429/5xx/timeout)
- Separate output files: headers in {entity}.jsonl, lines in
{lineEntity}.jsonl
- Partial data is preserved if the export fails midway
- Progress logged every 100 documents
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous $expand approach tried to load all documents with lines
into memory at once, causing hangs/OOM on companies with hundreds of
thousands of records.
Changes:
- Fetch documents with $expand in small pages ($top=50) instead of
loading everything into memory
- Stream each document to disk immediately as JSONL (one JSON object
per line) instead of accumulating in an array
- Add automatic token refresh for long-running exports (tokens expire
after ~60 min)
- Add 300s timeout per API request to detect stalls
- Log progress after each batch so you can see it's working
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The BC API v2.0 requires line entities (salesInvoiceLines, etc.) to be
accessed through their parent document - they cannot be queried directly.
Use OData $expand on parent documents to include lines inline, e.g.
salesInvoices?$expand=salesInvoiceLines
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Admin Center export API requires an Azure Storage SAS URI, which in
turn requires an Azure subscription - defeating the purpose of an
independent backup. Instead, use BC API v2.0 to extract critical
business data
(customers, vendors, items, GL entries, invoices, etc.) as JSON files.
- bc-export.ps1: rewritten to use BC API v2.0 endpoints, extracts 23
entity types per company with OData pagination support
- bc-backup.sh: handles JSON export directory, creates tar.gz archive
before encrypting and uploading to S3
- bc-backup.conf.template: removed Azure Storage SAS config, added
optional BC_COMPANY_NAME filter
- decrypt-backup.sh: updated for tar.gz.gpg format, shows extracted
entity files and metadata after decryption
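The OData pagination mentioned above follows @odata.nextLink until it disappears; a hedged sketch, where get_json stands in for the script's HTTP call:

```python
def fetch_all_pages(get_json, url):
    """Yield every record across pages by following @odata.nextLink."""
    while url:
        page = get_json(url)
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # absent on the last page
```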
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix endpoint paths from /applications/businesscentral/environments/{env}/databaseExports
to /exports/applications/BusinessCentral/environments/{env}
- Add Azure Storage SAS URI support (required by current export API)
- Update default API version from v2.15 to v2.21
- Add export metrics check before initiating export
- Use export history endpoint for status polling
- Download BACPAC from Azure Storage instead of expecting blobUri response
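The corrected path shape can be sketched as follows (the base URL and parameter names are assumptions; only the path segments come from the fix above):

```python
def export_url(base, env, version="v2.21"):
    """Old: {base}/{version}/applications/businesscentral/environments/{env}/databaseExports
    New:  {base}/{version}/exports/applications/BusinessCentral/environments/{env}"""
    return f"{base}/{version}/exports/applications/BusinessCentral/environments/{env}"
```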
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>