docs: add full restore process documentation

Covers downloading from S3, decryption, data inspection,
restoring entities via BC API with correct dependency order,
point-in-time restore with incrementals, GL entry restoration
via journal posting, and a full entity reference table.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 09:21:48 +01:00
parent f40e57f576
commit 025eb3896c

RESTORE.md
# Restore Process
This document describes how to restore Business Central data from backups created by this tool.
## Backup Structure
Each backup is a GPG-encrypted tar.gz archive stored in S3:
```
s3://BUCKET/backups/full/bc_backup_ENVIRONMENT_YYYYMMDD_HHMMSS_full.tar.gz.gpg
s3://BUCKET/backups/incremental/bc_backup_ENVIRONMENT_YYYYMMDD_HHMMSS_incremental.tar.gz.gpg
```
Inside each archive:
```
bc_backup_ENVIRONMENT_YYYYMMDD_HHMMSS_MODE/
export-metadata.json # Export timestamp, mode, record counts
companies.json # List of all companies in the environment
CompanyName/
accounts.json # Standalone entity data
customers.json
vendors.json
items.json
generalLedgerEntries.json
...
salesInvoices.jsonl # Document headers (one JSON object per line)
salesInvoiceLines.jsonl # Document lines (one JSON object per line)
...
```
**File formats:**
- `.json` files contain a JSON array of all records for that entity
- `.jsonl` files contain one JSON object per line (used for document entities with large record counts)
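Both formats can be read with a short Python sketch (a hypothetical `load_entity` helper; entity filenames are examples):

```python
import json

def load_entity(path):
    """Load backup records from either a .json array or a .jsonl file."""
    with open(path, encoding="utf-8") as f:
        if path.endswith(".jsonl"):
            # One JSON object per line; skip blank lines.
            return [json.loads(line) for line in f if line.strip()]
        # The whole file is a single JSON array.
        return json.load(f)
```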
## Prerequisites
- `aws` CLI configured with access to the S3 bucket
- `gpg` installed
- The `ENCRYPTION_PASSPHRASE` from `bc-backup.conf`
- For restoring data into BC: PowerShell (`pwsh`) with network access to the BC API
## Step 1: Identify Which Backups to Restore
### Full Restore (latest snapshot)
You only need the most recent **full** backup:
```bash
source bc-backup.conf
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
export AWS_PAGER=""
aws s3api list-objects-v2 \
--bucket "$S3_BUCKET" \
--prefix "backups/full/" \
--endpoint-url "$S3_ENDPOINT" \
--query 'sort_by(Contents, &LastModified)[-1].Key' \
--output text
```
### Point-in-Time Restore (full + incrementals)
You need the most recent **full** backup plus all **incremental** backups taken after it:
```bash
# Find the latest full backup and its timestamp
LATEST_FULL=$(aws s3api list-objects-v2 \
--bucket "$S3_BUCKET" \
--prefix "backups/full/" \
--endpoint-url "$S3_ENDPOINT" \
--query 'sort_by(Contents, &LastModified)[-1].[Key,LastModified]' \
--output text)
echo "Latest full: $LATEST_FULL"
# List all incrementals after that date
FULL_DATE=$(echo "$LATEST_FULL" | awk '{print $2}')
aws s3api list-objects-v2 \
--bucket "$S3_BUCKET" \
--prefix "backups/incremental/" \
--endpoint-url "$S3_ENDPOINT" \
--query "Contents[?LastModified>='$FULL_DATE'].Key" \
--output text
```
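Because the backup filenames embed the export timestamp (`YYYYMMDD_HHMMSS`), you can also select the right set of backups without relying on S3 `LastModified`. A sketch, assuming the naming scheme shown above:

```python
import re

# Matches the timestamp and mode embedded in backup object keys.
TS = re.compile(r"_(\d{8}_\d{6})_(full|incremental)\.tar\.gz\.gpg$")

def incrementals_after_full(keys):
    """Return the latest full backup key plus all incremental keys taken
    after it, ordered by the timestamp embedded in the filename."""
    stamped = []
    for key in keys:
        m = TS.search(key)
        if m:
            stamped.append((m.group(1), m.group(2), key))
    fulls = sorted(t for t in stamped if t[1] == "full")
    if not fulls:
        return None, []
    full_ts, _, full_key = fulls[-1]
    incs = sorted(t for t in stamped if t[1] == "incremental" and t[0] > full_ts)
    return full_key, [key for _, _, key in incs]
```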
## Step 2: Download from S3
```bash
mkdir -p restore-work && cd restore-work
# Download a specific backup
aws s3api get-object \
--bucket "$S3_BUCKET" \
--key "backups/full/bc_backup_Production_20260216_020000_full.tar.gz.gpg" \
--endpoint-url "$S3_ENDPOINT" \
backup.tar.gz.gpg
# Or download all needed files at once
aws s3 cp "s3://$S3_BUCKET/backups/full/" . \
--endpoint-url "$S3_ENDPOINT" \
--recursive \
--exclude "*" \
--include "*20260216*"
```
## Step 3: Decrypt and Extract
### Using the included utility
```bash
./decrypt-backup.sh backup.tar.gz.gpg ./restored/
```
You will be prompted for the encryption passphrase.
### Manual decrypt and extract
```bash
# Decrypt (will prompt for passphrase)
gpg --decrypt --output backup.tar.gz backup.tar.gz.gpg
# Or provide passphrase non-interactively
echo "$ENCRYPTION_PASSPHRASE" | gpg --batch --passphrase-fd 0 \
--decrypt --output backup.tar.gz backup.tar.gz.gpg
# Extract
tar -xzf backup.tar.gz
```
## Step 4: Inspect the Data
```bash
# View export metadata
cat restored/bc_backup_*/export-metadata.json | python3 -m json.tool
# List companies
cat restored/bc_backup_*/companies.json | python3 -m json.tool
# Count records per entity
for f in restored/bc_backup_*/CompanyName/*.json; do
entity=$(basename "$f" .json)
count=$(python3 -c "import json; d=json.load(open('$f')); print(len(d) if isinstance(d,list) else 1)")
echo "$entity: $count records"
done
# For JSONL files (document entities)
for f in restored/bc_backup_*/CompanyName/*.jsonl; do
entity=$(basename "$f" .jsonl)
count=$(wc -l < "$f")
echo "$entity: $count records"
done
```
## Step 5: Restore Data into Business Central
### Important Notes
- The BC API v2.0 supports **POST** (create) and **PATCH** (update) operations for most entities
- **Read-only entities cannot be restored** via API: generalLedgerEntries, agedAccountsPayable, agedAccountsReceivable, balanceSheet, cashFlowStatement, incomeStatement, retainedEarningsStatement, trialBalance, customerFinancialDetails, customerSales, vendorPurchases, itemLedgerEntries, salesShipments, salesShipmentLines, purchaseReceipts, purchaseReceiptLines
- GL entries are posted through journals, not created directly
- Restore **master data first** (customers, vendors, items), then **documents** (invoices, orders)
- Each record has an `id` field and often a `number` or `displayName` field you can use to match existing records
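The create-or-update decision described in the last note can be sketched as a pure function (a hypothetical `plan_restore` helper; `number` and `id` are the field names used in the API payloads):

```python
def plan_restore(backup_records, existing_records, key="number"):
    """Partition backup records into creates (POST) and updates (PATCH) by
    matching on a business key such as 'number', since the GUID 'id' will
    differ once a record is re-created."""
    existing_by_key = {r[key]: r for r in existing_records if key in r}
    creates, updates = [], []
    for rec in backup_records:
        match = existing_by_key.get(rec.get(key))
        if match:
            updates.append((match["id"], rec))  # PATCH target id + new body
        else:
            creates.append(rec)                 # POST body
    return creates, updates
```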
### Restore Order
Restore entities in this order to satisfy dependencies:
1. **Setup/reference data:** currencies, countriesRegions, paymentTerms, paymentMethods, shipmentMethods, taxAreas, taxGroups, unitsOfMeasure, itemCategories, dimensions, dimensionValues, locations, generalProductPostingGroups, inventoryPostingGroups
2. **Master data:** customers, vendors, items, employees, bankAccounts, contacts
3. **Transactional documents:** salesInvoices, salesOrders, salesQuotes, salesCreditMemos, purchaseInvoices, purchaseOrders
4. **Journal entries:** journals, journalLines, customerPaymentJournals, vendorPaymentJournals
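A sketch of sorting entity files into the tiers above (tier lists abbreviated here; extend them to match the full lists):

```python
RESTORE_TIERS = [
    # 1. Setup/reference data (abbreviated)
    ["currencies", "paymentTerms", "unitsOfMeasure", "dimensions", "dimensionValues"],
    # 2. Master data
    ["customers", "vendors", "items", "employees", "bankAccounts", "contacts"],
    # 3. Transactional documents
    ["salesInvoices", "salesOrders", "purchaseInvoices", "purchaseOrders"],
    # 4. Journal entries
    ["journals", "journalLines"],
]

def restore_order(entities):
    """Return entities sorted by dependency tier; unknown entities sort last."""
    rank = {name: (tier, pos)
            for tier, names in enumerate(RESTORE_TIERS)
            for pos, name in enumerate(names)}
    return sorted(entities, key=lambda e: rank.get(e, (len(RESTORE_TIERS), 0)))
```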
### Example: Restore customers via BC API
```powershell
# Authenticate
$tenantId = "YOUR_TENANT_ID"
$clientId = "YOUR_CLIENT_ID"
$clientSecret = "YOUR_CLIENT_SECRET"
$environment = "YOUR_ENVIRONMENT"
$tokenResponse = Invoke-RestMethod -Method Post `
-Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
-Body @{
client_id = $clientId
client_secret = $clientSecret
scope = "https://api.businesscentral.dynamics.com/.default"
grant_type = "client_credentials"
}
$headers = @{
"Authorization" = "Bearer $($tokenResponse.access_token)"
"Content-Type" = "application/json"
}
$baseUrl = "https://api.businesscentral.dynamics.com/v2.0/$tenantId/$environment/api/v2.0"
# Get company ID
$companies = Invoke-RestMethod -Uri "$baseUrl/companies" -Headers $headers
$companyId = $companies.value[0].id
# Load backed-up customers (-Raw so the multi-line JSON array parses as one document)
$customers = Get-Content -Raw "restored/bc_backup_*/CompanyName/customers.json" | ConvertFrom-Json
foreach ($customer in $customers) {
# Check if customer already exists
$existing = $null
try {
$existing = Invoke-RestMethod `
-Uri "$baseUrl/companies($companyId)/customers?`$filter=number eq '$($customer.number)'" `
-Headers $headers
} catch {}
# Remove read-only fields that BC won't accept on POST/PATCH
$body = $customer | Select-Object -ExcludeProperty id, lastModifiedDateTime, '@odata*'
$json = $body | ConvertTo-Json -Depth 10
if ($existing.value.Count -gt 0) {
# Update existing customer; put If-Match on a copy of the headers
# so it does not leak into the POST requests below
$existingId = $existing.value[0].id
$patchHeaders = $headers.Clone()
$patchHeaders["If-Match"] = "*"
Invoke-RestMethod `
-Method Patch `
-Uri "$baseUrl/companies($companyId)/customers($existingId)" `
-Headers $patchHeaders `
-Body $json
Write-Host "Updated: $($customer.displayName)"
} else {
# Create new customer
Invoke-RestMethod `
-Method Post `
-Uri "$baseUrl/companies($companyId)/customers" `
-Headers $headers `
-Body $json
Write-Host "Created: $($customer.displayName)"
}
}
```
### Example: Restore document with lines (sales invoices)
```powershell
# Load backed-up invoices (JSONL format - one JSON object per line)
$invoices = Get-Content "restored/bc_backup_*/CompanyName/salesInvoices.jsonl" |
ForEach-Object { $_ | ConvertFrom-Json }
$lines = Get-Content "restored/bc_backup_*/CompanyName/salesInvoiceLines.jsonl" |
ForEach-Object { $_ | ConvertFrom-Json }
foreach ($invoice in $invoices) {
$invoiceBody = $invoice | Select-Object -ExcludeProperty id, lastModifiedDateTime, '@odata*'
$json = $invoiceBody | ConvertTo-Json -Depth 10
# Create the invoice header
$created = Invoke-RestMethod `
-Method Post `
-Uri "$baseUrl/companies($companyId)/salesInvoices" `
-Headers $headers `
-Body $json
$newInvoiceId = $created.id
# Find and create matching lines
$invoiceLines = $lines | Where-Object {
$_.documentId -eq $invoice.id
}
foreach ($line in $invoiceLines) {
$lineBody = $line | Select-Object -ExcludeProperty id, documentId, lastModifiedDateTime, '@odata*'
$lineJson = $lineBody | ConvertTo-Json -Depth 10
Invoke-RestMethod `
-Method Post `
-Uri "$baseUrl/companies($companyId)/salesInvoices($newInvoiceId)/salesInvoiceLines" `
-Headers $headers `
-Body $lineJson
}
Write-Host "Restored invoice $($invoice.number) with $($invoiceLines.Count) lines"
}
```
### Restoring GL Entries via Journals
General ledger entries cannot be created directly. Instead, create journal lines and post them:
```powershell
# Create a journal for the restore
$journal = Invoke-RestMethod -Method Post `
-Uri "$baseUrl/companies($companyId)/journals" `
-Headers $headers `
-Body '{"displayName": "RESTORE", "templateDisplayName": "GENERAL"}'
$journalId = $journal.id
# Load GL entries from backup (-Raw so the multi-line JSON array parses as one document)
$glEntries = Get-Content -Raw "restored/bc_backup_*/CompanyName/generalLedgerEntries.json" |
ConvertFrom-Json
foreach ($entry in $glEntries) {
$line = @{
accountNumber = $entry.accountNumber
postingDate = $entry.postingDate
documentNumber = $entry.documentNumber
description = $entry.description
amount = $entry.debitAmount - $entry.creditAmount  # debits positive, credits negative
} | ConvertTo-Json
Invoke-RestMethod -Method Post `
-Uri "$baseUrl/companies($companyId)/journals($journalId)/journalLines" `
-Headers $headers `
-Body $line
}
# Post the journal (this creates the actual GL entries)
Invoke-RestMethod -Method Post `
-Uri "$baseUrl/companies($companyId)/journals($journalId)/Microsoft.NAV.post" `
-Headers $headers
```
## Point-in-Time Restore (Full + Incrementals)
To restore to a specific point in time:
1. Restore the **full** backup first (this is your baseline)
2. Apply each **incremental** backup in chronological order
Incremental backups only contain records that changed since the previous run. When applying incrementals, use PATCH (update) for records that already exist from the full restore, and POST (create) for new records.
```bash
# Decrypt all backups in order
./decrypt-backup.sh full_backup.tar.gz.gpg ./restore-base/
./decrypt-backup.sh incremental_1.tar.gz.gpg ./restore-incr1/
./decrypt-backup.sh incremental_2.tar.gz.gpg ./restore-incr2/
# ... apply each in order via the API
```
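The merge semantics above can be sketched as: overlay each incremental onto the full snapshot in chronological order, with the latest version of a record (matched by `id`) winning. A hypothetical `merge_snapshots` helper:

```python
def merge_snapshots(full_records, incremental_batches):
    """Overlay incremental batches (in chronological order) onto the full
    snapshot; the latest version of each record wins, keyed by 'id'."""
    merged = {r["id"]: r for r in full_records}
    for batch in incremental_batches:
        for rec in batch:
            merged[rec["id"]] = rec  # update existing or add new
    return list(merged.values())
```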
Check `export-metadata.json` in each backup to verify the chronological order and mode:
```json
{
"mode": "incremental",
"sinceDateTime": "2026-02-16T02:00:00Z",
"exportDate": "2026-02-16 02:15:00 UTC"
}
```
## Entities Reference
### Writable Entities (can be restored via POST/PATCH)
| Entity | API Endpoint | Notes |
|--------|-------------|-------|
| accounts | `companies({id})/accounts` | Chart of accounts |
| customers | `companies({id})/customers` | |
| vendors | `companies({id})/vendors` | |
| items | `companies({id})/items` | |
| bankAccounts | `companies({id})/bankAccounts` | |
| employees | `companies({id})/employees` | |
| contacts | `companies({id})/contacts` | |
| currencies | `companies({id})/currencies` | |
| countriesRegions | `companies({id})/countriesRegions` | |
| paymentTerms | `companies({id})/paymentTerms` | |
| paymentMethods | `companies({id})/paymentMethods` | |
| shipmentMethods | `companies({id})/shipmentMethods` | |
| taxAreas | `companies({id})/taxAreas` | |
| taxGroups | `companies({id})/taxGroups` | |
| unitsOfMeasure | `companies({id})/unitsOfMeasure` | |
| itemCategories | `companies({id})/itemCategories` | |
| dimensions | `companies({id})/dimensions` | |
| dimensionValues | `companies({id})/dimensionValues` | |
| locations | `companies({id})/locations` | |
| opportunities | `companies({id})/opportunities` | |
| projects | `companies({id})/projects` | |
| journals | `companies({id})/journals` | |
| journalLines | `companies({id})/journalLines` | Post journals to create GL entries |
| salesInvoices | `companies({id})/salesInvoices` | + salesInvoiceLines |
| salesOrders | `companies({id})/salesOrders` | + salesOrderLines |
| salesQuotes | `companies({id})/salesQuotes` | + salesQuoteLines |
| salesCreditMemos | `companies({id})/salesCreditMemos` | + salesCreditMemoLines |
| purchaseInvoices | `companies({id})/purchaseInvoices` | + purchaseInvoiceLines |
| purchaseOrders | `companies({id})/purchaseOrders` | + purchaseOrderLines |
| customerPaymentJournals | `companies({id})/customerPaymentJournals` | + customerPayments |
| vendorPaymentJournals | `companies({id})/vendorPaymentJournals` | + vendorPayments |
| timeRegistrationEntries | `companies({id})/timeRegistrationEntries` | |
| irs1099 | `companies({id})/irs1099` | US only |
### Read-Only Entities (backed up for reference, cannot be restored via API)
| Entity | Description |
|--------|-------------|
| generalLedgerEntries | Restore via journal posting instead |
| itemLedgerEntries | Created by posting transactions |
| companyInformation | Update only, not create |
| generalProductPostingGroups | Setup data, usually pre-exists |
| inventoryPostingGroups | Setup data, usually pre-exists |
| agedAccountsPayable | Computed report |
| agedAccountsReceivable | Computed report |
| balanceSheet | Computed report |
| cashFlowStatement | Computed report |
| incomeStatement | Computed report |
| retainedEarningsStatement | Computed report |
| trialBalance | Computed report |
| customerFinancialDetails | Computed report |
| customerSales | Computed report |
| vendorPurchases | Computed report |
| salesShipments / lines | Created by posting sales orders |
| purchaseReceipts / lines | Created by posting purchase orders |
## Troubleshooting
### "The field cannot be modified" error
Some fields are read-only on POST/PATCH. Remove them from the request body. Common read-only fields: `id`, `lastModifiedDateTime`, `@odata.*`, `systemCreatedAt`, `systemModifiedAt`.
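A sketch of stripping those fields from a record before POST/PATCH (the field list mirrors the one above; extend it as needed for each entity):

```python
READ_ONLY_FIELDS = {"id", "lastModifiedDateTime", "systemCreatedAt", "systemModifiedAt"}

def writable_body(record):
    """Drop known read-only fields and all @odata.* annotations."""
    return {k: v for k, v in record.items()
            if k not in READ_ONLY_FIELDS and not k.startswith("@odata")}
```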
### "The record already exists" error
The record with that number/code already exists. Use PATCH to update it instead of POST to create it.
### "The MIME type is not valid" error
Make sure `Content-Type: application/json` is set in the request headers.
### Rate limiting (HTTP 429)
BC API has rate limits. Add a delay between requests (100-200ms) or implement exponential backoff.
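A generic retry wrapper with exponential backoff, as a sketch (the `is_rate_limited` predicate is an assumption about how your HTTP client surfaces HTTP 429):

```python
import time

def with_backoff(call, is_rate_limited, max_retries=5, base_delay=0.2):
    """Retry `call` when `is_rate_limited(exc)` reports an HTTP 429 failure,
    doubling the delay each attempt: 0.2s, 0.4s, 0.8s, ..."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```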
### Entity not found (HTTP 404)
The entity endpoint name may differ between BC versions. Check the BC API v2.0 documentation for your BC version.