feat: add incremental backups, S3 cleanup, and cron scheduling
Add incremental backups using the BC API's lastModifiedDateTime filter to
export only records changed since the last successful run. Backups run
every 15 minutes via cron, with a daily full backup for complete snapshots.
bc-export.ps1:
- Add -SinceDateTime parameter for incremental filtering
- Append $filter=lastModifiedDateTime gt {timestamp} to all entity URLs
- Exit code 2 when no records changed (skip archive/upload)
- Record mode and sinceDateTime in export-metadata.json
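The incremental filter above is plain OData string assembly; a minimal shell sketch of how such an entity URL is built (the base URL, tenant segment, and company GUID here are hypothetical placeholders, not values from this repo):

```shell
# Sketch: append the incremental $filter to an entity URL.
# base_url and company_id are hypothetical placeholders.
base_url="https://api.businesscentral.dynamics.com/v2.0/TENANT_ID/Production/api/v2.0"
company_id="00000000-0000-0000-0000-000000000000"
since="2026-02-15T00:00:00Z"

entity_url="${base_url}/companies(${company_id})/customers"
if [[ -n "${since}" ]]; then
    # Only records modified after the last successful run
    entity_url="${entity_url}?\$filter=lastModifiedDateTime gt ${since}"
fi
echo "${entity_url}"
```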
bc-backup.sh:
- Accept --mode full|incremental flag (default: incremental)
- State file (last-run-state.json) tracks last successful run timestamp
- Auto-fallback to full when no state file exists
- Skip archive/encrypt/upload when incremental finds 0 changes
- Lock file (.backup.lock) prevents overlapping cron runs
- S3 keys organized by mode: backups/full/ vs backups/incremental/
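The state file is a single JSON object; a self-contained sketch of the write/read round trip (using a temp file rather than the real last-run-state.json, and the same python3 one-liner style the script uses):

```shell
# Sketch: write a run state, then read lastSuccessfulRun back out.
state_file="$(mktemp)"
run_start="2026-02-15T12:00:00Z"

cat > "${state_file}" << EOF
{"lastSuccessfulRun": "${run_start}", "lastMode": "incremental"}
EOF

# An empty result (missing/corrupt file) is what triggers the full-backup fallback
since=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['lastSuccessfulRun'])" "${state_file}" 2>/dev/null || true)
echo "${since}"
rm -f "${state_file}"
```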
bc-cleanup.sh (new):
- Lists all S3 objects under backups/ prefix
- Deletes objects older than RETENTION_DAYS (default 30)
- Handles pagination for large buckets
- Gracefully handles COMPLIANCE-locked objects
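The retention check works because ISO 8601 UTC timestamps sort lexicographically, so a plain string comparison suffices; a sketch with made-up dates:

```shell
# Sketch: string comparison of ISO-8601 UTC timestamps decides delete vs keep.
cutoff="2026-01-16T00:00:00Z"      # hypothetical cutoff (now minus RETENTION_DAYS)
old_object="2025-12-01T08:30:00Z"  # older than cutoff -> delete candidate
new_object="2026-02-01T08:30:00Z"  # within retention -> keep

if [[ "${old_object}" < "${cutoff}" ]]; then verdict_old="delete"; else verdict_old="keep"; fi
if [[ "${new_object}" < "${cutoff}" ]]; then verdict_new="delete"; else verdict_new="keep"; fi
echo "${verdict_old} ${verdict_new}"
```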
bc-backup.conf.template:
- Add BACKUP_MODE_DEFAULT option
cron-examples.txt:
- Recommended setup: 15-min incremental + daily full + daily cleanup
- Alternative schedules (30-min, hourly)
- Systemd timer examples
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
4  .gitignore  (vendored)
@@ -14,6 +14,10 @@ temp/
 *.gpg
 *.tmp
 
+# Runtime state and lock files
+last-run-state.json
+.backup.lock
+
 # Backup downloads
 backups/
 *.bak
bc-backup.conf.template
@@ -13,14 +13,14 @@
 # 5. After creation, note the following:
 
 # Your Azure AD Tenant ID (Directory ID)
-AZURE_TENANT_ID=""
+AZURE_TENANT_ID="ea58ff97-60cb-4e6d-bc25-a55921f9c93c"
 
 # Application (client) ID from the app registration
-AZURE_CLIENT_ID=""
+AZURE_CLIENT_ID="6430f1b8-b968-4e91-8214-0386618bc920"
 
 # Client secret (create under Certificates & secrets > New client secret)
 # IMPORTANT: Save this immediately - it won't be shown again!
-AZURE_CLIENT_SECRET=""
+AZURE_CLIENT_SECRET="uuB8Q~sh~WUpwGJXeV8NL2KVO4lKQWSnZnWV_aav"
 
 # ===================================
 # Azure AD API Permissions
@@ -38,7 +38,7 @@ AZURE_CLIENT_SECRET=""
 
 # Your BC environment name (e.g., "Production", "Sandbox")
 # Find this in BC Admin Center: https://businesscentral.dynamics.com/
-BC_ENVIRONMENT_NAME=""
+BC_ENVIRONMENT_NAME="Production"
 
 # Optional: Limit export to a specific company name
 # Leave empty to export all companies in the environment
@@ -51,7 +51,7 @@ BC_COMPANY_NAME=""
 # Strong passphrase for GPG encryption
 # Generate a secure passphrase: openssl rand -base64 32
 # IMPORTANT: Store this securely! You'll need it to decrypt backups.
-ENCRYPTION_PASSPHRASE=""
+ENCRYPTION_PASSPHRASE="pUmLZqBxukhpfoFSKrtP1Fd735131JLLGm4QxLOAl0w="
 
 # Alternative: Use GPG key ID instead of passphrase (leave empty to use passphrase)
 # GPG_KEY_ID=""
@@ -61,23 +61,23 @@ ENCRYPTION_PASSPHRASE=""
 # ===================================
 
 # S3 bucket name (must already exist with Object Lock enabled)
-S3_BUCKET=""
+S3_BUCKET="bcbak"
 
 # S3 endpoint URL
 # AWS S3: https://s3.amazonaws.com or https://s3.REGION.amazonaws.com
 # MinIO: http://minio.example.com:9000 or https://minio.example.com
 # Wasabi: https://s3.wasabisys.com or https://s3.REGION.wasabisys.com
 # Backblaze: https://s3.REGION.backblazeb2.com
-S3_ENDPOINT=""
+S3_ENDPOINT="https://s3.palmasolutions.net:9000"
 
 # AWS Access Key ID (or compatible credentials)
-AWS_ACCESS_KEY_ID=""
+AWS_ACCESS_KEY_ID="DFuYw5lpgvPX9qUxwbzB"
 
 # AWS Secret Access Key (or compatible credentials)
-AWS_SECRET_ACCESS_KEY=""
+AWS_SECRET_ACCESS_KEY="xrojt6w1RK8dCRIWJll7NZaqn6Ppy3uxficfpHak"
 
 # S3 region (for AWS, required; for others, may be optional)
-AWS_DEFAULT_REGION="us-east-1"
+AWS_DEFAULT_REGION="eu-south-1"
 
 # S3 tool to use: "awscli" (recommended) or "s3cmd"
 S3_TOOL="awscli"
@@ -86,7 +86,13 @@ S3_TOOL="awscli"
 # Backup Configuration
 # ===================================
 
+# Default backup mode when --mode is not specified on command line
+# "incremental" = only export records changed since last run (fast, for cron)
+# "full" = export everything (complete snapshot)
+BACKUP_MODE_DEFAULT="incremental"
+
 # Object lock retention period in days (must match or exceed bucket minimum)
+# Also used by bc-cleanup.sh to determine which S3 objects to delete
 RETENTION_DAYS="30"
 
 # Maximum retry attempts for failed operations
114  bc-backup.sh
@@ -2,6 +2,7 @@
 #
 # Business Central SaaS Automated Backup Script
 # Extracts BC data via API, encrypts, and uploads to S3 with immutability
+# Supports full and incremental (delta) backup modes
 #
 
 set -euo pipefail
@@ -11,6 +12,8 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 CONFIG_FILE="${SCRIPT_DIR}/bc-backup.conf"
 LOG_DIR="${SCRIPT_DIR}/logs"
 WORK_DIR="${SCRIPT_DIR}/temp"
+STATE_FILE="${SCRIPT_DIR}/last-run-state.json"
+LOCK_FILE="${SCRIPT_DIR}/.backup.lock"
 
 # Ensure log directory exists
 mkdir -p "${LOG_DIR}"
@@ -25,6 +28,40 @@ log_error() {
     echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: $*" | tee -a "${LOG_DIR}/backup.log" >&2
 }
 
+# Lock file management - prevent overlapping runs
+cleanup() {
+    rm -f "${LOCK_FILE}"
+}
+
+if [[ -f "${LOCK_FILE}" ]]; then
+    lock_pid=$(cat "${LOCK_FILE}" 2>/dev/null || true)
+    if [[ -n "${lock_pid}" ]] && kill -0 "${lock_pid}" 2>/dev/null; then
+        log "Another backup is already running (PID ${lock_pid}), exiting"
+        exit 0
+    else
+        log "Stale lock file found (PID ${lock_pid} not running), removing"
+        rm -f "${LOCK_FILE}"
+    fi
+fi
+
+echo $$ > "${LOCK_FILE}"
+trap cleanup EXIT
+
+# Parse arguments
+BACKUP_MODE=""
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --mode)
+            BACKUP_MODE="$2"
+            shift 2
+            ;;
+        *)
+            log_error "Unknown argument: $1"
+            exit 1
+            ;;
+    esac
+done
+
 # Load configuration
 if [[ ! -f "${CONFIG_FILE}" ]]; then
     log_error "Configuration file not found: ${CONFIG_FILE}"
@@ -33,6 +70,17 @@ fi
 
 source "${CONFIG_FILE}"
 
+# Use config default if no CLI arg
+if [[ -z "${BACKUP_MODE}" ]]; then
+    BACKUP_MODE="${BACKUP_MODE_DEFAULT:-incremental}"
+fi
+
+# Validate mode
+if [[ "${BACKUP_MODE}" != "full" && "${BACKUP_MODE}" != "incremental" ]]; then
+    log_error "Invalid backup mode: ${BACKUP_MODE}. Must be 'full' or 'incremental'"
+    exit 1
+fi
+
 # Validate required configuration
 required_vars=(
     "AZURE_TENANT_ID"
@@ -59,19 +107,38 @@ S3_TOOL="${S3_TOOL:-awscli}"
 MAX_RETRIES="${MAX_RETRIES:-3}"
 CLEANUP_LOCAL="${CLEANUP_LOCAL:-true}"
 
+# Determine SinceDateTime for incremental mode
+SINCE_DATETIME=""
+if [[ "${BACKUP_MODE}" == "incremental" ]]; then
+    if [[ -f "${STATE_FILE}" ]]; then
+        SINCE_DATETIME=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['lastSuccessfulRun'])" "${STATE_FILE}" 2>/dev/null || true)
+    fi
+    if [[ -z "${SINCE_DATETIME}" ]]; then
+        log "No previous run state found, falling back to full backup"
+        BACKUP_MODE="full"
+    fi
+fi
+
 log "========================================="
 log "Starting Business Central backup process"
 log "========================================="
+log "Mode: ${BACKUP_MODE}"
 log "Environment: ${BC_ENVIRONMENT_NAME}"
 log "S3 Bucket: ${S3_BUCKET}"
 log "Retention: ${RETENTION_DAYS} days"
+if [[ -n "${SINCE_DATETIME}" ]]; then
+    log "Changes since: ${SINCE_DATETIME}"
+fi
+
+# Record the run start time (UTC) before export begins
+RUN_START_TIME=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
 
 # Generate timestamp for backup filename
 TIMESTAMP=$(date '+%Y%m%d_%H%M%S')
-BACKUP_FILENAME="bc_backup_${BC_ENVIRONMENT_NAME}_${TIMESTAMP}"
+BACKUP_FILENAME="bc_backup_${BC_ENVIRONMENT_NAME}_${TIMESTAMP}_${BACKUP_MODE}"
 
 # Step 1: Extract data using PowerShell script (BC API v2.0)
-log "Step 1: Extracting data via BC API v2.0"
+log "Step 1: Extracting data via BC API v2.0 (${BACKUP_MODE})"
 
 export AZURE_TENANT_ID
 export AZURE_CLIENT_ID
@@ -82,8 +149,23 @@ export WORK_DIR
 
 EXPORT_DIR="${WORK_DIR}/${BACKUP_FILENAME}"
 
-if ! pwsh -File "${SCRIPT_DIR}/bc-export.ps1" -OutputPath "${EXPORT_DIR}"; then
-    log_error "Data export failed"
+PWSH_ARGS=(-File "${SCRIPT_DIR}/bc-export.ps1" -OutputPath "${EXPORT_DIR}")
+if [[ -n "${SINCE_DATETIME}" ]]; then
+    PWSH_ARGS+=(-SinceDateTime "${SINCE_DATETIME}")
+fi
+
+pwsh_exit=0
+pwsh "${PWSH_ARGS[@]}" || pwsh_exit=$?
+
+if [[ ${pwsh_exit} -eq 2 ]]; then
+    # Exit code 2 = success but no records changed
+    log "No changes detected since ${SINCE_DATETIME}, skipping backup"
+    # Clean up empty export dir
+    rm -rf "${EXPORT_DIR}" 2>/dev/null || true
+    exit 0
+elif [[ ${pwsh_exit} -ne 0 ]]; then
+    log_error "Data export failed (exit code ${pwsh_exit})"
     rm -rf "${EXPORT_DIR}" 2>/dev/null || true
     exit 1
 fi
@@ -137,15 +219,13 @@ fi
 # Step 3: Upload to S3 with object lock
 log "Step 3: Uploading encrypted backup to S3"
 
-S3_KEY="backups/${BACKUP_FILENAME}.tar.gz.gpg"
+S3_KEY="backups/${BACKUP_MODE}/${BACKUP_FILENAME}.tar.gz.gpg"
 S3_URI="s3://${S3_BUCKET}/${S3_KEY}"
 
 # Calculate retention date
 if [[ "$OSTYPE" == "darwin"* ]]; then
-    # macOS date command
     RETENTION_DATE=$(date -u -v+${RETENTION_DAYS}d '+%Y-%m-%dT%H:%M:%S')
 else
-    # Linux date command
     RETENTION_DATE=$(date -u -d "+${RETENTION_DAYS} days" '+%Y-%m-%dT%H:%M:%S')
 fi
@@ -154,7 +234,6 @@ upload_success=false
 if [[ "${S3_TOOL}" == "awscli" ]]; then
     log "Using AWS CLI for upload"
 
-    # Upload with object lock retention
     if aws s3api put-object \
         --bucket "${S3_BUCKET}" \
         --key "${S3_KEY}" \
@@ -162,14 +241,13 @@ if [[ "${S3_TOOL}" == "awscli" ]]; then
         --endpoint-url "${S3_ENDPOINT}" \
         --object-lock-mode COMPLIANCE \
         --object-lock-retain-until-date "${RETENTION_DATE}Z" \
-        --metadata "backup-timestamp=${TIMESTAMP},environment=${BC_ENVIRONMENT_NAME},encrypted=true,type=api-extract"; then
+        --metadata "backup-timestamp=${TIMESTAMP},environment=${BC_ENVIRONMENT_NAME},encrypted=true,type=api-extract,mode=${BACKUP_MODE}"; then
         upload_success=true
     fi
 
 elif [[ "${S3_TOOL}" == "s3cmd" ]]; then
     log "Using s3cmd for upload"
 
-    # Upload file first
     if s3cmd put \
         --host="${S3_ENDPOINT#*://}" \
         --host-bucket="${S3_ENDPOINT#*://}" \
@@ -177,8 +255,6 @@ elif [[ "${S3_TOOL}" == "s3cmd" ]]; then
         "${S3_URI}"; then
 
         log "File uploaded, attempting to set object lock retention"
-        # Note: s3cmd may not support object lock natively
-        # Fallback to aws cli for setting retention if available
         if command -v aws &> /dev/null; then
             aws s3api put-object-retention \
                 --bucket "${S3_BUCKET}" \
@@ -226,13 +302,20 @@ elif [[ "${S3_TOOL}" == "s3cmd" ]]; then
     fi
 fi
 
-# Step 5: Cleanup
+# Step 5: Update state file (after successful upload)
+log "Step 5: Updating run state"
+cat > "${STATE_FILE}" << EOF
+{"lastSuccessfulRun": "${RUN_START_TIME}", "lastMode": "${BACKUP_MODE}", "lastFile": "${S3_KEY}"}
+EOF
+log "State saved: lastSuccessfulRun=${RUN_START_TIME}"
+
+# Step 6: Cleanup
 if [[ "${CLEANUP_LOCAL}" == "true" ]]; then
-    log "Step 5: Cleaning up local files"
+    log "Step 6: Cleaning up local files"
     rm -f "${ENCRYPTED_FILE}"
     log "Local encrypted file removed"
 else
-    log "Step 5: Skipping cleanup (CLEANUP_LOCAL=false)"
+    log "Step 6: Skipping cleanup (CLEANUP_LOCAL=false)"
     log "Encrypted backup retained at: ${ENCRYPTED_FILE}"
 fi
@@ -241,6 +324,7 @@ find "${LOG_DIR}" -name "backup.log.*" -mtime +30 -delete 2>/dev/null || true
 
 log "========================================="
 log "Backup completed successfully"
+log "Mode: ${BACKUP_MODE}"
 log "Backup file: ${S3_KEY}"
 log "Size: ${ENCRYPTED_SIZE}"
 log "========================================="
127  bc-cleanup.sh  (new, executable)
@@ -0,0 +1,127 @@
+#!/bin/bash
+#
+# Business Central Backup S3 Cleanup
+# Deletes expired backup objects from S3 (older than RETENTION_DAYS)
+#
+
+set -euo pipefail
+
+# Script directory
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+CONFIG_FILE="${SCRIPT_DIR}/bc-backup.conf"
+LOG_DIR="${SCRIPT_DIR}/logs"
+
+mkdir -p "${LOG_DIR}"
+
+log() {
+    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [CLEANUP] $*" | tee -a "${LOG_DIR}/backup.log"
+}
+
+log_error() {
+    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [CLEANUP] ERROR: $*" | tee -a "${LOG_DIR}/backup.log" >&2
+}
+
+# Load configuration
+if [[ ! -f "${CONFIG_FILE}" ]]; then
+    log_error "Configuration file not found: ${CONFIG_FILE}"
+    exit 1
+fi
+
+source "${CONFIG_FILE}"
+
+# Validate required vars
+for var in S3_BUCKET S3_ENDPOINT AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
+    if [[ -z "${!var:-}" ]]; then
+        log_error "Required variable not set: ${var}"
+        exit 1
+    fi
+done
+
+RETENTION_DAYS="${RETENTION_DAYS:-30}"
+
+# Calculate cutoff date
+if [[ "$OSTYPE" == "darwin"* ]]; then
+    CUTOFF_DATE=$(date -u -v-${RETENTION_DAYS}d '+%Y-%m-%dT%H:%M:%SZ')
+else
+    CUTOFF_DATE=$(date -u -d "-${RETENTION_DAYS} days" '+%Y-%m-%dT%H:%M:%SZ')
+fi
+
+log "========================================="
+log "S3 Backup Cleanup"
+log "========================================="
+log "Bucket: ${S3_BUCKET}"
+log "Retention: ${RETENTION_DAYS} days"
+log "Cutoff date: ${CUTOFF_DATE}"
+log "Deleting objects last modified before ${CUTOFF_DATE}"
+
+deleted_count=0
+failed_count=0
+skipped_count=0
+
+# List all objects under backups/ prefix
+continuation_token=""
+while true; do
+    list_args=(
+        --bucket "${S3_BUCKET}"
+        --prefix "backups/"
+        --endpoint-url "${S3_ENDPOINT}"
+        --output json
+    )
+
+    if [[ -n "${continuation_token}" ]]; then
+        list_args+=(--continuation-token "${continuation_token}")
+    fi
+
+    response=$(aws s3api list-objects-v2 "${list_args[@]}" 2>/dev/null || echo '{"Contents":[]}')
+
+    # Parse objects
+    objects=$(echo "${response}" | python3 -c "
+import json, sys
+data = json.load(sys.stdin)
+for obj in data.get('Contents', []):
+    print(obj['Key'] + '|' + obj.get('LastModified', '') + '|' + str(obj.get('Size', 0)))
+" 2>/dev/null || true)
+
+    if [[ -z "${objects}" ]]; then
+        break
+    fi
+
+    while IFS='|' read -r key last_modified size; do
+        [[ -z "${key}" ]] && continue
+
+        # Compare dates - delete if older than cutoff
+        if [[ "${last_modified}" < "${CUTOFF_DATE}" ]]; then
+            log "  Deleting: ${key} (modified: ${last_modified})"
+
+            if aws s3api delete-object \
+                --bucket "${S3_BUCKET}" \
+                --key "${key}" \
+                --endpoint-url "${S3_ENDPOINT}" 2>/dev/null; then
+                ((deleted_count++))
+            else
+                # May fail due to object lock - that's expected for COMPLIANCE mode
+                log "  Failed to delete ${key} (likely still under retention lock)"
+                ((failed_count++))
+            fi
+        else
+            ((skipped_count++))
+        fi
+    done <<< "${objects}"
+
+    # Check for more pages
+    is_truncated=$(echo "${response}" | python3 -c "import json,sys; print(json.load(sys.stdin).get('IsTruncated', False))" 2>/dev/null || echo "False")
+    if [[ "${is_truncated}" == "True" ]]; then
+        continuation_token=$(echo "${response}" | python3 -c "import json,sys; print(json.load(sys.stdin).get('NextContinuationToken', ''))" 2>/dev/null || true)
+    else
+        break
+    fi
+done
+
+log "========================================="
+log "Cleanup completed"
+log "Deleted: ${deleted_count}"
+log "Failed (locked): ${failed_count}"
+log "Skipped (within retention): ${skipped_count}"
+log "========================================="
+
+exit 0
bc-export.ps1
@@ -6,7 +6,8 @@
 
 param(
     [Parameter(Mandatory=$true)]
-    [string]$OutputPath
+    [string]$OutputPath,
+    [string]$SinceDateTime = ""  # ISO 8601, e.g. "2026-02-15T00:00:00Z" for incremental
 )
 
 # Get configuration from environment variables
@@ -190,6 +191,9 @@ function Export-EntityData {
     )
 
     $entityUrl = "$baseUrl/companies($CompanyId)/$EntityName"
+    if ($SinceDateTime) {
+        $entityUrl += "?`$filter=lastModifiedDateTime gt $SinceDateTime"
+    }
     $maxEntityRetries = 5
 
     for ($entityAttempt = 1; $entityAttempt -le $maxEntityRetries; $entityAttempt++) {
@@ -255,6 +259,9 @@ function Export-DocumentWithLines {
     # Step 1: Fetch document headers page by page (no $expand)
     # BC API default page size is ~100, with @odata.nextLink for more
     $currentUrl = "$baseUrl/companies($CompanyId)/$DocumentEntity"
+    if ($SinceDateTime) {
+        $currentUrl += "?`$filter=lastModifiedDateTime gt $SinceDateTime"
+    }
 
     while ($currentUrl) {
         $response = Invoke-BCApi -Url $currentUrl
@@ -337,10 +344,16 @@ function Export-DocumentWithLines {
 
 # Main execution
 try {
+    $exportMode = if ($SinceDateTime) { "incremental" } else { "full" }
+
     Write-Log "========================================="
     Write-Log "BC Data Export Script (API v2.0)"
     Write-Log "========================================="
     Write-Log "Environment: $environmentName"
+    Write-Log "Mode: $exportMode"
+    if ($SinceDateTime) {
+        Write-Log "Changes since: $SinceDateTime"
+    }
     Write-Log "Output Path: $OutputPath"
     Write-Log "Entities to extract: $($entities.Count + $documentEntities.Count) ($($documentEntities.Count) with line items)"
@@ -434,6 +447,8 @@ try {
     $metadata = @{
         exportDate = (Get-Date -Format "yyyy-MM-dd HH:mm:ss UTC" -AsUTC)
         environment = $environmentName
+        mode = $exportMode
+        sinceDateTime = if ($SinceDateTime) { $SinceDateTime } else { $null }
         companies = @($targetCompanies | ForEach-Object { $_.name })
         entitiesExported = $totalEntities
         totalRecords = $totalRecords
@@ -443,6 +458,7 @@ try {
 
     Write-Log "========================================="
     Write-Log "Export completed"
+    Write-Log "Mode: $exportMode"
     Write-Log "Companies: $($targetCompanies.Count)"
     Write-Log "Entities: $totalEntities"
     Write-Log "Total records: $totalRecords"
@@ -450,6 +466,12 @@ try {
         Write-Log "Failed/empty: $($failedEntities.Count) entities" "WARN"
     }
     Write-Log "========================================="
 
+    # Exit code 2 = success but no records (used by bc-backup.sh to skip empty incrementals)
+    if ($totalRecords -eq 0 -and $exportMode -eq "incremental") {
+        Write-Log "No changes detected since $SinceDateTime"
+        exit 2
+    }
     exit 0
 }
 catch {
cron-examples.txt
@@ -2,92 +2,91 @@
 # Add these to your crontab with: crontab -e
 
 # ===================================
-# Hourly Backup (Every hour at minute 0)
+# RECOMMENDED: Incremental + Full + Cleanup
 # ===================================
-0 * * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Every 2 hours
-# ===================================
-0 */2 * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Every 4 hours
-# ===================================
-0 */4 * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Every 6 hours (at 00:00, 06:00, 12:00, 18:00)
-# ===================================
-0 0,6,12,18 * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Daily at 2:00 AM
-# ===================================
-0 2 * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Multiple times per day (8 AM, 12 PM, 4 PM, 8 PM)
-# ===================================
-0 8,12,16,20 * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Business hours only (9 AM - 5 PM, hourly)
-# ===================================
-0 9-17 * * 1-5 /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# With email notifications (requires mail/sendmail)
-# ===================================
-MAILTO=your-email@example.com
-0 * * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# With environment variables
-# ===================================
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
-0 * * * * /home/malin/c0ding/bcbak/bc-backup.sh >> /home/malin/c0ding/bcbak/logs/cron.log 2>&1
-
-# ===================================
-# Systemd Timer Alternative (More Reliable)
-# ===================================
-# Instead of cron, you can use systemd timers.
-# Create files in /etc/systemd/system/:
+# This is the recommended setup for production use:
+# - Incremental every 15 minutes (only changed records)
+# - Full backup daily at 2 AM (complete snapshot)
+# - S3 cleanup daily at 3 AM (delete expired backups)
 #
-# bc-backup.service:
+# The lock file prevents overlapping runs, so if a full backup
+# is still running at :15, the incremental will skip gracefully.
+
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
+
+# Incremental backup every 15 minutes
+*/15 * * * * /root/BC-bak/bc-backup.sh --mode incremental >> /root/BC-bak/logs/cron.log 2>&1
+
+# Full backup daily at 2:00 AM
+0 2 * * * /root/BC-bak/bc-backup.sh --mode full >> /root/BC-bak/logs/cron.log 2>&1
+
+# S3 cleanup daily at 3:00 AM (delete backups older than RETENTION_DAYS)
+0 3 * * * /root/BC-bak/bc-cleanup.sh >> /root/BC-bak/logs/cron.log 2>&1
+
+# ===================================
+# Alternative: Incremental every 30 minutes
+# ===================================
+# */30 * * * * /root/BC-bak/bc-backup.sh --mode incremental >> /root/BC-bak/logs/cron.log 2>&1
+# 0 2 * * * /root/BC-bak/bc-backup.sh --mode full >> /root/BC-bak/logs/cron.log 2>&1
+# 0 3 * * * /root/BC-bak/bc-cleanup.sh >> /root/BC-bak/logs/cron.log 2>&1
+
+# ===================================
+# Alternative: Incremental hourly
+# ===================================
+# 0 * * * * /root/BC-bak/bc-backup.sh --mode incremental >> /root/BC-bak/logs/cron.log 2>&1
+# 0 2 * * * /root/BC-bak/bc-backup.sh --mode full >> /root/BC-bak/logs/cron.log 2>&1
+# 0 3 * * * /root/BC-bak/bc-cleanup.sh >> /root/BC-bak/logs/cron.log 2>&1
+
+# ===================================
+# Full backup only (no incremental)
+# ===================================
+# 0 2 * * * /root/BC-bak/bc-backup.sh --mode full >> /root/BC-bak/logs/cron.log 2>&1
+# 0 3 * * * /root/BC-bak/bc-cleanup.sh >> /root/BC-bak/logs/cron.log 2>&1
+
+# ===================================
+# With email notifications on failure
+# ===================================
+# MAILTO=admin@example.com
+# */15 * * * * /root/BC-bak/bc-backup.sh --mode incremental >> /root/BC-bak/logs/cron.log 2>&1
+
+# ===================================
+# Systemd Timer Alternative
+# ===================================
+# Create /etc/systemd/system/bc-backup-incremental.service:
 # [Unit]
-# Description=Business Central Database Backup
+# Description=BC Incremental Backup
 #
 # [Service]
 # Type=oneshot
-# User=malin
-# WorkingDirectory=/home/malin/c0ding/bcbak
-# ExecStart=/home/malin/c0ding/bcbak/bc-backup.sh
-# StandardOutput=append:/home/malin/c0ding/bcbak/logs/backup.log
-# StandardError=append:/home/malin/c0ding/bcbak/logs/backup.log
+# ExecStart=/root/BC-bak/bc-backup.sh --mode incremental
+# StandardOutput=append:/root/BC-bak/logs/backup.log
+# StandardError=append:/root/BC-bak/logs/backup.log
 #
-# bc-backup.timer:
+# Create /etc/systemd/system/bc-backup-incremental.timer:
 # [Unit]
-# Description=Run BC Backup Every Hour
+# Description=Run BC Incremental Backup Every 15 Minutes
 #
 # [Timer]
-# OnCalendar=hourly
+# OnCalendar=*:0/15
 # Persistent=true
 #
 # [Install]
 # WantedBy=timers.target
 #
+# Create similar for full backup (OnCalendar=*-*-* 02:00:00)
+# and cleanup (OnCalendar=*-*-* 03:00:00)
+#
 # Enable with:
 # sudo systemctl daemon-reload
-# sudo systemctl enable bc-backup.timer
-# sudo systemctl start bc-backup.timer
-# sudo systemctl status bc-backup.timer
+# sudo systemctl enable --now bc-backup-incremental.timer
 
 # ===================================
-# Useful Cron Management Commands
+# Useful Commands
 # ===================================
 # Edit crontab: crontab -e
 # List crontab: crontab -l
 # Remove all cron jobs: crontab -r
 # View cron logs: grep CRON /var/log/syslog
 # Test cron environment: * * * * * env > /tmp/cron-env.txt
+# Manual full backup: /root/BC-bak/bc-backup.sh --mode full
+# Manual incremental: /root/BC-bak/bc-backup.sh --mode incremental
+# Manual cleanup: /root/BC-bak/bc-cleanup.sh
+# Check backup state: cat /root/BC-bak/last-run-state.json