Compare commits

1 commit: ci-add-tes ... refactor-t

| Author | SHA1 | Date |
|---|---|---|
|  | e5cd26ef16 |  |

.gitignore (vendored, new file, +34 lines)
@@ -0,0 +1,34 @@
# dependencies (bun install)
node_modules

# output
out
dist
*.tgz

# code coverage
coverage
*.lcov

# logs
logs
*.log
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local

# caches
.eslintcache
.cache
*.tsbuildinfo

# IntelliJ based IDEs
.idea

# Finder (MacOS) folder config
.DS_Store
README.md (162 lines)
@@ -1,6 +1,6 @@
 # BackupSidecar
 
-BackupSidecar is a lightweight backup and restore solution designed to run as a cron job in Kubernetes. It automates backups and restores using Restic and supports both directory and PostgreSQL database operations. Optional notifications can be sent via Gotify to keep you informed of operation results.
+BackupSidecar is a lightweight backup solution designed to run as a cron job in Kubernetes. It automates backups using Restic and supports both directory and PostgreSQL database backups. Notifications are sent via Gotify to keep you informed of backup results.
 
 ## Configuration
 
@@ -8,53 +8,34 @@ BackupSidecar is configured through environment variables. Below is a breakdown
 
 ### General Settings
 
-These variables apply to both backup and restore operations.
+These variables apply to both directory and PostgreSQL backups.
 
-- **`OPERATION_MODE`** _(optional)_ - Defines the operation type (`backup` or `restore`). Defaults to `backup`.
 - **`BACKUP_MODE`** _(optional)_ - Defines the backup type (`directory` or `postgres`). Defaults to `directory`.
 - **`RESTIC_PASSWORD`** _(required)_ - The encryption password for Restic.
 - **`RESTIC_REPOSITORY`** _(required)_ - The URI of the Restic repository (e.g., `rest:http://your-rest-server:8000/backup`).
 - **`RESTIC_REST_USERNAME`** _(optional)_ - The username for REST server authentication.
 - **`RESTIC_REST_PASSWORD`** _(optional)_ - The password for REST server authentication.
-- **`ENABLE_GOTIFY`** _(optional)_ - Enable Gotify notifications. Set to `true` to enable; any other value or unset disables notifications. Defaults to `true`.
-- **`GOTIFYHOST`** _(required when ENABLE_GOTIFY=true)_ - The Gotify server URL.
-- **`GOTIFYTOKEN`** _(required when ENABLE_GOTIFY=true)_ - The API token for Gotify.
-- **`GOTIFYTOPIC`** _(required when ENABLE_GOTIFY=true)_ - The topic under which backup notifications will be sent.
+- **`GOTIFYHOST`** _(required)_ - The Gotify server URL.
+- **`GOTIFYTOKEN`** _(required)_ - The API token for Gotify.
+- **`GOTIFYTOPIC`** _(required)_ - The topic under which backup notifications will be sent.
 
-### Directory Operations
+### Directory Backup
 
-When running in `directory` mode, the following variables must be set:
-
-**For Backup Operations:**
+When running in `directory` mode, the following variable must be set:
 
 - **`SOURCEDIR`** _(required)_ - The path of the directory to be backed up.
 
-**For Restore Operations:**
-
-- **`RESTOREDIR`** _(required)_ - The path where files should be restored to.
-- **`RESTORE_SNAPSHOT_ID`** _(optional)_ - The specific snapshot ID to restore (defaults to `latest`).
-
-### PostgreSQL Operations
+### PostgreSQL Backup
 
 For `postgres` mode, the following database-related variables are required:
 
-**Common Variables:**
-
 - **`PGHOST`** _(required)_ - The hostname of the PostgreSQL server.
-- **`PGDATABASE`** _(required)_ - The name of the database.
+- **`PGDATABASE`** _(required)_ - The name of the database to back up.
 - **`PGUSER`** _(required)_ - The PostgreSQL username.
 - **`PGPORT`** _(optional)_ - The port for PostgreSQL (defaults to `5432`).
 - **`PGPASSWORD`** _(optional)_ - The password for authentication. Setting this prevents interactive prompts.
-
-**Backup-Specific Variables:**
-
 - **`PG_DUMP_ARGS`** _(optional)_ - Additional flags for `pg_dump`.
 
-**Restore-Specific Variables:**
-
-- **`RESTORE_SNAPSHOT_ID`** _(optional)_ - The specific snapshot ID to restore (defaults to `latest`).
-- **`PSQL_ARGS`** _(optional)_ - Additional flags for `psql` (e.g., `--single-transaction`).
-
 ## Dependencies
 
 Ensure the following commands are available in the container:
@@ -62,13 +43,10 @@ Ensure the following commands are available in the container:
 - `restic`
 - `curl`
 - `jq`
-- `pg_dump` _(only required for PostgreSQL backup operations)_
-- `psql` _(only required for PostgreSQL restore operations)_
+- `pg_dump` _(only required for `postgres` mode)_
 
 ## Usage
 
-### Backup Operations
-
 Example Kubernetes CronJob manifest for running BackupSidecar as a cron job for directory backups in minimal configuration:
 
 ```yaml
@@ -104,8 +82,6 @@ spec:
             value: "directory" # or "postgres"
           - name: SOURCEDIR
             value: "/data/source"
-          - name: ENABLE_GOTIFY
-            value: "true"
          - name: GOTIFYHOST
            value: "http://gotify.example.com"
          - name: GOTIFYTOKEN
@@ -126,116 +102,12 @@ spec:
             claimName: source-data-pvc
 ```
 
-### Restore Operations
-
-Example Kubernetes Job manifest for running BackupSidecar to restore a directory:
-
-```yaml
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: backupsidecar-restore
-  namespace: authentik
-spec:
-  backoffLimit: 3
-  activeDeadlineSeconds: 600
-  template:
-    spec:
-      restartPolicy: OnFailure
-      containers:
-        - name: backupsidecar
-          image: backupsidecar:latest
-          env:
-            - name: OPERATION_MODE
-              value: "restore"
-            - name: BACKUP_MODE
-              value: "directory"
-            - name: RESTOREDIR
-              value: "/data/restore"
-            - name: RESTORE_SNAPSHOT_ID
-              value: "abc123def456" # optional, defaults to latest
-            - name: RESTIC_REPOSITORY
-              value: "rest:http://rest-server:8000/backup"
-            - name: RESTIC_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: backupsidecar-secret
-                  key: restic_password
-            - name: GOTIFYHOST
-              value: "http://gotify.example.com"
-            - name: GOTIFYTOKEN
-              valueFrom:
-                secretKeyRef:
-                  name: backupsidecar-secret
-                  key: gotify_token
-            - name: GOTIFYTOPIC
-              value: "Restore Notification"
-          volumeMounts:
-            - name: restore-data
-              mountPath: /data/restore
-      volumes:
-        - name: restore-data
-          persistentVolumeClaim:
-            claimName: restore-data-pvc
-```
-
-Example Kubernetes Job manifest for running BackupSidecar to restore a PostgreSQL database:
-
-```yaml
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: backupsidecar-postgres-restore
-  namespace: authentik
-spec:
-  backoffLimit: 3
-  activeDeadlineSeconds: 600
-  template:
-    spec:
-      restartPolicy: OnFailure
-      containers:
-        - name: backupsidecar
-          image: backupsidecar:latest
-          env:
-            - name: OPERATION_MODE
-              value: "restore"
-            - name: BACKUP_MODE
-              value: "postgres"
-            - name: PGHOST
-              value: "postgres.example.com"
-            - name: PGDATABASE
-              value: "mydatabase"
-            - name: PGUSER
-              value: "myuser"
-            - name: PGPASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: postgres-secret
-                  key: password
-            - name: PGPORT
-              value: "5432"
-            - name: RESTORE_SNAPSHOT_ID
-              value: "abc123def456" # optional, defaults to latest
-            - name: PSQL_ARGS
-              value: "--single-transaction" # optional
-            - name: RESTIC_REPOSITORY
-              value: "rest:http://rest-server:8000/backup"
-            - name: RESTIC_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: backupsidecar-secret
-                  key: restic_password
-            - name: GOTIFYHOST
-              value: "http://gotify.example.com"
-            - name: GOTIFYTOKEN
-              valueFrom:
-                secretKeyRef:
-                  name: backupsidecar-secret
-                  key: gotify_token
-            - name: GOTIFYTOPIC
-              value: "Database Restore Notification"
-```
-
 ## Notifications
 
-The script can send success or failure notifications via Gotify when enabled. To enable notifications, set `ENABLE_GOTIFY=true` and provide the required Gotify configuration variables (`GOTIFYHOST`, `GOTIFYTOKEN`, `GOTIFYTOPIC`). When notifications are disabled, backup status messages are still logged to the console.
+The script sends success or failure notifications via Gotify.
 
 Example success notification:
 
 ```
 Backup successful. Snapshot 56ff6a909a44e01f67d2d88f9a76aa713d437809d7ed14a2361e28893f38befb: files new: 1, files changed: 0, data added: 1019 bytes in 0.277535184 sec
 ```
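The README above drives everything through environment variables, several of which are required and several of which have defaults. As an illustrative aside (not part of this diff), a minimal shell sketch of the fail-fast validation such an entrypoint typically performs; the `require` helper and the example values are assumptions, only the variable names come from the README:

```shell
#!/usr/bin/env bash
# Sketch: validate required BackupSidecar variables before doing any work.
set -euo pipefail

require() {
  # Fail with a clear message if the named variable is unset or empty.
  local name="$1"
  if [ -z "${!name:-}" ]; then
    echo "ERROR: $name is required" >&2
    return 1
  fi
}

# Example values; in a real run these come from the Pod spec.
export RESTIC_PASSWORD="example"
export RESTIC_REPOSITORY="rest:http://rest-server:8000/backup"
export BACKUP_MODE="${BACKUP_MODE:-directory}"   # documented default

require RESTIC_PASSWORD
require RESTIC_REPOSITORY
echo "mode=$BACKUP_MODE repo=$RESTIC_REPOSITORY"
```

Failing before the first `restic` call keeps misconfiguration errors out of the repository and makes the Gotify failure message unambiguous.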
TESTING.md (deleted, -354 lines)
@@ -1,354 +0,0 @@
# Backup Script Testing Guide

This document provides step-by-step instructions for testing the backup script locally using Docker Compose with a Restic server and PostgreSQL database.

## Prerequisites

- Docker and Docker Compose installed
- The backup script (`src/backup.sh`) and its dependencies (restic, curl, jq, pg_dump, psql)
- Basic understanding of bash and PostgreSQL

## Setup

### 1. Start the Test Environment

```bash
# Start the services
docker-compose up -d

# Wait for services to be ready (about 10-15 seconds)
sleep 15

# Verify services are running
docker-compose ps
```

### 3. Initialize Restic Repositories

Before running any backup operations, you need to initialize the restic repositories:

```bash
# Initialize repository for directory backups
RESTIC_PASSWORD="testpassword123" restic -r "rest:http://localhost:8000/test-backup" init

# Initialize repository for PostgreSQL backups
RESTIC_PASSWORD="testpassword123" restic -r "rest:http://localhost:8000/postgres-backup" init
```

**Note**: The repositories only need to be initialized once. If you recreate the restic server container, you'll need to reinitialize the repositories.

## Directory Backup Testing

This section tests the directory backup and restore functionality.

### 1. Prepare Test Data

```bash
# Create a test directory with sample files
mkdir -p test_data
cd test_data

# Create various types of files
echo "This is a text file" > sample.txt
echo '{"name": "test", "value": 123}' > data.json
mkdir -p subdir
echo "Nested file content" > subdir/nested.txt
echo "Binary data" | base64 > binary.dat

# Create some larger files for testing
dd if=/dev/urandom of=large_file.bin bs=1M count=10 2>/dev/null

# Go back to project root
cd ..
```

### 2. Perform Directory Backup

```bash
# Set environment variables for directory backup
export OPERATION_MODE="backup"
export BACKUP_MODE="directory"
export RESTIC_PASSWORD="testpassword123"
export RESTIC_REPOSITORY="rest:http://localhost:8000/test-backup"
export SOURCEDIR="/tmp/test-data"
export ENABLE_GOTIFY="false"

# Run the backup
./src/backup.sh
```

### 3. Verify Backup

```bash
# List snapshots
restic -r "rest:http://localhost:8000/test-backup" snapshots --password-file <(echo "testpassword123")

# Check backup contents
restic -r "rest:http://localhost:8000/test-backup" ls latest --password-file <(echo "testpassword123")
```

### 4. Perform Directory Restore

```bash
# Create restore directory
RESTOR_DIR="/tmp/restored_data"
mkdir -p $RESTOR_DIR

# Set environment variables for directory restore
export OPERATION_MODE="restore"
export RESTOREDIR=$RESTOR_DIR
export RESTORE_SNAPSHOT_ID="latest"

# Run the restore
./src/backup.sh
```

### 5. Verify Directory Restore

```bash
# Compare original and restored directories
diff -r /tmp/test-data $RESTOR_DIR

# Check file contents
echo "Original file:"
cat test_data/sample.txt
echo "Restored file:"
cat restored_data/sample.txt

# Verify binary file integrity
md5sum test_data/large_file.bin
md5sum restored_data/large_file.bin

# Check directory structure
tree test_data
tree restored_data
```

### 6. Cleanup Directory Test

```bash
# Remove test directories
rm -rf test_data restored_data
```

## PostgreSQL Backup Testing

This section tests the PostgreSQL database backup and restore functionality.

### 1. Generate Test Data

```bash
# Generate test data in PostgreSQL
./generate_test_data.sh
```

### 2. Verify Initial Data

```bash
# Check that data exists
docker exec postgres-test psql -U testuser -d testdb -c "
SELECT 'customers' as table_name, COUNT(*) as row_count FROM customers
UNION ALL
SELECT 'orders' as table_name, COUNT(*) as row_count FROM orders;
"
```

### 3. Perform PostgreSQL Backup

```bash
# Set environment variables for PostgreSQL backup
export OPERATION_MODE="backup"
export BACKUP_MODE="postgres"
export RESTIC_PASSWORD="testpassword123"
export RESTIC_REPOSITORY="rest:http://localhost:8000/postgres-backup"
export PGHOST="localhost"
export PGPORT="5432"
export PGDATABASE="testdb"
export PGUSER="testuser"
export PGPASSWORD="testpass"
export ENABLE_GOTIFY="false"

# Run the backup
./src/backup.sh
```

### 4. Verify PostgreSQL Backup

```bash
# List snapshots
restic -r "rest:http://localhost:8000/postgres-backup" snapshots --password-file <(echo "testpassword123")

# Check backup contents
restic -r "rest:http://localhost:8000/postgres-backup" ls latest --password-file <(echo "testpassword123")
```

### 5. Clear the Database

```bash
# Drop all tables to simulate data loss
docker exec postgres-test psql -U testuser -d testdb -c "
DROP TABLE IF EXISTS orders CASCADE;
DROP TABLE IF EXISTS customers CASCADE;
"

# Verify database is empty
docker exec postgres-test psql -U testuser -d testdb -c "
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public';
"
```

### 6. Perform PostgreSQL Restore

```bash
# Set environment variables for PostgreSQL restore
export OPERATION_MODE="restore"
export RESTORE_SNAPSHOT_ID="latest"

# Run the restore
./src/backup.sh
```

### 7. Verify PostgreSQL Restore

```bash
# Check that data has been restored
docker exec postgres-test psql -U testuser -d testdb -c "
SELECT 'customers' as table_name, COUNT(*) as row_count FROM customers
UNION ALL
SELECT 'orders' as table_name, COUNT(*) as row_count FROM orders;
"

# Verify data integrity
docker exec postgres-test psql -U testuser -d testdb -c "
SELECT
    c.name as customer_name,
    COUNT(o.id) as order_count,
    SUM(o.price * o.quantity) as total_spent
FROM customers c
LEFT JOIN orders o ON c.id = o.customer_id
GROUP BY c.id, c.name
ORDER BY total_spent DESC
LIMIT 5;
"

# Check foreign key constraints
docker exec postgres-test psql -U testuser -d testdb -c "
SELECT
    tc.constraint_name,
    tc.table_name,
    kcu.column_name,
    ccu.table_name AS foreign_table_name,
    ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
    ON tc.constraint_name = kcu.constraint_name
    AND tc.table_schema = kcu.table_schema
JOIN information_schema.constraint_column_usage AS ccu
    ON ccu.constraint_name = tc.constraint_name
    AND ccu.table_schema = tc.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
    AND tc.table_name IN ('customers', 'orders');
"
```

## Advanced Testing Scenarios

### 1. Test with Different Snapshot IDs

```bash
# List all snapshots
restic -r "rest:http://localhost:8000/test-backup" snapshots --password-file <(echo "testpassword123")

# Restore a specific snapshot
export RESTORE_SNAPSHOT_ID="<snapshot-id>"
./src/backup.sh
```

### 2. Test Error Handling

```bash
# Test with invalid repository
export RESTIC_REPOSITORY="rest:http://localhost:8000/nonexistent"
./src/backup.sh

# Test with wrong password
export RESTIC_PASSWORD="wrongpassword"
./src/backup.sh
```

### 3. Test with Gotify Notifications (Optional)

```bash
# If you have a Gotify server running
export ENABLE_GOTIFY="true"
export GOTIFYHOST="http://your-gotify-server:80"
export GOTIFYTOKEN="your-token"
export GOTIFYTOPIC="Backup Test"
./src/backup.sh
```

## Cleanup

### 1. Stop Services

```bash
# Stop and remove containers
docker-compose down

# Remove volumes (optional - this will delete all data)
docker-compose down -v
```

### 2. Clean Up Test Files

```bash
# Remove any remaining test files
rm -rf test_data restored_data
```

## Troubleshooting

### Common Issues

1. **Connection refused to restic server**: Wait a bit longer for the container to start up
2. **PostgreSQL connection failed**: Ensure the database container is fully initialized
3. **Permission denied**: Make sure the backup script is executable (`chmod +x src/backup.sh`)
4. **Restic repository not found**: Check that the repository URL is correct and the server is running
5. **Script exits early with no output**: The restic repository hasn't been initialized yet. Run the initialization commands in step 3 above.

### Debug Commands

```bash
# Check container logs
docker-compose logs restic-server
docker-compose logs postgres

# Test restic connectivity
restic -r "rest:http://localhost:8000/test-backup" snapshots --password-file <(echo "testpassword123")

# Test PostgreSQL connectivity
docker exec postgres-test psql -U testuser -d testdb -c "SELECT 1;"
```

## Expected Results

### Directory Backup Test

- ✅ Backup completes successfully
- ✅ Files are backed up to restic repository
- ✅ Restore completes successfully
- ✅ Restored files match original files exactly
- ✅ Directory structure is preserved

### PostgreSQL Backup Test

- ✅ Database backup completes successfully
- ✅ Database dump is backed up to restic repository
- ✅ Database can be cleared successfully
- ✅ Database restore completes successfully
- ✅ All data is restored correctly
- ✅ Foreign key relationships are maintained
- ✅ Data integrity is preserved

This testing procedure ensures that both directory and PostgreSQL backup/restore functionality works correctly and can be used as a foundation for automated testing in CI/CD pipelines.
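The "Verify Directory Restore" step in the deleted TESTING.md compares trees with `diff -r` and spot-checks a couple of `md5sum` values. The same idea, reduced to a self-contained sketch (the helper name and the sample files here are illustrative, not from the repo): two trees match exactly when their per-file checksums over sorted relative paths match.

```shell
#!/usr/bin/env bash
# Sketch: checksum-based verification that a restored tree matches the source.
set -euo pipefail

checksum_tree() {
  # Stable fingerprint of a directory tree: sha256 of every file,
  # keyed by relative path, in sorted order.
  ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum )
}

src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/subdir"
echo "This is a text file" > "$src/sample.txt"
echo "Nested file content" > "$src/subdir/nested.txt"

cp -r "$src/." "$dst/"   # stand-in for a successful restic restore

if [ "$(checksum_tree "$src")" = "$(checksum_tree "$dst")" ]; then
  echo "trees match"
fi
rm -rf "$src" "$dst"
```

Unlike a bare `diff -r`, the checksum listing can be captured once at backup time and compared after restore without keeping the original tree around.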
bun.lock (new file, +90 lines)
@@ -0,0 +1,90 @@
{
  "lockfileVersion": 1,
  "workspaces": {
    "": {
      "name": "backupsidecar",
      "dependencies": {
        "env-var": "^7.5.0",
        "pino": "^9.9.0",
        "pino-pretty": "^13.1.1",
      },
      "devDependencies": {
        "@types/bun": "latest",
      },
      "peerDependencies": {
        "typescript": "^5",
      },
    },
  },
  "packages": {
    "@types/bun": ["@types/bun@1.2.21", "", { "dependencies": { "bun-types": "1.2.21" } }, "sha512-NiDnvEqmbfQ6dmZ3EeUO577s4P5bf4HCTXtI6trMc6f6RzirY5IrF3aIookuSpyslFzrnvv2lmEWv5HyC1X79A=="],

    "@types/node": ["@types/node@24.3.0", "", { "dependencies": { "undici-types": "~7.10.0" } }, "sha512-aPTXCrfwnDLj4VvXrm+UUCQjNEvJgNA8s5F1cvwQU+3KNltTOkBm1j30uNLyqqPNe7gE3KFzImYoZEfLhp4Yow=="],

    "@types/react": ["@types/react@19.1.12", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-cMoR+FoAf/Jyq6+Df2/Z41jISvGZZ2eTlnsaJRptmZ76Caldwy1odD4xTr/gNV9VLj0AWgg/nmkevIyUfIIq5w=="],

    "atomic-sleep": ["atomic-sleep@1.0.0", "", {}, "sha512-kNOjDqAh7px0XWNI+4QbzoiR/nTkHAWNud2uvnJquD1/x5a7EQZMJT0AczqK0Qn67oY/TTQ1LbUKajZpp3I9tQ=="],

    "bun-types": ["bun-types@1.2.21", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-sa2Tj77Ijc/NTLS0/Odjq/qngmEPZfbfnOERi0KRUYhT9R8M4VBioWVmMWE5GrYbKMc+5lVybXygLdibHaqVqw=="],

    "colorette": ["colorette@2.0.20", "", {}, "sha512-IfEDxwoWIjkeXL1eXcDiow4UbKjhLdq6/EuSVR9GMN7KVH3r9gQ83e73hsz1Nd1T3ijd5xv1wcWRYO+D6kCI2w=="],

    "csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],

    "dateformat": ["dateformat@4.6.3", "", {}, "sha512-2P0p0pFGzHS5EMnhdxQi7aJN+iMheud0UhG4dlE1DLAlvL8JHjJJTX/CSm4JXwV0Ka5nGk3zC5mcb5bUQUxxMA=="],

    "end-of-stream": ["end-of-stream@1.4.5", "", { "dependencies": { "once": "^1.4.0" } }, "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg=="],

    "env-var": ["env-var@7.5.0", "", {}, "sha512-mKZOzLRN0ETzau2W2QXefbFjo5EF4yWq28OyKb9ICdeNhHJlOE/pHHnz4hdYJ9cNZXcJHo5xN4OT4pzuSHSNvA=="],

    "fast-copy": ["fast-copy@3.0.2", "", {}, "sha512-dl0O9Vhju8IrcLndv2eU4ldt1ftXMqqfgN4H1cpmGV7P6jeB9FwpN9a2c8DPGE1Ys88rNUJVYDHq73CGAGOPfQ=="],

    "fast-redact": ["fast-redact@3.5.0", "", {}, "sha512-dwsoQlS7h9hMeYUq1W++23NDcBLV4KqONnITDV9DjfS3q1SgDGVrBdvvTLUotWtPSD7asWDV9/CmsZPy8Hf70A=="],

    "fast-safe-stringify": ["fast-safe-stringify@2.1.1", "", {}, "sha512-W+KJc2dmILlPplD/H4K9l9LcAHAfPtP6BY84uVLXQ6Evcz9Lcg33Y2z1IVblT6xdY54PXYVHEv+0Wpq8Io6zkA=="],

    "help-me": ["help-me@5.0.0", "", {}, "sha512-7xgomUX6ADmcYzFik0HzAxh/73YlKR9bmFzf51CZwR+b6YtzU2m0u49hQCqV6SvlqIqsaxovfwdvbnsw3b/zpg=="],

    "joycon": ["joycon@3.1.1", "", {}, "sha512-34wB/Y7MW7bzjKRjUKTa46I2Z7eV62Rkhva+KkopW7Qvv/OSWBqvkSY7vusOPrNuZcUG3tApvdVgNB8POj3SPw=="],

    "minimist": ["minimist@1.2.8", "", {}, "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="],

    "on-exit-leak-free": ["on-exit-leak-free@2.1.2", "", {}, "sha512-0eJJY6hXLGf1udHwfNftBqH+g73EU4B504nZeKpz1sYRKafAghwxEJunB2O7rDZkL4PGfsMVnTXZ2EjibbqcsA=="],

    "once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],

    "pino": ["pino@9.9.0", "", { "dependencies": { "atomic-sleep": "^1.0.0", "fast-redact": "^3.1.1", "on-exit-leak-free": "^2.1.0", "pino-abstract-transport": "^2.0.0", "pino-std-serializers": "^7.0.0", "process-warning": "^5.0.0", "quick-format-unescaped": "^4.0.3", "real-require": "^0.2.0", "safe-stable-stringify": "^2.3.1", "sonic-boom": "^4.0.1", "thread-stream": "^3.0.0" }, "bin": { "pino": "bin.js" } }, "sha512-zxsRIQG9HzG+jEljmvmZupOMDUQ0Jpj0yAgE28jQvvrdYTlEaiGwelJpdndMl/MBuRr70heIj83QyqJUWaU8mQ=="],

    "pino-abstract-transport": ["pino-abstract-transport@2.0.0", "", { "dependencies": { "split2": "^4.0.0" } }, "sha512-F63x5tizV6WCh4R6RHyi2Ml+M70DNRXt/+HANowMflpgGFMAym/VKm6G7ZOQRjqN7XbGxK1Lg9t6ZrtzOaivMw=="],

    "pino-pretty": ["pino-pretty@13.1.1", "", { "dependencies": { "colorette": "^2.0.7", "dateformat": "^4.6.3", "fast-copy": "^3.0.2", "fast-safe-stringify": "^2.1.1", "help-me": "^5.0.0", "joycon": "^3.1.1", "minimist": "^1.2.6", "on-exit-leak-free": "^2.1.0", "pino-abstract-transport": "^2.0.0", "pump": "^3.0.0", "secure-json-parse": "^4.0.0", "sonic-boom": "^4.0.1", "strip-json-comments": "^5.0.2" }, "bin": { "pino-pretty": "bin.js" } }, "sha512-TNNEOg0eA0u+/WuqH0MH0Xui7uqVk9D74ESOpjtebSQYbNWJk/dIxCXIxFsNfeN53JmtWqYHP2OrIZjT/CBEnA=="],

    "pino-std-serializers": ["pino-std-serializers@7.0.0", "", {}, "sha512-e906FRY0+tV27iq4juKzSYPbUj2do2X2JX4EzSca1631EB2QJQUqGbDuERal7LCtOpxl6x3+nvo9NPZcmjkiFA=="],

    "process-warning": ["process-warning@5.0.0", "", {}, "sha512-a39t9ApHNx2L4+HBnQKqxxHNs1r7KF+Intd8Q/g1bUh6q0WIp9voPXJ/x0j+ZL45KF1pJd9+q2jLIRMfvEshkA=="],

    "pump": ["pump@3.0.3", "", { "dependencies": { "end-of-stream": "^1.1.0", "once": "^1.3.1" } }, "sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA=="],

    "quick-format-unescaped": ["quick-format-unescaped@4.0.4", "", {}, "sha512-tYC1Q1hgyRuHgloV/YXs2w15unPVh8qfu/qCTfhTYamaw7fyhumKa2yGpdSo87vY32rIclj+4fWYQXUMs9EHvg=="],

    "real-require": ["real-require@0.2.0", "", {}, "sha512-57frrGM/OCTLqLOAh0mhVA9VBMHd+9U7Zb2THMGdBUoZVOtGbJzjxsYGDJ3A9AYYCP4hn6y1TVbaOfzWtm5GFg=="],

    "safe-stable-stringify": ["safe-stable-stringify@2.5.0", "", {}, "sha512-b3rppTKm9T+PsVCBEOUR46GWI7fdOs00VKZ1+9c1EWDaDMvjQc6tUwuFyIprgGgTcWoVHSKrU8H31ZHA2e0RHA=="],

    "secure-json-parse": ["secure-json-parse@4.0.0", "", {}, "sha512-dxtLJO6sc35jWidmLxo7ij+Eg48PM/kleBsxpC8QJE0qJICe+KawkDQmvCMZUr9u7WKVHgMW6vy3fQ7zMiFZMA=="],

    "sonic-boom": ["sonic-boom@4.2.0", "", { "dependencies": { "atomic-sleep": "^1.0.0" } }, "sha512-INb7TM37/mAcsGmc9hyyI6+QR3rR1zVRu36B0NeGXKnOOLiZOfER5SA+N7X7k3yUYRzLWafduTDvJAfDswwEww=="],

    "split2": ["split2@4.2.0", "", {}, "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="],

    "strip-json-comments": ["strip-json-comments@5.0.3", "", {}, "sha512-1tB5mhVo7U+ETBKNf92xT4hrQa3pm0MZ0PQvuDnWgAAGHDsfp4lPSpiS6psrSiet87wyGPh9ft6wmhOMQ0hDiw=="],

    "thread-stream": ["thread-stream@3.1.0", "", { "dependencies": { "real-require": "^0.2.0" } }, "sha512-OqyPZ9u96VohAyMfJykzmivOrY2wfMSf3C5TtFJVgN+Hm6aj+voFhlK+kZEIv2FBh1X6Xp3DlnCOfEQ3B2J86A=="],

    "typescript": ["typescript@5.9.2", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-CWBzXQrc/qOkhidw1OzBTQuYRbfyxDXJMVJ1XNwUHGROVmuaeiEm3OslpZ1RV96d7SKKjZKrSJu3+t/xlw3R9A=="],

    "undici-types": ["undici-types@7.10.0", "", {}, "sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag=="],

    "wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
  }
}
compose.yaml (deleted, -19 lines)
@@ -1,19 +0,0 @@
services:
  restic-server:
    image: restic/rest-server:latest
    container_name: restic-server
    ports:
      - "8000:8000"
    command: rest-server --no-auth --path /data
    restart: unless-stopped

  postgres:
    image: postgres:17
    container_name: postgres-test
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    restart: unless-stopped
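The deleted TESTING.md paired this compose file with a fixed `sleep 15` before the first backup run. A more robust pattern, shown here as a sketch (the `wait_for` helper and `WAIT_TRIES` knob are illustrative, not from the repo), is to poll a readiness command until it succeeds:

```shell
#!/usr/bin/env bash
# Sketch: poll a readiness check instead of sleeping a fixed interval.
set -euo pipefail

wait_for() {
  # Retry "$@" once per second, up to $WAIT_TRIES times (default 30);
  # fail if it never succeeds.
  local tries="${WAIT_TRIES:-30}"
  until "$@" >/dev/null 2>&1; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Placeholder check; against this compose file it might be, e.g.:
#   wait_for docker exec postgres-test pg_isready -U testuser
#   wait_for curl -fsS http://localhost:8000/
wait_for true
echo "ready"
```

This removes the race where 15 seconds is too short on a cold image pull and wastefully long on a warm one.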
@@ -1,51 +0,0 @@
#!/bin/bash
set -euo pipefail

# Script to generate test data for Directory backup testing
# This script creates a few directories with a few files and directories in each and populates it with test data

# Create base test directory
TEST_DIR="/tmp/test-data"
echo "Creating test directory structure in $TEST_DIR..."

# Remove existing test directory if it exists
rm -rf "$TEST_DIR"
mkdir -p "$TEST_DIR"

# Create various subdirectories
mkdir -p "$TEST_DIR/documents/reports"
mkdir -p "$TEST_DIR/documents/contracts"
mkdir -p "$TEST_DIR/data/logs"
mkdir -p "$TEST_DIR/data/backups"

# Create text files with content
echo "This is the annual report for 2023" > "$TEST_DIR/documents/reports/annual_2023.txt"
echo "Q4 financial summary" > "$TEST_DIR/documents/reports/q4_summary.txt"
echo "Contract terms and conditions" > "$TEST_DIR/documents/contracts/agreement.txt"

# Create JSON files
cat << 'EOF' > "$TEST_DIR/data/config.json"
{
  "app_name": "TestApp",
  "version": "1.0.0",
  "settings": {
    "debug": true,
    "max_retries": 3,
    "timeout": 30
  }
}
EOF

# Create some log files
for i in {1..3}; do
  echo "$(date) - Log entry $i" >> "$TEST_DIR/data/logs/app.log"
  echo "$(date) - Error $i: Sample error message" >> "$TEST_DIR/data/logs/error.log"
done

# Create symbolic links
ln -s "../reports/annual_2023.txt" "$TEST_DIR/documents/contracts/report_link"
ln -s "../../data/config.json" "$TEST_DIR/documents/reports/config_link"

echo "Test data generation completed successfully!"
echo "Created directory structure:"
tree "$TEST_DIR"
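The deleted generator above deliberately includes relative symlinks, which are a classic backup edge case. As a self-contained illustration (directory names here are made up, not the script's), relative links keep working after a tree is copied or restored as long as the copy preserves the link itself rather than its target:

```shell
#!/usr/bin/env bash
# Sketch: a relative symlink resolves inside a copied tree when the
# copy preserves links (as `cp -a` does, and as a restic restore would).
set -euo pipefail

root=$(mktemp -d)
mkdir -p "$root/a/data" "$root/a/docs"
echo "payload" > "$root/a/data/config.json"
ln -s "../data/config.json" "$root/a/docs/config_link"

cp -a "$root/a" "$root/b"        # archive copy keeps the symlink as a symlink
cat "$root/b/docs/config_link"   # resolves within the copied tree
rm -rf "$root"
```

An absolute link (`/tmp/.../config.json`) would instead keep pointing at the original location, which is why the generator uses relative targets.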
@@ -1,124 +0,0 @@
#!/bin/bash
set -euo pipefail

# Script to generate test data for PostgreSQL backup testing
# This script creates two tables with a foreign key relationship and populates them with test data

# Database connection parameters
PGHOST="${PGHOST:-localhost}"
PGPORT="${PGPORT:-5432}"
PGDATABASE="${PGDATABASE:-testdb}"
PGUSER="${PGUSER:-testuser}"
PGPASSWORD="${PGPASSWORD:-testpass}"

# Export password for psql
export PGPASSWORD

echo "Generating test data for PostgreSQL database..."

# Create tables
echo "Creating tables..."
psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" << 'EOF'
-- Drop tables if they exist
DROP TABLE IF EXISTS orders CASCADE;
DROP TABLE IF EXISTS customers CASCADE;

-- Create customers table
CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create orders table with foreign key to customers
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
    product_name VARCHAR(100) NOT NULL,
    quantity INTEGER NOT NULL CHECK (quantity > 0),
    price DECIMAL(10,2) NOT NULL CHECK (price >= 0),
    order_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create indexes for better performance
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
CREATE INDEX idx_customers_email ON customers(email);
CREATE INDEX idx_orders_order_date ON orders(order_date);
EOF

# Insert test data
echo "Inserting test data..."
psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" << 'EOF'
-- Insert customers
INSERT INTO customers (name, email) VALUES
    ('John Doe', 'john.doe@example.com'),
    ('Jane Smith', 'jane.smith@example.com'),
    ('Bob Johnson', 'bob.johnson@example.com'),
    ('Alice Brown', 'alice.brown@example.com'),
    ('Charlie Wilson', 'charlie.wilson@example.com'),
    ('Diana Davis', 'diana.davis@example.com'),
    ('Eve Miller', 'eve.miller@example.com'),
    ('Frank Garcia', 'frank.garcia@example.com'),
    ('Grace Lee', 'grace.lee@example.com'),
    ('Henry Taylor', 'henry.taylor@example.com');

-- Insert orders
INSERT INTO orders (customer_id, product_name, quantity, price) VALUES
    (1, 'Laptop', 1, 999.99),
    (1, 'Mouse', 2, 25.50),
    (2, 'Keyboard', 1, 75.00),
    (2, 'Monitor', 1, 299.99),
    (3, 'Headphones', 1, 150.00),
    (3, 'Webcam', 1, 89.99),
    (4, 'Tablet', 1, 399.99),
    (4, 'Stylus', 1, 49.99),
    (5, 'Smartphone', 1, 699.99),
    (5, 'Case', 1, 19.99),
    (6, 'Desktop', 1, 1299.99),
    (6, 'RAM', 2, 79.99),
    (7, 'SSD', 1, 199.99),
    (7, 'Graphics Card', 1, 599.99),
    (8, 'Motherboard', 1, 199.99),
    (8, 'CPU', 1, 399.99),
    (9, 'Power Supply', 1, 149.99),
    (9, 'Cooling Fan', 2, 29.99),
    (10, 'Cable Set', 1, 39.99),
    (10, 'USB Hub', 1, 24.99);
EOF

# Verify data
echo "Verifying test data..."
psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" << 'EOF'
-- Show table counts
SELECT 'customers' as table_name, COUNT(*) as row_count FROM customers
UNION ALL
SELECT 'orders' as table_name, COUNT(*) as row_count FROM orders;

-- Show sample data
SELECT 'Sample customers:' as info;
SELECT id, name, email FROM customers LIMIT 5;

SELECT 'Sample orders:' as info;
SELECT o.id, c.name as customer_name, o.product_name, o.quantity, o.price
FROM orders o
JOIN customers c ON o.customer_id = c.id
LIMIT 5;

-- Show foreign key relationship
SELECT 'Foreign key relationship check:' as info;
SELECT
    c.name as customer_name,
    COUNT(o.id) as order_count,
    SUM(o.price * o.quantity) as total_spent
FROM customers c
LEFT JOIN orders o ON c.id = o.customer_id
GROUP BY c.id, c.name
ORDER BY total_spent DESC;
EOF

echo "Test data generation completed successfully!"
echo "Database contains:"
echo "- 10 customers"
echo "- 20 orders with foreign key relationships"
echo "- Various data types and constraints"
17
package.json
Normal file
@@ -0,0 +1,17 @@
{
  "name": "backupsidecar",
  "module": "src/main.ts",
  "type": "module",
  "private": true,
  "devDependencies": {
    "@types/bun": "latest"
  },
  "peerDependencies": {
    "typescript": "^5"
  },
  "dependencies": {
    "env-var": "^7.5.0",
    "pino": "^9.9.0",
    "pino-pretty": "^13.1.1"
  }
}
288
src/backup.sh
Executable file → Normal file
@@ -1,6 +1,71 @@
#!/bin/bash
set -euo pipefail

#######################################
# Determine backup mode from the environment only.
# Valid values: "directory" or "postgres".
# Default to "directory" if not provided.
#######################################
BACKUP_MODE="${BACKUP_MODE:-directory}"

#######################################
# Check for required external commands.
#######################################
REQUIRED_CMDS=(restic curl jq)
if [ "$BACKUP_MODE" = "postgres" ]; then
  REQUIRED_CMDS+=(pg_dump)
fi

for cmd in "${REQUIRED_CMDS[@]}"; do
  if ! command -v "$cmd" &>/dev/null; then
    echo "Error: Required command '$cmd' is not installed." >&2
    exit 1
  fi
done

#######################################
# Validate common required environment variables.
#######################################
# Gotify notification settings.
: "${GOTIFYHOST:?Environment variable GOTIFYHOST is not set}"
: "${GOTIFYTOKEN:?Environment variable GOTIFYTOKEN is not set}"
: "${GOTIFYTOPIC:?Environment variable GOTIFYTOPIC is not set}"

# Restic encryption password.
: "${RESTIC_PASSWORD:?Environment variable RESTIC_PASSWORD is not set}"

# Use the repository URI directly from the environment.
# Example: export RESTIC_REPOSITORY="rest:http://your-rest-server:8000/backup"
: "${RESTIC_REPOSITORY:?Environment variable RESTIC_REPOSITORY is not set}"

#######################################
# Validate mode-specific environment variables.
#######################################
case "$BACKUP_MODE" in
  directory)
    : "${SOURCEDIR:?Environment variable SOURCEDIR is not set (required for directory backup mode)}"
    ;;
  postgres)
    : "${PGHOST:?Environment variable PGHOST is not set (required for PostgreSQL backup mode)}"
    : "${PGDATABASE:?Environment variable PGDATABASE is not set (required for PostgreSQL backup mode)}"
    : "${PGUSER:?Environment variable PGUSER is not set (required for PostgreSQL backup mode)}"
    # Optional: default PGPORT to 5432.
    : "${PGPORT:=5432}"
    if [ -z "${PGPASSWORD:-}" ]; then
      echo "Warning: Environment variable PGPASSWORD is not set. pg_dump may fail if authentication is required."
    fi
    ;;
  *)
    echo "Error: Unknown backup mode '$BACKUP_MODE'. Valid modes are 'directory' and 'postgres'." >&2
    exit 1
    ;;
esac

#######################################
# Build the Gotify URL.
#######################################
GOTIFYURL="${GOTIFYHOST}/message?token=${GOTIFYTOKEN}"

#######################################
# Date format for logging.
#######################################
@@ -15,111 +80,6 @@ log() {
  echo "$(date +"$LOG_DATE_FORMAT") - $*"
}

#######################################
# Determine operation mode from the environment only.
# Valid values: "backup" or "restore".
# Default to "backup" if not provided.
#######################################
OPERATION_MODE="${OPERATION_MODE:-backup}"

#######################################
# Determine backup mode from the environment only.
# Valid values: "directory" or "postgres".
# Default to "directory" if not provided.
#######################################
BACKUP_MODE="${BACKUP_MODE:-directory}"

#######################################
# Check for required external commands.
#######################################
REQUIRED_CMDS=(restic curl jq)
if [ "$BACKUP_MODE" = "postgres" ]; then
  if [ "$OPERATION_MODE" = "backup" ]; then
    REQUIRED_CMDS+=(pg_dump)
  elif [ "$OPERATION_MODE" = "restore" ]; then
    REQUIRED_CMDS+=(psql)
  fi
fi

for cmd in "${REQUIRED_CMDS[@]}"; do
  if ! command -v "$cmd" &>/dev/null; then
    log "Error: Required command '$cmd' is not installed."
    exit 1
  fi
done

#######################################
# Validate common required environment variables.
#######################################
# Gotify notification settings (optional).
# Set ENABLE_GOTIFY to "true" to enable notifications, any other value or unset disables them.
ENABLE_GOTIFY="${ENABLE_GOTIFY:-true}"

if [ "$ENABLE_GOTIFY" = "true" ]; then
  : "${GOTIFYHOST:?Environment variable GOTIFYHOST is not set (required when ENABLE_GOTIFY=true)}"
  : "${GOTIFYTOKEN:?Environment variable GOTIFYTOKEN is not set (required when ENABLE_GOTIFY=true)}"
  : "${GOTIFYTOPIC:?Environment variable GOTIFYTOPIC is not set (required when ENABLE_GOTIFY=true)}"
else
  log "Gotify notifications disabled. Backup status will be logged to console only."
fi

# Restic encryption password.
: "${RESTIC_PASSWORD:?Environment variable RESTIC_PASSWORD is not set}"

# Use the repository URI directly from the environment.
# Example: export RESTIC_REPOSITORY="rest:http://your-rest-server:8000/backup"
: "${RESTIC_REPOSITORY:?Environment variable RESTIC_REPOSITORY is not set}"

#######################################
# Validate operation mode.
#######################################
case "$OPERATION_MODE" in
  backup|restore)
    ;;
  *)
    echo "Error: Unknown operation mode '$OPERATION_MODE'. Valid modes are 'backup' and 'restore'." >&2
    exit 1
    ;;
esac

#######################################
# Validate mode-specific environment variables.
#######################################
case "$BACKUP_MODE" in
  directory)
    if [ "$OPERATION_MODE" = "backup" ]; then
      : "${SOURCEDIR:?Environment variable SOURCEDIR is not set (required for directory backup mode)}"
    elif [ "$OPERATION_MODE" = "restore" ]; then
      : "${RESTOREDIR:?Environment variable RESTOREDIR is not set (required for directory restore mode)}"
    fi
    ;;
  postgres)
    : "${PGHOST:?Environment variable PGHOST is not set (required for PostgreSQL mode)}"
    : "${PGDATABASE:?Environment variable PGDATABASE is not set (required for PostgreSQL mode)}"
    : "${PGUSER:?Environment variable PGUSER is not set (required for PostgreSQL mode)}"
    # Optional: default PGPORT to 5432.
    : "${PGPORT:=5432}"
    if [ -z "${PGPASSWORD:-}" ]; then
      if [ "$OPERATION_MODE" = "backup" ]; then
        echo "Warning: Environment variable PGPASSWORD is not set. pg_dump may fail if authentication is required."
      elif [ "$OPERATION_MODE" = "restore" ]; then
        echo "Warning: Environment variable PGPASSWORD is not set. psql may fail if authentication is required."
      fi
    fi
    ;;
  *)
    echo "Error: Unknown backup mode '$BACKUP_MODE'. Valid modes are 'directory' and 'postgres'." >&2
    exit 1
    ;;
esac

#######################################
# Build the Gotify URL (only if Gotify is enabled).
#######################################
if [ "$ENABLE_GOTIFY" = "true" ]; then
  GOTIFYURL="${GOTIFYHOST}/message?token=${GOTIFYTOKEN}"
fi

#######################################
# Send a notification via Gotify.
# Arguments:
@@ -127,13 +87,6 @@ fi
#######################################
send_notification() {
  local message="$1"

  # Only send notification if Gotify is enabled
  if [ "$ENABLE_GOTIFY" != "true" ]; then
    log "$message"
    return 0
  fi

  if ! curl -s -X POST "$GOTIFYURL" -F "title=${GOTIFYTOPIC}" -F "message=${message}" >/dev/null; then
    log "Warning: Failed to send notification with message: ${message}"
  fi
@@ -204,89 +157,6 @@ backup_postgres() {
  run_restic_backup "${TEMP_BACKUP_DIR}"
}

#######################################
# Run the restore using restic.
# Arguments:
#   $1 - The target directory to restore to.
#   $2 - Optional snapshot ID to restore (defaults to latest).
#######################################
run_restic_restore() {
  local target_dir="$1"
  local snapshot_id="$2"

  log "Starting restore from repository ${RESTIC_REPOSITORY} to '${target_dir}'"
  log "Using snapshot: ${snapshot_id}"

  # Create target directory if it doesn't exist
  mkdir -p "${target_dir}"

  # Capture both stdout and stderr in a variable
  restore_output=$(restic -r "${RESTIC_REPOSITORY}" restore "${snapshot_id}" --target "${target_dir}" --no-cache --json --verbose 2>&1)
  # Optionally, also print the output to the console:
  echo "$restore_output"

  # Parse the JSON lines output for the summary message
  summary=$(echo "$restore_output" | jq -r 'select(.message_type=="summary") | "Restore completed: " + (.files_restored|tostring) + " files restored, " + (.bytes_restored|tostring) + " bytes in " + (.total_duration|tostring) + " sec"' 2>/dev/null || echo "Restore completed")

  # Check exit code of restic restore
  if [ $? -eq 0 ]; then
    msg="Restore successful. $summary"
    log "$msg"
    send_notification "$msg"
  else
    exit_code=$?
    msg="Restore failed with error code ${exit_code}. $restore_output"
    log "$msg"
    send_notification "$msg"
    exit "$exit_code"
  fi
}

#######################################
# Restore a directory (regular mode).
#######################################
restore_directory() {
  local snapshot_id="${RESTORE_SNAPSHOT_ID:-latest}"
  run_restic_restore "${RESTOREDIR}" "${snapshot_id}"
}

#######################################
# Restore a PostgreSQL database.
# Restores the database dump from the backup and applies it to the database.
#######################################
restore_postgres() {
  local snapshot_id="${RESTORE_SNAPSHOT_ID:-latest}"
  log "Starting PostgreSQL restore for database '${PGDATABASE}' on host '${PGHOST}'"

  # Create a temporary directory for the restore.
  TEMP_RESTORE_DIR=$(mktemp -d)
  log "Created temporary directory: ${TEMP_RESTORE_DIR}"

  # Restore the backup to the temporary directory
  run_restic_restore "${TEMP_RESTORE_DIR}" "${snapshot_id}"

  local dump_file="${TEMP_RESTORE_DIR}/dump.sql"
  if [ ! -f "${dump_file}" ]; then
    local msg="PostgreSQL restore failed. Database dump file not found at ${dump_file}"
    log "$msg"
    send_notification "$msg"
    exit 1
  fi

  log "Restoring PostgreSQL database from ${dump_file}..."
  if psql -h "${PGHOST}" -p "${PGPORT}" -U "${PGUSER}" -d "${PGDATABASE}" ${PSQL_ARGS:-} < "${dump_file}"; then
    local msg="PostgreSQL database restored successfully"
    log "$msg"
    send_notification "$msg"
  else
    local exit_code=$?
    local msg="PostgreSQL restore failed with error code ${exit_code}"
    log "$msg"
    send_notification "$msg"
    exit "$exit_code"
  fi
}

#######################################
# Cleanup temporary resources.
#######################################
@@ -295,10 +165,6 @@ cleanup() {
    rm -rf "${TEMP_BACKUP_DIR}"
    log "Removed temporary directory ${TEMP_BACKUP_DIR}"
  fi
  if [ -n "${TEMP_RESTORE_DIR:-}" ] && [ -d "${TEMP_RESTORE_DIR}" ]; then
    rm -rf "${TEMP_RESTORE_DIR}"
    log "Removed temporary directory ${TEMP_RESTORE_DIR}"
  fi
}
trap cleanup EXIT

@@ -306,26 +172,12 @@ trap cleanup EXIT
# Main routine.
#######################################
main() {
  case "$OPERATION_MODE" in
    backup)
      case "$BACKUP_MODE" in
        directory)
          backup_directory
          ;;
        postgres)
          backup_postgres
          ;;
      esac
  case "$BACKUP_MODE" in
    directory)
      backup_directory
      ;;
    restore)
      case "$BACKUP_MODE" in
        directory)
          restore_directory
          ;;
        postgres)
          restore_postgres
          ;;
      esac
    postgres)
      backup_postgres
      ;;
  esac
}
33
src/backup.ts
Normal file
@@ -0,0 +1,33 @@
import {
  BACKUP_MODE,
  globalLogger,
  GOTIFY_HOST,
  GOTIFY_TOKEN,
  GOTIFY_TOPIC,
  RESTIC_REPOSITORY,
} from "./env";
import { directoryBackup } from "./directoryBackup";
import { postgresBackup } from "./postgresBackup";
import { gotifyClientFactory } from "./gotify";
import { createBackupContext, reHomeContext } from "./backupContext";

export default async function backup() {
  const context = createBackupContext(
    "backup",
    RESTIC_REPOSITORY,
    globalLogger,
    gotifyClientFactory(GOTIFY_HOST, GOTIFY_TOKEN, GOTIFY_TOPIC)
  );
  context.logger.debug("Starting backup");

  switch (BACKUP_MODE) {
    case "directory":
      context.logger.debug("Starting directory backup");
      return await directoryBackup(reHomeContext(context, "directoryBackup"));
    case "postgres":
      context.logger.debug("Starting postgres backup");
      return await postgresBackup(reHomeContext(context, "postgresBackup"));
    default:
      context.logger.error("Invalid backup mode");
      throw new Error("Invalid backup mode");
  }
}
33
src/backupContext.ts
Normal file
@@ -0,0 +1,33 @@
import type { Logger } from "pino";
import { type NotificationClient } from "./gotify";

export interface BackupContext {
  logger: Logger;
  notificationClient: NotificationClient;
  resticRepository: string;
}

export function createBackupContext(
  module: string,
  resticRepository: string,
  globalLogger: Logger,
  notificationClient: NotificationClient
): BackupContext {
  const logger = globalLogger.child({ module });

  return {
    logger,
    notificationClient,
    resticRepository,
  };
}

export function reHomeContext(
  context: BackupContext,
  module: string
): BackupContext {
  return {
    ...context,
    logger: context.logger.child({ module }),
  };
}
100
src/backupUtils.ts
Normal file
@@ -0,0 +1,100 @@
import type { BackupContext } from "./backupContext";

export function parseResticSummary(output: string): string | null {
  try {
    const lines = output.split("\n").filter((line) => line.trim());
    for (const line of lines) {
      try {
        const parsed = JSON.parse(line);
        if (parsed.message_type === "summary") {
          return `Snapshot ${parsed.snapshot_id || "none"}: files new: ${
            parsed.files_new || 0
          }, files changed: ${parsed.files_changed || 0}, data added: ${
            parsed.data_added || 0
          } bytes in ${parsed.total_duration || 0} sec`;
        }
      } catch {
        continue;
      }
    }
  } catch (error) {
    console.warn(`Failed to parse restic output: ${error}`);
  }
  return null;
}

export function runResticBackup(
  sourceDir: string,
  context: BackupContext
): { success: boolean; output: string; summary: string | null } {
  const { logger, resticRepository } = context;

  logger.info(
    `Starting backup of '${sourceDir}' to repository ${resticRepository}`
  );

  const result = Bun.spawnSync(
    [
      "restic",
      "-r",
      resticRepository,
      "backup",
      "--no-cache",
      "--json",
      "--verbose",
      ".",
    ],
    {
      cwd: sourceDir,
      stdio: ["pipe", "pipe", "pipe"],
    }
  );

  // Default each stream separately so a missing stream doesn't yield "undefined".
  const output =
    (result.stdout?.toString() ?? "") + (result.stderr?.toString() ?? "");
  const success = result.success;
  const summary = parseResticSummary(output);

  return { success, output, summary };
}

export async function executeBackup(
  backupType: string,
  backupFn: () => Promise<{
    success: boolean;
    output: string;
    summary: string | null;
  }>,
  context: BackupContext
): Promise<void> {
  const { logger, notificationClient } = context;

  try {
    logger.info(`Starting ${backupType} backup process`);

    const { success, output, summary } = await backupFn();

    console.log(output);

    if (success) {
      const message = `${backupType} backup successful. ${
        summary || "No summary available"
      }`;
      logger.info(message);
      await notificationClient.sendNotification(message);
    } else {
      const message = `${backupType} backup failed: ${
        summary || "Unknown error"
      }`;
      logger.error(message);
      await notificationClient.sendNotification(message);
      throw new Error(`${backupType} backup failed: ${message}`);
    }

    logger.info(`${backupType} backup completed successfully`);
  } catch (error) {
    const errorMessage = `${backupType} backup failed: ${error}`;
    logger.error(errorMessage);
    await notificationClient.sendNotification(errorMessage);
    throw error;
  }
}
13
src/directoryBackup.ts
Normal file
@@ -0,0 +1,13 @@
import { SOURCEDIR } from "./env";
import { executeBackup, runResticBackup } from "./backupUtils";
import type { BackupContext } from "./backupContext";

export async function directoryBackup(context: BackupContext): Promise<void> {
  await executeBackup(
    "Directory",
    async () => {
      return runResticBackup(SOURCEDIR, context);
    },
    context
  );
}
61
src/env.ts
Normal file
@@ -0,0 +1,61 @@
import { from } from "env-var";
import pino from "pino";

const initialEnv = from(process.env, {});
const LOG_LEVEL = initialEnv
  .get("LOG_LEVEL")
  .default("info")
  .asEnum(["fatal", "error", "warn", "info", "debug", "trace"]);

export const globalLogger = pino({
  level: LOG_LEVEL,
  transport: {
    target: "pino-pretty",
    options: {
      colorize: true,
      translateTime: "SYS:standard",
      ignore: "pid,hostname",
    },
  },
});

const env = from(process.env, {}, (msg: string) => globalLogger.info(msg));

export const BACKUP_MODE = env
  .get("BACKUP_MODE")
  .required()
  .asEnum(["directory", "postgres"]);

export const GOTIFY_HOST = env.get("GOTIFY_HOST").required().asUrlString();
export const GOTIFY_TOKEN = env.get("GOTIFY_TOKEN").required().asString();
export const GOTIFY_TOPIC = env.get("GOTIFY_TOPIC").required().asString();

export const RESTIC_PASSWORD = env.get("RESTIC_PASSWORD").required().asString();
export const RESTIC_REPOSITORY = env
  .get("RESTIC_REPOSITORY")
  .required()
  .asString();
export const RESTIC_REST_USERNAME = env.get("RESTIC_REST_USERNAME").asString();
export const RESTIC_REST_PASSWORD = env.get("RESTIC_REST_PASSWORD").asString();

export const SOURCEDIR = env
  .get("SOURCEDIR")
  .required(BACKUP_MODE === "directory")
  .asString();

export const PGDATABASE = env
  .get("PGDATABASE")
  .required(BACKUP_MODE === "postgres")
  .asString();
export const PGHOST = env
  .get("PGHOST")
  .required(BACKUP_MODE === "postgres")
  .asString();
export const PGUSER = env
  .get("PGUSER")
  .required(BACKUP_MODE === "postgres")
  .asString();
export const PGPORT = env
  .get("PGPORT")
  .required(BACKUP_MODE === "postgres")
  .asString();
23
src/gotify.ts
Normal file
@@ -0,0 +1,23 @@
export interface NotificationClient {
  sendNotification(message: string): Promise<void>;
}

export function gotifyClientFactory(
  gotifyHost: string,
  gotifyToken: string,
  gotifyTopic: string
): NotificationClient {
  const sendNotification = async (message: string) => {
    await fetch(`${gotifyHost}/message?token=${gotifyToken}`, {
      method: "POST",
      // Gotify needs the Content-Type header to parse a JSON body.
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        title: gotifyTopic,
        message: message,
      }),
    });
  };

  return {
    sendNotification,
  };
}
15
src/main.ts
Normal file
@@ -0,0 +1,15 @@
import { globalLogger } from "./env";
import backup from "./backup";

const logger = globalLogger.child({ module: "main" });

(async () => {
  try {
    logger.info("Starting backup application");
    await backup();
    logger.info("Backup application completed successfully");
  } catch (error) {
    logger.error(`Backup application failed: ${error}`);
    process.exit(1);
  }
})();
78
src/postgresBackup.ts
Normal file
@@ -0,0 +1,78 @@
import { writeFileSync, mkdtempSync, rmSync } from "fs";
import { join } from "path";
import { tmpdir } from "os";
import { PGHOST, PGDATABASE, PGUSER, PGPORT } from "./env";
import { executeBackup, runResticBackup } from "./backupUtils";
import type { BackupContext } from "./backupContext";

function dumpPostgresDatabase(context: BackupContext): {
  success: boolean;
  tempDir: string;
  dumpFile: string;
} {
  const { logger } = context;

  const tempDir = mkdtempSync(join(tmpdir(), "postgres-backup-"));
  const dumpFile = join(tempDir, "dump.sql");

  logger.info(`Created temporary directory: ${tempDir}`);
  logger.info(`Dumping PostgreSQL database to ${dumpFile}...`);

  const result = Bun.spawnSync(
    ["pg_dump", "-h", PGHOST, "-p", PGPORT, "-U", PGUSER, PGDATABASE],
    {
      stdio: ["pipe", "pipe", "pipe"],
    }
  );

  if (result.success) {
    writeFileSync(dumpFile, result.stdout?.toString() || "");
    logger.info("Database dump created successfully.");
    return { success: true, tempDir, dumpFile };
  } else {
    logger.error(`PostgreSQL dump failed`);
    logger.error(`stderr: ${result.stderr?.toString() || ""}`);
    return { success: false, tempDir, dumpFile };
  }
}

export async function postgresBackup(context: BackupContext): Promise<void> {
  let tempDir: string | null = null;

  try {
    context.logger.info(
      `Starting PostgreSQL backup for database '${PGDATABASE}' on host '${PGHOST}'`
    );

    const { success, tempDir: dir, dumpFile } = dumpPostgresDatabase(context);
    tempDir = dir;

    if (!success) {
      throw new Error("PostgreSQL dump failed");
    }

    await executeBackup(
      "PostgreSQL",
      async () => {
        if (!tempDir) {
          throw new Error("Temporary directory not created");
        }
        return runResticBackup(tempDir, context);
      },
      context
    );
  } catch (error) {
    throw error;
  } finally {
    if (tempDir) {
      try {
        rmSync(tempDir, { recursive: true, force: true });
        context.logger.info(`Removed temporary directory ${tempDir}`);
      } catch (cleanupError) {
        context.logger.warn(
          `Failed to cleanup temporary directory ${tempDir}: ${cleanupError}`
        );
      }
    }
  }
}
29
tsconfig.json
Normal file
@@ -0,0 +1,29 @@
{
  "compilerOptions": {
    // Environment setup & latest features
    "lib": ["ESNext"],
    "target": "ESNext",
    "module": "Preserve",
    "moduleDetection": "force",
    "jsx": "react-jsx",
    "allowJs": true,

    // Bundler mode
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "noEmit": true,

    // Best practices
    "strict": true,
    "skipLibCheck": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,

    // Some stricter flags (disabled by default)
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "noPropertyAccessFromIndexSignature": false
  }
}