Backup

Backups protect your sites, databases, and mail so you can roll back fast when something goes sideways (oops moments included). On this page you set up schedules: what to back up, how often it runs, how many copies to keep, and whether each schedule covers every account or just selected ones.

What this page does
  • Create schedules (daily/weekly/monthly patterns with cron-style timing).
  • Choose scope: files, databases, mail — and whether it applies to all accounts or selected ones.
  • Retention: keep N recent backups, plus optional weekly and monthly “long-term” points.
  • Run now / Stop a schedule, or just let it run at the next planned time.
  • Guardrails: pause/stop if the server is too busy or backup disk is too full.
Quick start (create a schedule)
  1. Go to Create Schedule.
  2. Pick a name and a backup partition (e.g. /backup).
  3. Set the frequency with the Minute / Hour / Day / Month / Weekday pickers.
    Not sure? The helper text links to crontab examples, and there's a sample schedule below this list.
  4. Choose what to include: Files, Databases, Mail.
  5. Set Backup Points (how many recent runs to keep). Optionally enable Weekly/Monthly points for long-term copies.
  6. Decide between Backup all accounts or select accounts.
  7. (Optional) Add excludes — one pattern per line (e.g. public_html/*.zip, mail/*, sess_*).
  8. Click Create Schedule.
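For example (illustrative values only): a nightly schedule named "nightly-sites" on /backup with Minute 15, Hour 3, and * for Day/Month/Weekday runs every day at 03:15, backs up Files + Databases, and keeps 7 Backup Points plus 4 Weekly points.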
Simple vs. Advanced backup
  • Simple — creates compressed archives each run:
    • Files → files.tar.gz (mail is excluded here to keep things tidy).
    • Mail → mail.tar.gz (if you toggle Mail on).
    • Databases → dumps each DB, then bundles them into databases.tar.gz.
    Great when you want one self-contained snapshot per run.
  • Advanced — incremental-style backups using hard links (rsync --link-dest):
    • Only changed files take space; unchanged files are hard-linked from prior runs.
    • Databases are dumped table-by-table so restores can be granular.
    Best when you care about storage efficiency and faster rolling backups (see the sketch just below).
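Curious how the hard-link trick works? Here's a minimal sketch of one Advanced-style run, assuming rsync is installed; the schedule ID, dates, and account name are made up for illustration and this is not the panel's actual code.

    # Minimal sketch: incremental run with rsync --link-dest (illustrative paths).
    # Files unchanged since the previous run are hard-linked, so they take no extra space.
    import subprocess
    from pathlib import Path

    source = "/home/exampleuser/"                                      # hypothetical account home
    previous = Path("/backup/1/2024-01-01_03-15/exampleuser/files")    # last completed run
    current = Path("/backup/1/2024-01-02_03-15/exampleuser/files")     # folder for this run
    current.mkdir(parents=True, exist_ok=True)

    subprocess.run(
        ["rsync", "-a", "--delete", f"--link-dest={previous}", source, str(current)],
        check=True,
    )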
What gets backed up (beyond files/DB/mail)
  • A copy of the account’s metadata (account YAML).
  • SSL certificate definitions (if present).
  • All DNS zones the account owns.
  • User crontab is captured and stored with the account’s info.
Retention: how points roll
  • Backup Points — keep the last N regular runs. When a new run pushes the count over the cap:
    • If Weekly points are enabled and it's the weekly day, the extra regular points are promoted to weekly points.
    • Otherwise the extra regular points are removed.
  • Weekly points — keep W weeks. On your weekly day (e.g., Monday), one old regular point becomes a weekly point. If there are too many, the oldest weeklies are deleted (or promoted to monthly on the monthly day, if Monthly points are enabled).
  • Monthly points — keep M months. On the first of the month, one old point (weekly if present, otherwise regular) becomes a monthly point. If there are too many, the oldest monthlies are deleted.

Translation: regular = short term, weekly = medium term, monthly = long term. You choose the caps for each.
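If it helps to see the promotion order spelled out, here's a rough sketch of that rolling logic; the function, variable names, and defaults are illustrative assumptions, not the panel's actual retention code.

    # Illustrative retention roll: regular -> weekly -> monthly points.
    from datetime import date

    def roll_retention(regular, weekly, monthly, today: date,
                       keep_regular=7, keep_weekly=4, keep_monthly=3,
                       weekly_day=0):            # 0 = Monday
        """Each list holds restore-point folder names, oldest first."""
        while len(regular) > keep_regular:
            oldest = regular.pop(0)
            if keep_weekly and today.weekday() == weekly_day:
                weekly.append(oldest)            # promote the extra regular to a weekly point
            else:
                remove_point(oldest)
        while len(weekly) > keep_weekly:
            oldest = weekly.pop(0)
            if keep_monthly and today.day == 1:
                monthly.append(oldest)           # promote to a monthly point on the 1st
            else:
                remove_point(oldest)
        while len(monthly) > keep_monthly:
            remove_point(monthly.pop(0))

    def remove_point(point):
        print(f"would delete timestamp folder {point}")   # stand-in for the real cleanup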

Run control & status
  • Run now kicks off a schedule immediately.
  • Stop halts a running schedule (we’ll try a clean stop).
  • Next run shows the precise upcoming time based on your cron pattern.
  • Statuses: Scheduled, Running, Completed, or Failed/Stopped — each schedule shows its current status.
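The next-run time is simply the next match of your cron pattern. If you want to sanity-check a pattern yourself, here's a small sketch using the third-party croniter package (an assumption for illustration; the panel computes this internally):

    # Print the next three run times for a cron pattern (requires `pip install croniter`).
    from datetime import datetime
    from croniter import croniter

    pattern = "15 3 * * *"                # minute hour day month weekday -> daily at 03:15
    upcoming = croniter(pattern, datetime.now())
    for _ in range(3):
        print(upcoming.get_next(datetime))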
Smart safety limits
  • Server Load Limit — if system load is above this, we wait and retry up to 10 times, then skip that account for this run.
  • Partition Disk Usage % Limit — if the backup partition passes this threshold, we stop the whole run to avoid filling the disk.
  • You can set both in Threshold Limits.
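Conceptually, the two checks behave roughly like this sketch; the threshold values, retry delay, and helper names are illustrative, not the panel's actual implementation.

    # Simplified guardrail checks (illustrative values).
    import os
    import shutil
    import time

    LOAD_LIMIT = 4.0            # Server Load Limit
    DISK_LIMIT = 90             # Partition Disk Usage % Limit
    BACKUP_PARTITION = "/backup"

    def load_ok():
        return os.getloadavg()[0] <= LOAD_LIMIT          # 1-minute load average

    def disk_ok():
        usage = shutil.disk_usage(BACKUP_PARTITION)
        return usage.used / usage.total * 100 <= DISK_LIMIT

    def wait_for_load(retries=10, delay=60):
        """Wait and retry; if the load never drops, skip this account for this run."""
        for _ in range(retries):
            if load_ok():
                return True
            time.sleep(delay)
        return False

    if not disk_ok():
        raise SystemExit("backup partition over the usage limit; stopping the whole run")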
Excludes (good defaults)
  • Exclude big, temporary, or unnecessary data. Examples:
    • mail/* (only if you’re backing up mail separately)
    • public_html/*.zip
    • *.log, cache/*, tmp/*, sess_*
  • One pattern per line. Wildcards are OK (*).
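Exclude patterns behave like ordinary shell wildcards. A quick, standalone way to test what a pattern would catch (the panel's matcher may differ slightly, e.g. in how directory components are handled):

    # Try exclude patterns against some relative paths (illustrative).
    import os
    from fnmatch import fnmatch

    excludes = ["public_html/*.zip", "mail/*", "sess_*", "*.log"]
    paths = ["public_html/site-old.zip", "public_html/index.php",
             "mail/inbox/1.eml", "tmp/sess_ab12", "logs/error.log"]

    for p in paths:
        skip = any(fnmatch(p, pat) or fnmatch(os.path.basename(p), pat) for pat in excludes)
        print(("skip" if skip else "keep"), p)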
Where things go
  • Each schedule writes to your backup partition under:
    /backup/<ScheduleID>/<YYYY-MM-DD_HH-mm>/<user>/{files,databases,mail,structure}
  • Retention either renames older timestamp folders (when promoting them to weekly/monthly points) or removes them, as needed.
Logs
  • Each run writes a log file you can review: /var/log/shm/backup/backup_<ScheduleID>_<timestamp>.log.
  • The schedule also keeps an in-memory log you can surface in the UI (where available).
Tips that save headaches
  • Point backups to a separate disk/partition (e.g., /backup), not the root volume.
  • Start with Files + Databases. Add Mail only for mail-heavy tenants.
  • Use Advanced for busy servers — it’s kinder on storage and faster after the first run.
  • Keep weekly/monthly retention modest (e.g., 7 weekly, 3 monthly) unless you truly need more history.
  • Test a restore path periodically so you know the steps before you actually need them.

Enterprise Backup

This bridges your server to a remote Synconix Backup Manager schedule. In plain terms: you point this box at a backup vault living on another Synconix server, and your local admins/users can browse those off-site restore points and pull back what they need — files, databases, or mail — without opening tickets or shuffling tarballs around.

What it does
  • Links a remote Synconix backup schedule to this server (read-only).
  • Discovers restore points (by date) for mapped accounts and shows what’s inside.
  • Restores on demand: entire accounts, selected folders/files, specific databases/tables, or mailboxes.
  • No drama: restores can go to a staging path first, or merge into the live account with conflict rules you set.
Typical use cases
  • Pull a single site folder from last night’s remote backup.
  • Recover one database table after a bad deploy.
  • Restore a mailbox a user “cleaned up” a little too aggressively.
  • Spin up a full account snapshot into ~/restore/<timestamp> for manual cherry-picking.
How to link a remote schedule
  1. Open Enterprise Backup and click Link remote schedule.
  2. Enter the remote server details and the schedule identifier you want to expose here.
    (Tip: use credentials with read-only access to the backup store.)
  3. Map remote accounts → local accounts (we’ll suggest matches by username/domain; you can adjust).
  4. Save. The schedule appears with its restore points listed by date/time.
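The suggested mapping is essentially a lookup by username with a fallback to the primary domain. A tiny sketch of that idea (field names and data are made up, and the panel's matching may be smarter):

    # Illustrative auto-suggestion for remote -> local account mapping.
    def suggest_mapping(remote_accounts, local_accounts):
        by_user = {a["user"]: a for a in local_accounts}
        by_domain = {a["domain"]: a for a in local_accounts}
        mapping = {}
        for r in remote_accounts:
            match = by_user.get(r["user"]) or by_domain.get(r["domain"])
            mapping[r["user"]] = match["user"] if match else None    # None = map manually
        return mapping

    remote = [{"user": "alice", "domain": "alice.example"},
              {"user": "bob_old", "domain": "bob.example"}]
    local = [{"user": "alice", "domain": "alice.example"},
             {"user": "bob", "domain": "bob.example"}]
    print(suggest_mapping(remote, local))    # {'alice': 'alice', 'bob_old': 'bob'}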
Restoring from a remote backup
  1. Pick a restore point (date/time).
  2. Choose what to restore:
    • Files: whole account, a directory, or specific files.
    • Databases: full DB or one/more tables.
    • Mail: a domain’s mailboxes or a single mailbox.
  3. Select the target: stage into ~/restore/<timestamp> (safe) or merge into the live path.
  4. Pick a conflict rule: skip existing, overwrite, or keep both (adds a .restored suffix).
  5. Click Restore. You’ll see progress and a summary when it’s done.
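To make the three conflict rules concrete, here's a small sketch of how one restored file could be handled under each rule; the paths and helper are illustrative, not the restore engine itself.

    # Illustrative handling of a single file under the three conflict rules.
    import shutil
    from pathlib import Path

    def restore_file(src: Path, dest: Path, rule: str) -> str:
        if dest.exists():
            if rule == "skip":                               # skip existing
                return f"skipped {dest}"
            if rule == "keep_both":                          # keep both: add a .restored suffix
                dest = dest.with_name(dest.name + ".restored")
            # rule == "overwrite": fall through and replace the live file
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        return f"restored to {dest}"

    # Stage into ~/restore/<timestamp> first (safe), or point dest at the live path to merge.
    staging = Path.home() / "restore" / "2024-01-02_03-15" / "public_html" / "index.php"
    print(restore_file(Path("/mnt/remote-point/files/public_html/index.php"), staging, rule="keep_both"))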
Permissions & visibility
  • Root: sees/links all remote schedules, restores anything.
  • Resellers: see/restore for their own accounts.
  • Users: can restore only their account data (if enabled for them).
Safety rails you’ll appreciate
  • Read-only remote: we never modify or prune the source backups from here.
  • Staging first option to inspect before merging back.
  • Load & disk checks prevent restores from kicking off if the server is under stress or space is tight.
  • Audit trail: who restored what, when, and where it landed.
Requirements
  • Remote Synconix server with a valid Backup Manager schedule and network reachability from this host.
  • Credentials (or key) permitted to read the backup store for the linked schedule.
  • Sufficient free space on this server for staging and/or merged data.
Good-to-know details
  • Restores are granular: you don’t need to pull the whole snapshot if you only need a table or folder.
  • We keep original perms/ownership where sensible; anything ambiguous is clearly logged.
  • If PHP/database versions differ between the remote and local servers, the data still restores; any application runtime tweaks are up to you.
Common snags (and fixes)
  • Can’t see remote points: double-check schedule ID and network path/credentials; confirm the remote vault is mounted/online.
  • Permission denied on merge: stage first, then compare perms/UIDs; use the panel’s ownership fix if needed.
  • No space left: free space on the target partition or stage to a larger volume.
