I’m planning on setting up a NAS/home server (primarily storage, with some Jellyfin and Nextcloud and such mixed in), and since it’s primarily for data storage I’d like to follow the 3-2-1 backup rule: 3 copies on 2 media with 1 offsite. Well, actually I’m more going for a 2-1, with 2 copies and one offsite, but that’s beside the point. Now I’m wondering how to do the offsite backup properly.

My main goal is an automatic system that does full system backups at a reasonable rate (I assume daily would be a bit much considering it’s gonna be a few TB worth of HDDs, which aren’t exactly fast, but maybe weekly?) and then keeps 2–3 of those backups offsite at once as a sort of version history, if possible.

This has two components: the local upload system and the offsite storage provider. First, the local system:

What is good software to encrypt the data before/while it’s uploaded?

While I’d preferably upload the data to a provider I trust, accidents happen, and since they don’t need to access the data, I’d prefer that they can’t, maliciously or otherwise. So what is a good way to encrypt the data before it leaves my system?

What is a good way to upload the data?

After it has been encrypted, it needs to be sent. Is there any good software that can upload backups automatically at regular intervals? Maybe something that also handles the encryption part along the way?

Then there’s the offsite storage provider. Personally I’d appreciate as many suggestions as possible, as there is of course no one-size-fits-all, so if you’ve got good experiences with any, please do send their names. I’m basically just looking for network-attached drives: I send my data to them, I leave it there and trust it stays there, and in case more drives in my system fail than RAID-Z can handle (so two), I’d like to be able to get the data back after I’ve replaced my drives. That’s all I really need from them.

For reference, this is gonna be my first NAS/server/anything of this sort. I realize it’s mostly a regular computer, and I’m familiar enough with Linux that I can handle the basic stuff, but the things you wouldn’t do with a normal computer are quite unfamiliar to me, so if any questions here seem dumb, I apologize. Thank you in advance for any information!

  • Psychadelligoat@lemmy.dbzer0.com · 24 minutes ago

    Put brand new drive into system, begin clone

    When clone is done, pull drive out and place in a cardboard box

    Take that box to my off-site storage (neighbors house) and bury it

(In truth, I couldn’t afford to get to the 1 offsite in time and have potentially, tragically, lost almost 4TB of data that, while replaceable, will take time because I don’t fucking remember what I even had lol. Gonna take the drives to a specialist tho cuz I think the platters are fine and it’s the actual read mechanism that’s busted)

  • traches@sh.itjust.works · 2 hours ago (edited)

    NAS at the parents’ house. Restic nightly job, with some plumbing scripts to automate it sensibly.
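
    For anyone wanting a concrete starting point, a minimal sketch of such a nightly job might look like this; the repo address, password file, and paths are all assumptions:

    ```bash
    #!/usr/bin/env bash
    # Nightly restic job, run from cron or a systemd timer,
    # e.g.: 0 3 * * * /usr/local/bin/restic-nightly.sh
    set -euo pipefail

    # Assumed repo on the remote NAS, reachable over sftp/ssh.
    export RESTIC_REPOSITORY="sftp:backup@parents-nas.example:/srv/restic-repo"
    export RESTIC_PASSWORD_FILE="/root/.restic-password"

    # restic encrypts and deduplicates, so this is incremental after the first run.
    restic backup /srv/data /etc

    # Keep a bounded history: 7 daily, 4 weekly, 6 monthly snapshots.
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

    # Sanity-check that the repository is still readable.
    restic check
    ```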

  • Matt The Horwood@lemmy.horwood.cloud · 2 hours ago

There are some really good options in this thread; just remember that whatever you pick, unless you test your backups, they are as good as nonexistent.

      • huquad@lemmy.ml · 1 hour ago

Agreed. I have it configured with a delay and with multiple file versions. I also have another Pi running rsnapshot (an rsync-based snapshot tool).

  • harsh3466@lemmy.ml · 2 hours ago

Right now I sneakernet it. I stash a LUKS-encrypted drive in my locker at work and bring it home once a week or so to update the backup.
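
    In case it helps the OP: the weekly refresh of a drive like that can be just a few commands. A sketch, with the device name and mount point assumed:

    ```bash
    # Unlock and mount the offsite drive (/dev/sdX1 is a placeholder).
    sudo cryptsetup open /dev/sdX1 offsite      # prompts for the LUKS passphrase
    sudo mount /dev/mapper/offsite /mnt/offsite

    # Mirror the data onto the encrypted drive.
    sudo rsync -a --delete /srv/data/ /mnt/offsite/

    # Unmount and lock it again before it goes back to the locker.
    sudo umount /mnt/offsite
    sudo cryptsetup close offsite
    ```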

At some point I’m going to set up an RPi at a friend’s house, but that’s down the road a bit.

  • irmadlad@lemmy.world · 2 hours ago

    so if any questions here seem dumb

    Not dumb. I say the same, but I have a severe inferiority complex and imposter syndrome. Most artists do.

1 local backup, 1 cloud backup, 1 offsite backup to my tiny house at the lake.

I use Syncthing.

  • LandedGentry@lemmy.zip · 5 hours ago

    Cloud is kind of the default these days but given you’re on this community, I’m guessing you want to keep third parties out of it.

Traditionally, at least in the video editing world, we would keep LTO or some other format offsite and pay to house it, or, if you have multiple locations available to you, just have those drives shipped back and forth as they’re updated at regular intervals.

I don’t know what you really have access to or what you’re willing to compromise on, so it’s kind of hard to answer the question, to be honest. There are lots of ways to do it.

  • rutrum@programming.dev · 4 hours ago

I use Borg Backup. It, and another tool called restic, are meant for creating encrypted backups, and both can run regularly and only back up the differences. This means you could take a daily backup without making new copies of your entire library. They also let you, as part of compressing and encrypting, send a backup to a remote machine over SSH. I think you should start with either of those.
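
    To make that concrete, a minimal borg workflow might look like this; the repo URL and paths below are assumptions, not recommendations:

    ```bash
    # One-time: create an encrypted repository on the remote machine.
    borg init --encryption=repokey-blake2 ssh://backup@offsite.example/./backups

    # Daily: create a compressed, encrypted, incremental archive.
    borg create --compression zstd \
        ssh://backup@offsite.example/./backups::'{hostname}-{now:%Y-%m-%d}' \
        /srv/data

    # Trim old archives so the repository doesn't grow without bound.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
        ssh://backup@offsite.example/./backups
    ```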

One provider that’s built for cloud backups is BorgBase. It can be a location where you back up a borg (or restic, I think) repository. There are others that are made to be easily accessed with these backup tools.

Lastly, I’ll mention that borg handles making a backup, but doesn’t handle the scheduling. Borgmatic is another tool that, given a YAML configuration file, will run the borg commands on a schedule with the defined arguments. You could also use something like systemd/cron to run it on a schedule.
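
    A sketch of such a configuration (paths and repo are placeholders, and exact keys differ a bit between borgmatic versions):

    ```yaml
    # /etc/borgmatic/config.yaml
    source_directories:
        - /srv/data
        - /etc

    repositories:
        - path: ssh://backup@offsite.example/./backups
          label: offsite

    # Retention, applied on each run.
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
    ```

    A daily `borgmatic` invocation from cron or a systemd timer then does the create/prune cycle against that file.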

Personally, I use borgbackup configured through NixOS (which generates the systemd units for the daily backups), and I back up both to a different computer in my house and to BorgBase. That gives me 3 copies: 1 in the cloud and 2 at home.

  • I used to say restic and b2; lately, the b2 part has become more iffy, because of scuttlebutt, but for now it’s still my offsite and will remain so until and unless the situation resolves unfavorably.

    Restic is the core. It supports multiple cloud providers, making configuration and use trivial. It encrypts before sending, so the destination never has access to unencrypted blobs. It does incremental backups, and supports FUSE vfs mounting of backups, making accessing historical versions of individual files extremely easy. It’s OSS, and a single binary executable; IMHO it’s at the top of its class, commercial or OSS.
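
    To illustrate the features described above, here is a rough sketch; the bucket name and paths are assumptions, and B2 credentials go in the usual environment variables:

    ```bash
    # Assumed B2 bucket; restic encrypts client-side before anything is uploaded.
    export B2_ACCOUNT_ID="<keyID>"
    export B2_ACCOUNT_KEY="<applicationKey>"
    export RESTIC_REPOSITORY="b2:my-backup-bucket:restic"
    export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

    restic init                  # one-time repository setup
    restic backup ~/documents    # incremental after the first run

    # Browse every snapshot as a filesystem (requires FUSE).
    mkdir -p /mnt/restic
    restic mount /mnt/restic     # historical versions under /mnt/restic/snapshots/
    ```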

B2 has been very good to me and is a clear winner for this use case: writes and storage are pennies a month, and it only gets more expensive if you’re doing a lot of reads. The UI is straightforward and easy to use, and the API is good; if it weren’t for their recent legal and financial drama, I’d still unreservedly recommend them. As it is, you’ll have to evaluate it yourself.

  • q7mJI7tk1@lemmy.world · 2 hours ago

I spend my days working on a MacBook, and have several old external USB drives duplicating my important files, live, off my server (Unraid) via Resilio to my MacBook (yes, I know Syncthing exists, but Resilio is easier). My offsite backups go to a Hetzner Storage Box using Duplicacy, which is amazing and supports encrypted snapshots (a cheap GUI alternative to Borg Backup).

    So for me, Resilio and Duplicacy.

  • Getting6409@lemm.ee · 2 hours ago

My automated workflow is to package backup sources into tars (uncompressed), encrypt them with gpg, then ship the tar.gpg off to Backblaze B2 and S3 with rclone. I don’t trust cloud providers, so I use two just in case. I haven’t really needed full system backups offsite, just the things I’d be severely hurting for if my home exploded.

But to your main questions: I like gpg because you have good options for encrypting things safely within bash/ash/sh scripting, and the encryption itself is considered strong.

And I really like rclone because it covers the main cloud providers and wrangles everything down to an rsync-like experience, which is also pretty tidy for shell scripting; the sketch below shows roughly what that flow looks like.
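
    Something like this, where the recipient key, remote names, and paths are all assumed:

    ```sh
    #!/bin/sh
    # tar -> gpg -> rclone, in plain POSIX sh; everything named here is a placeholder.
    set -eu

    STAMP=$(date +%Y-%m-%d)
    SRC=/srv/important
    OUT=/tmp/backup-$STAMP.tar.gpg

    # Uncompressed tar, encrypted to a public key so no passphrase lives in the script.
    tar -cf - "$SRC" | gpg --encrypt --recipient backup@example.com --output "$OUT"

    # Ship the same artifact to two providers, just in case.
    rclone copy "$OUT" b2:my-bucket/backups/
    rclone copy "$OUT" s3:my-bucket/backups/

    rm -f "$OUT"
    ```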

  • hendrik@palaver.p3x.de · 4 hours ago (edited)

Next to paying for cloud storage, I know people who store an external HDD at their parents’ or with friends. I don’t do the whole backup thing for all the recorded TV shows and ripped Blu-rays... if my house burns down, they’re gone. But that makes the amount of data a bit more manageable, and I can replace those.

    I currently don’t have a good strategy. My data is somewhat scattered between my laptop, the NAS, an external HDD which is in a different room but not offsite, one cheap virtual server I pay for, and critical things like the password manager, which are synced to the phone as well. The main thing I’m worried about is one of the mobile devices getting stolen, so I focus on having those backed up to the NAS or synced to Nextcloud. But I should work on a solid strategy in case something happens to the NAS.

I don’t think the software is a big issue. We’ve got several good backup tools which can do incremental or full backups, schedules, encryption, and whatever else someone might need.

    • tburkhol@lemmy.world · 2 hours ago

It really depends on what your data is and how hard it would be to recreate. I keep a spare HDD in a $40/year bank box and rotate it every 3 months. Most of the content is media: pictures, movies, music. Financial records would be annoying to recreate, but if there’s a big enough disaster to force me to go to the offsite backups, I think that’ll be the least of my troubles. Some data logging has a replica database on a VPS.

      My upload speed is terrible, so I don’t want to put a media library in the cloud. If I did any important daily content creation, I’d probably keep that mirrored offsite with rsync, but I feel like the spirit of an offsite backup is offline and asynchronous, so things like ransomware don’t destroy your backups, too.

        • hendrik@palaver.p3x.de · 2 hours ago

Sure. By data that might be skipped, I meant something like the Jellyfin server, which probably consists of pirated TV, music, or movie rips. Those tend to be huge and easy to recreate. With personal content, pictures, and videos there is no chance of getting them back. And I’d argue that with a lot of documents and data it’s not even worth the hassle of deciding which might be stored somewhere else, maybe in paper form... just back them up; storage is cheap, and most people don’t generate gigabytes’ worth of content each month. For large data that doesn’t change a lot, something like one or two rotated external disks might do it. And for smaller documents and current projects which see a lot of changes, we have things like Nextcloud, Syncthing, and an $80-a-year VPS or other cloud storage solutions.

    • ladfrombrad 🇬🇧@lemdro.id · 3 hours ago

Yeah, me too; photos and videos I’ve recorded are the only things I’m bothered about. Backing up all my arrrrr booty offsite is redundant, since I’ve already seeded it to a 2.1 ratio and can hopefully download it again from people with larger storage than my family member has.

It’s how I handle backing up those photos/videos though. I bought them a 512GB card and shoved it in a GLi AP they have down there, which I sync my DCIM folder to (the app was removed from the Play Store since it didn’t need updating, but Google’s stupid policies meant it went RIP...), and I also back that up to the old Synology NAS I handed down to them. I suppose I could use Syncthing, but I like that old app; the adage “if it ain’t broke, don’t fix it” applies.

Along with them having Tailscale on a Pi 4 (on a UPS; it’s their/my backup TVHeadend server) and their little N100 media box, I don’t even bother them with my meager photo collection, and it works well.

  • WeirdGoesPro@lemmy.dbzer0.com · 4 hours ago

My ratchet way of doing it is Backblaze. There is a Docker container that lets you run the unlimited personal plan on Linux by emulating a Windows environment. They let you set an encryption key so that they can’t access your data.

    I’m sure there are a lot more professional and secure ways to do it, but my way is cheap, easy, and works.

    • BlueÆther@no.lastname.nz · 3 hours ago

I use Backblaze as well. Got a link to the Docker container? That may save me a few dollar bucks a week and thus keep SWMBO happier.

        • turmacar@lemmy.world · 2 hours ago

Probably a me problem, but I kept having issues with that Docker container on Unraid (it’s just in the Community Apps “store”); the VM seemed to crash randomly.

I switched over to their B2 storage and just use rclone to an encrypted bucket, and it’s under $5/mo, which I’m good with. The biggest cost is if I let it run too often and it spends a bunch of their compute time listing files to see if anything needs updating.
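
        For reference, the encrypted-bucket setup is a “crypt” remote wrapped around a plain B2 remote; a sketch, with remote and bucket names assumed:

        ```bash
        # In ~/.config/rclone/rclone.conf, a crypt remote wraps the raw b2 remote:
        #
        #   [b2raw]
        #   type = b2
        #   account = <keyID>
        #   key = <applicationKey>
        #
        #   [b2crypt]
        #   type = crypt
        #   remote = b2raw:my-bucket/backups
        #   password = <obscured via `rclone obscure`>
        #
        # Syncs are then encrypted client-side before upload. --fast-list batches
        # the listing calls, which helps keep the transaction costs down.
        rclone sync /srv/data b2crypt: --fast-list --transfers 8
        ```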

    • qjkxbmwvz@startrek.website · 2 hours ago

Same here: rsync to a Pi 3 with a (single) ZFS drive at a family member’s house. I retain some daily/weekly/monthly snapshots.

I have a (free) VPS with a static IPv4, which is how I connect everything.

Both the VPS and the remote site have limited network speed (I think 50 Mbps for the VPS), so the initial sync was done sneakernet (well... “airplane net”). Nightly rsync is no problem bandwidth-wise, and is mostly just new videos I’ve uploaded to my local Immich instance.
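
      A sketch of that kind of push-plus-snapshot setup, with hostnames, pool, and dataset names all assumed:

      ```bash
      # On the source machine: nightly push over ssh (cron or a systemd timer).
      rsync -az --delete /srv/data/ backup@remote-pi.example:/tank/backup/

      # On the pi afterwards (as root): snapshot so each night's state is retained.
      zfs snapshot tank/backup@$(date +%Y-%m-%d)

      # Keep the 30 newest snapshots, destroy the rest (GNU head/xargs assumed).
      zfs list -H -t snapshot -o name -s creation tank/backup \
          | head -n -30 | xargs -r -n1 zfs destroy
      ```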