
[DeepDive] Apple Time Machine backups (sparsebundle, backupdb, etc) #89

Open
0xdevalias opened this issue Aug 21, 2020 · 6 comments
Labels
deep-dive A research deep dive/link dump of various things related to a topic. May eventually become a blog.

Comments

@0xdevalias
Owner

0xdevalias commented Aug 21, 2020


Note: the methods listed in the original article may no longer be the best or most up-to-date.

In particular, I located another method using tmutil restore at the following link:

There's also a lot of useful info about tmutil's capabilities at the following link:

If you want to 'Transfer Time Machine back-ups from one back-up disc to another':

With Parallels we can create a new macOS virtual machine (VM), and then potentially 'restore' a Time Machine backup into that VM:

And while it's not Time Machine related, I recently discovered SuperDuper, which can make bootable macOS backups. That could be useful in itself (and potentially combined with restoring from an old Time Machine backup, then creating a bootable backup from the result?):


Some more useful blog posts related to sparsebundles and similar:

Hacky ways to speed up initial Time Machine backup on NAS

There are a few tricks/hacks/workarounds for creating a local Time Machine backup, which can then be transferred to a NAS and used for a networked Time Machine backup.

Both of these are essentially hacky workarounds for getting an appropriate sparsebundle created: use it locally to perform the first backup with Time Machine, then move it to a NAS/server to continue making future backups over the network. Given that, it may be more useful to simply create the sparsebundle image ourselves (see below).

Manually creating a sparsebundle disk image

I added some notes on manually creating .sparsebundle disk images, which could potentially be used to 'coerce' Time Machine into not directly creating a .backupdb file when backing up to an external hard disk:

Which links to the following references:

Some other resources I found with regards to manually creating sparsebundle images, particularly from the command line:

  • https://www.thegeekpub.com/4184/how-to-create-a-sparsebundle-on-os-x/
    • hdiutil create -library SPUD -size $SIZESPEC -fs "Journaled HFS+" -type SPARSEBUNDLE -volname "$MACHINENAME" "${MACHINENAME}_${MAC_ADDRESS}.sparsebundle"
    • The -size parameter can probably be as large as you want, now that Apple has evidently fixed the sparsebundle issues that were causing all but the most recent backup to be dumped. However, you can also specify a smaller size if you (like me) want to create a hard limit for the amount of space your Time Machine backups will take on your network drive. hdiutil does have a -resize option if you need to utilize that later.

    • you can enable Time Machine to access SMB/Network shares with following command: defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

  • http://hints.macworld.com/article.php?story=20140415132734925
    • Using Time Machine on unsupported volumes

    • hdiutil create -size 320g -type SPARSEBUNDLE -fs "HFS+J" MacBook-Backup.sparsebundle
    • open MacBook-Backup.sparsebundle
    • diskutil list
    • sudo diskutil enableOwnership /dev/disk2s2
    • sudo tmutil setdestination '/Volumes/MacBook Pro Backup'
  • http://sansumbrella.com/writing/2012/the-reluctant-sysadmin-nas-time-machine/ (2012)
    • Apparently, HFS+ file systems can handle crazy huge numbers of files coexisting in a single folder. Whatever flavor of Linux is running on the NAS cannot. The failed verification results from Apple software trying to dump too many files in the same folder (in this case, the files are sparse bundle stripes). Too many files are created when you start having a large-ish backup. Say 300GB+. For me, that can happen after a week or so of file changes, since my initial backup is in the 290GB range. The solution to enabling more backup space is to increase the band size of the time machine backup so that your NAS’s filesystem doesn’t get overwhelmed by the number of files Time Machine attempts to store in a single folder. Fewer, bigger files.

    • Create a sparse bundle with a 128MB sparse-band-size and then replace the bundle generated by time machine with it. You will also need to copy the com.apple.TimeMachine.*.plist files from the Time Machine-generated bundle. Those plists will let Time Machine identify the sparsebundle as one it can and should use. Don’t copy the .bckup files, token, Info.plist, or bands/.

    • https://gist.github.com/sansumbrella/3997241
      • hdiutil create -size 900g -type SPARSEBUNDLE -nospotlight -volname "Backup of $MACHINE_NAME" -fs "Case-sensitive Journaled HFS+" -imagekey sparse-band-size=262144 -verbose ./$MACHINE_NAME.sparsebundle
      • cp your-machine-name.old.sparsebundle/com.apple.TimeMachine.*.plist your-machine-name.new.sparsebundle
  • https://mike.peay.us/blog/archives/248 (2008)
    • I learned all about Time (Machine) so you don’t have to

    • hdiutil create -library SPUD -size $SIZESPEC -fs HFS+J -type SPARSEBUNDLE -tgtimagekey sparse-band-size=262144 -volname "Backup of $MACHINENAME" "${MACHINENAME}_${MAC}.sparsebundle"
  • https://wiki.turris.cz/doc/en/public/create_time_capsule_for_osx_time_machine
    • hdiutil create -size 500g -type SPARSEBUNDLE -encryption AES-128 -nospotlight -fs "Journaled HFS+" -volname "$TC" $HN\_$MA.sparsebundle
    • defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
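Pulling the variants above together, here's a hedged sketch of a parametrised create command. All the values are placeholders, and the script only echoes the assembled command (so it can be reviewed first, and stays harmless on non-macOS systems); remove the echo indirection to actually run it:

```shell
#!/bin/sh
# Sketch: assemble an hdiutil create command for an encrypted Time Machine
# sparsebundle. All values below are placeholders -- substitute your own.
MACHINENAME="mycomputer"
MAC="001122334455"   # en0 MAC address, colons stripped
SIZESPEC="500g"      # hard upper limit for the backup image

CMD="hdiutil create -size $SIZESPEC -type SPARSEBUNDLE -encryption AES-128 -fs \"Journaled HFS+\" -volname \"Backup of $MACHINENAME\" ${MACHINENAME}_${MAC}.sparsebundle"

# Echo rather than execute, so the command can be inspected before running.
echo "$CMD"
```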

If using the command line and low level tools like hdiutil isn't your thing, Spundle appears to be a GUI tool that wraps up some of the more commonly needed sparsebundle related features:

  • https://eclecticlight.co/2020/05/18/spundle-a-new-utility-for-creating-and-adjusting-sparse-bundles/
    • Among the many complex command tools, hdiutil, for working with disk images, has to be one of the most difficult. Some of its most interesting controls are buried deep in its elaborate system of verbs and options, and its long and opaque man page. Spundle is a new app which provides a convenient wrapper for three of its functions on sparse bundles: their creation, resizing and compaction.

    • It’s a single window utility which runs on all versions of macOS from Sierra to Catalina.

Copy .sparsebundle from NAS to local/external disk

rsync -avp -e "ssh" --progress TODONASNAMEHERE:/mnt/path/to/TimeMachine/TODOMYBACKUPNAME.sparsebundle /Volumes/TODOEXTERNALDISKNAME/

Note: if you're copying over the internet/a slower network, you may benefit from adding -z to the rsync command to enable compression.

To ensure your destination doesn't keep any files that no longer exist on the source, make sure to use --delete (though be careful when doing so, as a mistake could accidentally remove all your destination files; consider combining it with --dry-run to see what will change first):

You can use --dry-run to see what files have changed/need to be synced without actually sending them, and --itemize-changes to see why those files are considered to have been changed:
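As a safe way to try those flags out before pointing them at a NAS, here's a self-contained sketch against two throwaway local directories (the paths and file names are arbitrary):

```shell
#!/bin/sh
# Demonstrate --dry-run / --itemize-changes / --delete on throwaway dirs.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/src" "$DEMO/dst"
echo "one" > "$DEMO/src/a.txt"
echo "two" > "$DEMO/src/b.txt"
rsync -a "$DEMO/src/" "$DEMO/dst/"      # initial sync
echo "changed" > "$DEMO/src/a.txt"      # modify one file
rm "$DEMO/src/b.txt"                    # delete another

# Preview: what would change (--dry-run), and why (--itemize-changes)?
# Nothing is actually transferred or deleted.
rsync -av --delete --dry-run --itemize-changes "$DEMO/src/" "$DEMO/dst/"
```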

Deleting files from Time Machine backups

Reclaim space by defragging / compacting sparsebundles

It appears there is also some good information about using hdiutil to compact sparsebundles to reclaim unused space, and some tricks for using it to 'defrag' the bundle (and ideally save even more space):

Time Machine changes in macOS Big Sur

DIY Docker / Raspberry Pi / etc Time Machine (Time Capsule)

tmutil

Some useful tmutil subcommands (from man tmutil) for comparing/extracting files from backups, as well as managing backup files:

     compare [-@acdefglmnstuEUX] [-D depth] [-I name] [snapshot_path | path1 path2]
             Perform a backup diff.

             If no arguments are provided, tmutil will compare the computer to the latest snapshot. If a snapshot path is provided as the sole argument, tmutil will compare the
             computer to the specified snapshot. If two path arguments are provided, tmutil will compare those two items to each other. tmutil will attempt to inform you when you
             have asked it to do something that doesn't make sense or isn't supported.

             The compare verb allows you to specify what properties to compare. If you specify no property options, tmutil assumes a default property set of -@gmstu. Specifying any
             property option overrides the default set.

             Options:
                 -a    Compare all supported metadata.
                 -n    No metadata comparison.
                 -@    Compare extended attributes.
                 -c    Compare creation times.
                 -d    Compare file data forks.
                 -e    Compare ACLs.
                 -f    Compare file flags.
                 -g    Compare GIDs.
                 -m    Compare file modes.
                 -s    Compare sizes.
                 -t    Compare modification times.
                 -u    Compare UIDs.
                 -D    Limit traversal depth to depth levels from the beginning of iteration.
                 -E    Don't take exclusions into account when comparing items inside volumes.
                 -I    Ignore paths with a path component equal to name during iteration. This may be specified multiple times.
                 -U    Ignore logical volume identity (volume UUIDs) when directly comparing a local volume or snapshot volume to a snapshot volume.
                 -X    Print output in XML property list format.

     restore [-v] src ... dst
             Restore the item src, which is inside a snapshot, to the location dst. The dst argument mimics the destination path semantics of the cp tool. You may provide multiple
             source paths to restore. The last path argument must be a destination.

             When using the restore verb, tmutil behaves largely like Finder. Custom Time Machine metadata (extended security and other) will be removed from the restored data, and
             other metadata will be preserved.

             Root privileges are not strictly required to perform restores, but tmutil does no permissions preflighting to determine your ability to restore src or its descendants.
             Therefore, depending on what you're restoring, you may need root privileges to perform the restore, and you should know this ahead of time. This is the same behavior you
             would encounter with other copy tools such as cp or ditto. When restoring with tmutil as root, ownership of the restored items will match the state of the items in the
             backup.

     delete path ...
             Delete one or more snapshots, machine directories, or backup stores. This verb can delete items from backups that were not made by, or are not claimed by, the current
             machine. Requires root privileges.

     listbackups
             Print paths for all of this computer's completed snapshots.

     machinedirectory
             Print the path to the current machine directory for this computer.

     calculatedrift machine_directory
             Analyze the snapshots in a machine directory and determine the amount of change between each. Averages are printed after all snapshots have been analyzed. This may
             require root privileges, depending on the contents of the machine directory.
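To make those verbs concrete, here's a hedged sketch of a typical compare-then-restore workflow. The snapshot path below is hypothetical (get real ones from tmutil listbackups), and the run wrapper only prints each command; swap its body for "$@" to actually execute on macOS:

```shell
#!/bin/sh
# Sketch of a compare/restore workflow using the tmutil verbs above.
# The snapshot and file paths are hypothetical examples.
SNAP="/Volumes/Backups/Backups.backupdb/mycomputer/2020-08-21-120000"

run() { echo "+ $*"; }   # print only; replace the body with "$@" to execute

run tmutil listbackups                       # list this computer's snapshots
run tmutil compare "$SNAP"                   # diff the computer vs a snapshot
run tmutil restore -v "$SNAP/Macintosh HD/Users/me/notes.txt" "$HOME/Desktop/"
```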

hdiutil

Unsorted

@0xdevalias 0xdevalias added the deep-dive A research deep dive/link dump of various things related to a topic. May eventually become a blog. label Aug 21, 2020
@0xdevalias
Owner Author

0xdevalias commented Sep 2, 2020

The following are my notes on copying a networked Time Machine backup from my NAS to a locally attached external hard disk, then (working on the local copy so as not to damage the original backup) attempting to reduce the size of the sparsebundle by compacting and defragmenting it. I may also attempt to reduce the size further by removing large unneeded files from the backup (e.g. Applications, VMs, etc.), which may then need to be followed by another compact/defrag step.

Copy Time Machine backup sparsebundle from NAS to locally attached external hard disk

rsync -avp -e "ssh" --delete --progress mynas:/mnt/disk4/TimeMachine/mycomputer.sparsebundle /Volumes/MyExternal

Use hdiutil to compact the sparsebundle

⇒  cd /Volumes/MyExternal

⇒  hdiutil compact mycomputer.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Starting to compact…
Reclaiming free space…
hdiutil: compact failed - internal error

It seems that the hdiutil: compact failed - internal error may have been caused by not properly unmounting the sparsebundle. I was able to resolve this by mounting (through Finder), then unmounting (again, through Finder) the sparsebundle before trying to run the command again:

After this, I was able to run the command again, and it seemed to work this time (it took a long time to run, and my fan spun up as my CPU was processing pretty heavily during it):

⇒  hdiutil compact mycomputer.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Starting to compact…
Reclaiming free space…
...........................................................................................................................................................................................................
Finishing compaction…
Reclaimed 23.8 GB out of 2.3 TB possible.

I believe the "of 2.3 TB possible" may relate to the size of the underlying hard disk and its free space or similar, not necessarily an actual amount that you'll be able to reclaim.

To check what files within the sparsebundle were actually changed, we can do a --dry-run of our rsync command from the NAS:

⇒  rsync -avp -e "ssh" --progress --delete --stats --dry-run mynas:/mnt/disk4/TimeMachine/mycomputer.sparsebundle /Volumes/MyExternal
[..snip..]
receiving file list ...
64063 files to consider
mycomputer.sparsebundle/
mycomputer.sparsebundle/bands/
mycomputer.sparsebundle/bands/19
mycomputer.sparsebundle/bands/191d
mycomputer.sparsebundle/bands/1e
[..snip..]
mycomputer.sparsebundle/bands/f9f9
mycomputer.sparsebundle/bands/fc66
mycomputer.sparsebundle/bands/fc99

Number of files: 64063
Number of files transferred: 3210
Total file size: 537260864983 bytes
Total transferred file size: 26892118661 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 868510
File list generation time: 0.088 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 384394
Total bytes received: 1253340

sent 384394 bytes  received 1253340 bytes  83986.36 bytes/sec
total size is 537260864983  speedup is 328051.36

Looking at Total transferred file size: 26892118661 bytes, we can calculate the size of the files that rsync considers in need of replacement: 26892118661/1024/1024/1024, which is roughly 25.05 GB, in the same ballpark as the 23.8 GB of 'reclaimed space' reported by hdiutil.
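That conversion is easy to script (same arithmetic as above, using 1024-based units):

```shell
#!/bin/sh
# Convert rsync's transferred byte count to 1024-based GB.
BYTES=26892118661
awk -v b="$BYTES" 'BEGIN { printf "%.2f\n", b / 1024 / 1024 / 1024 }'   # prints 25.05
```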

So far so good! But what happens if we run hdiutil compact again at this point?

⇒  hdiutil compact mycomputer.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Starting to compact…
Reclaiming free space…
...........................................................................................................................................................................................................
Finishing compaction…
Reclaimed 0 bytes out of 2.3 TB possible.

No additional space saved, which means it did all it could the first time around.

Can we 'defrag' the sparsebundle to save more space?

Let's start off by confirming the format of our sparsebundle is UDSB as expected:

⇒  hdiutil imageinfo mycomputer.sparsebundle | grep 'Format:'
Enter password to access "mycomputer.sparsebundle":
Format: UDSB

Looks good. So now we'll try 'converting' the sparsebundle to the same UDSB format, without changing any of the band size settings. Theoretically, this creates a new sparsebundle that contains the same data, but 'defragged' so that we don't waste any space within each band. In theory, that should leave us with a smaller sparsebundle overall, while still containing the exact same Time Machine backup data inside it (this is probably going to take a fairly long time to run.. so go make a coffee while you wait):

⇒  hdiutil convert mycomputer.sparsebundle -format UDSB -encryption -o ~/Desktop/mycomputer-converted.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Enter a new password to secure "mycomputer-converted.sparsebundle":
Re-enter new password:
Reading Protective Master Boot Record (MBR : 0)…
Reading GPT Header (Primary GPT Header : 1)…
Reading GPT Partition Data (Primary GPT Table : 2)…
Reading  (Apple_Free : 3)…
Reading EFI System Partition (C12A3456-F78F-91D1-BA2B-34A5C67EC89B : 4)…
Reading disk image (Apple_HFSX : 5)…
.............................................................................................................................................................................................................................................................
Reading  (Apple_Free : 6)…
Reading GPT Partition Data (Backup GPT Table : 7)…
.............................................................................................................................................................................................................................................................
Reading GPT Header (Backup GPT Header : 8)…
.............................................................................................................................................................................................................................................................
Elapsed Time:  3h 34m 12.209s
Speed: 37.9Mbytes/sec
Savings: 83.0%
created: /Users/devalias/Desktop/mycomputer-converted.sparsebundle

Comparing mycomputer-converted.sparsebundle's size on disk (511,906,213,864 bytes; 511.91 GB on disk) to the original mycomputer.sparsebundle's (511,696,816,466 bytes; 511.71 GB on disk), it seems we didn't actually save any space by doing this; if anything, the converted copy is marginally larger.

Just in case, let's try and compact it again:

⇒  hdiutil compact mycomputer-converted.sparsebundle
Enter password to access "mycomputer-converted.sparsebundle":
Starting to compact…
Reclaiming free space…
.............................................................................................................................................................................................................................................................
Finishing compaction…
Reclaimed 2.7 MB out of 2.3 TB possible.

So a tiny bit extra saved, but all in all, nothing really worthwhile. Triple checking, du reports the same size for both files:

⇒  du -sh mycomputer.sparsebundle ~/Desktop/mycomputer-converted.sparsebundle
477G	mycomputer.sparsebundle
477G	/Users/devalias/Desktop/mycomputer-converted.sparsebundle

There are some notes out there suggesting we may be able to save more space by using hdiutil convert to change the band size from its default of 8 MB. This would likely only save us space if there was wasted space within each band that added up to a significant amount (which is basically what we were hoping to discover with the above 'defrag attempt').

I then came across a really informative blog that, among many other useful/interesting things, contains a lot of information about sparsebundles, how to work with them, and suggestions for how to optimize their size/performance. I've linked to some of the more interesting/relevant sparsebundle-related posts below:

One post in particular gave some useful insight into band sizes, and limits related to them:

The band size is the maximum size of each band file, and determines two things: the number of band files, and how efficiently the whole sparse bundle can change in size. In most cases, the default is 8.4 MB, which generally works well for all but large backups.

There’s one important limit to bear in mind when setting band size. All the bands of a sparse bundle are stored inside a single folder. If the number of bands reaches the maximum for a single folder for the host file system, then it will start to fail, and you could lose part or all of its contents. Currently, in macOS with HFS+ or APFS, that critical number is believed to be 100,000. So whatever you do, ensure that your sparse bundle will never need 100,000 or more band files, which could spell disaster.

It goes on to suggest that to avoid issues related to this limit, you can use the expected maximum size of your backup sparsebundle to calculate a 'critical band size', ensuring that it will stay within the limit and avoid any issues:

The simplest way to approach this is with a critical band size. Calculate that by dividing the absolute maximum size that your sparse bundle might reach (even after resizing) by a number smaller than 100,000, such as 50,000. It’s best to do this in consistent units such as MB: if your sparse bundle could reach 1.1 TB, that’s 1,100 GB, or 1,100,000 MB. Divide that by 50,000 and you should get a critical band size of 22 MB. That’s the smallest band size which will keep the total number of bands below 50,000.

You can then choose any band size from the critical upwards. The smaller you make the bands, the more efficient the sparse bundle’s use of storage space will be, as it can add (or remove) bands to accommodate changes in size.
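The worked example from the quote can be sketched as a quick calculation (the numbers are the quote's own):

```shell
#!/bin/sh
# Critical band size: maximum sparsebundle size (in MB) divided by a
# band-count ceiling chosen safely under the ~100,000-files-per-folder limit.
MAX_MB=1100000   # 1.1 TB expressed in MB
MAX_BANDS=50000  # stay well below 100,000
echo $(( MAX_MB / MAX_BANDS ))   # prints 22 (MB)
```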

With this in mind, and the size of our Time Machine backup, it doesn't really make sense for us to choose a smaller band size in hopes of reducing the disk space used; but it did make me curious to see if we could combine our defrag attempts with changing the band size, and what overall effect that would have for us.

The hdiutil man page gives us more information about the specifics of how to change the band size:

By default, UDSP images grow one megabyte at a time. Introduced in 10.5, UDSB images use 8 MB band files which grow as they are written to. -imagekey sparse-band-size=size can be used to specify the number of 512-byte sectors that will be added each time the image grows. Valid values for SPARSEBUNDLE range from 2048 to 16777216 sectors (1 MB to 8 GB).
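Translating that into practice: the -imagekey sparse-band-size value is a count of 512-byte sectors, so a band size in MB converts like this:

```shell
#!/bin/sh
# Band size in MB -> 512-byte sector count for -imagekey sparse-band-size.
# 1 MB = 1048576 bytes = 2048 sectors of 512 bytes.
BAND_MB=32
echo $(( BAND_MB * 2048 ))   # prints 65536
```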

Since 1 MB is 2048 512-byte sectors, we can choose our number accordingly. Sticking to powers of 2, I figured I would try 32 MB (32*2048=65536):

⇒  hdiutil convert mycomputer.sparsebundle -format UDSB -imagekey sparse-band-size=65536 -encryption -o ~/Desktop/mycomputer-32mb-bands.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Enter a new password to secure "mycomputer-32mb-bands.sparsebundle":
Re-enter new password:
Reading Protective Master Boot Record (MBR : 0)…
Reading GPT Header (Primary GPT Header : 1)…
Reading GPT Partition Data (Primary GPT Table : 2)…
Reading  (Apple_Free : 3)…
Reading EFI System Partition (C12A3456-F78F-91D1-BA2B-34A5C67EC89B : 4)…
Reading disk image (Apple_HFSX : 5)…
.............................................................................................................................................................................................................................................................
Reading  (Apple_Free : 6)…
Reading GPT Partition Data (Backup GPT Table : 7)…
.............................................................................................................................................................................................................................................................
Reading GPT Header (Backup GPT Header : 8)…
.............................................................................................................................................................................................................................................................
Elapsed Time:  3h 13m 44.996s
Speed: 42.0Mbytes/sec
Savings: 83.0%
created: /Users/devalias/Desktop/mycomputer-32mb-bands.sparsebundle

This came out as 512,834,629,610 bytes (512.17 GB on disk), so it actually grew a little in size (probably because with each band being 32 MB, more space may have been wasted per band than with the original 8 MB bands). On the other hand, we dramatically reduced the number of individual band files needed to store this much data: 15,316 32 MB band files, compared to the 61,047-odd 8 MB band files used in the original.
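As a rough cross-check of those counts (data size over band size; real counts run a little higher, since partially filled bands still occupy a file each):

```shell
#!/bin/sh
# Estimate how many 32 MB bands ~512 GB of data needs.
SIZE_BYTES=512834629610
BAND_BYTES=$(( 32 * 1024 * 1024 ))
echo $(( SIZE_BYTES / BAND_BYTES ))   # prints 15283, close to the observed 15,316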

Running another compact didn't really save us any space on this new 32mb sparsebundle:

⇒  hdiutil compact mycomputer-32mb-bands.sparsebundle
Enter password to access "mycomputer-32mb-bands.sparsebundle":
Starting to compact…
Reclaiming free space…
.............................................................................................................................................................................................................................................................
Finishing compaction…
Reclaimed 276 KB out of 2.3 TB possible.

To be thorough, I figured I would try 16 MB band sizes (16*2048=32768) as well. Since the output is largely the same as above, I've snipped some of the less relevant bits:

⇒  hdiutil convert mycomputer.sparsebundle -format UDSB -imagekey sparse-band-size=32768 -encryption -o ~/Desktop/mycomputer-16mb-bands.sparsebundle
Enter password to access "mycomputer.sparsebundle":
Enter a new password to secure "mycomputer-16mb-bands.sparsebundle":
Re-enter new password:
Reading Protective Master Boot Record (MBR : 0)…
[..snip..]
Elapsed Time:  2h 22m 57.428s
Speed: 56.9Mbytes/sec
Savings: 83.0%
created: /Users/devalias/Desktop/mycomputer-16mb-bands.sparsebundle

This resulted in 512,298,926,058 bytes (512.3 GB on disk), with 30,574 of our 16 MB band files. Pretty much as expected.

Looking at the man page for hdiutil create, it suggests that by creating a new disk image, and then using the -srcfolder argument to copy data to the image, it will theoretically defragment the data:

-srcfolder source: copies file-by-file the contents of source into image, creating a fresh (theoretically defragmented) filesystem on the destination.

TODO: another post suggested we could create a new sparsebundle, then mount both and drag/drop to copy the internal files between them; it might be worth trying and checking that as well?

TODO: how can we delete large files from all of the backup snapshots (e.g. VMs, etc.)?

TODO

TODO: include additional steps/notes here?

TODO

@johgu67

johgu67 commented Dec 27, 2020

Thanks @0xdevalias for this info about TM.

I have two images of the same TM backup, one with the extension .backupbundle (>1TB) and another with the extension .sparsebundle (12GB). Unfortunately, neither of them will mount (from macOS Catalina).

I suspect this happened while upgrading to Catalina, with the TM process being interrupted for an unknown reason. The NAS was probably not feeling well...

Based on a post on Apple support, some users have solved this by renaming the .backupbundle extension to .sparsebundle, and then it worked. However, I suspect they had only the .backupbundle image and not the .sparsebundle.
In my case, this renaming did not work.

Any ideas are most welcome.
/Johan

@mikerj1

mikerj1 commented Jul 13, 2022

This seems like a good place to post: I just successfully resized a Time Machine volume hosted on an SMB share backed by ZFS. I increased the quota on the ZFS volume, but in order to get the extra space to show up, I had to extend the backupbundle.

  1. tmutil listbackups # this got the share mounted as well as the time machine volume as can be seen with df. I was for some unknown reason unable to get the share to mount any other way
  2. In another terminal, cd to the share which contains the backupbundle. This ensures the share stays mapped (mounted) after the next steps
  3. diskutil unmount /dev/disk2s2 # in my case, the time machine volume was mounted here. Need to have the backupbundle available but not in use
  4. diskutil eject /dev/disk2 # again, disk2 for me, adjust as needed. The backupbundle should no longer be open. If it is still in use, message "hdiutil: resize: failed. Resource temporarily unavailable (35)" is received. Use lsof if you can't figure out what has it open
  5. hdiutil resize -size <size> <hostname>.backupbundle # now resize the sparse bundle. Size will be something like 5.4t, or 800g. See man hdiutil.
  6. Profit

@0xdevalias
Owner Author

0xdevalias commented Jan 20, 2023

Why can't I delete files from an APFS Time Machine backup on macOS Ventura?

It seems the normal ways of deleting a file from all Time Machine backups no longer exist/work under macOS Ventura / when using APFS for backups?

  • https://apple.stackexchange.com/questions/451063/not-able-to-delete-a-specific-file-in-time-machine
  • https://eclecticlight.co/2022/07/19/what-can-you-do-with-time-machine-backups-on-apfs/
    • What you can’t do

      • Delete items within a backup: The synthetic snapshots containing your backups are read-only. As a general principle, you can never change an APFS snapshot, and Time Machine’s backups are no exception.

    • There are two strategies for avoiding unintentionally large backups:

      • Store the large files in a folder which you put in Time Machine’s exclusion list. That will prevent them from taking up space in your backups, but won’t prevent them making local snapshots very large for the next 24 hours, until Time Machine deletes that snapshot automatically.

      • Store the large files in a volume which you put in Time Machine’s exclusion list. As the whole volume is excluded, those files won’t appear in any local snapshots, nor will they be included in your backup storage.

    • As it’s easy to create special volumes, and they share free space with all the other volumes within the same APFS container, you’ll often find storing such large files on a different volume is a better choice, then both local snapshots and full backups remain small.

  • https://derflounder.wordpress.com/2022/07/01/removing-unwanted-time-machine-backups-from-apfs-formatted-time-machine-backup-drives-on-macos-monterey/
    • This post shows how to delete an entire backup snapshot, but you still can't do so for individual files.
    • tmutil listbackups
  • https://eclecticlight.co/2021/11/09/disk-utility-now-has-full-features-for-managing-snapshots/
    • After four years in which it had offered frustratingly limited support for the new features of APFS, Disk Utility is now complete: this version has excellent support for snapshots, no matter which app created them.

    • To engage its new powers, select a volume and use the Show APFS Snapshots command in its View menu. This opens a new table view in the lower part of the main view in which the selected volume’s snapshots are listed.

    • Although their order is fixed, with the most recent at the bottom of that table, you can customise the columns shown. Most important among those are the name, date of creation, and cumulative size.

    • Select a snapshot from the list and you can mount it, show it in the Finder, rename it, and delete it, using the More button and the – tool at the bottom left. Deletion is irreversible, as you’d expect, and only to be undertaken when you’re certain that’s what you want to do.

    • Accompanying this is support for the listing and deletion of snapshots in the diskutil command tool. These commands have been added to the apfs verb there, so diskutil apfs listSnapshots disk13s1 will list all snapshots, no matter what created them, on disk13s1. The command to delete a snapshot is deleteSnapshot. man diskutil, updated in January 2021, provides additional detail.

    • These enable you to maintain snapshots and to copy items from them. Restoring a whole volume from a snapshot should normally be performed in Recovery. The remaining technique using snapshots is copying them to another volume. Although this might appear simple, copying the file system metadata of the snapshot itself is of no use, and it needs to be accompanied by the data referenced by the snapshot. Currently, that’s only available as part of the mechanism by which Time Machine backs up to APFS. Maybe one day that will be mature enough to be added to Disk Utility.
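    • Those diskutil additions look roughly like this (the disk identifier and snapshot name are placeholders; deletion needs sudo and is irreversible):

```shell
# Guarded so this is a no-op on non-macOS systems.
if command -v diskutil >/dev/null 2>&1; then
  # List all snapshots on a volume, no matter what created them:
  diskutil apfs listSnapshots disk13s1
  # Delete one by name (a -uuid form also exists):
  #   sudo diskutil apfs deleteSnapshot disk13s1 \
  #     -name com.apple.TimeMachine.2023-01-29-101500.local
fi
```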

And as a bonus link, how to exclude things from being backed up in the first place:

  • https://eclecticlight.co/2021/08/03/knowing-what-not-to-back-up-and-how/
    • There are four different ways to exclude items from Time Machine backups:

      • add them to the list of exclusions revealed by clicking the Options… button in the Time Machine pane

        • When you use the Time Machine pane’s list of exclusions, it automatically excludes any volumes as if you’d used tmutil addexclusion -v, and other items are excluded as if you’d used tmutil addexclusion -p.

      • use the tmutil addexclusion command to exclude whole volumes by their UUID, with the -v option

        • tmutil addexclusion -v is only used to exclude whole volumes, and recognises them by their UUID rather than volume name or mountpoint, to ensure this is most reliable. To do this in Terminal, you’ll need to give the tmutil command or Terminal Full Disk Access in the Security & Privacy pane, and run the command using sudo. If you erase the volume, its UUID will change, and any existing exclusion will no longer apply.

      • use the tmutil addexclusion command to exclude folders and files by their path, with the -p option

        • tmutil addexclusion -p shouldn’t be used to exclude volumes, but is intended for folders and individual files. It recognises them by path, so when the item moves it’s effectively removed from the exclusion list. It too requires Full Disk Access and sudo.

      • use the tmutil addexclusion command to exclude individual folders and files, which is the default

        • tmutil addexclusion without any options is intended for use with files and folders, and works very differently. Rather than adding an exclusion path, it attaches an extended attribute (xattr) to the item, which ensures that Time Machine ignores it when making backups. The xattr is of type com.apple.metadata:com_apple_backup_excludeItem, and consists of a property list containing one item: <string>com.apple.backupd</string>

    • you don’t even need to use the tmutil addexclusion command to exclude items: you can add the xattr using a utility such as my free app xattred.

    • tmutil also provides the removeexclusion command to remove any of its three different classes of exclusion. You’ll need to use the same option that you invoked when you created the exclusion – -v for volumes, -p for paths, and no option for the xattr.

    • To query whether a volume, folder or file is excluded from being backed up, use the command tmutil isexcluded followed by the path.

    • There are also circumstances in which you might want the active files in an important project backed up frequently while you’re working on them, but don’t want all those files permanently backed up. An excellent solution to that, suggested here by Duncan, is to create a new APFS volume to contain them, then include that in your backups while you need them. When the project is finished and you only want to back up its key files, move those to another volume which is fully backed up, and delete the working volume from your backup storage. You’ll then be able to reclaim a lot of free space. This is the only way in Big Sur’s Time Machine to APFS that you can do this, as its snapshot backups don’t allow you to delete individual files or folders, only complete backups, or volume backup sets.
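    • Pulling those exclusion mechanisms together, a sketch (the target path is hypothetical; the -v and -p forms need Full Disk Access and sudo):

```shell
TARGET="$HOME/Projects/scratch"   # hypothetical folder to exclude

# Guarded so this is a no-op on non-macOS systems.
if command -v tmutil >/dev/null 2>&1; then
  mkdir -p "$TARGET"
  # 1. Volume exclusion, tracked by UUID (survives renames, not erases):
  #    sudo tmutil addexclusion -v /Volumes/Scratch
  # 2. Path exclusion (silently lost if the item moves):
  #    sudo tmutil addexclusion -p "$TARGET"
  # 3. Default: attach the com.apple.metadata:com_apple_backup_excludeItem
  #    xattr, so the exclusion travels with the item:
  tmutil addexclusion "$TARGET"
  # Check whether a path is excluded:
  tmutil isexcluded "$TARGET"
  # Inspect the xattr that method 3 sets:
  xattr -p com.apple.metadata:com_apple_backup_excludeItem "$TARGET"
  # Undo with the matching option (no option here, for the xattr form):
  tmutil removeexclusion "$TARGET"
fi
```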

@cognivator

cognivator commented Jan 29, 2023

Regarding band size for TM .sparsebundle on NAS...

With no modifications, my TM band size on a NAS volume defaulted to 256MB.
It's possible TM is better at selecting a band size now.

For reference:
OS: Monterey (12.5)
NAS: Synology DSM 7.1
TM Volume: 11TB - Synology Hybrid RAID (SHR), btrfs, dedicated volume, compression, data integrity, no recycle bin, no quotas
Backup disk size (max): 500GB
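
If the default band size isn't what you want, you can sidestep TM's choice by creating the sparsebundle yourself with hdiutil; a sketch, with placeholder size, volume and bundle names (note sparse-band-size is counted in 512-byte sectors):

```shell
# 256 MB bands expressed in 512-byte sectors:
BAND_MB=256
BAND_SECTORS=$(( BAND_MB * 1024 * 1024 / 512 ))
echo "$BAND_SECTORS"   # 524288

# hdiutil is macOS-only; size, volume name and bundle name are placeholders:
#   hdiutil create -size 500g -type SPARSEBUNDLE -fs APFS \
#     -imagekey sparse-band-size=$BAND_SECTORS \
#     -volname "Time Machine" MyMac.backup.sparsebundle
```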

@0xdevalias
Owner Author

@cognivator Interesting.. Thanks for sharing! I'll have to get around to creating a new NAS backup from macOS Ventura sometime and see what it defaults to for me!
