archive:
find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0|tar -cvf - --null -T - |openssl aes-256-cbc -a -salt -pass pass:password | aws s3 cp - s3://yours3backupbucket/2013.archive --storage-class DEEP_ARCHIVE --expected-size 1000000000000
restore:
you need to initiate a restore of your archive first and wait about 48 hours before the data is downloadable.
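A minimal sketch of initiating the restore with aws s3api restore-object, assuming the same bucket and key as above (for DEEP_ARCHIVE the Bulk tier takes up to 48 hours, the Standard tier about 12):
# ask S3 to stage a copy of the object for 7 days, using the Bulk retrieval tier
aws s3api restore-object --bucket yours3backupbucket --key 2013.archive --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'
# poll until the Restore field in the response reports ongoing-request="false"
aws s3api head-object --bucket yours3backupbucket --key 2013.archive
Once the restore has finished, download and decrypt: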
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d | tar xvf -
list only:
aws s3 cp s3://yours3backupbucket/2013.archive - | openssl enc -aes-256-cbc -a -d | tar tvf -
Tips:
- use the --expected-size parameter (in bytes) of aws s3 cp if you need to pipe a larger archive into Glacier (bigger than 5 GB). The native Glacier service supports archives of up to 40 TB, but note that a single S3 object is capped at 5 TB. A sketch for estimating the byte count follows after these tips.
- you can change the S3 storage class, but if you want to keep costs to a minimum, use DEEP_ARCHIVE. Choices: STANDARD | REDUCED_REDUNDANCY (don't use it; it's expensive) | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE.
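To pick a value for --expected-size, you can sum the input sizes up front; a minimal sketch using GNU find and du (the base64 encoding from openssl -a grows the stream by about a third, and overestimating is harmless, so round the total up generously):
# sum the byte sizes of all matching files; the last line printed is the grand total
find "2013" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0 | du -bc --files0-from=- | tail -n 1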
EDIT 2021:
I would now add compression to tar, the -pbkdf2 and -iter parameters to openssl, and matching parameters for the openssl restore. This will not work with openssl-1.0.2, but does with openssl-1.1.1.
archive:
find "2021" -type f -regextype posix-egrep -regex ".*\.(NEF|nef)$" -print0|tar -cvz --null -T - |openssl aes-256-cbc -a -salt -pbkdf2 -iter 100000 -pass pass:password | aws s3 cp - s3://yours3backupbucket/2021.archive --storage-class DEEP_ARCHIVE --expected-size 1000000000000
restore:
as with the 2013 archive, you need to initiate a restore first (see the aws s3api restore-object sketch above) and wait about 48 hours, then issue:
aws s3 cp s3://yours3backupbucket/2021.archive - | openssl aes-256-cbc -a -salt -pbkdf2 -iter 100000 -pass pass:password -d | tar -xvzf -
list:
aws s3 cp s3://yours3backupbucket/2021.archive - | openssl aes-256-cbc -a -salt -pbkdf2 -iter 100000 -pass pass:password -d | tar -tvzf -
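To verify that an upload actually landed in the intended storage class, head-object works without triggering a restore; a minimal sketch, assuming the 2021 key from above:
# the response should include "StorageClass": "DEEP_ARCHIVE"
aws s3api head-object --bucket yours3backupbucket --key 2021.archive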