s3: Storj provider: fix server-side copy of files bigger than 5GB

Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multipart server-side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum, so rclone will never use multipart copies for
files on Storj. This includes files larger than 5 GiB, which
(according to AWS documentation) must be copied with multipart copy.
This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
Kaloyan Raev 2024-10-16 17:33:01 +03:00 committed by Nick Craig-Wood
parent 53ff3b3b32
commit 75257fc9cd


@@ -3470,6 +3470,10 @@ func setQuirks(opt *Options) {
 			opt.ChunkSize = 64 * fs.Mebi
 		}
 		useAlreadyExists = false // returns BucketAlreadyExists
+		// Storj doesn't support multi-part server side copy:
+		// https://github.com/storj/roadmap/issues/40
+		// So make cutoff very large which it does support
+		opt.CopyCutoff = math.MaxInt64
 	case "Synology":
 		useMultipartEtag = false
 		useAlreadyExists = false // untested