CEPH uses a special bucket form `tenant:bucket` for multi-tenant
access using S3 as documented here:
https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
However when doing multipart uploads, the `Bucket` returned in the reply
from `CreateMultipart` was missing the `tenant:` prefix, and rclone was
using that value to build the `UploadPart` request. This caused a 404
error. This may be a CEPH bug, but it is easy to work around.
This changes the code so that `UploadPart` uses the `Bucket` and `Key`
that we passed to `CreateMultipart`, rather than the ones returned from
`CreateMultipart`, which fixes the problem.
See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
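A minimal sketch of the idea (aws-sdk-go-v2 style; the variable and helper names are illustrative, not rclone's actual code, and `PartNumber` is a pointer only in recent SDK releases):

```go
package main

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadFirstPart passes the same bucket ("tenant:bucket") and key to
// UploadPart that were given to CreateMultipartUpload, rather than the
// Bucket/Key echoed back in the response, which CEPH returns without the
// "tenant:" prefix.
func uploadFirstPart(ctx context.Context, client *s3.Client, bucket, key string, body io.Reader) error {
	mOut, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket), // e.g. "tenant:bucket"
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String(bucket), // reuse our value, not mOut.Bucket
		Key:        aws.String(key),    // reuse our value, not mOut.Key
		UploadId:   mOut.UploadId,
		PartNumber: aws.Int32(1), // *int32 in recent SDK releases
		Body:       body,
	})
	return err
}
```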
Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:
corrupted on transfer: sizes differ src 0 vs dst 999
This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.
This fixes the problem by disabling the middleware that does that
overriding.
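A sketch of the workaround (the middleware ID and the step it lives in are taken from memory of the SDK's accept-encoding package and should be checked against the SDK source):

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go/middleware"
)

// newClientWithoutGzipOverride drops the SDK middleware that forces
// Accept-Encoding back to "identity", so our own "Accept-Encoding: gzip"
// header survives.
func newClientWithoutGzipOverride(opts s3.Options) *s3.Client {
	return s3.New(opts, func(o *s3.Options) {
		o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
			// Ignore the result so this is a no-op if the middleware is absent.
			_, _ = stack.Finalize.Remove("DisableAcceptEncodingGzip")
			return nil
		})
	})
}
```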
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.
This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
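A sketch of the quirk (the function shape and field names are illustrative, not rclone's actual code):

```go
package main

import (
	"math"

	"github.com/rclone/rclone/fs"
)

// setCopyQuirk raises the copy cutoff so high that multipart server side
// copy is never chosen, since Storj returns NotImplemented for
// UploadPartCopy. Single part Copy works even above 5GB here.
func setCopyQuirk(provider string, copyCutoff *fs.SizeSuffix) {
	if provider == "Storj" {
		*copyCutoff = math.MaxInt64
	}
}
```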
This will ensure no `Content-MD5` headers are sent and ensure ETags are not
interpreted as MD5 sums. `X-Amz-Meta-Md5chksum` will be set on all objects
whether uploaded as a single part or as a multipart upload.
This also sets "no_check_bucket = true".
This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.
However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.
This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
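A minimal sketch of the intended precedence (the option struct and field names are stand-ins, not rclone's actual code):

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
)

// Options is a stand-in for the relevant rclone config options.
type Options struct {
	EnvAuth         bool
	AccessKeyID     string
	SecretAccessKey string
	SessionToken    string
}

// credentialsProvider uses static keys when both are supplied, and only
// falls back to the environment/shared-config chain when env_auth is set
// and the keys are blank.
func credentialsProvider(ctx context.Context, opt Options) (aws.CredentialsProvider, error) {
	if opt.AccessKeyID != "" && opt.SecretAccessKey != "" {
		// Static credentials win, even if env_auth = true.
		return credentials.NewStaticCredentialsProvider(opt.AccessKeyID, opt.SecretAccessKey, opt.SessionToken), nil
	}
	if opt.EnvAuth {
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			return nil, err
		}
		return cfg.Credentials, nil
	}
	// Other auth methods (anonymous access etc.) handled elsewhere.
	return aws.AnonymousCredentials{}, nil
}
```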
Fixes #8067
SDK v2 conversion
Changes
- `--s3-sts-endpoint` is no longer supported
- `--s3-use-unsigned-payload` added to control use of trailer checksums (needed for non-AWS providers)
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, S3 returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html
Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (e.g. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.
After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.
See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
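A hedged sketch of the check (the error unwrapping is typical aws-sdk-go-v2 usage, not rclone's exact code):

```go
package main

import (
	"errors"
	"net/http"

	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
)

// isNotFound reports whether an error from HeadObject should be taken as
// the object (or the requested version) not existing. 405 on a versionId
// means the version is a delete marker.
func isNotFound(err error) bool {
	var respErr *awshttp.ResponseError
	if errors.As(err, &respErr) {
		switch respErr.HTTPStatusCode() {
		case http.StatusNotFound, http.StatusMethodNotAllowed:
			return true
		}
	}
	return false
}
```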
`remote` has been converted with `ToStandardPath` a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a line feed character.
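A small sketch of the comparison (shown with rclone's lib/encoder; the function and variable names are illustrative):

```go
package main

import "github.com/rclone/rclone/lib/encoder"

// sameDir compares a listed remote against the directory being listed.
// Both sides must go through ToStandardPath, or names containing control
// characters such as a line feed will not compare equal.
func sameDir(enc encoder.MultiEncoder, remote, directory string) bool {
	return enc.ToStandardPath(remote) == enc.ToStandardPath(directory)
}
```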
GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5G so any files bigger
than 5G that rclone attempts to server side copy will fail.
This patch works around the problem by adding a quirk for GCS which raises
--s3-copy-cutoff to the maximum. This means that rclone will never use
multi-part copies for files in GCS. This includes files bigger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. However this seems to work with GCS.
See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
- Changes
  - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
  - Enable `Content-MD5` integrity checks
  - Remove locking after code audit
- Documentation
  - Factor out documentation into separate file
  - Add Quickstart to docs
  - Add Bugs section to docs
  - Add experimental tag to docs
  - Add rclone provider to s3 backend docs
- Fixes
  - Correct quirks in s3 backend
  - Change fmt.Printlns into fs.Logs
  - Make metadata storage per backend not global
  - Log on startup if anonymous access is enabled
- Coding style fixes
  - Rename fs to vfs to avoid confusion with the rest of the rclone code
  - Rename db to b for *s3Backend
Fixes #7062
Before this change, if you tried to create a bucket that already
existed but was owned by someone else, rclone did not return an error.
On providers that return the AlreadyOwnedByYou error code, rclone will
now return an error when the bucket exists but is owned by someone else,
and no error when creating an existing bucket that you already own.
This introduces a new provider quirk, which has been set or cleared
for as many providers as could be tested. It can be overridden by the
--s3-use-already-exists flag.
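A hedged sketch of the error handling (aws-sdk-go-v2 types; the quirk plumbing is omitted):

```go
package main

import (
	"context"
	"errors"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// makeBucket treats BucketAlreadyOwnedByYou as success, while an existing
// bucket owned by someone else propagates as an error.
func makeBucket(ctx context.Context, client *s3.Client, bucket string) error {
	_, err := client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String(bucket),
	})
	var ownedByYou *types.BucketAlreadyOwnedByYou
	if errors.As(err, &ownedByYou) {
		return nil // we already own this bucket - not an error
	}
	return err // includes BucketAlreadyExists: owned by someone else
}
```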
Fixes #7351
In this commit:
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:
panic: runtime error: slice bounds out of range [56:55]
The fix is to do the modification of the remote after removing the
prefix.
See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
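An illustration of the failure mode (standalone Go, not the rclone code): measuring the prefix before the remote is modified, then slicing afterwards, can run past the end of the shortened string - which is exactly what happened with directory markers.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	prefix := "bucket/dir/"
	remote := "bucket/dir/" // a directory marker: remote equals the prefix
	if strings.HasPrefix(remote, prefix) {
		remote = strings.TrimSuffix(remote, "/") // modification shortens remote
		fmt.Println(remote[len(prefix):])        // panics: slice bounds out of range [11:10]
		// The fix: strip the prefix first, then modify what is left.
	}
}
```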