Before this change, if the server returned a 302 redirect when opening
a file, rclone would follow the redirect but drop the
Authorization: header. This is a sensible thing to do for security
reasons but breaks some setups.
This patch adds the --webdav-auth-redirect flag which makes rclone
preserve the Authorization header just for this kind of request.
See: https://forum.rclone.org/t/webdav-401-unauthorized-when-server-redirects-to-another-domain/39292
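A minimal sketch of the idea (not rclone's actual implementation): a
custom CheckRedirect on Go's http.Client can copy the Authorization
header from the original request onto the redirected one, which is
roughly what --webdav-auth-redirect enables.

```go
package example

import "net/http"

// newAuthRedirectClient returns an http.Client which keeps the
// Authorization header when following a redirect. Go's default client
// drops sensitive headers when the redirect crosses to a different host.
func newAuthRedirectClient() *http.Client {
	return &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) > 0 {
				if auth := via[0].Header.Get("Authorization"); auth != "" {
					req.Header.Set("Authorization", auth)
				}
			}
			return nil
		},
	}
}
```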
Before this fix the smb backend could panic if a stat call failed.
This fix makes it return an error instead.
As a side effect, we should also do one fewer stat call on upload.
Fixes #8106
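A generic sketch of the shape of such a fix (the helper and its use of
io/fs are illustrative, not the smb backend's actual code): check the
stat error before using the result instead of dereferencing a nil value.

```go
package example

import (
	"fmt"
	"io/fs"
)

// statSize returns the size of name in fsys, propagating a failed stat
// as an error instead of letting a nil FileInfo cause a panic later on.
func statSize(fsys fs.FS, name string) (int64, error) {
	fi, err := fs.Stat(fsys, name)
	if err != nil {
		return 0, fmt.Errorf("stat %q failed: %w", name, err)
	}
	return fi.Size(), nil
}
```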
NewLazySystemDLL will only search the Windows System directory for the
DLL if the name is a base name (like "advapi32.dll"), which prevents
DLL preloading attacks.
To get access to NewLazySystemDLL, imports of syscall need to be
swapped with golang.org/x/sys/windows.
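A short illustration of the swap (the DLL and proc names are only
examples):

```go
//go:build windows

package example

import "golang.org/x/sys/windows"

// Before: syscall.NewLazyDLL("advapi32.dll") searches the full DLL search
// path, which is open to DLL preloading attacks.
// After: NewLazySystemDLL only looks in the Windows System directory when
// it is given a base name like "advapi32.dll".
var (
	advapi32    = windows.NewLazySystemDLL("advapi32.dll")
	regCloseKey = advapi32.NewProc("RegCloseKey")
)
```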
According to the SDK docs
> FileRequestIntent is required when using TokenCredential for
> authentication. Acceptable value is backup.
This sets the correct option in the SDK. It is set for all types of
authentication, but the SDK seems clever enough not to send it when it
isn't needed.
This fixes the error
> MissingRequiredHeader An HTTP header that's mandatory for this
> request is not specified. x-ms-file-request-intent
Fixes #8241
Before this change, the --links flag when using the VFS overrode the
--links flag for the local backend, which meant the local backend
needed explicit config to use links.
This fixes the problem by making the --links flag global and adding
new --local-links and --vfs-links flags to control the features
individually if required.
This adds support for the client credentials flow oauth method, which
requires some special handling in onedrive (see the sketch after this
list):
- Special scopes are required
- The tenant is required
- The tenant needs to be used in the oauth URLs
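A minimal sketch of what the client credentials token request looks
like for onedrive, using golang.org/x/oauth2/clientcredentials (the
credential values and the Graph scope here are illustrative
assumptions, not rclone's exact code):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	// In rclone these would come from the config file.
	tenant, clientID, clientSecret := "my-tenant-id", "my-client-id", "my-client-secret"

	conf := &clientcredentials.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		// The tenant has to be baked into the token URL.
		TokenURL: "https://login.microsoftonline.com/" + tenant + "/oauth2/v2.0/token",
		// Client credentials need the special ".default" scope.
		Scopes: []string{"https://graph.microsoft.com/.default"},
	}
	tok, err := conf.Token(context.Background())
	if err != nil {
		fmt.Println("token error:", err)
		return
	}
	fmt.Println("got token expiring at", tok.Expiry)
}
```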
This also:
- refactors the oauth config creation so it isn't duplicated
- defaults the drive_id to the previous one in the config
- updates the documentation
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This commit reorganises the oauth code to use our own config struct
which has all the info for the normal oauth method and also the client
credentials flow method.
It updates all backends which use lib/oauthutil to use the new config
struct which shouldn't change any functionality.
It also adds code for dealing with the client credential flow config
which doesn't require the use of a browser and doesn't have or need a
refresh token.
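Roughly the shape such a struct could take, for illustration only (the
field names are assumptions, not lib/oauthutil's actual definition):

```go
package example

// Config is an illustrative sketch: one struct carries enough
// information for both the browser-based flow and the client
// credentials flow.
type Config struct {
	AuthURL      string // authorization endpoint (browser-based flow only)
	TokenURL     string // token endpoint (used by both flows)
	ClientID     string
	ClientSecret string
	Scopes       []string
	RedirectURL  string // only meaningful for the browser-based flow
	// When true, no browser is opened and no refresh token is stored:
	// tokens are fetched directly from TokenURL with ClientID/ClientSecret.
	UseClientCredentials bool
}
```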
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This changes the OpenWriterAt implementation to make client/fd
handling atomic.
This PR stabilizes multi-threaded uploads of bigger files. The root
cause boils down to the old "fun" property of pcloud's fileops API:
sessions are bound to TCP connections. This forces us to use an http
client with only a single connection underneath.
With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks will fail. The probability of that happening
increases with larger files.
As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
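A rough sketch of the idea, under assumptions (the helper name and the
plain PUT request are illustrative, not pcloud's real fileops API): each
chunk builds its own single-connection client, so a dropped TCP
connection only costs that chunk and its retries.

```go
package example

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// uploadChunk is illustrative only: a fresh single-connection client per
// chunk means a broken TCP connection invalidates one chunk's session
// (and its retries), not every remaining chunk.
func uploadChunk(ctx context.Context, url string, chunk io.Reader) error {
	client := &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost:     1,
			MaxIdleConnsPerHost: 1,
		},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, chunk)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("chunk upload failed: %s", resp.Status)
	}
	return nil
}
```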
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:
https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.
This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the ones returned from
`CreateMultipart`, which fixes the problem.
See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
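A sketch of the workaround with the AWS SDK for Go v2 (the bucket and
key values are examples, and a recent SDK version with a pointer
PartNumber is assumed): feed the same Bucket and Key into UploadPart
that went into CreateMultipartUpload, rather than the ones echoed back
in the response.

```go
package example

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadFirstPart is illustrative: with a CEPH multitenant bucket the
// Bucket echoed back by CreateMultipartUpload loses the "tenant:" prefix,
// so UploadPart must reuse the values we sent, not the ones returned.
func uploadFirstPart(ctx context.Context, client *s3.Client, body io.Reader) error {
	bucket, key := "tenant:bucket", "path/to/object"

	create, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String(bucket), // not create.Bucket - "tenant:" is missing there
		Key:        aws.String(key),    // not create.Key
		UploadId:   create.UploadId,
		PartNumber: aws.Int32(1),
		Body:       body,
	})
	return err
}
```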
Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:
> corrupted on transfer: sizes differ src 0 vs dst 999
This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.
This fixes the problem by disabling the middleware that does that
overriding.
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.
This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that
Linux does not support setting permissions on symlinks, so rclone
emits a debug message in this case.
This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and was incorrectly setting the
target of the symlink for btime.
See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
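A sketch under assumptions (Linux only, illustrative helper name) of
what the link-safe variants do:

```go
//go:build linux

package example

import (
	"errors"
	"os"

	"golang.org/x/sys/unix"
)

// setSymlinkMetadata is illustrative only. os.Lchown changes the owner of
// the link itself rather than its target. Fchmodat with AT_SYMLINK_NOFOLLOW
// would do the same for the mode, but Linux doesn't support that, so we
// treat EOPNOTSUPP as "log a debug message and carry on", as rclone does.
func setSymlinkMetadata(path string, uid, gid int, mode os.FileMode) error {
	if err := os.Lchown(path, uid, gid); err != nil {
		return err
	}
	err := unix.Fchmodat(unix.AT_FDCWD, path, uint32(mode.Perm()), unix.AT_SYMLINK_NOFOLLOW)
	if errors.Is(err, unix.EOPNOTSUPP) {
		return nil
	}
	return err
}
```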
We changed the precision of the onedrive personal backend in
c053429b9c from 1ms to 1s.
However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision`, which compares with the
configured precision, so hopefully they won't break again.
Before this change, upgrading the sftp package to v1.13.7 caused a
deadlock in the tests.
This was caused by additional locking in the sftp package exposing a
bad choice by the rclone code.
See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
Currently rclone allows us to specify the path to a public ssh
certificate file.
That works great for cases where we can specify a key path, like local
envs.
If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone)
there is currently a limitation that users can specify only the rclone
config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.
This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.
The precision was set to 1ms as part of:
1473de3f04 onedrive: add metadata support
which was released in v1.66.0.
However it appears that not all OneDrive personal accounts support 1ms
time precision, and that Microsoft may be migrating accounts away from
this to backends which only support 1s precision.
Fixes #8101
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.
This change fixes the error by copying to a temporary name first, then moving it
to the real name.
There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.
This should (hopefully) fix 8 bisync integration tests.
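A sketch of the workaround's control flow, with everything hypothetical
(the copy, move and delete operations are passed in as stand-ins for
whatever the box backend really calls):

```go
package example

import "context"

// copyOverExisting copies to a temporary name first, so box doesn't reject
// the server-side copy with item_name_in_use, then moves the temporary
// object to the real destination name.
func copyOverExisting(
	ctx context.Context,
	dstName string,
	copyTo func(ctx context.Context, name string) (id string, err error), // hypothetical
	moveTo func(ctx context.Context, id, name string) error, // hypothetical
	remove func(ctx context.Context, id string) error, // hypothetical
) error {
	tmpName := dstName + ".rclone-copy-tmp"
	tmpID, err := copyTo(ctx, tmpName)
	if err != nil {
		return err
	}
	if err := moveTo(ctx, tmpID, dstName); err != nil {
		_ = remove(ctx, tmpID) // best effort: don't leave the temporary copy behind
		return err
	}
	return nil
}
```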