Fix a corner case where a listener added by calling OnFinish() while the job was executing finish() would never be executed.
This change should not cause compatibility issues, as consumers should not make assumptions about whether listeners will be run in a new goroutine or not.
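A minimal sketch of the idea, assuming a job type like this (names hypothetical, not rclone's actual implementation):

```go
import "sync"

type Job struct {
	mu        sync.Mutex
	finished  bool
	listeners []func()
}

// OnFinish runs fn when the job finishes. If finish() has already run,
// the listener is executed directly (in a new goroutine) instead of
// being appended to a list that nobody will drain again.
func (j *Job) OnFinish(fn func()) {
	j.mu.Lock()
	if j.finished {
		j.mu.Unlock()
		go fn()
		return
	}
	j.listeners = append(j.listeners, fn)
	j.mu.Unlock()
}

func (j *Job) finish() {
	j.mu.Lock()
	j.finished = true
	listeners := j.listeners
	j.mu.Unlock()
	for _, fn := range listeners {
		go fn()
	}
}
```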
Before this change, the --links flag when using the VFS overrode the
--links flag for the local backend, which meant the local backend
needed explicit config to use links.
This fixes the problem by making the --links flag global and adding
new --local-links and --vfs-links flags to control the features
individually if required.
This is somewhat limited in that it only resolves symlinks when files
are opened. This will work fine for the intended use in rclone mount,
but is probably inadequate for the other servers.
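For illustration, resolve-on-open can be as simple as this (a sketch, not the actual VFS code; the function name is hypothetical):

```go
import (
	"os"
	"path/filepath"
)

// openFollowingLinks resolves symlinks only at open time, which is
// enough for rclone mount but not for servers that need to see link
// targets in listings.
func openFollowingLinks(name string) (*os.File, error) {
	resolved, err := filepath.EvalSymlinks(name)
	if err != nil {
		return nil, err
	}
	return os.Open(resolved)
}
```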
An incorrect nil check was spotted while reviewing the code for
CVE-2024-45337.
As far as we know the nil check has never actually failed. If it had,
the consequence would have been a nil pointer exception.
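For illustration only (hypothetical code, not the actual rclone source), the bug class looks like this: the check guards the wrong branch, so the pointer is dereferenced exactly when it is nil.

```go
// Incorrect: dereferences perms precisely when it is nil.
if perms == nil {
	fmt.Println(perms.Extensions) // nil pointer dereference
}

// Correct:
if perms != nil {
	fmt.Println(perms.Extensions)
}
```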
This commit addresses CVE-2024-45337, a potential auth bypass for
`rclone serve sftp`.
https://nvd.nist.gov/vuln/detail/CVE-2024-45337
However, after review of the code, rclone is **not** affected as it
handles the authentication correctly. Rclone already uses the
Extensions field of the Permissions return value from the various
authentication callbacks to record data associated with the
authentication attempt as suggested in the vulnerability report.
This commit includes the recommended update to golang.org/x/crypto
anyway so that this is visible in the changelog.
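For reference, a minimal sketch of the safe pattern described above with golang.org/x/crypto/ssh (simplified; rclone's actual serve sftp code is more involved):

```go
import (
	"errors"

	"golang.org/x/crypto/ssh"
)

func serverConfig(isAuthorized func(ssh.PublicKey) bool) *ssh.ServerConfig {
	return &ssh.ServerConfig{
		PublicKeyCallback: func(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
			if !isAuthorized(key) {
				return nil, errors.New("unknown public key")
			}
			// Record which key actually authenticated in Extensions and
			// read it back from ServerConn.Permissions after the
			// handshake, rather than trusting the last key the callback
			// happened to see - the misuse CVE-2024-45337 warns about.
			return &ssh.Permissions{
				Extensions: map[string]string{
					"pubkey-fp": ssh.FingerprintSHA256(key),
				},
			}, nil
		},
	}
}
```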
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)
---
updated-dependencies:
- dependency-name: golang.org/x/crypto
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This adds support for the OAuth client credentials flow, which
requires some special handling in onedrive:
- Special scopes are required
- The tenant is required
- The tenant needs to be used in the oauth URLs
This also:
- refactors the oauth config creation so it isn't duplicated
- defaults the drive_id to the previous one in the config
- updates the documentation
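As an illustration, fetching a token with the client credentials flow against the Microsoft identity platform looks roughly like this using golang.org/x/oauth2 (a sketch, not rclone's actual config plumbing):

```go
import (
	"context"
	"net/http"

	"golang.org/x/oauth2/clientcredentials"
)

func newClientCredentialsClient(ctx context.Context, tenant, clientID, clientSecret string) *http.Client {
	conf := &clientcredentials.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		// The tenant has to appear in the token URL.
		TokenURL: "https://login.microsoftonline.com/" + tenant + "/oauth2/v2.0/token",
		// The special ".default" scope requests the application
		// permissions configured for the app registration.
		Scopes: []string{"https://graph.microsoft.com/.default"},
	}
	// Tokens are fetched directly with the client ID and secret:
	// no browser and no refresh token involved.
	return conf.Client(ctx)
}
```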
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This commit reorganises the oauth code to use our own config struct,
which holds all the info for both the normal oauth method and the
client credentials flow method.
It updates all backends which use lib/oauthutil to use the new config
struct, which shouldn't change any functionality.
It also adds code for dealing with the client credentials flow config,
which doesn't require the use of a browser and doesn't have or need a
refresh token.
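A sketch of what such a combined struct might look like (field names hypothetical, not lib/oauthutil's actual ones):

```go
// Config carries everything both oauth methods need.
type Config struct {
	AuthURL      string // browser-based flow only
	TokenURL     string
	ClientID     string
	ClientSecret string
	Scopes       []string
	// When true, use the client credentials flow: no browser and
	// no refresh token, tokens come straight from TokenURL.
	ClientCredentials bool
}
```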
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This now compiles rclone with CGO_ENABLED=0, which is closer to the
release build.
It also removes pikpak when testing s3, as the two depend on each
other.
Mounting will always fail when rclone is installed from the snap
package manager, but the error message generated when trying to mount
from a snap install was not very helpful. Improve the error message.
Fixes #8208
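One way such a check could look (a hypothetical sketch; snap sets the SNAP environment variable for confined apps):

```go
import (
	"errors"
	"os"
)

// checkMountAllowed fails early with a clear message instead of
// letting mount die with an obscure error under snap confinement.
func checkMountAllowed() error {
	if _, ok := os.LookupEnv("SNAP"); ok {
		return errors.New("mount is not supported when rclone is installed via snap: " +
			"reinstall rclone using one of the methods at https://rclone.org/install/")
	}
	return nil
}
```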
This changes the OpenWriterAt implementation to make client/fd
handling atomic.
This PR stabilizes multi-threaded uploads of bigger files. The root
cause boils down to the old "fun" property of pcloud's fileops API:
sessions are bound to TCP connections. This forces us to use an http
client with only a single connection underneath.
With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks will fail, and the probability of that increases
with larger files.
As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
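Conceptually the change looks like this (a sketch; rclone's real code goes through its own http layers, and the function name is hypothetical):

```go
import (
	"context"
	"io"
	"net/http"
)

// uploadChunk gives every chunk (and every retry) a fresh client, and
// therefore a fresh TCP connection, since pcloud binds upload sessions
// to the underlying connection.
func uploadChunk(ctx context.Context, url string, body io.Reader) error {
	client := &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost: 1, // a single connection per chunk
		},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, body)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```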
This proposal expands the current docker volume plugin troubleshooting steps to include a state cleanup command and a reminder that uninstalling/reinstalling the plugin doesn't clean up those cache files.
Co-authored-by: albertony <12441419+albertony@users.noreply.github.com>
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3, as documented here:
https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
However, when doing multipart uploads, the `tenant:` was missing from
the `Bucket` in the `CreateMultipart` reply which rclone was using to
build the `UploadPart` request. This caused a 404 failure. This may be
a CEPH bug, but it is easy to work around.
This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart`, rather than the ones returned from
`CreateMultipart`, which fixes the problem.
See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
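The workaround, sketched with aws-sdk-go-v2 (rclone's actual code differs in the details):

```go
import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func uploadFirstPart(ctx context.Context, client *s3.Client, bucket, key string, chunk io.Reader) error {
	// bucket may have the form "tenant:bucket" on CEPH.
	mout, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		// Reuse our own bucket and key, not mout.Bucket and mout.Key:
		// CEPH echoes Bucket back without the "tenant:" prefix.
		Bucket:     aws.String(bucket),
		Key:        aws.String(key),
		UploadId:   mout.UploadId,
		PartNumber: aws.Int32(1),
		Body:       chunk,
	})
	return err
}
```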