This commit reorganises the oauth code to use our own config struct
which has all the info for the normal oauth method and also the client
credentials flow method.
It updates all backends which use lib/oauthutil to use the new config
struct which shouldn't change any functionality.
It also adds code for dealing with the client credential flow config
which doesn't require the use of a browser and doesn't have or need a
refresh token.
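As a rough sketch, the combined struct looks something like this (field
names are illustrative, not necessarily the real lib/oauthutil API):

    // Options sketches a combined oauth config covering both flows.
    // Illustrative only - see lib/oauthutil for the real struct.
    type Options struct {
        // Standard browser-based oauth flow
        ClientID     string
        ClientSecret string
        AuthURL      string
        TokenURL     string
        RedirectURL  string
        Scopes       []string
        // Client credentials flow - no browser, no refresh token
        ClientCredentials bool
    }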
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Before this fix, testing any backend which implemented the OpenChunkWriter
gave this error:
ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload
This was due to the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
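Consumers typically map open options onto headers roughly like this, so
an internal option such as the chunk size hint must return an empty key
from Header() (or be handled explicitly) to stay out of the request. A
hedged sketch - the helper name is illustrative:

    // addHeaders copies genuine HTTP header options onto the request,
    // skipping internal options whose Header() key is empty.
    func addHeaders(req *http.Request, options []fs.OpenOption) {
        for _, option := range options {
            if key, value := option.Header(); key != "" && value != "" {
                req.Header.Set(key, value)
            }
        }
    }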
fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err) which incremented the error at the
global stats level. This led to calls to core/stats with a group=
filter returning an error count of 0 even if errors actually occurred.
This change requires the context to be provided when calling
fs.CountError. Doing so, we can retrieve the correct StatsInfo to
increment the errors from.
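The visible change at call sites is roughly:

    // Before: incremented the error on the global stats, losing the group
    err = fs.CountError(err)

    // After: the context identifies the stats group, so the correct
    // StatsInfo is incremented
    err = fs.CountError(ctx, err)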
Fixes #5865
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.
An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (e.g. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.
This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
(preserving the fix from 8d5bc7f28b).
Note that f.Root() should always point to the parent dir as of c69eb84573.
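A hedged sketch of the lookup scheme this describes (the helper names
are illustrative, not the real fs/cache API):

    // get canonicalises fsString so that a file and its parent map to
    // the same single cache entry. Illustrative sketch only.
    func get(ctx context.Context, fsString string) (fs.Fs, error) {
        f, err := newFs(ctx, fsString) // hypothetical constructor
        if err != nil && err != fs.ErrorIsFile {
            return nil, err
        }
        // For a file, f is the parent dir's Fs and f.Root() is the
        // parent, so caching under f's config string stores one entry
        // that both parent and child lookups resolve to.
        cachePut(fs.ConfigString(f), f) // hypothetical cache insert
        return f, err // err is fs.ErrorIsFile iff fsString was a file
    }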
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:
Failed to load "filter" default values: failed to initialise "filter" options:
couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
invalid character '/' looking for beginning of value
This was caused by the parser trying to parse the input string as a
JSON value.
When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally. However, this turned out not to be true.
A defined representation was therefore chosen - a comma separated
string - and this is documented and tested in this patch.
This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0
RCLONE_EXCLUDE=a,b
Would be interpreted as
--exclude "a,b"
Whereas this new code will interpret it as
--exclude "a" --exclude "b"
The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.
If a value with a `,` is needed, then use CSV escaping, eg
RCLONE_EXCLUDE="a,b"
(Note this needs to include the quotes, so at the Unix shell that would be
RCLONE_EXCLUDE='"a,b"')
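Internally this amounts to parsing the value with CSV rules rather than
as JSON - a minimal sketch using the standard library:

    // parseStringArray splits an environment value using CSV rules,
    // so `a,b` becomes ["a", "b"] while `"a,b"` stays one value.
    // Sketch only - the real parsing lives in the config code.
    func parseStringArray(val string) ([]string, error) {
        return csv.NewReader(strings.NewReader(val)).Read()
    }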
Fixes #8063
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance of whether --dump filters would
show the filters or not, as it depended on the "main" block having
been read first to set the Dump flags.
This change initialises the options blocks in a defined order -
alphabetical, but with "main" first - which fixes the problem.
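A sketch of that ordering (illustrative):

    // Sort block names alphabetically but force "main" first, so its
    // flags (e.g. Dump) are set before the other blocks are read.
    sort.Slice(names, func(i, j int) bool {
        switch {
        case names[i] == "main":
            return names[j] != "main"
        case names[j] == "main":
            return false
        default:
            return names[i] < names[j]
        }
    })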
This changes log statements from the log package to the fs package, which is
required for --use-json-log to properly output logs in JSON format. The recently
added custom linting rule, handled by ruleguard via gocritic via golangci-lint,
warns about these and suggests the alternative. Fixing was therefore basically
running "golangci-lint run --fix", although some manual fixup, mainly of
imports, was necessary following that.
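The mechanical shape of the change is roughly:

    // Before: bypasses rclone's logging, so --use-json-log has no effect
    log.Printf("renaming %q", name)

    // After: routed through the fs package, honouring --use-json-log
    fs.Logf(nil, "renaming %q", name)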
Before this change, rclone ignored --password-command when setting the
config password, only using it when decrypting an existing config file.
This change allows for offloading the password storage/generation into
external hardware key or other protected password storage.
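For example, the config password can now come from an external password
manager when creating or updating the config (the command shown is
illustrative):

    rclone config --password-command "pass show rclone/config"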
Fixes #7859
Before this change we used the repo with an initial uppercase `U`. However, it is now canonically spelled with a lower case `u`.
This package is too old to have a go.mod but the README clearly states the desired capitalization.
In 4b0d4b818a the
recommended capitalization was changed to lower case.
Co-authored-by: John Oxley <joxley@meta.com>
The code currently hardcodes `text/srt` for all subtitles.
`text/srt` is wrong; `application/x-subrip` seems to be the official
mime type, coming from the official mime database, at least (and it
still works with the Samsung TV I tested with). Also add that one to
`fs/mimetype.go`.
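Registering it with the standard library's mime database looks like
this (a sketch of the sort of entry fs/mimetype.go carries):

    // Ensure .srt maps to its canonical mime type rather than text/srt.
    func init() {
        _ = mime.AddExtensionType(".srt", "application/x-subrip")
    }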
Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.
After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:
- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.
When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
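On darwin the primitive behind this is the clonefile(2) syscall,
reachable from Go via golang.org/x/sys/unix - a minimal sketch:

    // clone makes an instant, deduplicated copy of src at dst on APFS.
    // On non-APFS volumes it fails (e.g. ENOTSUP), at which point the
    // caller falls back to an ordinary server-side copy.
    func clone(src, dst string) error {
        return unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW)
    }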
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead of the specified URL.
The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.
This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:
rclone serve webdav --addr unix:///tmp/my.socket remote:path
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
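Under the hood this amounts to an http.Transport whose dialer ignores
the URL's host and connects to the socket, leaving the URL to supply
the Host header and path - a rough sketch:

    // newUnixClient returns an http.Client that always dials the given
    // unix socket, whatever host the URL names. Rough sketch.
    func newUnixClient(socketPath string) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", socketPath)
                },
            },
        }
    }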
This converts the ChunkedReader into an interface and provides two
implementations, one sequential and one parallel.
This can be used to improve the performance of the VFS on high
bandwidth or high latency links.
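Roughly, the interface has this shape (a sketch; see the chunkedreader
package for the real definition):

    // ChunkedReader is satisfied by both the sequential and the
    // parallel implementation.
    type ChunkedReader interface {
        io.ReadSeekCloser
        // RangeSeek is a Seek which also takes the length of the
        // expected read, letting implementations size their chunks.
        RangeSeek(ctx context.Context, offset int64, whence int, length int64) (int64, error)
    }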
Fixes #4760
There were a lot of instances of this lint error
printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
Which were fixed by re-arranging the arguments and adding "%s".
There were quite a few genuine bugs which were found too.
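The typical fix looks like:

    // Before: non-constant format string - misbehaves if msg contains a %
    fs.Logf(nil, msg)

    // After
    fs.Logf(nil, "%s", msg)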
In corporate environments, client certificates have short life times
for added security, and they get renewed automatically. This means
that a client certificate can expire in the middle of a long running
command such as `mount`.
This commit attempts to reload the client certificates 30s before they
expire.
This will be active for all backends which use HTTP.
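A hedged sketch of the reload hook, using the standard library's
GetClientCertificate callback (names are illustrative):

    // certStore re-reads the client certificate from disk once it is
    // within 30s of expiry. Illustrative sketch only.
    type certStore struct {
        mu                sync.Mutex
        certFile, keyFile string
        cert              *tls.Certificate
    }

    func (s *certStore) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.cert == nil || time.Until(s.cert.Leaf.NotAfter) < 30*time.Second {
            cert, err := tls.LoadX509KeyPair(s.certFile, s.keyFile)
            if err != nil {
                return nil, err
            }
            // Parse the leaf so the expiry check above works next time
            leaf, err := x509.ParseCertificate(cert.Certificate[0])
            if err != nil {
                return nil, err
            }
            cert.Leaf = leaf
            s.cert = &cert
        }
        return s.cert, nil
    }

    // wired in with: tlsConfig.GetClientCertificate = store.get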