Before this fix, rclone would crash with

    panic: encoding alphabet includes duplicate symbols

when compiled with go1.22. This was fixed upstream in
https://github.com/t3rm1n4l/go-mega/issues/48 and this commit just pulls in
the fix.

Fixes #7639
mailru is unable to handle filenames with certain combining characters (for
example: йěáñ), and is therefore incapable of testing ApplyTransforms. (It is
therefore also incapable of fully supporting --no-unicode-normalization.)
The same override is applied to chunker when wrapping mailru.
Before this change, moving (renaming) a file or folder to a different name
within the same parent directory would fail, because the wrong API operations
were used ("/file/move_copy.json" and "/folder/move_copy.json", instead of the
separate "/file/rename.json" and "/folder/rename.json" endpoints that
opendrive provides for this purpose).
After this change, Move and DirMove check whether the move is within the same
parent dir. If so, "rename" is used. If not, "move_copy" is used, as before.
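A minimal sketch of the same-parent check (the helper name is hypothetical,
not the backend's actual code):

```go
package sketch

import "path"

// samePathParent reports whether a move stays within one parent directory,
// in which case the "rename" endpoints can be used instead of "move_copy".
func samePathParent(srcRemote, dstRemote string) bool {
	return path.Dir(srcRemote) == path.Dir(dstRemote)
}
```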
Before this change, nfsmount ignored the --volname flag. After this change,
the --volname flag is respected, making it possible to set a custom volume
name.
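For example (the remote name and mountpoint here are hypothetical):

    rclone nfsmount remote: /path/to/mountpoint --volname MyVolume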
macOS users should note that Finder will show the correct volume name in most
places, but a notable exception is the sidebar, which will show "localhost".
This seems to be a system limitation (at least without `sudo`), but see the
discussion at https://github.com/rclone/rclone/issues/7503#issuecomment-1933997678
for some possible workarounds.
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and would wait forever to exit --
effectively causing a deadlock that would require Ctrl+C to terminate.
After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shut down the VFS and exit.
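A minimal sketch of the mechanism, with hypothetical names (not rclone's
actual code):

```go
package sketch

// waitForShutdown illustrates the idea: the NFS handler closes
// externalUnmount when it detects the volume was unmounted from outside,
// so the mount loop no longer blocks forever waiting only for a user
// interrupt.
func waitForShutdown(sigint, externalUnmount <-chan struct{}, shutdownVFS func()) {
	select {
	case <-sigint: // e.g. Ctrl+C
	case <-externalUnmount: // e.g. ejected via the Finder UI
	}
	shutdownVFS() // clean VFS shutdown and exit either way
}
```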
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.
The issue was primarily caused by the handler's HandleLimit being set to -1.
A value of -1 is the correct default for a NullAuthHandler, but not for a
CachingHandler, which interprets -1 not as "no limit" but as "no cache".
This change sets a high default of 1000000 and gives the user control over it
with a new --nfs-cache-handle-limit flag (available in both `serve nfs` and
`nfsmount`). A minimum of 5 is enforced, as anything lower is insufficient to
support directory listing.
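Roughly, the normalisation behaves like this (a sketch, not the exact code):

```go
package sketch

// normaliseHandleLimit sketches the new behaviour: negative values fall
// back to the high default, and a floor of 5 keeps directory listing
// working.
func normaliseHandleLimit(limit int) int {
	if limit < 0 {
		limit = 1000000 // default set by this change
	}
	if limit < 5 {
		limit = 5 // enforced minimum
	}
	return limit
}
```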
The flushCache() function has a bug that causes it to never actually
flush the cache. Specifically, it checks whether DirCacheFlush is nil,
but never calls it.
The tests are already passing without flushing the dir cache, so this
commit just deletes flushCache() and its call sites.
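For reference, the buggy pattern looked roughly like this (stand-in types,
not the exact code):

```go
package sketch

// Features is a stand-in for rclone's optional-features struct.
type Features struct {
	DirCacheFlush func()
}

// flushCacheBuggy reproduces the bug: the optional DirCacheFlush feature
// was checked for nil but never actually invoked.
func flushCacheBuggy(ft *Features) {
	if ft.DirCacheFlush == nil {
		return // backend doesn't support flushing
	}
	// BUG: ft.DirCacheFlush() was never called here, so nothing was flushed.
}
```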
Fixes rclone/rclone#7623
This should be more efficient for the purposes of --fix-case, as operations.DirMove
accepts `srcRemote` and `dstRemote` arguments, while sync.MoveDir does not.
This also factors the two-step-move logic out into
operations.DirMoveCaseInsensitive, so that it is reusable by other commands.
This adds a step to detect whether the backend is capable of supporting the
feature, and skips the test if not. A backend can be incapable if, for example,
it is non-case-preserving or automatically converts NFD to NFC.
This change moves the --retries and --retries-sleep flags/variables from cmd to
config (consistent with --low-level-retries), so that they can be more easily
referenced from subcommands.
Before this change, in the event of a retryable error, bisync would always retry
the maximum number of times allowed by the `--retries` flag, even if one of the
retries was successful. This change fixes the issue, so that bisync moves on
after the first successful retry.
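The fixed control flow amounts to something like this (a sketch, not
bisync's actual code):

```go
package sketch

// runWithRetries stops as soon as one attempt succeeds, rather than
// always using up every allowed retry.
func runWithRetries(retries int, run func() error) error {
	var err error
	for i := 0; i < retries; i++ {
		if err = run(); err == nil {
			return nil // first success: move on immediately
		}
	}
	return err
}
```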
It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing the case of a file on a case-insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in place to an equal-folding
name. (Not all case-insensitive backends have this limitation -- for example,
Dropbox does but macOS local does not.)
After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
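A minimal sketch of the two-part move, with hypothetical names (not the
actual implementation):

```go
package sketch

import "context"

// mover is a stand-in for the relevant part of a backend.
type mover interface {
	Move(ctx context.Context, src, dst string) error
}

// moveViaTemp sketches the two-part move: renaming src directly to an
// equal-folding dst fails on some case-insensitive backends, so the file
// is moved to a temporary name first and then to its final name.
func moveViaTemp(ctx context.Context, f mover, src, dst, tmpSuffix string) error {
	tmp := dst + tmpSuffix // e.g. a random suffix to avoid collisions
	if err := f.Move(ctx, src, tmp); err != nil {
		return err
	}
	return f.Move(ctx, tmp, dst)
}
```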
GCS gives NotImplemented errors for multipart server-side copies. The
threshold for these is currently set just below 5G, so any files bigger
than 5G that rclone attempts to server-side copy will fail.
This patch works around the problem by adding a quirk for GCS that raises
--s3-copy-cutoff to the maximum. This means that rclone will never use
multipart copies for files in GCS. This includes files bigger than 5GB,
which (according to AWS documentation) must be copied with multipart copy.
However, this seems to work with GCS.
See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
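A rough sketch of what such a quirk amounts to (hypothetical names, not the
actual patch):

```go
package sketch

import "math"

// Options is a stand-in for the s3 backend's option struct.
type Options struct {
	Provider   string
	CopyCutoff int64
}

// applyGCSQuirk raises the copy cutoff so that multipart server-side
// copies are never attempted against GCS.
func applyGCSQuirk(opt *Options) {
	if opt.Provider == "GCS" {
		opt.CopyCutoff = math.MaxInt64
	}
}
```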
A seafile server can be configured to use a relative URL as FILE_SERVER_ROOT
in order to support more than one hostname/IP (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360).
The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.
With this commit, both absolute and relative URLs are supported.
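A minimal sketch of the resolution logic, using hypothetical names:

```go
package main

import (
	"fmt"
	"net/url"
)

// resolveFileServerURL sketches the fix: if the server returns a relative
// download/upload URL, resolve it against the configured base URL instead
// of failing with "unsupported protocol scheme".
func resolveFileServerURL(base, raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if u.IsAbs() {
		return raw, nil // absolute URLs keep working as before
	}
	b, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	return b.ResolveReference(u).String(), nil
}

func main() {
	s, _ := resolveFileServerURL("https://seafile.example.com", "/seafhttp/abc123")
	fmt.Println(s) // https://seafile.example.com/seafhttp/abc123
}
```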
Before this change, it wasn't possible to see where transfers were coming
from and going to in core/stats and core/transferred.
When used in rclone mount in particular, this made interpreting the stats
very hard.
* serve restic: return internal error if listing failed
If listing a remote failed, then rclone returned the http status "not
found". This has become a problem since restic 0.16.0, which ignores "not
found" errors while listing a directory.
Instead, return an internal server error if something unexpected happens
while listing a directory.
* serve restic: fix error handling if getting a file fails
If the call to `newObject` in `serveObject` failed, then rclone always
returned a "not found" error. This prevented restic from distinguishing
permanent "not found" errors from everything else.
Thus, return "not found" only if the object is not found, and an internal
server error otherwise.
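The resulting error mapping amounts to something like this (a sketch with a
stand-in sentinel error, not the exact code):

```go
package sketch

import (
	"errors"
	"net/http"
)

// errObjectNotFound stands in for rclone's fs.ErrorObjectNotFound sentinel.
var errObjectNotFound = errors.New("object not found")

// writeError maps errors for the restic server: only a genuine not-found
// error becomes a 404; anything unexpected is reported as a 500 so restic
// can tell the two cases apart.
func writeError(w http.ResponseWriter, err error) {
	if errors.Is(err, errObjectNotFound) {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	http.Error(w, "internal server error", http.StatusInternalServerError)
}
```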
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.
This turned out to be a race condition when updating the block count, which
caused blocks to be duplicated.
This bug was introduced in v1.64.0 by the commit below and will be fixed in
v1.65.2:

    0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056

This race seems to happen mainly when `--checksum` is used, but it can
happen otherwise.
Unfortunately, Azure blob does not check the MD5 that we send it, so despite
receiving incorrect data this corruption is not detected. The corruption is
detected when rclone tries to download the file, so attempting to copy the
files back to local disk will result in errors such as:

    ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"
This change adds a check that the blocklist we upload is as we expected,
which would have caught the problem had it been in place earlier.
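For illustration, the class of fix looks something like this (hypothetical
names, not the actual patch):

```go
package sketch

import "sync/atomic"

// chunkWriter is a stand-in for the multi-thread upload state.
type chunkWriter struct {
	blocks int64 // number of block IDs handed out so far
}

// nextBlockIndex allocates a unique block index. Updating the counter
// atomically (or under a mutex) prevents two goroutines from claiming the
// same index, which is the kind of race that duplicated blocks here.
func (cw *chunkWriter) nextBlockIndex() int64 {
	return atomic.AddInt64(&cw.blocks, 1) - 1
}
```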