Before this change we used the same data structure for managing empty
directories both for --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.
These two uses are subtly incompatible, so this change uses a separate
data structure for each use. This makes the handling more accurate and
easier to understand.
Before this change, the MoveCaseInsensitive logic in operations.move made the
assumption that dst != nil && remote != "". After this change, it should work
correctly when either one is present without the other.
Before this change, when the sync routine attempted to normalise the
case of a file, say from "FiLe.txt" to "file.txt", this caused a 400
Bad Request error:
> This copy request is illegal because it is trying to copy an object
> to itself without changing the object's metadata, storage class,
> website redirect location or encryption attributes.
This was caused by passing the same object as the source and
destination to the move routine, whereas the destination object had a
different case and didn't exist, so should have been passed as nil.
See: https://github.com/rclone/rclone/pull/7743#discussion_r1557345906
Before this fix, if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption, rclone
would read too much data and the transfer would fail.
Backends affected:
- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho
This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.
This was caused by fs.FixRangeOption modifying the options and the
reopen code relying on them not being modified.
This fix makes a copy of the fs.FixRangeOption in the reopen code to
fix the problem.
In future it might be best to change fs.FixRangeOption so it returns a
new options slice.
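As a rough, hedged sketch (not the actual reopen code; the `reopen` name is illustrative and fs.FixRangeOption's (options, size) signature is assumed), the fix amounts to copying the options before fs.FixRangeOption can touch them:

```go
import (
    "context"
    "io"

    "github.com/rclone/rclone/fs"
)

// reopen copies the caller's options so fs.FixRangeOption only modifies
// the copy -- each retry then starts from the unmodified originals.
func reopen(ctx context.Context, o fs.Object, size int64, options []fs.OpenOption) (io.ReadCloser, error) {
    opts := make([]fs.OpenOption, len(options))
    copy(opts, options)
    fs.FixRangeOption(opts, size) // adjusts range options in the copy only
    return o.Open(ctx, opts...)
}
```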
Fixes #7759
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
This change officially adds bisync to the nightly integration tests for all
backends.
This will be part of giving us the confidence to take bisync out of beta.
A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.
Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):
- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize
Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:
- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
server-side copy/move
Before this change operations.SetDirModTime could return the error
"optional feature not implemented" when attempting to set modification
times on crypted sftp backends.
This was because crypt wraps the directories using fs.DirWrapper but
these return fs.ErrorNotImplemented for the SetModTime method.
The fix is to recognise that error and fall back to using the
DirSetModTime method on the backend which does work.
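A much-simplified, hedged sketch of the fallback (the local interface, function name, and the exact DirSetModTime feature signature are assumptions, not rclone's exact API):

```go
import (
    "context"
    "errors"
    "time"

    "github.com/rclone/rclone/fs"
)

// dirSetModTimer is the subset of directory behaviour needed here.
type dirSetModTimer interface {
    SetModTime(ctx context.Context, t time.Time) error
}

// setDirModTime tries the wrapped directory first and falls back to the
// backend's DirSetModTime feature if the wrapper says "not implemented".
func setDirModTime(ctx context.Context, f fs.Fs, d dirSetModTimer, remote string, modTime time.Time) error {
    err := d.SetModTime(ctx, modTime)
    if !errors.Is(err, fs.ErrorNotImplemented) {
        return err
    }
    if do := f.Features().DirSetModTime; do != nil {
        return do(ctx, remote, modTime)
    }
    return err
}
```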
Fixes #7673
Enhanced the UnmarshalJSON method for the Duration type to correctly
handle the special string 'off' and ensure large integers are parsed
accurately without floating-point rounding errors. This resolves
issues with setting and removing the MinAge filter through the rclone
rc command.
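A hedged sketch of the parsing order (using a stand-in Duration type here, not rclone's exact fs.Duration code):

```go
import (
    "encoding/json"
    "math"
    "time"
)

// Duration is a stand-in for rclone's fs.Duration; the method below is a
// hedged sketch of the parsing order, not the exact implementation.
type Duration time.Duration

const DurationOff = Duration(math.MaxInt64)

func (d *Duration) UnmarshalJSON(in []byte) error {
    // Parse integers as int64 first, so large values are not routed
    // through float64 and rounded.
    var i int64
    if err := json.Unmarshal(in, &i); err == nil {
        *d = Duration(i)
        return nil
    }
    var s string
    if err := json.Unmarshal(in, &s); err != nil {
        return err
    }
    if s == "off" {
        *d = DurationOff
        return nil
    }
    parsed, err := time.ParseDuration(s)
    if err != nil {
        return err
    }
    *d = Duration(parsed)
    return nil
}
```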
Fixes #3783
Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
Some backends (like s3, swift, gcs, azureblob) don't have directories
(this can be overridden on some using the directory markers feature).
It therefore makes no sense to sync directory times from them as they
will all be a value made up by rclone (--default-time).
We use the feature flag CanHaveEmptyDirectories to mark backends
without real directory support and disable the directory modification
time syncing on those.
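Roughly (hedged sketch; the function name is illustrative, not the actual sync code):

```go
import "github.com/rclone/rclone/fs"

// shouldSyncDirModTimes reports whether directory modtimes are worth
// syncing from this source: bucket-based backends without real
// directories report CanHaveEmptyDirectories = false, and their
// directory times are just --default-time, so skip them.
func shouldSyncDirModTimes(fsrc fs.Fs) bool {
    return fsrc.Features().CanHaveEmptyDirectories
}
```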
This change adds support for metadata on OneDrive. Metadata (including
permissions) is supported for both files and directories.
OneDrive supports System Metadata (not User Metadata, as of this writing.) Much
of the metadata is read-only, and there are some differences between OneDrive
Personal and Business (see table in OneDrive backend docs for details).
Permissions are also supported, if --onedrive-metadata-permissions is set. The
accepted values for --onedrive-metadata-permissions are `read`, `write`, `read,write`,
and `off` (the default). `write` supports adding new permissions, updating the "role" of
existing permissions, and removing permissions. Updating and removing require
the Permission ID to be known, so it is recommended to use `read,write` instead of
`write` if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the
OneDrive API, which differs slightly between OneDrive Personal and Business.
(See OneDrive backend docs for examples.)
To write permissions, pass in a "permissions" metadata key using this same
format. The --metadata-mapper tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID or
DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an
ObjectID can be provided in User.ID. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if Link.Scope is set to "anonymous".
Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.
To update an existing permission, include both the Permission ID and the new
roles to be assigned. roles is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)
Note that both reading and writing permissions require extra API calls, so if
you don't need to read or write permissions it is recommended to omit
--onedrive-metadata-permissions.
Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the mtime or btime on a Folder requires one extra API
call on OneDrive Business only.
OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.
TIP: to see the metadata and permissions for any file or folder, run:
rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read
See the OneDrive backend docs for a table of all the supported metadata
properties.
Before this change, operations.DirMove would fail when moving a directory, if
the src and dest were on different upstreams of a combine remote.
The issue only affected operations.DirMove, and not sync.MoveDir, because they
checked for server-side-move support in different ways.
MoveDir checks by just trying it and seeing what error comes back. This works
fine for combine because combine returns fs.ErrorCantDirMove, which MoveDir
knows how to handle.
DirMove, however, only checked whether the function pointer is nil. This is an
unreliable way to check for combine, because combine does advertise support for
DirMove, despite not always being able to do it.
This change fixes the issue by checking the returned error in a manner similar
to sync.MoveDir and falling back to individual file moves (copy + delete)
depending on which error was returned.
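A hedged sketch of the new behaviour (names and the exact error set are illustrative): attempt the backend's DirMove and report whether the caller should fall back to per-file moves.

```go
import (
    "context"

    "github.com/rclone/rclone/fs"
)

// tryServerSideDirMove attempts a server-side directory move and reports
// whether the caller should fall back to individual file moves.
func tryServerSideDirMove(ctx context.Context, fdst, fsrc fs.Fs, srcRemote, dstRemote string) (fallback bool, err error) {
    doDirMove := fdst.Features().DirMove
    if doDirMove == nil {
        return true, nil // no server-side support at all
    }
    err = doDirMove(ctx, fsrc, srcRemote, dstRemote)
    switch err {
    case nil:
        return false, nil
    case fs.ErrorCantDirMove, fs.ErrorDirExists:
        // combine (and others) may advertise DirMove but still return
        // these errors -- fall back to copy + delete per file
        return true, nil
    }
    return false, err
}
```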
Before this change, operations.CopyDirMetadata would fail with: `internal error:
expecting directory string from combine root '' to have SetMetadata method:
optional feature not implemented` if the dst was the root directory of a combine
upstream. This is because combine was returning a *fs.Dir, which does not
satisfy the fs.SetMetadataer interface.
While it is true that combine cannot set metadata on the root of an upstream
(see also #7652), this should not be considered an error that causes sync to do
high-level retries, abort without doing deletes, etc.
This change addresses the issue by creating a new type of DirWrapper that is
allowed to fail silently, for exceptional cases such as this where certain
special directories have more limited abilities than what the Fs usually
supports.
It is possible that other similar wrapping backends (Union?) may need this same
fix.
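A hedged illustration of the idea (this is not the actual type added to rclone; the name is made up): a directory wrapper whose metadata setter reports success even when the underlying directory cannot accept it.

```go
import (
    "context"

    "github.com/rclone/rclone/fs"
)

// failSilentlyDir wraps a directory whose metadata setters are allowed
// to fail silently (e.g. the root of a combine upstream).
type failSilentlyDir struct {
    fs.Directory
}

func (d failSilentlyDir) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
    // Silently ignore -- this special directory has more limited
    // abilities than the Fs usually supports.
    return nil
}
```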
Before this change, directory modtimes (and metadata) were always synced from
src to dst, even if already in sync (i.e. their modtimes already matched.) This
potentially required excessive API calls, made logs noisy, and was potentially
problematic for backends that create "versions" or otherwise log activity
updates when modtime/metadata is updated.
After this change, a new DirsEqual function is added to check whether dirs are
equal based on a number of factors such as ModifyWindow and sync flags in use.
If the dirs are equal, the modtime/metadata update is skipped.
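As a very rough, hedged sketch (the real DirsEqual also takes sync flags and metadata settings into account):

```go
import (
    "context"
    "time"

    "github.com/rclone/rclone/fs"
)

// dirsEqual treats dirs as equal when the destination exists and its
// modtime matches the source's within the modify window (simplified).
func dirsEqual(ctx context.Context, src, dst fs.Directory, modifyWindow time.Duration) bool {
    if dst == nil {
        return false
    }
    dt := dst.ModTime(ctx).Sub(src.ModTime(ctx))
    if dt < 0 {
        dt = -dt
    }
    return dt <= modifyWindow
}
```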
For backends that require setDirModTimeAfter, the "after" sync is performed only
for dirs that could have been changed by the sync (i.e. dirs containing files
that were created/updated.)
Note that dir metadata (other than modtime) is not currently considered by
DirsEqual, consistent with how object metadata is synced (only when objects are
unequal for reasons other than metadata).
To sync dir modtimes and metadata unconditionally (the previous behavior), use
--ignore-times.
Before this change, the VFS layer did not properly handle unicode normalization,
which caused problems particularly for users of macOS. While attempts were made
to handle it with various `-o modules=iconv` combinations, this was an imperfect
solution, as no one combination allowed both NFC and NFD content to
simultaneously be both visible and editable via Finder.
After this change, the VFS supports `--no-unicode-normalization` (default `false`)
via the existing `--vfs-case-insensitive` logic, which is extended to apply to both
case insensitivity and unicode normalization form.
This change also adds an additional flag, `--vfs-block-norm-dupes`, to address a
probably rare but possible scenario where a directory contains multiple
duplicate filenames after applying case and unicode normalization settings. In
such a scenario, this flag (disabled by default) hides the duplicates. This
comes with a performance tradeoff, as rclone will have to scan the entire
directory for duplicates when listing a directory. For this reason, it is
recommended to leave this disabled if not needed. However, macOS users may wish
to consider using it, as otherwise, if a remote directory contains both NFC and
NFD versions of the same filename, an odd situation will occur: both versions
of the file will be visible in the mount, and both will appear to be editable;
however, editing either version will actually result in only the NFD version
getting edited under the hood. `--vfs-block-norm-dupes` prevents this confusion
by detecting this scenario, hiding the duplicates, and logging an error,
similar to how this is handled in `rclone sync`.
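A hedged illustration of the normalization-aware comparison described above (not the actual VFS code):

```go
import "golang.org/x/text/unicode/norm"

// namesEquivalent treats two names as the same entry if they normalize
// to the same NFC string (case folding omitted here for brevity).
func namesEquivalent(a, b string) bool {
    return norm.NFC.String(a) == norm.NFC.String(b)
}
```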
Directory mod times are synced by default if the backend is capable
and directory metadata is synced if the --metadata flag is provided
and the backend is capable.
This also updates the bisync golden tests, which were affected by the
--dry-run setting of directory modtimes.
Fixes #6685
A consequence of this is that fs.Directory returned by the local
backend will now have a correct size (rather than -1). Some tests
depended on this and have been fixed by this commit too.
This involved adding the Fs() method to DirEntry as it is needed in
the metadata mapper.
Unspecialised fs.Dir objects will return a new fs.Unknown from their
Fs() methods as they are not specific to any given Fs.
This should be more efficient for the purposes of --fix-case, as operations.DirMove
accepts `srcRemote` and `dstRemote` arguments, while sync.MoveDir does not.
This also factors the two-step-move logic to operations.DirMoveCaseInsensitive, so
that it is reusable by other commands.
This adds a step to detect whether the backend is capable of supporting the
feature, and skips the test if not. A backend can be incapable if, for example,
it is non-case-preserving or automatically converts NFD to NFC.
This change moves the --retries and --retries-sleep flags/variables from cmd to
config (consistent with --low-level-retries), so that they can be more easily
referenced from subcommands.
It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing case of a file on a case insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in-place to an equal-folding name.
(Not all case-insensitive backends have this limitation -- for example, Dropbox
does but macOS local does not.)
After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
Before this change it wasn't possible to see where transfers were
going from and to in core/stats and core/transferred.
When used in rclone mount in particular this made interpreting the
stats very hard.
Before this change, bisync could only detect changes based on modtime, and
would refuse to run if either path lacked modtime support. This made bisync
unavailable for many of rclone's backends. Additionally, bisync did not account
for the Fs's precision when comparing modtimes, meaning that they could only be
reliably compared within the same side -- not against the opposite side. Size
and checksum (even when available) were ignored completely for deltas.
After this change, bisync now fully supports comparing based on any combination
of size, modtime, and checksum, lifting the prior restriction on backends
without modtime support. The comparison logic considers the backend's
precision, hash types, and other features as appropriate.
The comparison features optionally use a new --compare flag (which takes any
combination of size,modtime,checksum) and even supports some combinations not
otherwise supported in `sync` (like comparing all three at the same time.) By
default (without the --compare flag), bisync inherits the same comparison
options as `sync` (that is: size and modtime by default, unless modified with
flags such as --checksum or --size-only.) If the --compare flag is set, it will
override these defaults.
If --compare includes checksum and both remotes support checksums but have no
hash types in common with each other, checksums will be considered only for
comparisons within the same side (to determine what has changed since the prior
sync), but not for comparisons against the opposite side. If one side supports
checksums and the other does not, checksums will only be considered on the side
that supports them. When comparing with checksum and/or size without modtime,
bisync cannot determine whether a file is newer or older -- only whether it is
changed or unchanged. (If it is changed on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)
Also included are some new flags to customize the checksum comparison behavior
on backends where hashes are slow or unavailable. --no-slow-hash and
--slow-hash-sync-only allow selectively ignoring checksums on backends such as
local where they are slow. --download-hash allows computing them by downloading
when (and only when) they're otherwise not available. Of course, this option
probably won't be practical with large files, but may be a good option for
syncing small-but-important files with maximum accuracy (for example, a source
code repo on a crypt remote.) An additional advantage over methods like
cryptcheck is that the original file is not required for comparison (for
example, --download-hash can be used to bisync two different crypt remotes with
different passwords.)
Additionally, all of the above are now considered during the final --check-sync
for much-improved accuracy (before this change, it only compared filenames!)
Many other details are explained in the included docs.
Before this change, a file would sometimes be silently deleted instead of
renamed on macOS, due to its unique handling of unicode normalization. Rclone
already had a SameObject check in place for case insensitivity before deleting
the source (for example if "hello.txt" was renamed to "HELLO.txt"), but had no
such check for unicode normalization. After this change, the delete is skipped
on macOS if the src and dst filenames normalize to the same NFC string.
Example of the previous behavior:
~ % rclone touch /Users/nielash/rename_test/ö
~ % rclone lsl /Users/nielash/rename_test/ö
0 2023-11-21 17:28:06.170486000 ö
~ % rclone moveto /Users/nielash/rename_test/ö /Users/nielash/rename_test/ö -vv
2023/11/21 17:28:51 DEBUG : rclone: Version "v1.64.0" starting with parameters ["rclone" "moveto" "/Users/nielash/rename_test/ö" "/Users/nielash/rename_test/ö" "-vv"]
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/ö"
2023/11/21 17:28:51 DEBUG : Using config file from "/Users/nielash/.config/rclone/rclone.conf"
2023/11/21 17:28:51 DEBUG : fs cache: adding new entry for parent of "/Users/nielash/rename_test/ö", "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/"
2023/11/21 17:28:51 DEBUG : fs cache: renaming cache item "/Users/nielash/rename_test/" to be canonical "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : ö: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/11/21 17:28:51 DEBUG : ö: Unchanged skipping
2023/11/21 17:28:51 INFO : ö: Deleted
2023/11/21 17:28:51 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Checks: 1 / 1, 100%
Deleted: 1 (files), 0 (dirs)
Elapsed time: 0.0s
2023/11/21 17:28:51 DEBUG : 5 go routines active
~ % rclone lsl /Users/nielash/rename_test/
~ %
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!
After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
Before this change, changing the case of a file on a case insensitive remote
would fatally panic when `--dry-run` was set, due to `moveOrCopyFile`
attempting to access the non-existent `tmpObj` it (would normally have)
created. After this change, the panic is avoided by skipping this step during
a `--dry-run` (with the usual "skipped as --dry-run is set" log message.)
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.
Note that it has a few limitations, and certain scenarios
are not currently supported:
- --max-duration / CutoffModeHard
- --compare-dest / --copy-dest (because equal() is called multiple times for the
  same file)
- server-side moves of an entire dir at once (because we never get the individual
  file objects in the dir)
- High-level retries, because there would be dupes
- Possibly some error scenarios that didn't come up on the tests
Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)
Only rclone sync is currently supported -- support for copy and move may be
added in the future.
Logger instruments the Sync routine with a status report for each file pair,
making it possible to output a list of the synced files, along with their
attributes and sigil categorization (match/differ/missing/etc.)
It is very customizable by passing in a custom LoggerFn, options, and
io.Writers to be written to. Possible uses include:
- allow sync to write path lists to a file, in the same format as rclone check
- allow sync to output a --dest-after file using the same format flags as lsf
- receive results as JSON when calling sync from an internal function
- predict the post-sync state of the destination
For usage examples, see bisync.WriteResults() or sync.SyncLoggerFn()
Before this change, --no-unicode-normalization and --ignore-case-sync
were respected for rclone check but not for rclone check --checkfile,
causing them to give different results.
This change adds support for --checkfile so that the behavior is consistent.
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).
Examples:
rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
Before this change StatsInfo.ResetCounters() and stopAverageLoop()
(when called from time.AfterFunc) could race on StatsInfo.average.
This was because the deferred stopAverageLoop accessed
StatsInfo.average without locking.
For some reason this only ever happened on macOS. This caused the CI
to fail on macOS thus causing the macOS builds not to appear.
This commit fixes the problem with a bit of extra locking.
It also renames all StatsInfo methods that should be called without
the lock to start with an initial underscore as this is the convention
we use elsewhere.
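A hedged illustration of that convention (not the real accounting.StatsInfo):

```go
import "sync"

// statsInfo illustrates the locking convention: exported methods take
// the lock, helpers prefixed with an underscore assume it is held.
type statsInfo struct {
    mu sync.RWMutex
    // ... counters, average speed state, etc ...
}

func (s *statsInfo) ResetCounters() {
    s.mu.Lock()
    defer s.mu.Unlock()
    s._stopAverageLoop() // safe: lock is held
}

// _stopAverageLoop must only be called with s.mu held.
func (s *statsInfo) _stopAverageLoop() {
    // stop the average speed calculation goroutine
}
```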
Fixes #7567
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
like it should do.
This change introduces a new primitive, operations.MoveTransfers, which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.
See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
Before this fix we were not counting transferred files nor transferred
bytes for server side moves/copies.
If the server side move/copy has been marked as a transfer and not a
checker then this accounts transferred files and transferred bytes.
The transferred bytes are not accounted to the network though so this
should not affect the network stats.
The following command will block for 60s (the default) when the network is slow or unavailable:
```
rclone --contimeout 10s --low-level-retries 0 lsd dropbox:
```
This change will make it timeout after the expected 10s.
Signed-off-by: rkonfj <rkonfj@gmail.com>
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.
This caused the partial file to be written which would overwrite any
existing files.
This was fixed by making sure we Abort the transfer before Close-ing
it.
This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.
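A hedged sketch of the ordering (the helper is illustrative; the real code lives in the multi-thread copy path): on failure, Abort the chunk writer so a partial file is never finalised, and only Close (finalise) on success.

```go
import (
    "context"

    "github.com/rclone/rclone/fs"
)

// finishUpload aborts on error and only finalises a successful upload.
func finishUpload(ctx context.Context, cw fs.ChunkWriter, uploadErr error) error {
    if uploadErr != nil {
        if abortErr := cw.Abort(ctx); abortErr != nil {
            fs.Errorf(nil, "multi-thread upload: failed to abort: %v", abortErr)
        }
        return uploadErr
    }
    return cw.Close(ctx)
}
```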
This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).
Fixes #7071
For uploads which are coming from disk or going to disk or going to a
backend which doesn't need to seek except for retries this doesn't
buffer the input.
This dramatically reduces rclone's memory usage.
Fixes #7350
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.
The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.
This became obvious when using the sftp backend which used too many
connections.
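Conceptually (hedged sketch, not the actual march code), the limit now behaves like a single shared semaphore sized from --checkers:

```go
import (
    "context"

    "github.com/rclone/rclone/fs"
)

// limiter is sized once from --checkers, rather than --checkers slots
// per march worker (which multiplied up to checkers²).
var limiter = make(chan struct{}, 8) // e.g. the --checkers value

func newObjectLimited(ctx context.Context, f fs.Fs, remote string) (fs.Object, error) {
    limiter <- struct{}{}        // acquire a slot
    defer func() { <-limiter }() // release it
    return f.NewObject(ctx, remote)
}
```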
Fixes #5824
After the copy refactor:
179f978f75 operations: refactor Copy into methods on a temporary object
There was some confusion in the code about server side copies - should
they or shouldn't they use partials?
This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.
This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output DumpMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CutoffMode, LogLevel, TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. They still accept integer inputs though so
the fallout from this should be minimal.
Before this change backend types were printing incorrectly as the name
of the type, not what was defined by the Type() method.
This was not working due to not calling the Type() method. However
this needed to be defined on a non-pointer type due to the way the
options are handled.
Before this change, the maximum number of connections was set to 10.
This meant that b2 could deadlock while uploading multipart uploads
due to a lock being held longer than it should have been.
Before this change the concurrency used for an upload was rather
inconsistent.
- if size below `--backend-upload-cutoff` (default 200M) do single part upload.
- if size below `--multi-thread-cutoff` (default 256M) or using streaming
uploads (eg `rclone rcat`) do multipart upload using
`--backend-upload-concurrency` to set the concurrency used by the uploader.
- otherwise do multipart upload using `--multi-thread-streams` to set the
concurrency.
This change makes the default for the concurrency used be the
`--backend-upload-concurrency`. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.
This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.
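In other words (hedged sketch of the choice described above):

```go
// chooseConcurrency starts from the backend's upload concurrency and
// lets a larger --multi-thread-streams value override it.
func chooseConcurrency(uploadConcurrency, multiThreadStreams int) int {
    concurrency := uploadConcurrency // --backend-upload-concurrency
    if multiThreadStreams > concurrency {
        concurrency = multiThreadStreams // --multi-thread-streams wins if larger
    }
    return concurrency
}
```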
See: #7056
- fix docs and error messages for multithread
- use sync/errgroup built in concurrency limiting
- re-arrange multithread code
- don't continue multi-thread uploads if one part fails
Before this change, when using --cutoff-mode=soft and --max-duration
rclone deadlocked when the cutoff limit was reached.
This was because the sync object's Pipe became full and nothing was
emptying it because the cutoff was reached.
This changes the context for putting items into the pipe to be the one
that gets cancelled when the cutoff is reached.
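Conceptually (hedged sketch using a plain channel rather than the real pipe type), the producer now blocks on the cutoff context as well:

```go
import "context"

// queuePair blocks until the pair is queued or the cutoff context is
// cancelled, so the producer unblocks instead of deadlocking.
func queuePair[T any](cutoffCtx context.Context, in chan<- T, pair T) error {
    select {
    case in <- pair:
        return nil
    case <-cutoffCtx.Done():
        // cutoff reached -- stop producing
        return cutoffCtx.Err()
    }
}
```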
See: https://forum.rclone.org/t/sync-command-hanging-using-cutoff-mode-soft-with-max-duration-time-flags/40866
Currently, the average transfer speed will stop calculating 1 minute
after the last queued transfer completes. This causes the average to
stop calculating when checking is slow and the transfer queue becomes
empty.
This change will require all checks to complete before stopping the
average speed calculation.
In this commit:
432d5d1e20 operations: fix overlapping check on case insensitive file systems
We introduced a test that makes no sense. This happens to pass without --fast-list and fail with it.
This removes the test.
Before this change we showed both server side moves and server side
copies as bytes transferred.
This made a nice easy to use stats display, but also caused confusion
for users who saw unrealistic transfer times. It also caused a problem
with --max-transfer and chunker which renames each chunk after
uploading which was counted as a transfer byte.
This patch instead accounts the server side move and copy statistics
as separate lines in the stats display which will only appear if
there are any server side moves / copies. This is also output in the
rc.
This gives users something to look at when transfers are running which
was the point of the original change but it now means that transfer
bytes represents data transfers through this rclone instance only.
Fixes #7183
This adds an additional parameter to the creation of each flag. This
specifies one or more flag groups. This **must** be set for global
flags and **must not** be set for local flags.
This causes flags.md to be built with sections to aid comprehension
and it causes the documentation pages for each command (and the
`--help`) to be built showing the flags groups as specified in the
`groups` annotation on the command.
See: https://forum.rclone.org/t/make-docs-for-mortals-not-only-rclone-gurus/39476/
Some changes to the test cases:
Because MiddlewareCORS returns early on OPTIONS requests, this
middleware should only be added once, in the NewServer function.
Test cases should pass the AllowOrigin config instead of adding
this middleware again.
A new test case was added to test CORS preflight requests with
an authenticator. Preflight requests should always return 200 OK
regardless of authentication.
Co-authored-by: yuudi <yuudi@users.noreply.github.com>
Before this change, the overlapping check could erroneously give this
error on case insensitive file systems:
Failed to sync: destination and parameter to --backup-dir mustn't overlap
The code was fixed and re-worked to be simpler and more reliable.
See: https://forum.rclone.org/t/backup-dir-cannot-be-in-root-even-when-excluded/39844/
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.
This was because the partial isn't named .rclonelink so the local
backend saves it as a normal file and renaming it to .rclonelink
doesn't cause it to become a symlink.
This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.
This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.
This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.
Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
This introduces a new fs.Option flag, Sensitive and uses this along
with IsPassword to redact the info in the config file for support
purposes.
It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.
Fixes #5209
The --progress flag overrides operations.SyncPrintf in order to do its
magic on stdout without interfering with other output.
Before this change the syncFprintf routine in operations (which is
used to print all output to stdout) was taking the
operations.StdoutMutex and the printProgress function in the
--progress routine was also attempting to take the same mutex causing
a deadlock.
This patch fixes the problem by moving the locking from the
syncFprintf function to SyncPrintf. It is then up to the function
overriding this to lock the StdoutMutex. This ensures the StdoutMutex
can never cause a deadlock.
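A hedged sketch of the resulting shape (not the exact rclone code): whatever ends up writing to stdout takes the mutex itself, so it is only ever taken in one place per print.

```go
import (
    "fmt"

    "github.com/rclone/rclone/fs/operations"
)

// printToStdout illustrates the convention: the function that actually
// writes (SyncPrintf, or whatever overrides it, e.g. --progress) takes
// operations.StdoutMutex itself.
func printToStdout(format string, a ...interface{}) {
    operations.StdoutMutex.Lock()
    defer operations.StdoutMutex.Unlock()
    fmt.Printf(format, a...)
}
```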
Before this change if using --fast-list on a directory with more than
a few thousand directories in it DirTree.CheckParents became very slow
taking up to 24 hours for a directory with 1,000,000 directories in
it.
This is because it becomes an O(N²) operation as DirTree.Find has to
search each directory in a linear fashion as it is stored as a slice.
This patch fixes the problem by scanning the DirTree for directories
before starting the CheckParents process so it never has to call
DirTree.Find.
After the fix calling DirTree.CheckParents on a directory with
1,000,000 directories in it will take about 1 second.
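A hedged sketch of the pre-scan (the DirTree is shown as its underlying map shape; this is not the exact code):

```go
import "github.com/rclone/rclone/fs"

// directorySet collects the directories in the DirTree once, so the
// parent check becomes a map lookup instead of a linear DirTree.Find
// per entry (which made the whole pass O(N²)).
func directorySet(dt map[string]fs.DirEntries) map[string]struct{} {
    dirs := make(map[string]struct{}, len(dt))
    for _, entries := range dt {
        for _, entry := range entries {
            if _, ok := entry.(fs.Directory); ok {
                dirs[entry.Remote()] = struct{}{}
            }
        }
    }
    return dirs
}
```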
Anything which calls DirTree.Find can potentially have bad performance
so in the future we should redesign the DirTree to use a different
underlying datastructure or have an index.
https://forum.rclone.org/t/almost-24-hours-cpu-compute-time-during-sync-between-two-large-s3-buckets/39375/
When multi-thread downloading is enabled, rclone used to send a write
to disk after every read, resulting in a lot of small writes to
different locations of the file.
Depending on the underlying filesystem or device, it can be more
efficient to send bigger writes.
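A hedged illustration of the idea (not the actual multi-thread writer): buffer each stream's sequential writes so fewer, larger writes reach the disk.

```go
import (
    "bufio"
    "io"
)

// copyBuffered accumulates reads into a larger buffer before writing,
// instead of issuing one write per read.
func copyBuffered(dst io.Writer, src io.Reader, bufSize int) error {
    w := bufio.NewWriterSize(dst, bufSize) // e.g. 1 MiB
    if _, err := io.Copy(w, src); err != nil {
        return err
    }
    return w.Flush()
}
```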