The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.
This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream
This also meant that these files were being logged as two transfers,
which was a little strange.
This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.
This should have no effect on reliability of the transfers as we retry
Put if possible.
This also tidies up the Rcat function to make the different ways of
uploading the data clearer and to make it easy to see that the data
gets verified on all those paths.
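As a rough sketch of those paths (not the actual rclone source; the
function name, the small flag and the temp file handling are invented
for illustration, while Put and Features().PutStream are the backend
hooks named above):

```go
// Rough sketch only — not the real implementation of Rcat.
package sketch

import (
	"context"
	"io"
	"os"

	"github.com/rclone/rclone/fs"
)

// rcatSketch shows the three upload paths: in-memory Put for small
// data, PutStream where the backend supports it, and spool-to-disk
// followed by Put (instead of operations.Copy) otherwise.
func rcatSketch(ctx context.Context, fdst fs.Fs, in io.Reader, objInfo fs.ObjectInfo, small bool) (fs.Object, error) {
	if small {
		// Below --streaming-upload-cutoff: the data is already
		// buffered in memory, so upload it directly with Put.
		return fdst.Put(ctx, in, objInfo)
	}
	if putStream := fdst.Features().PutStream; putStream != nil {
		// The backend can accept a stream of unknown length.
		return putStream(ctx, in, objInfo)
	}
	// Otherwise spool the data to a temporary file, then upload it with
	// Put rather than operations.Copy, so the --metadata-mapper runs
	// only once and the file is counted as a single transfer.
	tmp, err := os.CreateTemp("", "rclone-spool")
	if err != nil {
		return nil, err
	}
	defer func() {
		_ = tmp.Close()
		_ = os.Remove(tmp.Name())
	}()
	if _, err := io.Copy(tmp, in); err != nil {
		return nil, err
	}
	if _, err := tmp.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return fdst.Put(ctx, tmp, objInfo)
}
```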
See #7848
Before this change on files which have unknown length (like Google
Documents) the SrcFsType would be set to "memoryFs".
This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.
Fixes #7848
For example, using
--onedrive-metadata-permissions read,write,failok
will allow permissions to be read and written, but if the writing
fails, then only an ERROR will be written in the log and the transfer
won't fail.
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff, it used an
object.NewStaticObjectInfo with a `nil` Fs to upload the file, which
caused the crash in the metadata mapper code.
This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.
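A minimal sketch of the idea, not the actual diff — inside the
fs/object package, fall back to MemoryFs whenever a caller passes a
nil Fs (f stands for the fs.Info parameter):

```go
// Hedged sketch: if the caller of NewStaticObjectInfo supplied no Fs,
// substitute MemoryFs so downstream code such as the metadata mapper
// always sees a valid Fs (matching the small-file Rcat path).
if f == nil {
	f = MemoryFs
}
```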
Fixes #7845
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.
This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.
This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.
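A hedged sketch of that step (the interface and helper names are my
approximation, not the exact code):

```go
// setMetadataSketch is called after a multipart download has finished
// when --metadata is in use. It tries the optional SetMetadata method
// on the destination object and logs an ERROR rather than failing if
// the object doesn't support it.
func setMetadataSketch(ctx context.Context, o fs.Object, meta fs.Metadata) error {
	if meta == nil {
		return nil
	}
	do, ok := o.(fs.SetMetadataer) // optional interface, name assumed
	if !ok {
		fs.Errorf(o, "can't set metadata: SetMetadata not supported by backend")
		return nil
	}
	return do.SetMetadata(ctx, meta)
}
```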
It also adds a test to check that metadata is preserved when doing
multipart transfers.
Fixes #7424
For example, using
--drive-metadata-permissions read,write,failok
will allow metadata to be read and written, but if the writing fails,
then only an ERROR will be written in the log and the transfer won't
fail.
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:
```
=== RUN TestEndToEnd
e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```
Issue #7625
For each layout mode, these tests start with a git-annex-remote-rclone
remote and migrate it to a git-annex-remote-rclone-builtin remote. They
verify that a file copied pre-migration is still present and that `git
annex testremote` passes.
Issue #7625
Now that e2e tests are running in parallel, undoing the chdir to the
temp dir was causing flaky failures on cleanup. We don't need it anyway
because the worrisome subcommands have their working directory
controlled by `runInRepo()`.
Issue #7625
I'm hopeful that running these in parallel will not impact CI runtime
very much, but that likely depends on the number of CPU cores and
whether the tmp filesystem is backed by memory vs a physical disk.
Issue #7625
TestEndToEndRepoLayoutCompat exercises git-annex-remote-rclone-builtin
and git-annex-remote-rclone on the same rclone remote to ensure they are
compatible. It repeats the same test for all known layout modes.
Issue #7625
This commit adds support for the same repo layouts supported by
git-annex-remote-rclone. This should enable git-annex users with remotes
of type "rclone" to switch to a "rclone-builtin" without needing to
retransfer content.
Issue #7625
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
Before this change, an `rclone lsjson --encrypted` command where
additional `--crypt-` parameters were supplied on the command line:
rclone lsjson --crypt-description XXX --encrypted secret:
produced an error like this:
Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...
This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.
Fixes #7833
When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.
But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.
So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
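A minimal sketch of that rule, using illustrative names (opt.ClientID,
opt.ClientSecret and defaultClientSecret are not the exact identifiers
used):

```go
// If the user configured their own OAuth client ID, the backend's
// built-in default client secret cannot belong to that app, so drop it
// unless the user also supplied a secret of their own.
if opt.ClientID != "" && opt.ClientSecret == defaultClientSecret {
	opt.ClientSecret = ""
}
```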
Version 5 removed Go cache management, and therefore also the
skip-pkg-cache and skip-build-cache options, because the Go-related
caches are already handled by actions/setup-go; the action now only
caches golangci-lint analysis results. Since we run multiple
golangci-lint-action steps for different GOOS values, we want to cache
the package cache, the build cache and the golangci-lint results from
all of them, so this commit changes the approach by disabling all
built-in caching and introducing a separate cache step to handle it
properly.
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.