Before this fix `NewObject` could return a wrapped `fs.Object(nil)`
which caused a crash. This was caused by `wrapObject` returning a
`nil` `*Object` which was cast into an `fs.Object`.
This changes the interface of `wrapObject` so it returns an
`fs.Object` instead of a `*Object` and an error which must be checked.
This forces the callers to return a `nil` object rather than an
`fs.Object(nil)`.
See: https://forum.rclone.org/t/panic-in-hasher-when-mounting-with-vfs-cache-and-not-synced-data-in-the-cache/29697/11
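A minimal sketch (the types are illustrative, not the hasher backend's) of the typed-nil pitfall behind this fix: a nil *Object stored in an interface is not a nil interface, so downstream == nil checks let it through and it crashes later.

    package main

    import "fmt"

    // Object stands in for fs.Object.
    type Object interface{ Name() string }

    type hashObject struct{ name string }

    func (o *hashObject) Name() string { return o.name }

    // wrapObjectOld mirrors the old behaviour: it returns a concrete pointer
    // which the caller then stores in the interface.
    func wrapObjectOld(o *hashObject) *hashObject { return o }

    // wrapObjectNew mirrors the fixed interface: it returns the interface type
    // and an error, forcing callers to return a plain nil on failure.
    func wrapObjectNew(o *hashObject) (Object, error) {
    	if o == nil {
    		return nil, fmt.Errorf("object not found")
    	}
    	return o, nil
    }

    func main() {
    	var concrete *hashObject // nil pointer

    	var iface Object = wrapObjectOld(concrete)
    	fmt.Println(iface == nil) // false - a typed nil hidden in the interface

    	obj, err := wrapObjectNew(concrete)
    	fmt.Println(obj == nil, err) // true object not found
    }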
Having a replace directive in go.mod causes "go get
github.com/rclone/rclone" to fail as it discussed in this Go issue:
https://github.com/golang/go/issues/44840
This is apparently how the Go team want go.mod to work, so this commit
hard forks github.com/jlaffaye/ftp into github.com/rclone/ftp so we
can remove the `replace` directive from the go.mod file.
Fixes #5810
Before this change rclone sent pre-1970 timestamps as negative
numbers. pCloud ignores these and sets them to today's date.
This change sends the timestamps as unsigned 64 bit integers (which is
how the binary protocol sends them) and pCloud accepts the (actually
negative) timestamp like this.
Before this change the new multipart upload ETag checking code was
failing in the integration tests with Alibaba OSS.
Apparently Alibaba calculate the ETag in a different way to AWS.
This introduces a new provider quirk with a flag to disable the
checking of the ETag for multipart uploads.
Multipart ETag checking has been enabled for all providers that we can
test and that work, and left disabled for the others.
Before this rclone ignored the ETag on multipart uploads which missed
an opportunity for a whole file integrity check.
This adds that check which means that we now check even harder that
multipart uploads have arrived properly.
See #5993
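For reference, a minimal sketch of the ETag convention the check assumes (the AWS scheme: the MD5 of the concatenated per-part MD5 digests plus "-<number of parts>"); providers such as Alibaba OSS that compute it differently are why the quirk flag exists.

    package main

    import (
    	"crypto/md5"
    	"fmt"
    )

    // multipartETag computes the AWS-style multipart ETag: the MD5 of the
    // concatenated binary MD5 digests of each part, suffixed with the part count.
    func multipartETag(parts [][]byte) string {
    	outer := md5.New()
    	for _, part := range parts {
    		partSum := md5.Sum(part)
    		outer.Write(partSum[:])
    	}
    	return fmt.Sprintf("%x-%d", outer.Sum(nil), len(parts))
    }

    func main() {
    	fmt.Println(multipartETag([][]byte{[]byte("part one"), []byte("part two")}))
    }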
Before this change `rclone about swift:container` would show aggregate
info about all the containers, not just the one in use.
This causes a problem if container listing is disabled (for example in
the Blomp service).
This fix makes `rclone about swift:container` show only the info about
the given `container`. If aggregate info about all the containers is
required then use `rclone about swift:`.
See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/18
Before this change, rclone supported authorizing for remote systems by
going to a URL and cutting and pasting a token from Google. This is
known as the OAuth out-of-band (oob) flow.
This, while very convenient for users, has been shown to be insecure
and has been deprecated by Google.
https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob
> OAuth out-of-band (OOB) is a legacy flow developed to support native
> clients which do not have a redirect URI like web apps to accept the
> credentials after a user approves an OAuth consent request. The OOB
> flow poses a remote phishing risk and clients must migrate to an
> alternative method to protect against this vulnerability. New
> clients will be unable to use this flow starting on Feb 28, 2022.
This change disables that flow, and forces the user to use the
redirect URL flow. (This is the flow used already for local configs.)
In practice this will mean that instead of cutting and pasting a token
for remote config, it will be necessary to run "rclone authorize"
instead. This is how all the other OAuth backends work so it is a well
tested code path.
Fixes #6000
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
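A minimal example of the pattern this commit switches the tests to, with no manual cleanup required:

    package example

    import (
    	"os"
    	"path/filepath"
    	"testing"
    )

    func TestWriteFile(t *testing.T) {
    	dir := t.TempDir() // removed automatically when the test and its subtests complete

    	path := filepath.Join(dir, "file.txt")
    	if err := os.WriteFile(path, []byte("hello"), 0o600); err != nil {
    		t.Fatal(err)
    	}
    }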
Before this change a multipart upload with the --no-head flag returned
the MD5SUM as a base64 string rather than a Hex string as the rest of
rclone was expecting.
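To illustrate the difference, here is the same MD5 digest in both encodings (a standalone sketch, not the backend's code):

    package main

    import (
    	"crypto/md5"
    	"encoding/base64"
    	"encoding/hex"
    	"fmt"
    )

    func main() {
    	sum := md5.Sum([]byte("example"))
    	fmt.Println(base64.StdEncoding.EncodeToString(sum[:])) // what was being returned
    	fmt.Println(hex.EncodeToString(sum[:]))                // what the rest of rclone expects
    }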
Before this change attempting NewObject on a SAS URL's root would
crash the Azure SDK.
This change detects that situation using the code from this previous fix
f7404f52e7 azureblob: fix crash when listing outside a SAS URL's root - fixes #4851
and returns object not found instead.
It also prevents things being uploaded to the root of the SAS URL
which also crashes the Azure SDK.
Before this fix if a file was updated, but to the same length and
timestamp then the local backend would return the wrong (cached)
hashes for the object.
This happens regularly on a crypted local disk mount when the VFS
thinks files have been changed but actually their contents are
identical to that written previously. This is because when files are
uploaded their nonce changes so the contents of the file changes but
the timestamp and size remain the same because the file didn't
actually change.
This causes errors like this:
ERROR: file: Failed to copy: corrupted on transfer: md5 crypted
hash differ "X" vs "Y"
This turned out to be because the local backend wasn't clearing its
cache of hashes when the file was updated.
This fix clears the hash cache for Update and Remove.
It also puts a src and destination in the crypt message to make future
debugging easier.
Fixes #4031
Currently the B2 docs don't specify which format the download_url
setting should have, and if you input it wrong, there is nothing
in the verbose logs or anywhere else that can let you know that.
* Wasabi starts to provide AP Northeast 2 (Osaka) endpoint, so add it to the list
* Rename ap-northeast-1 as "AP Northeast 1 (Tokyo)" from "AP Northeast"
Signed-off-by: lindwurm <lindwurm.q@gmail.com>
After speed testing it was discovered that upload speed goes up pretty
much linearly with upload concurrency. This patch changes the default
from 4 to 16 which means that rclone will use 16 * 4M = 64M per
transfer which is OK even for low memory devices.
This adds a note that performance may be increased by increasing
upload concurrency.
See: https://forum.rclone.org/t/performance-of-rclone-vs-azcopy/27437/9
Previously only the fs being checked on gets passed to
GetModifyWindow(). However, in most tests, the test files are
generated in the local fs and transferred to the remote fs. So the
local fs time precision has to be taken into account.
This meant that on Windows the time tests failed because the
local fs has a time precision of 100ns. Checking remote items uploaded
from local fs on Windows also requires a modify window of 100ns.
This is possible now that we no longer support go1.12 and brings
rclone into line with standard practices in the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
This removes the checks against the provider throughout the code and
puts them into a single setQuirks function for easy maintenance when
adding a new provider.
It also updates the quirks with the results of testing against
backends we have access to.
This also adds a list_url_encode parameter so that quirk can be
manually set.
This implements a quirks system for providers and notes which
providers we have tested to support ListObjectsV2.
For those providers which don't support ListObjectsV2 we use the
original ListObjects call.
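A minimal sketch of the shape of such a setQuirks function. The provider names and quirk values here are illustrative only (except Alibaba's multipart ETag difference, which is described above); "SomeLegacyProvider" is hypothetical.

    package main

    import "fmt"

    // quirks collects per-provider differences in one place instead of
    // scattering provider checks through the code.
    type quirks struct {
    	listObjectsV2    bool // provider supports ListObjectsV2
    	listURLEncode    bool // provider supports URL encoding of listings
    	useMultipartEtag bool // provider's multipart ETags match the AWS scheme
    }

    func setQuirks(provider string) quirks {
    	q := quirks{listObjectsV2: true, listURLEncode: true, useMultipartEtag: true}
    	switch provider {
    	case "Alibaba":
    		q.useMultipartEtag = false // calculates multipart ETags differently to AWS
    	case "SomeLegacyProvider": // hypothetical example
    		q.listObjectsV2 = false // fall back to the original ListObjects call
    	}
    	return q
    }

    func main() {
    	fmt.Printf("%+v\n", setQuirks("Alibaba"))
    }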
The API doesn't seem to accept a value of "0" any more for the root
directory ID, giving the error "Could not decode folder id".
However omitting it seems to work fine.
In this commit, released in 1.56.0 we started reading the size of the
object from the Content-Length header as returned by the GET request
to read the object.
4401d180aa s3: add --s3-no-head-object
However some object storage systems, notably Ceph, don't return a
Content-Length header.
The new code correctly calls the setMetaData function with a nil
pointer to the ContentLength.
However due to this commit from 2014, released in v1.18, the
setMetaData function was not ignoring the size as it should have done.
0da6f24221 s3: use official github.com/aws/aws-sdk-go including multipart upload #101
This commit correctly ignores the content length if not set.
Fixes #5732
Before this change the `shared_credentials_file` config option was
being ignored.
The correct value is passed into the SDK but it only sets the
credentials in the default provider. Unfortunately we wipe the default
provider in order to install our own chain if env_auth is true.
This patch restores the shared credentials file in the session
options, exactly the same as how we restore the profile.
Original fix:
1605f9e14d s3: Fix shared_credentials_file auth
This patch reverts this commit
1605f9e14d s3: Fix shared_credentials_file auth
It unfortunately had the side effect of making the s3 SDK ignore the
config in our custom chain and use the default provider. This means
that advanced auth was being ignored such as --s3-profile with
role_arn.
Fixes #5468
Fixes #5762
The API has changed in the directory move call JSON response from
returning a TaskID as a string to returning it as an integer. In other
places it is still returned as a string though.
This patch allows the TaskID to be an integer or a string in the JSON
response and keeps it internally as a string like before.
Before this change the backoff for the error_background error was 6
seconds. This means that if it wasn't resolved in 60 seconds (with the
default 10 low level retries) then an error was reported.
This error was being reported frequently in the integration tests, so
is likely affecting real users too.
This patch changes the backoff into an exponential backoff
1,2,4,8...1024 seconds to make sure we wait long enough for the
background operation to complete.
See #5734
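A minimal sketch of the backoff schedule described above (the function name is illustrative):

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryAfter returns 1s, 2s, 4s ... capped at 1024s for successive retries.
    func retryAfter(try int) time.Duration {
    	d := time.Second << uint(try)
    	if max := 1024 * time.Second; d > max || d <= 0 {
    		d = max
    	}
    	return d
    }

    func main() {
    	for try := 0; try <= 10; try++ {
    		fmt.Printf("retry %d: wait %v\n", try, retryAfter(try))
    	}
    }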
- setup correct path encoding (fixes backend test FsEncoding)
- ignore range option if file is empty (fixes VFS test TestFileReadAtZeroLength)
- cleanup stray files left after failed upload (fixes test FsPutError)
- rebase code on master, adapt backend for rclone context passing
- translate Siad errors to rclone native FS errors in sia errorHandler
- TestSia: return proper backend options from the script
- TestSia: use up-to-date AntFarm image, nebulouslabs/siaantfarm is stale
In
05f128868f azureblob: add --azureblob-no-head-object
we incorrectly parsed the size of the object from the Content-Length
header of the response. This is incorrect in the presence of Range
requests.
This fixes the problem by parsing the Content-Range header, if
available, to read the correct length when a Range request was
issued.
See: #5734
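A minimal sketch, not the backend's actual code, of recovering the full object size from a Content-Range header ("bytes 0-1023/4096") when a Range request was made, falling back to Content-Length otherwise:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // totalSize prefers the total from Content-Range (set on Range responses)
    // and falls back to Content-Length otherwise.
    func totalSize(contentRange string, contentLength int64) (int64, error) {
    	if contentRange == "" {
    		return contentLength, nil
    	}
    	slash := strings.LastIndex(contentRange, "/")
    	if slash < 0 || contentRange[slash+1:] == "*" {
    		return 0, fmt.Errorf("can't parse Content-Range %q", contentRange)
    	}
    	return strconv.ParseInt(contentRange[slash+1:], 10, 64)
    }

    func main() {
    	size, err := totalSize("bytes 0-1023/4096", 1024)
    	fmt.Println(size, err) // 4096 <nil>
    }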
This reverts commit
dc06973796 Revert "s3: use rclone's low level retries instead of AWS SDK to fix listing retries"
Which in turn reverted
5470d34740 "backend/s3: use low-level-retries as the number of SDK retries"
So we are back where we started.
It then modifies it to set the AWS SDK to `--low-level-retries`
retries, but sets the rclone retries to 2 so that directory listings
can be retried.
Before this change the cleanup routine exited on the first deletion
error.
This change counts any errors on deletion and exits when the iteration
is complete with an error showing the number of deletion failures.
Deletion failures will be logged.
Before this change we used limit/offset paging for directories in the
main directory listing routine and in the trash cleanup listing.
This switches to the new scheme of limit/marker which is more reliable
on a directory which is continuously changing. It has the disadvantage
that it doesn't tell us the total number of items available, however
that wasn't information rclone uses.
This changes the interface to NewObject so that if NewObject is called
on a directory then it should return fs.ErrorIsDir if possible without
doing any extra work, otherwise fs.ErrorObjectNotFound.
Tested on integration test server with:
go run integration-test.go -tests backend -run TestIntegration/FsMkdir/FsPutFiles/FsNewObjectDir -branch fix-stat -maxtries 1
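A minimal sketch of the contract (the error variables stand in for fs.ErrorIsDir and fs.ErrorObjectNotFound; the listing is illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // Stand-ins for fs.ErrorIsDir and fs.ErrorObjectNotFound.
    var (
    	errIsDir          = errors.New("is a directory not a file")
    	errObjectNotFound = errors.New("object not found")
    )

    type entry struct {
    	name  string
    	isDir bool
    }

    // newObject shows the contract: report a directory with errIsDir without
    // doing any extra work, otherwise errObjectNotFound.
    func newObject(entries []entry, remote string) (string, error) {
    	for _, e := range entries {
    		if e.name != remote {
    			continue
    		}
    		if e.isDir {
    			return "", errIsDir
    		}
    		return e.name, nil
    	}
    	return "", errObjectNotFound
    }

    func main() {
    	entries := []entry{{"dir", true}, {"file.txt", false}}
    	if _, err := newObject(entries, "dir"); err != nil {
    		fmt.Println(err) // is a directory not a file
    	}
    }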
The egress charges when using a CloudFront CDN URL are cheaper than
when accessing the file directly from S3. So this adds a download URL
advanced option which, when set, downloads the file using it.
Before this patch the md5all option would skip creating metadata with
the hashsum if the base filesystem provided md5, in the hope of passing
it through. However, if the base hash is slow (for example on the local
fs), chunker passed the slow md5 but never reported this fact in its
features.
This patch makes chunker snapshot the base hashsum in metadata when
md5all is set and the base hashsum is slow, since chunker was intended
to provide only instant hashsums from the start.
Fixes #5508
Before this change, when uploading to a crypt, the ObjectInfo
accidentally used the encrypted size, not the unencrypted size when
--crypt-no-data-encryption was set.
Fixes #5498
In the presence of no_data_encryption, Crypt's Put method used to
over-optimize and return the base object. This patch makes it return a
Crypt-wrapped object instead.
Fixes #5498
I discovered that `rclone` always uploads in chunks of 16MiB whenever
uploading a file smaller than `--drive-upload-cutoff`. This is
undesirable since the purpose of the flag `--drive-upload-cutoff` is
to *prevent* chunking below a certain file size.
I realized that it wasn't `rclone` forcing the 16MiB chunks. The
`google-api-go-client` forces a chunk size default of
[`googleapi.DefaultUploadChunkSize`](32bf29c2e1/googleapi/googleapi.go (L55-L57))
bytes for resumable type uploads. This means that all requests that
use `*drive.Service` directly for upload without specifying a
`googleapi.ChunkSize` will be forced to use a *`resumable`*
`uploadType` (rather than `multipart`) for files less than
`googleapi.DefaultUploadChunkSize`. This is also noted directly in the
Drive API client documentation [here](https://pkg.go.dev/google.golang.org/api/drive/v3@v0.44.0#FilesUpdateCall.Media).
This fixes the problem by passing `googleapi.ChunkSize(0)` to
`Media()` method calls, which is the only way to disable chunking
completely. This is mentioned in the API docs
[here](https://pkg.go.dev/google.golang.org/api/googleapi@v0.44.0#ChunkSize).
The other alternative would be to pass
`googleapi.ChunkSize(f.opt.ChunkSize)` -- however, I'm *strongly* in
favor of *not* doing this for performance reasons. By not explicitly
passing a `googleapi.ChunkSize(0)`, we effectively allow
[`PrepareUpload()`](https://pkg.go.dev/google.golang.org/api/internal/gensupport@v0.44.0#PrepareUpload)
to create a
[`NewMediaBuffer`](https://pkg.go.dev/google.golang.org/api/internal/gensupport@v0.44.0#NewMediaBuffer)
that copies the original `io.Reader` passed to `Media()` in order to
check that its size is less than `ChunkSize`, which will unnecessarily
consume time and memory.
`minChunkSize` is also changed to be `googleapi.MinUploadChunkSize`,
as it is specified by the library and is something we have no control over.
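A sketch of the kind of call this changes, assuming the google.golang.org/api/drive/v3 client; the rclone plumbing, auth and error handling are omitted and an authorised *http.Client is assumed:

    package main

    import (
    	"context"
    	"log"
    	"net/http"
    	"os"

    	drive "google.golang.org/api/drive/v3"
    	"google.golang.org/api/googleapi"
    	"google.golang.org/api/option"
    )

    func upload(ctx context.Context, client *http.Client, name, path string) error {
    	srv, err := drive.NewService(ctx, option.WithHTTPClient(client))
    	if err != nil {
    		return err
    	}
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	// googleapi.ChunkSize(0) disables the library's resumable chunking, so
    	// files below the cutoff go up as a single multipart request.
    	_, err = srv.Files.Create(&drive.File{Name: name}).
    		Media(f, googleapi.ChunkSize(0)).
    		Context(ctx).
    		Do()
    	return err
    }

    func main() {
    	// Authentication is omitted; this would fail without an authorised client.
    	if err := upload(context.Background(), http.DefaultClient, "example.txt", "example.txt"); err != nil {
    		log.Fatal(err)
    	}
    }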
Google Drive API allows for clauses like "modifiedTime > '2012-06-04T12:00:00'"
in the query param, so the filter flags --max-age and --min-age can be applied
directly at the directory listing phase rather than in a filter.
This is extremely helpful when we want to do an incremental backup of a remote
drive with many files but the number of recently changed files is small.
Co-authored-by: fotile96 <fotile96@users.noreply.github.com>
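A minimal sketch of the kind of query clause this enables (field names follow the Drive v3 query syntax; the rclone filter plumbing is omitted):

    package main

    import (
    	"fmt"
    	"time"
    )

    // listQuery pushes max-age / min-age style filters into the Drive listing
    // query itself rather than filtering afterwards.
    func listQuery(minAge, maxAge time.Duration) string {
    	q := "trashed=false"
    	now := time.Now().UTC()
    	if maxAge > 0 {
    		q += fmt.Sprintf(" and modifiedTime > '%s'", now.Add(-maxAge).Format(time.RFC3339))
    	}
    	if minAge > 0 {
    		q += fmt.Sprintf(" and modifiedTime <= '%s'", now.Add(-minAge).Format(time.RFC3339))
    	}
    	return q
    }

    func main() {
    	fmt.Println(listQuery(0, 24*time.Hour)) // only files modified in the last day
    }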
This replaces built-in os.MkdirAll with a patched version that stops the recursion
when reaching the volume part of the path. The original version would continue recursion,
and for extended length paths end up with \\? as the top-level directory, and the error
message would then be something like:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
Before this change the union's feature flags were a strict AND of the
underlying remotes. This means that a union of a local disk (which can
Move but not Copy) and a bucket based remote (which can Copy but not
Move) could neither Move nor Copy.
This fix advertises Move in the union if all the remotes can Move or
Copy. It also implements Move as Copy+Delete (like rclone does
normally) if the underlying union does not support Move.
This enables renames to work with unions of local disk and bucket
based remotes as expected.
Fixes #5632
This was started in
3626f10f26 pcloud: add sha256 support - fixes #5496
But this support turned out to be incomplete and caused the
integration tests to fail.
After updating rclone's dependencies these tests started failing on
windows/386
- TestInternalDoubleWrittenContentMatches
- TestInternalMaxChunkSizeRespected
The failures look like this. The root cause is unknown. The `Wait(n=1)
would exceed context deadline` errors come from golang.org/x/time/rate
but it isn't clear what is calling them.
2021/08/20 21:57:16 ERROR : worker-0 <one>: object open failed 0: rate: Wait(n=1) would exceed context deadline
[snip ~10 duplicates]
2021/08/20 21:57:56 ERROR : tidwcm1629496636/one: (0/26) error (chunk not found 0) response
2021/08/20 21:58:02 ERROR : worker-0 <one>: object open failed 0: rate: Wait(n=1) would exceed context deadline
--- FAIL: TestInternalDoubleWrittenContentMatches (45.77s)
cache_internal_test.go:310:
Error Trace: cache_internal_test.go:310
Error: Not equal:
expected: "one content updated double"
actual : ""
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-one content updated double
+
Test: TestInternalDoubleWrittenContentMatches
2021/08/20 21:58:03 original size: 23592960
2021/08/20 21:58:03 updated size: 12
In this commit the config system was re-arranged
94dbfa4ea fs: change Config callback into state based callback #3455
This passed the password as a temporary config parameter but forgot to
reveal it in the API call.
At some point some google docs files started having sizes returned in
their listing information.
This then caused rclone to treat the docs as files which caused
downloads to fail.
The API docs now state that google docs may have sizes (whereas I'm
pretty sure it didn't earlier).
This fix removes the check for size, so google docs are identified
solely by not having an MD5 checksum.
This change fixes the bug described below:
if a file is removed while the local backend List() runs,
the call will flag an accounting error.
The bug manifests itself if the local backend is the Sync target,
due to intrinsic concurrency.
The odds of hitting this bug depend on --checkers and --transfers.
Chunker over local backend is affected even more because
updating a composite object with a smaller size content
translates into removing chunks on the underlying file system
and involves a number of List() calls.
- Unify all hash names as lowercase alphanumerics without punctuation.
- Legacy names continue to work but disappear from docs, they can be deprecated or dropped later.
- Make rclone hashsum print supported hash list in case of wrong spelling.
- Update documentation.
Fixes #5071
Fixes #4841
Before this change, rclone would always check the root to see if it
was an object.
This change doesn't check to see if the root is an object if the path
ends with a /
This avoids a transaction where rclone HEADs the path to see if it
exists.
See #4990
macOS stores files in NFD form and transferring them like this to some
systems causes the Korean language to display incorrectly.
This adds the flag --local-unicode-normalization to optionally
normalize the file names to NFC.
This also removes the (long deprecated) --local-no-unicode-normalization flag
See: https://forum.rclone.org/t/support-for-korean-jaso-conversion/19435
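A minimal sketch of the NFC normalization the new flag enables, using golang.org/x/text/unicode/norm (not the backend's actual code path):

    package main

    import (
    	"fmt"

    	"golang.org/x/text/unicode/norm"
    )

    func main() {
    	name := norm.NFD.String("한글.txt")          // decomposed form, as macOS stores names
    	fmt.Println(norm.NFC.IsNormalString(name)) // false - not yet NFC
    	fmt.Println(norm.NFC.String(name))         // the NFC form the new flag produces
    }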
This is a very large change which turns the post Config function in
backends into a state based call and response system so that
alternative user interfaces can be added.
The existing config logic has been converted, but it is quite
complicated and follow-up commits will likely be needed to fix it!
Follow up commits will add a command line and API based way of using
this configuration system.
It was discovered that on some Android systems the stat size of a
symlink is different to the size that readlink returns.
This was giving errors like this
transport connection broken: http: ContentLength=30 with Body length 28
There are enough cases where the readlink size differs from the stat
size that this patch now always does readlink to work out the size of
a symlink.
Since symlinks are relatively uncommon this shouldn't affect
performance too much and will mean that the size is always correct.
This deprecates the --local-zero-size-links flag which is now
effectively always enabled.
See: https://forum.rclone.org/t/problem-with-symlinks-and-links/23840/
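A minimal sketch of the two ways of measuring a symlink; on most unix systems they agree, but on the affected systems they don't, which is why the readlink length is now always used:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir, err := os.MkdirTemp("", "linksize")
    	if err != nil {
    		panic(err)
    	}
    	defer os.RemoveAll(dir)

    	link := filepath.Join(dir, "link")
    	if err := os.Symlink("target-path", link); err != nil {
    		fmt.Println("symlinks not supported here:", err)
    		return
    	}

    	fi, _ := os.Lstat(link)        // the stat size, wrong on some Android systems
    	target, _ := os.Readlink(link) // the link text, whose length is now used instead
    	fmt.Println("stat size:", fi.Size(), "readlink size:", len(target))
    }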
Includes adding support for additional size input suffix Mi and MiB, treated equivalent to M.
Extends binary suffix output with letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
v1.4.6 of uplink allows us to do a negative offset from the end of the
file. This removes a round trip when requesting the last N bytes of a
file.
Previous to v1.4.6 of uplink it wasn't possible to do a negative offset
on download. This meant that to fulfill the semantics of http range
headers it was necessary to first fetch the size of the object via a
stat call and compute absolute offset and length.
Restructuring of the config code in v1.55 resulted in the config
file being loaded early at process startup. If the configuration
file is encrypted this means the user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for a password.
Fixes #5236
Fixes #5228
Including the bucket name as part of the `fileNamePrefix` passed to
`b2_get_download_authorization` results in a link valid for objects that
have the bucket name as part of the object path; e.g.,
rclone link :b2:some-bucket/some-file
would result in a public link valid for the object
`some-bucket/some-file` in the `some-bucket` bucket (in rclone-remote
parlance, `:b2:some-bucket/some-bucket/some-file`). This will almost
certainly result in a broken link.
The B2 docs don't explicitly specify this behavior, but the example
given for `fileNamePrefix` provides some clarification.
See https://www.backblaze.com/b2/docs/b2_get_download_authorization.html.
This code removes the code added in
15d19131bd s3: use aws web identity role provider
This code no longer works because it doesn't initialise the
tokenFetcher - leading to a nil pointer crash.
The proper way to initialise this is with the
NewWebIdentityCredentials but it isn't clear where to get the other
parameters: roleARN, roleSessionName, path.
In the linked issue a user reports rclone working with EKS anyway, so
perhaps this code is no longer needed.
If it is needed, hopefully someone who knows AWS better will come
along and fix it!
See: https://forum.rclone.org/t/add-support-for-aws-sso/23569
Between rclone v1.54 and v1.55 there was an approx 3x performance
regression when transferring to distant SFTP servers (in particular
rsync.net).
This turned out to be due to the library github.com/pkg/sftp rclone
uses. Concurrent writes used to be enabled in this library by default
(for v1.12.0 as used in rclone v1.54) but they are no longer enabled
(for v1.13.0 as used in rclone v1.55) for safety reasons and it is
necessary to enable them specifically.
The safety concerns are due to the uncertainty as to whether writes
come in order and whether a half completed file might have holes in
it. This isn't a problem for rclone since a) it doesn't restart
uploads and b) it has a post-transfer checksum test.
This change introduces a new flag `--sftp-disable-concurrent-writes`
to control the feature which defaults to false, meaning that
concurrent writes are enabled as in v1.54.
However this isn't quite enough to fix the problem as the sftp library
needs to be able to sniff the size of the stream from the reader
passed in, so this also adds a `Size` interface to the reader to
enable this. This involved a patch to the library.
The library was reverted to v1.12.0 for v1.55.1 - this patch installs
v1.13.0+master to fix the Size interface problem.
See: https://github.com/pkg/sftp/issues/426
Before this change, rclone checked to see if an object existed before
doing an upload by listing the destination directory. This was very
inefficient, especially with large directories.
After this change rclone uses the pre upload check API call which
checks to see if it is OK to upload an object, and also returns the ID
of an existing object which saves rclone having to do a directory
listing.
OneDrive randomly returns the error message: "InvalidAuthenticationToken: Unable to initialize RPS". These unexpected errors typically caused the entire rclone command to fail.
This workaround recognizes these errors and marks them for a low level retry, which mostly succeeds. This will make rclone commands complete without being noticeably affected.
Fixes: #5270
With the file version format standardized in lib/version, `crypt` can
now treat the version strings separately from the encrypted/decrypted
file names. This allows --b2-versions to work with `crypt`.
Fixes #1627
Co-authored-by: Luc Ritchie <luc.ritchie@gmail.com>
Before this change rclone would auth over https even when the server
was configured with http.
Authing over http obviously isn't ideal, however this type of server
is on-premise and doesn't work over https.
PR #4266 modified ftpConnection to make the ftp library use
a custom dial function which is QoS aware and takes care of TLS.
However the ServerConn.Login function from the ftp library also needs
the TLS config passed explicitly as a trigger for sending the PBSZ and
PROT options to the FTP server. This was not taken care of, resulting
in failure to connect via FTP with implicit TLS.
This PR fixes that.
Fixes #5210
In
a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)
Idle SFTP connections were closed after 1 minute. However due to the
way SSH multiplexes connections over a single SSH connection this
meant that if uploads or downloads went on for more than one minute
they failed with "EOF errors" as their underlying connection was
closed.
This fixes the problem by not clearing idle connections if there are
any transfers in progress.
Fixes #5197
This reverts the library update done in this commit.
713f8f357d sftp: fix "file not found" errors for read once servers
Reverting this commit triples the performance to a far away sftp server.
See: https://github.com/pkg/sftp/issues/426
Before this change when the context was cancelled (due to
--max-duration for example) this could deadlock when uploading
multipart uploads.
This change fixes the problem by introducing another go routine to
monitor the context and close the pipe with an error when the context
errors.
When reading files from B2 via cloudflare using --b2-download-url
cloudflare strips the Content-Length headers (presumably so it can
inject stuff into the body).
This caused rclone to think the file was corrupted as the length
didn't match.
The patch uses the old length read from the listing if there is no
Content-Length.
See: https://forum.rclone.org/t/b2-cloudflare-error-directory-not-found/23026
This commit broke the initialisation of the union backend
f17d7c0012 union: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996
This patch fixes it.
Box recently changed their API, changing the case of returned API items
> On May 10th, 2021, as part of our continued infrastructure upgrade,
> Box's API response headers will standardize to return in a case
> insensitive manner, in line with industry best practices and our API
> documentation. Applications that are using these headers, such as
> "location" and "retry-after", will need to verify that their
> applications are checking for these headers in a case-insensitive
> fashion.
Rclone was reading the raw headers from the `http.Header` and not
using the `Get` accessor method which meant that it was sensitive to
case changes.
This fixes the problem by using the `Get` accessor method.
See: https://forum.rclone.org/t/box-backend-incompatible-with-box-api-changes-being-deployed/22972
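A minimal sketch of the difference: a raw map lookup on http.Header is case-sensitive, while the Get accessor canonicalises the key:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	h := http.Header{}
    	h.Set("Retry-After", "10") // stored under the canonical key "Retry-After"

    	fmt.Println(h["retry-after"])     // [] - raw map lookup is case-sensitive
    	fmt.Println(h.Get("retry-after")) // 10 - Get canonicalises the lookup key
    }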
If you exceed rate limits, dropbox tells you to wait for 300 seconds -
this is rather a long time for the user to be waiting for rclone to
finish, so emit a NOTICE level log instead of a DEBUG.
Some sftp servers don't allow the user to access the file after upload.
In this case the error message indicates that using
--sftp-set-modtime=false would fix the problem. However it doesn't
because SetModTime does a stat call which can't be disabled.
Update SetModTime failed: SetModTime stat failed: object not found
After upload this patch checks for an `object not found` error if
set_modtime == false and ignores it, returning the expected size of
the object instead.
It also makes SetModTime do nothing if set_modtime = false
https://forum.rclone.org/t/sftp-update-setmodtime-failed/22873
These were added by accident in
d9959b0271 drive: pass context on to drive SDK - this will help with cancellation
Which added lots of new Context() calls but duplicated some existing
ones.
Before this we just failed if the ftp connection or login failed.
This change adds a pacer just for the ftp connect and retries if the
connection failed to Dial or the login returns a 421 error.
This is implemented as a state machine parser so it can emit sensible
error messages.
It does not use the connection strings elsewhere in rclone yet - see
subsequent commits.
An optional fuzzer is implemented for the Parse function.
Before this change, when using an all create method with one of the
upstreams being read only, if there was an existing file on the read
only remote, it was impossible to update it.
This change detects that situation and creates the file on a
read/write upstream. This file will shadow the file on the read/only
upstream. If it is deleted the read only upstream file will be visible
again.
Fixes #4929
Before this fix using the epff policy could double close a channel.
The fix refactors the code to make that impossible and cancels any
running queries when the first query is found.
Before this change the config file needed to be explicitly reloaded.
This coupled the config file implementation with the backends
needlessly.
This change stats the config file to see if it needs to be reloaded on
every config file operation.
This allows us to remove calls to
- config.SaveConfig
- config.GetFresh
Which now makes the only needed interface to the config file be
that provided by configmap.Map when rclone is not being configured.
This also adds tests for configfile
It introduces a new flag --sftp-disable-concurrent-reads to stop the
problematic behaviour in the SFTP library for read-once servers.
This upgrades the sftp library to v1.13.0 which has the fix.
This change checks the context whenever rclone might retry, and
doesn't retry if the current context has an error.
This fixes the pathological behaviour of `--max-duration` refusing to
exit because all the context deadline exceeded errors were being
retried.
This unfortunately meant changing the shouldRetry logic in every
backend and doing a lot of context propagation.
See: https://forum.rclone.org/t/add-flag-to-exit-immediately-when-max-duration-reached/22723
This change makes dedupe recursively count elements in same-named directories
and make the largest one primary. This allows minimizing the amount of data
moved (or at least the number of API calls) when dedupe merges them.
It also adds a new fs.Object interface `ParentIDer` with function `ParentID` and
implements it for the drive and opendrive backends. This function returns
parent directory ID for objects on filesystems that allow same-named dirs.
We use it to correctly count sizes of same-named directories.
Fixes #2568
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
If you are using rclone as a library you can decide to use the rclone
config file system or not by calling
configfile.LoadConfig(ctx)
If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.
Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
This fixes the polling implementation for Dropbox, particularly
when using a scoped app. This also adds a lower end check for the
timeout, as I forgot to include that in the original implementation.
In this commit
fc5b14b620 s3: Added `--s3-disable-http2` to disable http/2
We created our own transport so we could disable http/2. However the
added function is called twice meaning that we create two HTTP
transports. This didn't happen with the original code because the
default transport is cached by fshttp.
Rclone normally does a PUT followed by a HEAD request to check an
upload has been successful.
With the two transports, the PUT and the HEAD were being done on
different HTTP transports. This means that it wasn't re-using the same
HTTP connection, so the HEAD request showed the previous object value.
This caused rclone to declare the upload was corrupted, delete the
object and try again.
This patch makes sure we only create one transport and use it for both
PUT and HEAD requests which fixes the problem with Wasabi.
See: https://forum.rclone.org/t/each-time-rclone-is-run-1-3-fails-2-3-succeeds/22545
Some storage providers e.g. S3 don't have an efficient rename operation.
Before this change, when chunker finished an upload, the server-side copy
and delete operations that renamed temporary chunks to their final names
could take a significant amount of time.
This PR records transaction identifier (versioning) in the metadata of
chunker composite objects striving to remove the need for rename
operations on such backends.
This approach will be triggered by the new "transactions" configuration
option, which can be "rename" (the default) or "norename".
We implement the new approach for uploads (Put operations).
The chunker Move operation still uses the rename operation of
underlying backend. Filling this gap is left for a later PR.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Before this change, if folder level access permissions policy was in
use, with trailing `/` marking the folders then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.
Failed to create file system for "s3:bucket/path/": Forbidden: Forbidden
status code: 403, request id: XXXX, host id:
Previous to this change
53aa03cc44 s3: complete sse-c implementation
rclone would assume any errors when HEAD-ing the object implied it
didn't exist and this test would not fail.
This change reverts the functionality of the test to work as it did
before, meaning any errors on HEAD will make rclone assume the object
does not exist and the path is referring to a directory.
Fixes #4990
This implements polling support for the Dropbox backend. The Dropbox SDK dependency had to be updated due to an auth issue, which was fixed on Jan 12 2021. A secondary internal Dropbox service was created to handle unauthorized SDK requests, as is necessary when using the ListFolderLongpoll function/endpoint. The config variable was renamed to cfg to avoid potential conflicts with the imported config package.
Sharepoint 2016 returns status 204 to the purge request
even if the directory to purge does not really exist.
This change adds an extra check to detect this condition
and returns a proper error code.
The go-ntlmssp NTLM negotiator has to try various authentication methods.
Intermediate responses from Sharepoint have status code 401, only the
final one is different. When rclone runs a large operation in parallel
goroutines according to --checkers or --transfers, one of the threads can
receive intermediate 401 response targeted for another one and returns
the 401 authentication error to the user.
This patch fixes that.
On-premises Sharepoint returns HTTP errors 400 or 500 in
reply to attempts to use file names with special characters
like hash, percent, tilde, invalid UTF-7 and so on.
This patch activates transparent encoding of such characters.
As per Microsoft documentation, Windows authentication
(NTLM/Kerberos/Negotiate) is not supported with HTTP/2.
This patch disables transparent HTTP/2 support when the
vendor setting is "sharepoint-ntlm". Otherwise connections
to IIS/10.0 can fail with HTTP_1_1_REQUIRED.
Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
The most popular keyword for the Sharepoint in-house or company
installations is "On-Premises".
"Microsoft OneDrive account" is in fact just a Microsoft account.
Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
Add a new option "sharepoint-ntlm" for the vendor setting.
Use it when your hosted Sharepoint is not tied to the OneDrive
accounts and uses NTLM authentication.
Also add documentation and integration test.
Fixes: #2171
The S3 backend shared_credentials_file option wasn't working, neither from
the config option nor from the command line option. This was because
shared_credentials_file_provider works as part of the chain provider, but in
case the user hadn't specified access_token and access_key we had removed
(set to nil) the credentials field, which may contain actual credentials got
from the ChainProvider.
The AWS_SHARED_CREDENTIALS_FILE env variable worked, as far as I understood,
because the aws_sdk code handles it as one of the default auth options when
there are no configured credentials.
This change adds the scopes rclone wants during the oauth request.
Previously rclone left these blank to get a default set.
This allows rclone to add the "members.read" scope which is necessary
for "impersonate" to work, but only when it is in use as it require
authorisation from a Team Admin.
See: https://forum.rclone.org/t/dropbox-no-members-read/22223/3
Some virtual filesystems (such as Google Drive File Stream) may
incorrectly set the actual file size equal to the preallocated space,
causing checksum and file size checks to fail.
This flag can be used to disable preallocation for local backends of
this type.
Before this change, if an application key limited to a prefix was in
use, with trailing `/` marking the folders then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.
Failed to create file system for "b2:bucket/path/":
failed to HEAD for download: Unknown 401 (401 unknown)
This change assumes any errors on HEAD will make rclone assume the
object does not exist and the path is referring to a directory.
See: https://forum.rclone.org/t/b2-error-on-application-key-limited-to-a-prefix/22159/
Before this change, if --b2-chunk-size was raised above 200M then this
error would be produced:
b2: upload cutoff: 200M is less than chunk size 1G
This change automatically raises --b2-upload-cutoff to be the value
of --b2-chunk-size if it is below it, which stops this error being
generated.
Fixes #4475
Before this change, running
rclone backend copyid drive: ID file.txt
Failed with the error
command "copyid" failed: failed copying "ID" "file.txt": can't use empty string as a path
This fixes the problem.
This makes sure that partially uploaded large files are removed
unless the `--swift-leave-parts-on-error` flag is supplied.
- refactor swift.go
- add unit test for swift with chunk
- add unit test for large object with fail case
- add "-" to white list char during encode.
Assume the Stat size of links is zero (and read them instead)
On some virtual filesystems (such as LucidLink), reading a link size via a
Stat call always returns 0.
However, on unix it reads as the length of the text in the link. This may
cause errors like this when syncing:
Failed to copy: corrupted on transfer: sizes differ 0 vs 13
Setting this flag causes rclone to read the link and use that as the size of
the link instead of 0 which in most cases fixes the problem.
Fixes #4950
Signed-off-by: Riccardo Iaconelli <riccardo@kde.org>
This patch changes to using the default page limit for listing
unfinished multipart uploads rather than 1000. 1000 is the maximum
specified in the docs, but setting anything larger than 200 gives an
error.
- fix test case FsNewObjectCaseInsensitive (PR #4830)
- continue PR #4917, add comments in metadata detection code
- add warning about metadata detection in user documentation
- change metadata size limits, make room for future development
- hide critical chunker parameters from command line
Before this patch chunker required that there is at least one
data chunk to start checking for a composite object.
Now if chunker finds at least one potential temporary or control
chunk, it marks found files as a suspected composite object.
When later rclone tries a concrete operation on the object,
it performs postponed metadata read and decides: is this a native
composite object, incompatible metadata or just garbage.
This includes an HDFS docker image to use with the integration tests.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The current authentication scheme works without creating
a public download endpoint for a private bucket as in the B2 official blog.
On the contrary, if the existing authorization header gets duplicated
in the Cloudflare Workers script, one might receive 401 Unauthorized errors.
Before this change the webdav backend didn't truncate Range requests
to the size of the object. Most webdav providers are OK with this (it
is RFC compliant), but it causes 4shared to return 500 internal error.
Because Range requests are used in mounting, this meant that mounting
didn't work for 4shared.
This change truncates the Range request to the size of the object.
See: https://forum.rclone.org/t/cant-copy-use-files-on-webdav-mount-4shared-that-have-foreign-characters/21334/
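A minimal sketch of clamping a Range request to the object size, which is what this change does for providers like 4shared (names are illustrative):

    package main

    import "fmt"

    // clampRange truncates a requested byte range so it never extends past the
    // known object size.
    func clampRange(start, length, size int64) (int64, int64) {
    	if start >= size {
    		return 0, 0
    	}
    	if start+length > size {
    		length = size - start
    	}
    	return start, length
    }

    func main() {
    	fmt.Println(clampRange(1000, 5000, 2048)) // 1000 1048
    }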
Before this change, attempting to update an archive tier blob failed
with a 409 error message:
409 This operation is not permitted on an archived blob.
This change detects if we are overwriting a blob and either generates
the error (if `--azureblob-archive-tier-delete` is not set):
can't update archive tier blob without --azureblob-archive-tier-delete
Or deletes the blob first before uploading it again (if
`--azureblob-archive-tier-delete` is set).
Fixes #4819
Before this change if you attempted to list a remote set up with a SAS
URL outside its container then it would crash the Azure SDK.
A check is done to make sure the root is inside the container when
starting the backend which is usually enough, but when two SAS URL
based remotes are mounted in a union, the union backend attempts to
read paths outside the named container. This was causing a mysterious
crash in the Azure SDK.
This fixes the problem by checking to see if the container in the
listing is the one in the SAS URL before listing the directory and
returning directory not found if it isn't.
Before this change when NewObject was called the b2 backend would list
the directory that the object was in in order to find it.
Unfortunately list calls are Class C transactions and cost more.
This patch switches to using HEAD requests instead. These are Class B
transactions. It is then necessary to parse the headers from response
back into the data that we get from the listing. However B2 returns
exactly the same data, just in a different form.
Rclone will use the old directory listing method when looking for
files with versions as these can't be found via a HEAD request.
This change will particularly benefit --files-from, rclone serve
restic but most operations will see some benefit.
Starting September 30th, 2021, the Dropbox OAuth flow will no longer
return long-lived access tokens. It will instead return short-lived
access tokens, and optionally return refresh tokens.
This patch adds the token_access_type=offline parameter which causes
dropbox to return short lived tokens now.
Before this change rclone would upload the whole of multipart files
before receiving a message from dropbox that the path was too long.
This change hard codes the 255 rune limit and checks that before
uploading any files.
Fixes #4805
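A minimal sketch of the pre-upload check described above; the constant and function names are illustrative, not the backend's:

    package main

    import (
    	"fmt"
    	"unicode/utf8"
    )

    // maxPathLength mirrors the hard coded 255 rune limit described above.
    const maxPathLength = 255

    func checkPathLength(remote string) error {
    	if n := utf8.RuneCountInString(remote); n > maxPathLength {
    		return fmt.Errorf("%q: path too long (%d runes, max %d)", remote, n, maxPathLength)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkPathLength("dir/short.txt")) // <nil>
    }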
Before this change, rclone would retry files with filenames that were
too long again and again.
This change recognises the malformed_path error that is returned and
marks it not to be retried, which stops unnecessary retrying of the file.
See #4805
Before this change rclone was using the copy endpoint to copy large objects.
This can fail for large objects with this error:
Error 413: Copy spanning locations and/or storage classes could
not complete within 30 seconds. Please use the Rewrite method
This change makes Copy use the Rewrite method as suggested by the
error message which should be good for any size of copy.
Yandex appears to ignore mime types set as part of the PUT request or
as part of a PATCH request.
The docs make no mention of being able to set a mime type, so set
WriteMimeType=false indicating the backend can't set mime types on
uploaded files.
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
Before this change, small objects uploaded with SSE-AWS/SSE-C would
not have MD5 sums.
This change adds metadata for these objects in the same way that the
metadata is stored for multipart uploaded objects.
See: #1824 #2827
If rclone is configured for server side encryption - either aws:kms or
sse-c (but not sse-s3) then don't treat the ETags returned on objects
as MD5 hashes.
This fixes being able to upload small files.
Fixes #1824
This shouldn't be read as encouraging the use of math/rand instead of
crypto/rand in security sensitive contexts, rather as a safer default
if that does happen by accident.
For some reason the API started returning some integers as strings in
JSON. This is probably OK in Javascript but it upsets Go.
This is easily fixed with the `json:"name,size"` struct tag.
Before this change a circular symlink would cause rclone to error out from the listings.
After this change rclone will skip a circular symlink and carry on the listing,
producing an error at the end.
Fixes #4743
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
It seems that when doing chunked uploads to onedrive, if the chunks
take more than 3 minutes or so to upload then they may timeout with
error 504 Gateway Timeout.
This change produces an error (just once) suggesting lowering
`--onedrive-chunk-size` or decreasing `--transfers`.
This is easy to replicate with:
rclone copy -Pvv --bwlimit 0.05M 20M onedrive:20M
See: https://forum.rclone.org/t/default-onedrive-chunk-size-does-not-work/20010/
Minor wording change to help for explicit and implicit FTPS flags. More consistent between flags. Add 's' to request because only one 'client' mentioned.
As reported in
https://github.com/rclone/rclone/issues/4660#issuecomment-705502792
After switching to a password callback function, if the ssh connection
aborts and needs to be reconnected then the user is re-prompted for their
password. Instead we now remember the password they entered and just give
that back. We do lose the ability for them to correct mistakes, but that's
the situation from before switching to callbacks. We keep the benefits
of not asking for passwords until the SSH connection succeeds (right
known_hosts entry, for example).
This required a small refactor of how `f := &Fs{}` was built, so we can
store the saved password in the Fs object
Before this change rclone returned the size from the Stat call of the
link. On Windows this reads as 0 always, however on unix it reads as
the length of the text in the link. This caused errors like this when
syncing:
Failed to copy: corrupted on transfer: sizes differ 0 vs 13
This change causes Windows platforms to read the link and use that as
the size of the link instead of 0 which fixes the problem.
Based on Issue 4087
https://github.com/rclone/rclone/issues/4087
Current behaviour is insecure. If the user specifies this value then we
switch to validating the server hostkey and so can detect server changes
or MITM-type attacks.
This allows files to be copied by ID from google drive. These can be
copied to any rclone remote and if the remote is a google drive then
server side copy will be attempted.
Fixes #3625
This type of error is unlikely to be an error that can be resolved by a retry,
and is triggered in #2296 by files with a timestamp before the unix epoch.
The maximum value for the --s3-copy-cutoff should be 5GiB as tested
with AWS S3.
However b2 have implemented this as 5GB rather than 5GiB so having the
default at 5 GiB breaks the b2 to s3 server side copy of a large file
by default.
This patch sets the default to 4768 MiB which is slightly less than
5GB.
This should have very little effect on anything.
In future rclone could lower this limit more if Copy can multithread.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
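The arithmetic behind the chosen default, as a quick check:

    package main

    import "fmt"

    func main() {
    	const MiB = 1 << 20
    	fmt.Println(4768 * MiB)     // 4999610368 bytes - just under the 5 GB limit
    	fmt.Println(5 * 1024 * MiB) // 5368709120 bytes - 5 GiB, over the 5 GB limit
    	fmt.Println(int64(5e9))     // 5000000000 bytes - the 5 GB (decimal) limit
    }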
Before this change the s3 multipart server side copy was not
preserving the metadata of the object. This was most noticeable
because the modtime was not preserved.
This change fetches the metadata from the object before starting the
copy and overwrites it if required.
It will also mean any other metadata is preserved.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/70
Before this change, when the above backends created a new backend they
didn't put it into the backend cache.
This meant that rc commands acting on those backends did not work.
This was fixed by making sure the backends use the backend cache.
See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834
Before this change writing with the all policy deadlocked while
uploading.
This change fixes the problem by fixing the multi reader, closing the
pipes at the correct time with the correct error. This is factored
into a new function as it was used twice.
This patch also adds a new test which tests the all policies.
Before this fix we were reading the hash from the upload using the
string "ETag", however the go runtime normalises the tag into "Etag"
so we were in fact always reading an empty string.
This bug was introduced in
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
It was spotted by the integration tests.
The fix was just to use the canonical form "Etag" instead of "ETag".
In this commit
a2afa9aadd fs: Add directory to optional Purge interface
We failed to encrypt the directory name so the Purge failed.
This was spotted by the integration tests.
In this commit:
cbf3d43561 drive: fix missing items when listing using --fast-list / ListR
We introduced a bug where under specific circumstances it could cause
a "panic: send on closed channel".
This was caused by:
- rclone engaging the workaround from the commit above
- one of the listing routines returning an error
- this caused the `in` channel to be closed to stop the readers
- however the workaround was recycling stuff into the `in` channel at the time
- hence the panic on closed channel
This fix factors out the sending to the `in` channel into `sendJob`
and calls this both from the master go routine and the list
runners. `sendJob` detects the `in` channel being closed properly and
also deals correctly with contention on the `in` channel.
Fixes #4511
When using `rclone authorize` the hostname doesn't get set in the
config file.
This commit allows it to be set in the configurator and gives the user
a hint that it needs setting.
This reverts part of
151f03378f s3: fix upload of single files into buckets without create permission
This erroneously assumed that a HEAD request on a non existent object
would return "NotFound" if the bucket was found. In fact it returns
"NotFound" when the bucket isn't found also.
This will break the fix for #4297 - however that can be made to work
using the new --s3-assume-bucket-exists flag
Before this change, rclone was looking for the file without the
extension to see if it existed which meant that it never did.
This change checks the destination file exists first, before removing
the extension.
Google drive appears to no longer be copying the modification time of
google docs.
Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.
Fixes #4517
This was only working for files in the root directory and wasn't
looking at the encoding.
This is fixed to use NewObject which takes both things into account
and it makes the share by ID instead of by path.
This problem was spotted by the integration tests.
Before this change we errored out if one upstream errored in Purge or
About.
This change checks for fs.ErrorDirNotFound and skips that backend in
this case.
1. adds SharedOptions data structure to oauthutil
2. adds config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality
Fixes #2849
After uploading a multipart object, rclone deletes any unused parts.
Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing and calling Update
was deleting the parts for the current object.
This change fixes the detection and deletes all the old parts but none
of the new ones now.
Fixes #4075
Previous to this change rclone cached the looked up root_folder_id in
the root_folder_id config variable.
This has caused a lot of confusion and a few attempts at workarounds
and ultimately was a mistake.
This reverts rclone attempting to cache anything in root_folder_id and
returns that variable to be entirely user modified.
It gives a little hint in the debug that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.
Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup
It does not change root_folder_id itself, leaving this to the user.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
If this option is enabled, rclone will not set modtime of uploaded files and
the backend will return ModTimeNotSupported as its Precision.
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
rclone is copying to a CIFS mount where the user rclone is
running as does not own the file uploaded. If this option is enabled,
rclone will no longer update the modtime after copying a file.
See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.
See #4302
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:
failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)
After this fix we only download by ID if download_url is not set
See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
When we run MKCOL on 4shared on a directory that already exists, this
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.
The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.
However using 409 makes rclone's usual strategy for making directories
fail and return the 409 error.
This patch tries the MKCOL and if it returns an unrecognised error
code, then calls PROPFIND on the directory to discover whether the
directory really exists or not.
This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
Before this change the cache backend contained its own routines for
mounting and testing on that mount.
These tests are never run on the CI and cause a maintenance burden.
This commit removes the tests.
Previous to this fix if Region was not set and Endpoint was not set
then we set the endpoint to "https://s3.amazonaws.com/".
This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.
Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".
This fix removes that check.
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.
This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.
This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
Pcloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus
```
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
```
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
Previous to this a dangling shortcut would error the directory
listing.
This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing the drive IDs before comparing them.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access and trying to write will lead to errors like this:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
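The core of the idea is choosing between an unauthenticated HTTP client and the usual Application Default Credentials, roughly like this (a sketch, not the backend's actual code):
```
package gcssketch

import (
	"context"
	"net/http"

	"golang.org/x/oauth2/google"
)

// httpClient returns a plain unauthenticated client when anonymous is set
// (read access to public buckets only), otherwise one built from the
// Application Default Credentials as before.
func httpClient(ctx context.Context, anonymous bool) (*http.Client, error) {
	if anonymous {
		return &http.Client{}, nil
	}
	return google.DefaultClient(ctx, "https://www.googleapis.com/auth/devstorage.read_write")
}
```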
Before this change rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this and the openssh sftp tool
also uses absolute paths which is a good reason for switching to
absolute paths.
This change reads the current working directory at startup and bases
all file requests from there.
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
Before this change the --local-no-updated flag would not error if the
files changed in size during the transfer. The file could still be
read beyond the size advertised though which caused problems with
certain backends.
After this change we attempt to provide a consistent view of the file
once it has been opened.
Once the file has had stat() called on it for the first time we
- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file
This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.
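One way to picture the change is a reader bounded by the size seen at the first stat, something like this sketch (illustrative only):
```
package localsketch

import (
	"io"
	"os"
)

// openBounded opens the file and only exposes the size seen at the first
// stat, so a file that is still growing is transferred at the length
// rclone originally listed.
func openBounded(path string) (io.ReadCloser, int64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, 0, err
	}
	fi, err := f.Stat()
	if err != nil {
		_ = f.Close()
		return nil, 0, err
	}
	size := fi.Size()
	rc := struct {
		io.Reader
		io.Closer
	}{io.LimitReader(f, size), f}
	return rc, size, nil
}
```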
See: https://forum.rclone.org/t/transport-connection-broken/16494/21
In this commit
5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id
We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.
This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10
Before this change there was lots of duplicated code in all the
dircache using backends to support DirMove.
This change factors this code into the dircache library.
Dircache was changed to:
- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards
Backends were changed to:
- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root
This fixes several corner cases, for example removing a non-existent
directory if FindRoot hasn't been called.
Before this fix rclone would continually try to delete non empty
segment containers which made deleting lots of files very slow.
This fix makes rclone just try the delete once and then carry on which
was the original intent of the code before the retry logic got put in.
Before this change if the server sent us xml like this
```
<D:propstat>
<D:prop>
<g0:quota-available-bytes/>
<g0:quota-used-bytes/>
</D:prop>
<D:status>HTTP/1.1 404 Not Found</D:status>
</D:propstat>
```
Rclone would read the empty XML items as containing 0.
After this fix we make sure that we have a value before using it.
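The guard is essentially this (sketch, hypothetical helper name):
```
package webdavsketch

import "strconv"

// parseQuota converts a quota property into a byte count, but only if the
// server actually sent a value - empty elements like the 404 propstat above
// are reported as unknown rather than 0.
func parseQuota(value string) (bytes int64, ok bool) {
	if value == "" {
		return 0, false
	}
	n, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return 0, false
	}
	return n, true
}
```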
Before this fix rclone v1.51 and 1.52 would incorrectly use the cached
root_folder_id when the --drive-impersonate flag was in use. This
meant that rclone could be looking up the wrong directory ID with
unpredictable results - usually all files apparently being missing.
This fix makes rclone look up the root_folder_id always when using
--drive-impersonate. It does this by clearing the root_folder_id and
making a NOTICE message that it is ignoring the cached value.
It also stops rclone caching the root_folder_id when using
--drive-impersonate.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
Adding the expires parameter gives settings_error/not_authorized/.. errors.
The expires setting isn't in the documentation so this commit removes
it for now.
For SSH authentication, `key_pem` should both override `key_file`
and not require other SSH authentication methods to be set.
Prior to this fix, rclone would attempt to use an ssh-agent
when `key_pem` was the only SSH authentication method set.
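A sketch of the intended precedence (illustrative, not the backend's actual code):
```
package sftpsketch

import (
	"errors"
	"os"

	"golang.org/x/crypto/ssh"
)

// signerFromConfig prefers an inline PEM key over key_file, and when either
// is present it is used directly rather than falling back to an ssh-agent.
func signerFromConfig(keyPem, keyFile string) (ssh.Signer, error) {
	var pemBytes []byte
	switch {
	case keyPem != "":
		pemBytes = []byte(keyPem)
	case keyFile != "":
		b, err := os.ReadFile(keyFile)
		if err != nil {
			return nil, err
		}
		pemBytes = b
	default:
		return nil, errors.New("no SSH key configured")
	}
	return ssh.ParsePrivateKey(pemBytes)
}
```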
Fixes#4240
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error
403 Forbidden: There were headers present in the request which were not signed
After this fix we set the headers in the object upload request itself
as the s3 SDK expects.
This means that we only support a limited range of headers
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-
Note that the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".
This now works for multipart uploads and single part uploads
See also #59
This provides two things:
* It gives Storj insight into which uplink clients are using the
network.
* It facilitates rclone participating in the Tardigrade Open Source
Partner Program https://tardigrade.io/partner/
* s3: add `max_upload_parts` support
This allows configuring the maximum number of chunks used to upload a file (see the sketch below):
- Support Scaleway which currently has a limit of 1k chunks
- Reduce cost on S3, where each request costs money, at the expense of memory used
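A rough sketch of the idea (illustrative only): grow the chunk size until the file fits within the allowed number of parts.
```
package s3sketch

// chunkSizeFor doubles the configured chunk size (which must be > 0) until
// a file of the given size fits in at most maxUploadParts parts, e.g. 1000
// for Scaleway or 10000 for AWS.
func chunkSizeFor(fileSize, chunkSize, maxUploadParts int64) int64 {
	for fileSize/chunkSize >= maxUploadParts {
		chunkSize *= 2
	}
	return chunkSize
}
```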
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This adds expire and unlink fields to the PublicLink interface.
This fixes up the affected backends and removes unlink parameters
where they are present.
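The interface now looks roughly like this (sketch; rclone itself uses fs.Duration for the expiry):
```
package fssketch

import (
	"context"
	"time"
)

// PublicLinker approximates the PublicLink interface after this change:
// expire asks for a link lifetime and unlink asks the backend to remove an
// existing link instead of creating one.
type PublicLinker interface {
	PublicLink(ctx context.Context, remote string, expire time.Duration, unlink bool) (string, error)
}
```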
This factors copy out of SetModTime and Copy so it can be called from
both places.
This also reworks all the multipart uploading to use sync.Errgroup and
memory pooling like the other backends. This makes it more memory
efficient and handle errors better.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/10
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave AccessDenied: Access
Denied error when it tried to create the bucket.
This was masked until e2bf91452a was
fixed.
This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.
Fixes#4297
This is caused by a bug in Google drive where, in some circumstances
querying for "(A in parents) or (B in parents)" returns nothing
whereas querying for "A in parents" and "B in parents" separately
works fine.
This has been reported here:
https://issuetracker.google.com/issues/149522397
This workaround detects this condition by seeing if a listing for more
than one directory at once returns nothing.
If it does then it retries each one individually.
This can potentially have a false positive if the user has multiple
empty directories which are queried at once. The consequence of this
will be that ListR is disabled for a while until the directories are
found to be actually empty in which case it will be re-enabled.
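The shape of the workaround is roughly this (a sketch; query stands in for the real listing call):
```
package drivesketch

import "context"

// listDirs lists several directory IDs with one combined query and, if that
// unexpectedly comes back empty, retries each directory on its own.
func listDirs(ctx context.Context, query func(ctx context.Context, dirIDs []string) ([]string, error), dirIDs []string) ([]string, error) {
	items, err := query(ctx, dirIDs)
	if err != nil || len(items) > 0 || len(dirIDs) <= 1 {
		return items, err
	}
	// Combined query returned nothing - possibly the Drive bug, so retry
	// one directory at a time.
	var all []string
	for _, id := range dirIDs {
		part, err := query(ctx, []string{id})
		if err != nil {
			return nil, err
		}
		all = append(all, part...)
	}
	return all, nil
}
```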
Fixes#3114 and Fixes#4289
This reverts commit 9e4b68a364.
This does not work as intended - it only changes docs files and to
make it change drive files would take an extra roundtrip.
I think the semantics of server side copy are now correct - additional
features should be added with a new flag.
See #4230
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.
Fixes#3206
Before this change we early exited the SetModTime call which means we
skipped reading the info about the file.
This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.
See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362
This commit changes the output of the rclone backend encode crypt: and
decode commands to output a plain list of decoded or encoded file
names.
This makes the command much more useful for command line scripting.
Enable fast list functions for union backend when:
- at least one of the upstreams supports fast list
- upstreams only consist of backends that support fast list and local backend.
Fixes#3000
When server side copying Google docs files we attempt to preserve the
description.
This patch makes it so that we use the default description if the
original description was empty.
See: 6fdd7149c1 (commitcomment-38008638)
Before this change, for some operations, eg rcat or copyto (of a file)
rclone would attempt to create the container when using a SAS URL
limited to a container.
After this change we assume the container does not need creating when
using a container SAS URL.
See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286
This also fixes a typo in the name of the function, and allows making
shortcuts from the root directory which are useful in cross drive
shortcut creation.
This also adds a basic suite of tests for creating, listing and removing
shortcuts.
This means that we can return ErrorNotAFile when there is an object
with the same name as a directory rather than potentially creating a
duplicate name.
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them.
After this fix we set the headers in the object upload request itself.
This means that we only support a limited range of headers
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-
Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
Before this change the local backend was returning file not found
errors for post transfer hashes for files which were moved. This was
caused by the routine which checks for the object being changed.
After this change we ignore file not found errors while checking to
see if the object has changed. If the hash has to be computed then a
file not found error will be thrown when it is opened, otherwise the
cached hash will be returned.
Before this change rclone would skip all shortcuts with a message
Ignoring unknown document type "application/vnd.google-apps.shortcut"
After this message rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.
The --drive-skip-shortcuts flag can be used to skip shortcuts.
Before this change the newObject* functions could return object=nil
with err=nil. The result of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.
After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. The one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean skip the
directory item.
This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
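The convention after the change, in sketch form (local stand-in types, errObjectNotFound playing the part of fs.ErrorObjectNotFound):
```
package backendsketch

import "errors"

// errObjectNotFound stands in for fs.ErrorObjectNotFound.
var errObjectNotFound = errors.New("object not found")

type item struct{ name string }
type object struct{ name string }

// newObjectFromItem always returns either a usable object or a non-nil
// error, never (nil, nil). Listing code treats errObjectNotFound as
// "skip this directory entry".
func newObjectFromItem(it *item) (*object, error) {
	if it == nil {
		return nil, errObjectNotFound
	}
	return &object{name: it.name}, nil
}
```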
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
According to Microsoft support this error can be caused by
> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.
See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
For a certain class of broken or missing image Google Photos puts an
image in the error message.
Before this fix we blindly chucked it into the error message.
After this fix we replace it with some sensible text.
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed which is too resource intensive.
However this causes backends which need the hash first before
uploading (eg s3/b2 when uploading chunked files) not to have a hash
of the file. This causes cryptcheck to complain about missing hashes
on large files uploaded via s3/b2.
This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than calculating the hash.
See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes: #2809
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of size other than ChunkSize can only happen when we have a huge file (over 10k * ChunkSize).
We need to have a bunch of identically sized huge files.
In such case most likely ChunkSize should be increased.
The mapping and its lock are replaced with a single initialised pool for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
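In outline the pool looks something like this (sketch only):
```
package poolsketch

import "sync"

// bufferPool keeps a single pool of ChunkSize buffers; requests for any
// other size just allocate and free their own buffer.
type bufferPool struct {
	chunkSize int
	pool      sync.Pool
}

func newBufferPool(chunkSize int) *bufferPool {
	bp := &bufferPool{chunkSize: chunkSize}
	bp.pool.New = func() interface{} { return make([]byte, chunkSize) }
	return bp
}

func (p *bufferPool) get(size int) []byte {
	if size == p.chunkSize {
		return p.pool.Get().([]byte)
	}
	return make([]byte, size) // odd size - not pooled
}

func (p *bufferPool) put(buf []byte) {
	if len(buf) == p.chunkSize {
		p.pool.Put(buf)
	}
}
```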
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away to be replaced by drive shortcuts we return an
error for now.
See #4013
Before this change we queried /me/drives for a list of the user's
drives and asked the user to choose. Sometimes this does not return
the user's main drive for reasons unknown.
After this change we query /me/drives first then /me/drive and add
that to the list of drives if it wasn't already there.
In 5470d34740 "backend/s3: use low-level-retries as the number
of SDK retries" we switched over to using the AWS SDK low level
retries instead of rclone's low level retry logic.
This had the unfortunate effect that retrying listings to correct XML
Syntax errors failed on non-S3 backends such as CEPH. The AWS SDK was
also retrying the XML Syntax error request which doesn't make sense.
This change turns off the AWS SDK retries in favour of just using
rclone's retry logic.
If the chunk size is more than 250M (262,144,000 bytes) then the API throws the following error:
Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big. The server does not allow messages larger than 262144000 bytes.
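A sketch of the corresponding chunk size check (the 250M ceiling comes straight from the error above; fragments also having to be a multiple of 320 KiB is assumed here):
```
package onedrivesketch

import "fmt"

const (
	fragmentSizeMultiple = 320 * 1024        // upload fragments assumed to be a multiple of this
	maxFragmentSize      = 250 * 1024 * 1024 // 262,144,000 bytes
)

// checkChunkSize rejects chunk sizes the upload API will not accept.
func checkChunkSize(cs int64) error {
	if cs%fragmentSizeMultiple != 0 {
		return fmt.Errorf("chunk size %d is not a multiple of %d", cs, fragmentSizeMultiple)
	}
	if cs > maxFragmentSize {
		return fmt.Errorf("chunk size %d is bigger than the maximum %d", cs, maxFragmentSize)
	}
	return nil
}
```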
Before this change rclone didn't use sparse files on Windows. This
means that when you downloaded a file with multithread download it
wrote the entire file with zeros on the first write which was not at
the start of the file.
This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
Before this change shared with me items with multiple parents (ie most
of them that aren't in the root) would appear twice in the directory
listings.
This fixes the problem by doing an early exit for shared with me
items.
This bug was introduced here by removing some necessary code detecting
shared with me items at the root with no parents.
4453fa4ba6 "drive: fix --fast-list when using appDataFolder"
This fix reverts that part of the patch.
Fixes#4018
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.
All the other accesses have locking - this one must have got missed.
pureftpd has a bug where it sends messages like this
```
150-Accepted data connection\r\n
Response code: File status okay; about to open data connection (150)
Response arg: Accepted data connection
150 32768.0 kbytes to download\r\n
150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```
The last `150` is treated as a new response - the previous `150` should have been `150-`.
This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.
This fix ignores that specific message when it is received in the
`Close` method. It dumps the FTP connection after as it is out of
sync.
See: #3984
Fixes#3445
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.
This fix returns the concurrency token regardless of the error status.
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.
The fix returns the concurrency token regardless of the error state.
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.
The fix returns the concurrency token if creating the FTP connection
returns an error.
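All three fixes boil down to the same pattern - acquire the token, then release it on every path out, for example:
```
package ftpsketch

// withToken runs op while holding a concurrency token and always gives the
// token back, whether op succeeds or fails, so errors can no longer drain
// the pool and lock rclone up.
func withToken(tokens chan struct{}, op func() error) error {
	tokens <- struct{}{}        // acquire
	defer func() { <-tokens }() // always release
	return op()
}
```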
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason users
will face status 500 errors.
The main mechanism for handling these errors is retries.
The number of retries needed varies for each use case.
This change makes retries for the s3 backend configurable by using
the --low-level-retries option.
Currently each multipart upload allocated its own buffers, which were
discarded after the file upload. Subsequent files couldn't leverage the
already allocated memory, which resulted in inefficient memory management.
This change introduces a backend memory pool keeping memory chunks which can be
used during object operations.
Fixes#3967
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.
Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.
https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
This removes the unused functions run.writeRemoteRandomBytes(), run.writeObjectRandomBytes(), run.listPath(), Directory.parentRemote() and Persistent.dumpRoot().
Before this change, when uploading multipart files, onedrive would
sometimes return an unexpected 416 error and rclone would abort the
transfer.
This is usually after a 500 error which caused rclone to do a retry.
This change checks the upload position on a 416 error and works out how
much of the current chunk to skip, then retries (or skips) the current
chunk as appropriate.
If the position is before the current chunk or after the current chunk
then rclone will abort the transfer.
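The decision can be pictured like this (sketch, hypothetical helper):
```
package onedrivesketch

// nextAction decides what to do with the current chunk after a 416
// response, given the position the server says it has received up to.
// skip is how many bytes of the chunk are already uploaded; ok == false
// means the position is outside the chunk and the transfer must abort.
func nextAction(serverPos, chunkStart, chunkSize int64) (skip int64, ok bool) {
	if serverPos < chunkStart || serverPos > chunkStart+chunkSize {
		return 0, false
	}
	return serverPos - chunkStart, true
}
```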
See: https://forum.rclone.org/t/fragment-overlap-error-with-encrypted-onedrive/14001
Fixes#3131
This hides:
- "use_created_date"
- "use_shared_date"
- "size_as_quota"
from the configurator (rclone config) as they interfere with normal
operations and shouldn't be set in a backend config.
They can still be put in the config file by hand and will still work
as variables, etc.
This adds some more docs to "size_as_quota" also.
Fixes#3912
Before this change we used non multipart uploads for files of unknown
size (streaming and uploads in mount). This is slower and less
reliable and is not recommended by Google for files smaller than 5MB.
After this change we use multipart resumable uploads for all files of
unknown length. This will use an extra transaction so is less
efficient for files under the chunk size, however the natural
buffering in the operations.Rcat call specified by
`--streaming-upload-cutoff` will overcome this.
See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
This error started happening after updating golang/x/crypto which was
done as a side effect of:
3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD
This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.
See: https://go-review.googlesource.com/c/crypto/+/207599
This fix calls ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise which fixes the problem.
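In essence the fix is choosing the parser based on whether a passphrase is present, something like:
```
package sftpsketch

import "golang.org/x/crypto/ssh"

// parseKey uses the plain parser for unencrypted keys, since the
// passphrase variant now rejects an empty passphrase.
func parseKey(pemBytes []byte, passphrase string) (ssh.Signer, error) {
	if passphrase == "" {
		return ssh.ParsePrivateKey(pemBytes)
	}
	return ssh.ParsePrivateKeyWithPassphrase(pemBytes, []byte(passphrase))
}
```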
If the --drive-stop-on-upload-limit flag is in effect this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750GB a day limit.
If so then it turns this error into a Fatal error which should stop
the sync.
Fixes#3857
In listings if the ID `appDataFolder` is used to list a directory the
parents of the items returned have the actual ID instead of the alias
`appDataFolder`. This confused the ListR routine into ignoring all
these items.
This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query. This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for which is probably OK.
Fixes#3851
The S3 ListObject API returns paginated bucket listings, with
"MaxKeys" items for each GET call.
The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.
This commit does not add safeguards around this value - if a user
decides to request a too large list, it might result in connection
timeouts (on the server or client).
In AWS S3, there is a fixed limit of 1000, some other services might
have one too. In Ceph, this can be configured in RadosGW.
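With the v1 AWS SDK the option simply ends up in the MaxKeys field of each listing request, roughly:
```
package s3sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listInput shows where the new list_chunk value goes: the MaxKeys field
// of the ListObjects request (default 1000; RadosGW etc. may allow more).
func listInput(bucket, prefix string, listChunk int64) *s3.ListObjectsInput {
	return &s3.ListObjectsInput{
		Bucket:  aws.String(bucket),
		Prefix:  aws.String(prefix),
		MaxKeys: aws.Int64(listChunk),
	}
}
```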
Before this patch we were failing to URL decode the NextMarker when
url encoding was used for the listing.
The result of this was duplicated listings entries for directories
with >1000 entries where the NextMarker was a file containing a space.
Before this change we used the same (relatively low) limits for server
side copy as we did for multipart uploads. It doesn't make sense to
use the same limits since no data is being downloaded or uploaded for
a server side copy.
This change introduces a new parameter --s3-copy-cutoff to control
when the switch from single to multipart server side copy happens and
defaults it to the maximum 5GB.
This makes server side copies much more efficient.
It also fixes the erroneous error when trying to set the modification
time of a file bigger than 5GB.
See #3778
Before this change multipart copies were giving the error
Range specified is not valid for source object of size
This was due to an off by one error in the range source introduced in
7b1274e29a "s3: support for multipart copy"
Before this change rclone used "Authorization: BEARER token". However
according to the RFC this should be "Bearer"
https://tools.ietf.org/html/rfc6750#section-2.1
This changes it to "Authorization: Bearer token"
Fixes#3751 and interop with Salesforce Webdav server
When using nextcloud, before this change we only uploaded one of SHA1
or MD5 checksum in the OC-Checksum header with preference to SHA1 if
both were set.
This made the MD5 checksums read as empty strings, which made syncing
with checksums less useful than it should be as all the MD5
checksums were blank.
This change makes it so that we only upload the SHA1 to nextcloud.
The behaviour of owncloud is unchanged as owncloud uses the checksum
as an upload integrity check only and calculates its own checksums.
See: https://forum.rclone.org/t/how-to-specify-hash-method-to-checksum/13055
This also corrects the symlink detection logic to only check symlink
files. Previous to this it was checking all directories too which was
making it do more stat calls than was necessary.
Before this change we forgot to URL decode the X-Object-Manifest in a dynamic large object.
This problem was introduced by 2fe8285f89 "swift: reserve
segments of dynamic large object when delete objects in container what
was enabled versioning."
For a few commands, rclone counted an error multiple times. This was fixed by
creating a new error type which keeps a flag to remember if the error has
already been counted or not. The CountError function now wraps the original
error with the above new error type and returns it.
Before this change rclone used the team_drive ID as the root if set
even if the root_folder_id was set too.
This change uses the root_folder_id in preference over the team_drive
which restores the functionality.
This problem was introduced by ba7c2ac443
Fixes#3742
We attempt to find the ID of the root folder by doing a GET on the
folder ID "root". With scope "drive.files" this fails with a 404
message.
After this change if we get the 404 message, we just carry on using
"root" as the root folder ID and we cache that for future lookups.
This means that changenotify messages will not work correctly in the
root folder but otherwise has minor consequences.
See: https://forum.rclone.org/t/fresh-raspberry-pi-build-google-drive-404-error-failed-to-ls-googleapi-error-404-file-not-found/12791
Before this change rclone would allow the user to stream (eg with
rclone mount, rclone rcat or uploading google photos or docs) 5TB
files. This meant that rclone allocated 4 * 525 MB buffers per
transfer which is way too much memory by default.
This change makes rclone use the configured chunk size for streamed
uploads. This is 5MB by default which means that rclone can stream
upload files up to 48GB by default staying below the 10,000 chunks
limit.
This can be increased with --s3-chunk-size if necessary.
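The arithmetic behind those numbers is just chunk size times the 10,000 part limit:
```
package s3sketch

// maxStreamSize is the largest file that can be streamed with a given chunk
// size: multipart uploads allow at most 10,000 parts, so the default 5 MiB
// chunk size gives about 48.8 GiB, and raising --s3-chunk-size raises the
// ceiling proportionally.
func maxStreamSize(chunkSize int64) int64 {
	const maxUploadParts = 10000
	return chunkSize * maxUploadParts
}
```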
If rclone detects that a file is being streamed to s3 it will make a
single NOTICE level log stating the limitation.
This fixes the enormous memory usage.
Fixes#3568
See: https://forum.rclone.org/t/how-much-memory-does-rclone-need/12743
Before this fix we neglected to add the shared drive ID to the request
when asking for an initial change notify token and this caused a lot
more results to be returned than was necessary.
When we changed recursive lists to use --fast-list by default this
broke listing with --drive-shared-with-me from the root.
This turned out to be an unwarranted assumption in the ListR code that
all items would have a parent folder that we had searched for - this
isn't true for shared with me items.
This was fixed when using --drive-shared-with-me to give items that
didn't have any parents a synthetic parent.
Fixes#3639
Before this change we used the id "root" as an alias for the root drive ID.
However this causes problems when we receive IDs back from drive which
are not in this format and have been expanded to their canonical ID.
This change looks up the ID "root" and stores it in the
"drive_folder_id" parameter in the config file.
This helps with
- Notifying changes at the root
- Files shared with me at the root
See #3639
Before this change when rclone was compiled with go1.13 it used HTTP/2
to contact drive by default.
This causes lockups and INTERNAL_ERRORs from the HTTP/2 code.
This is a workaround disabling the HTTP/2 code on an option.
It can be re-enabled with `--drive-disable-http2=false`
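One way to force HTTP/1.1 on a transport (which is broadly what the option does; sketch only) is to give it a non-nil, empty TLSNextProto map so the standard library never negotiates h2:
```
package drivesketch

import (
	"crypto/tls"
	"net/http"
)

// disableHTTP2 stops the transport from negotiating HTTP/2, forcing
// HTTP/1.1 for all requests made through it.
func disableHTTP2(t *http.Transport) {
	t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
}
```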
See #3631
Before this change we silently skipped uploads to dropbox of
disallowed file names. However this then caused "corrupted on
transfer" errors because the sizes were wrong.
After this change we return a no retry error which will mean that the
sync fails (as it should - not all files were uploaded) but no
unnecessary retries happen.
This works around a bug in Ceph which doesn't encode CommonPrefixes
when using URL encoded directory listings.
See: https://tracker.ceph.com/issues/41870
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks
- describe internal behavior in comments
- improve documentation
note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows capping the size at 200 bytes now.
Note: chunker implements many irrelevant methods (UserInfo, Disconnect etc),
but they are required by TestIntegration/FsCheckWrap and cannot be removed.
Dropped API methods: MergeDirs DirCacheFlush PublicLink UserInfo Disconnect OpenWriterAt
Meta formats:
- renamed old simplejson format to wdmrcompat.
- new simplejson format supports hash sums and verification of chunk size/count.
Change list:
- split-chunking overlay for mailru
- add to all
- fix linter errors
- fix integration tests
- support chunks without meta object
- fix package paths
- propagate context
- fix formatting
- implement new required wrapper interfaces
- also test large file uploads
- simplify options
- user friendly name pattern
- set default chunk size 2G
- fix building with golang 1.9
- fix ci/cd on a separate branch
- fix updated object name (SyncUTFNorm failed)
- fix panic in Box overlay
- workaround: Box rename failed if name taken
- enhance comments in unit test
- fix formatting
- embed wrapped remote rather than inherit
- require wrapped remote to support move (or copy)
- implement 3 (keep fstest)
- drop irrelevant file system interfaces
- factor out Object.mainChunk
- refactor TestLargeUpload as InternalTest
- add unit test for chunk name formats
- new improved simplejson meta format
- tricky case in test FsIsFile (fix+ignore)
- remove debugging print
- hide temporary objects from listings
- fix bugs in chunking reader:
- return EOF immediately when all data is sent
- handle case when wrapped remote puts by hash (bug detected by TestRcat)
- chunked file hashing (feature)
- server-side copy across configs (feature)
- robust cleanup of temporary chunks in Put
- linear download strategy (no read-ahead, feature)
- fix unexpected EOF in the box multipart uploader
- throw error if destination ignores data
When used with v2_auth = true, PresignRequest doesn't return
signed headers, so remote dest authentication would fail.
This commit copies HTTPRequest.Header back to headers.
Tested with RiakCS v2.1.0.
Signed-off-by: Anthony Rusdi <33247310+antrusd@users.noreply.github.com>
- Read the storage class for each object
- Implement SetTier/GetTier
- Check the storage class on the **object** before using SetModTime
This updates the fix in 1a2fb52 so that SetModTime works when you are
using objects which have been migrated to GLACIER but you aren't using
GLACIER as a storage class.
Fixes#3522
Before this change we used PATCH on the object to update the metadata.
Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.
This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works). This can be done with normal
permissions.
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.
This turned out to be a mis-understanding of the rest docs so
- improved rest.Call docs
- fixed mis-understanding in google photos backend
- fixed similar mis-understanding in onedrive backend
- change the interface of listBuckets() removing dir parameter and adding context
- add makeBucket() and use in place of Mkdir("")
- this fixes some corner cases in Copy/Update
- mark all the listed buckets OK in ListR
Thanks to @yparitcher for the review.
Before this change, if the caller didn't provide a hint, we would
calculate all hashes for reads and writes.
The new whirlpool hash is particularly expensive and that has become noticeable.
Now we don't calculate any hashes on upload or download unless hints are provided.
This means that some operations may run slower and these will need to be discovered!
It does not affect anything calling operations.Copy which already puts
the correct hints in.
When using the VFS with swift and --swift-no-chunk, PutStream was
returning objects with size -1 which was causing corrupted transfer
messages.
This was fixed by counting the bytes transferred in a streamed file
and updating the metadata with that.
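The counting can be done with a small wrapper around the upload body, along these lines (sketch):
```
package swiftsketch

import "io"

// countingReader wraps the streamed upload body and counts the bytes that
// actually go to the server, so the object's size can be set afterwards
// instead of being left at -1.
type countingReader struct {
	in io.Reader
	n  int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.in.Read(p)
	c.n += int64(n)
	return n, err
}
```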
This was factored from fstest as we were including the testing
environment into the main binary because of it.
This was causing opening the browser to fail because of 8243ff8bc8.
In 53a1a0e3ef we started returning non nil from NewObject when
an object isn't found. This breaks the integration tests and the API
expected of a backend.
This fixes it.
Introduce stats groups that will isolate accounting for logically
different transferring operations. That way multiple accounting
operations can be done in parallel without interfering with each
other's stats.
Using groups is optional. There are dedicated global stats that will be
used by default if no group is specified. This is the operating mode for
CLI usage, which is just a fire and forget operation.
When running rclone as an rc http server each request will create its own
group. Also there is an option to specify your own group.
Configuration time option to disable the above when using Dropbox (which does
not allow setting mtime on copy) or Amazon Drive (neither on upload nor on copy).
Before this change rclone was sending a MimeType in the requests for
server side Move and Copy.
The conjecture is that if you attempt to set the MimeType to something
different in a Copy then Google Drive has to do an actual copy of the
file data. This takes a very long time (since it is large) and fails
after a 90s timeout.
After the change we no longer set the MimeType in Move or Copy and the
copies happen instantly and correctly.
Many thanks to @darthShadow for discovering that this was causing the
problem.
Fixes#3070
Fixes#3033
Fixes#3300
Fixes#3155
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.
Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>