Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.
After this change we only calculate the hash requested.
Almost all uses of `Hash` just need one checksum. This will slow down
`rclone lsjson` with the `--hash` flag. Perhaps lsjson should have a
`--hash-type` flag.
However it will speed up sync/copy/move/check/md5sum/sha1sum etc.
Before this change it took 12.4 seconds to md5sum a 1GB file; after it
takes 3.1 seconds, which is the same time the md5sum utility takes.
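A minimal sketch of the idea (the hash type names and helpers here are
illustrative, not rclone's actual API):

    package sketch

    import (
        "crypto/md5"
        "crypto/sha1"
        "encoding/hex"
        "fmt"
        "hash"
        "io"
    )

    // newHasher builds a hasher for just the requested hash type,
    // instead of fanning out to every supported type.
    func newHasher(hashType string) (hash.Hash, error) {
        switch hashType {
        case "md5":
            return md5.New(), nil
        case "sha1":
            return sha1.New(), nil
        default:
            return nil, fmt.Errorf("unsupported hash type %q", hashType)
        }
    }

    // hashReader streams r through the single requested hasher and
    // returns the hex encoded checksum.
    func hashReader(r io.Reader, hashType string) (string, error) {
        h, err := newHasher(hashType)
        if err != nil {
            return "", err
        }
        if _, err := io.Copy(h, r); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }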
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This value is uploaded by the client, so care is needed when
interpreting it as it can be malformed.
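A rough sketch of the tolerant parsing (names are illustrative):

    package sketch

    import (
        "strconv"
        "time"
    )

    // parseMillis returns ok=false if the client-supplied value is
    // malformed, so the caller can fall back to another timestamp
    // instead of failing the whole listing.
    func parseMillis(s string) (time.Time, bool) {
        millis, err := strconv.ParseInt(s, 10, 64)
        if err != nil {
            return time.Time{}, false
        }
        return time.Unix(0, millis*int64(time.Millisecond)), true
    }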
Fixes #3065
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.
This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.
If the sleep is short it is done immediately; if it is long then an
ErrorRetryAfter is returned, which will cause the outer retry to sleep
before retrying.
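Something like this sketch (the sentinel type and the short/long
threshold are illustrative, not the exact rclone code):

    package sketch

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // Waits up to this long are done inline; longer ones are pushed
    // up to the outer retry loop.
    const shortSleep = 2 * time.Second

    type retryAfterError struct{ d time.Duration }

    func (e retryAfterError) Error() string {
        return fmt.Sprintf("retry after %v", e.d)
    }

    // handleRetryAfter obeys a Retry-After header given in seconds.
    func handleRetryAfter(resp *http.Response) error {
        secs, err := strconv.Atoi(resp.Header.Get("Retry-After"))
        if err != nil {
            return nil // no usable Retry-After header
        }
        d := time.Duration(secs) * time.Second
        if d <= shortSleep {
            time.Sleep(d)
            return nil
        }
        return retryAfterError{d: d}
    }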
Fixes #3041
This implements the Expiry interface so token expiry works properly.
This makes sure that the following change from the swift library works
correctly with rclone's custom authenticator.
> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
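A minimal sketch of the renewal check (field and method names are
illustrative, not the swift library's API):

    package sketch

    import "time"

    type auth struct {
        token     string
        expiresAt time.Time // expiry reported by the v2/v3 auth call
    }

    // needsRenewal renews 60s early so an expired token is never
    // presented, avoiding the 401 (or CEPH's 403) round trip.
    func (a *auth) needsRenewal() bool {
        if a.expiresAt.IsZero() {
            return false // no expiry known, wait for a 401
        }
        return time.Until(a.expiresAt) < 60*time.Second
    }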
Some WebDAV servers return an empty Available and Used which parses as 0.
This caused About to return the Total as 0, which can confuse mounted
file systems.
After this change we ignore the result if Available and Used are both 0.
See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
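A sketch of the check (the Usage struct is a simplified stand-in for
rclone's fs.Usage):

    package sketch

    type Usage struct {
        Total, Used, Free *int64
    }

    func usageFromQuota(available, used int64) *Usage {
        if available == 0 && used == 0 {
            // The server didn't report a quota - don't invent a
            // 0-byte total which confuses mounted file systems.
            return nil
        }
        total := available + used
        return &Usage{Total: &total, Used: &used, Free: &available}
    }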
Before this change a race condition existed in mkdir:
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again
The last step failed as the directory was created in a different thread.
This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
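Roughly like this sketch (the status codes treated as "already exists"
are illustrative, as servers differ):

    package sketch

    import "net/http"

    // mkcolOK treats an "already exists" style response as success so
    // a directory created concurrently in another thread does not
    // fail the mkdir.
    func mkcolOK(resp *http.Response) bool {
        switch resp.StatusCode {
        case http.StatusCreated:
            return true
        case http.StatusMethodNotAllowed:
            // Commonly returned by MKCOL when the collection already
            // exists.
            return true
        }
        return false
    }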
Before this change a range request on a 0 length file would fail:
$ rclone cat --head 128 drive:test/emptyfile
ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable
To fix this we remove Range: headers on requests for zero length files.
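A sketch of the fix (rangeHeader is a hypothetical helper):

    package sketch

    import (
        "fmt"
        "net/http"
    )

    func rangeHeader(offset, count int64) string {
        return fmt.Sprintf("bytes=%d-%d", offset, offset+count-1)
    }

    // setRange skips the Range header entirely for zero length files,
    // since any byte range on an empty file is unsatisfiable (416).
    func setRange(req *http.Request, offset, count, size int64) {
        if size == 0 {
            return
        }
        req.Header.Set("Range", rangeHeader(offset, count))
    }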
This introduces a new config variable bucket_policy_only. If this is
set then rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
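For example, a remote configured along these lines (remote name
illustrative):

    [gcs]
    type = google cloud storage
    bucket_policy_only = true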
Fall back to default application credentials when all other credential sources fail
This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
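A sketch of the fallback, assuming the standard oauth2 libraries (the
function and scope shown are the usual Application Default Credentials
entry points, not rclone's exact code):

    package sketch

    import (
        "context"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/google"
    )

    // tokenSource uses explicitly configured credentials if present,
    // otherwise falls back to Application Default Credentials (for
    // example the metadata server on a compute instance).
    func tokenSource(ctx context.Context, configured oauth2.TokenSource) (oauth2.TokenSource, error) {
        if configured != nil {
            return configured, nil
        }
        creds, err := google.FindDefaultCredentials(ctx,
            "https://www.googleapis.com/auth/devstorage.full_control")
        if err != nil {
            return nil, err
        }
        return creds.TokenSource, nil
    }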
This enables MD5 checksum calculation and publication when uploading
files above the "Cutoff" limit.
It was explicitly ignored in the case of multi-block (a.k.a. multipart)
uploads to Azure Blob Storage.
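The idea, as a rough sketch (uploadBlock and setContentMD5 stand in
for the Azure SDK calls):

    package sketch

    import (
        "crypto/md5"
        "io"
    )

    // uploadMultiBlock hashes the stream while splitting it into
    // blocks, then publishes the final MD5 once all blocks are up.
    func uploadMultiBlock(r io.Reader, blockSize int64,
        uploadBlock func([]byte) error,
        setContentMD5 func([]byte) error) error {

        hasher := md5.New()
        buf := make([]byte, blockSize)
        for {
            n, err := io.ReadFull(r, buf)
            if n > 0 {
                hasher.Write(buf[:n]) // same bytes feed hash and upload
                if err := uploadBlock(buf[:n]); err != nil {
                    return err
                }
            }
            if err == io.EOF || err == io.ErrUnexpectedEOF {
                break
            }
            if err != nil {
                return err
            }
        }
        return setContentMD5(hasher.Sum(nil))
    }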
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows features that
require the fs package, such as logging and custom errors, to be moved
into the fs package.
Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
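The shape of the extracted interface, roughly (names illustrative):

    package sketch

    import "time"

    // State is the information the pacer feeds to the calculator.
    type State struct {
        SleepTime          time.Duration // current sleep time
        ConsecutiveRetries int           // retries since last success
        LastError          error         // error from the last call
    }

    // Calculator computes the next sleep time from the current state,
    // so different pacing algorithms can be plugged in.
    type Calculator interface {
        Calculate(state State) time.Duration
    }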
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag. This causes the item
to be identified as a file instead of a directory.
To work around this, look at the Microsoft extension prop
`iscollection`, which seems to be emitted as well.
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.
This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed they also receive their correct size.
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.
The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
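In sketch form (the interface is a simplified stand-in for rclone's
fs.Features check):

    package sketch

    type remote interface {
        CanChangeNotify() bool
    }

    // canPollForChanges reports support if any upstream remote can
    // poll for changes, not just the writable one.
    func canPollForChanges(remotes []remote) bool {
        for _, r := range remotes {
            if r.CanChangeNotify() {
                return true
            }
        }
        return false
    }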
Before this change rclone was setting the modification times of the
files that the symlinks pointed to, rather than of the symlinks
themselves.
Note that this is only implemented for unix style OSes. Other OSes
will not attempt to set the modification time of a symlink.
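A sketch of the unix side, using the x/sys/unix package (not
necessarily the exact call rclone uses):

    //go:build unix

    package sketch

    import (
        "time"

        "golang.org/x/sys/unix"
    )

    // setSymlinkModTime sets the modification time of the symlink
    // itself rather than following the link to its target.
    func setSymlinkModTime(path string, t time.Time) error {
        ts := unix.NsecToTimespec(t.UnixNano())
        times := []unix.Timespec{ts, ts} // atime and mtime
        return unix.UtimesNanoAt(unix.AT_FDCWD, path, times,
            unix.AT_SYMLINK_NOFOLLOW)
    }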
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object. It is not known whether this is
rclone's fault or a bug at QingStor.
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding. Fix this to always upload
with a "Content-Length" header even if the size is 0.
Remove workarounds for this from b2 and onedrive backends.
This fixes the issue for the webdav backend described here:
https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
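A sketch of the behaviour in net/http terms (not the exact rest
library code):

    package sketch

    import "net/http"

    // newEmptyUpload builds a PUT with an explicit Content-Length: 0
    // header instead of chunked transfer encoding.
    func newEmptyUpload(url string) (*http.Request, error) {
        // http.NoBody signals a zero byte body explicitly, so the
        // transport sends "Content-Length: 0" rather than
        // "Transfer-Encoding: chunked".
        req, err := http.NewRequest("PUT", url, http.NoBody)
        if err != nil {
            return nil, err
        }
        req.ContentLength = 0
        return req, nil
    }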
Before this change azureblob would attempt to create already existing
containers. This caused problems with keys which have limited
permissions.
This change checks the container exists before trying to create it in
the same way the s3 backend does. This uses no more requests in the
usual case of the container existing.
See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
Before this change buckets were created with the same ACL as objects.
After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.
This also logs at INFO level the creation and deletion of buckets.
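For example, to create a bucket with a different canned ACL from the
one used for objects (bucket name and ACL values illustrative):

    rclone mkdir --s3-acl private --s3-bucket-acl public-read s3:newbucket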