It otherwise has nearly the same interface as walk.Walk, which it
will fall back to if it can't use ListR.
Using walk.ListR speeds up file system operations by default, and
compared to supplying --fast-list it uses much less memory and starts
immediately.
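
As a rough illustration of the dispatch, using simplified stand-in types
rather than rclone's real fs and walk APIs:

package main

import "fmt"

// remote is a simplified stand-in for a backend that may or may not
// support a recursive ListR primitive.
type remote struct {
    // listR is non-nil only when the backend can list recursively in one go.
    listR func(dir string) []string
}

// walkDirByDir stands in for the level-by-level walk.Walk style listing.
func walkDirByDir(r remote, dir string) []string {
    return []string{dir + "/a", dir + "/sub/b"}
}

// listAll mirrors the idea behind the helper: use the remote's recursive
// listing when available, otherwise fall back to walking.
func listAll(r remote, dir string) []string {
    if r.listR != nil {
        return r.listR(dir) // one recursive listing, streamed as it arrives
    }
    return walkDirByDir(r, dir) // same results, more round trips and memory
}

func main() {
    fmt.Println(listAll(remote{}, "root"))
}
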
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.
With the new implementation introduced in this commit, the completion of
the remote name has been simplified: If there is no colon (":") in the
current word, then complete the remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
In fact it matches even more than that, including remote names that are
not allowed by the config, but in that case there would already be an
invalid identifier in the configuration file.
With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:
# Old way
$ time bash -c 'for _ in {1..1000000}; do
[[ remote:path =~ ^[[:alnum:]]*$ ]]; done'
real 0m15,637s
user 0m15,613s
sys 0m0,024s
# New way
$ time bash -c 'for _ in {1..1000000}; do
[[ remote:path != *:* ]]; done'
real 0m1,324s
user 0m1,304s
sys 0m0,020s
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
Some WebDAV servers return empty Available and Used values, which parse as 0.
This caused About to return the Total as 0, which can confuse mounted
file systems.
After this change we ignore the result if Available and Used are both 0.
See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
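
A minimal sketch of that guard, assuming the Available and Used values
have already been parsed from the PROPFIND response; fs.Usage and
fs.NewUsageValue are from rclone's fs package (current import path
shown), while the makeUsage helper itself is hypothetical:

package main

import (
    "fmt"

    "github.com/rclone/rclone/fs"
)

// makeUsage is a hypothetical helper: when the server reports no quota at
// all (both values zero), return an empty Usage rather than claiming the
// remote has a Total of 0.
func makeUsage(available, used int64) *fs.Usage {
    usage := &fs.Usage{}
    if available == 0 && used == 0 {
        return usage // empty quota from the server - report nothing
    }
    usage.Used = fs.NewUsageValue(used)
    usage.Free = fs.NewUsageValue(available)
    usage.Total = fs.NewUsageValue(used + available)
    return usage
}

func main() {
    fmt.Printf("%+v\n", makeUsage(0, 0)) // all fields nil - nothing reported
}
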
The SCPD URL was being set after marshalling the XML, and thus coming
out blank. Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
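
The shape of the fix is purely an ordering one; a contrived encoding/xml
example (the service struct is illustrative, not rclone's dlna types):

package main

import (
    "encoding/xml"
    "fmt"
)

// service is illustrative only.
type service struct {
    XMLName xml.Name `xml:"service"`
    SCPDURL string   `xml:"SCPDURL"`
}

func main() {
    svc := service{}
    svc.SCPDURL = "/scpd/ContentDirectory.xml" // populate before marshalling...
    out, err := xml.Marshal(svc)               // ...or the element comes out empty
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}
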
Before this change a race condition existed in mkdir:
- an attempt was made to create the directory
- it failed because the parent didn't exist
- the parent was created
- the directory was created again
The last step failed because the directory had already been created by a
different thread.
This was fixed by checking the error messages from MKCOL for both
directory creation attempts, rather than only the first.
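
A sketch of the tolerant check applied to both creation attempts; the
status codes are what WebDAV servers typically return when the collection
already exists, an assumption rather than rclone's exact test:

package main

import (
    "fmt"
    "net/http"
)

// translateMkdirError is a hypothetical helper: if MKCOL fails only because
// another thread created the directory first, treat it as success.
func translateMkdirError(status int) error {
    switch status {
    case http.StatusCreated, http.StatusMethodNotAllowed, http.StatusConflict:
        return nil // created, or it already exists - either way mkdir succeeded
    default:
        return fmt.Errorf("MKCOL failed with status %d", status)
    }
}

func main() {
    fmt.Println(translateMkdirError(405)) // <nil> - concurrent creation is fine
}
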
Before this change a range request on a 0 length file would fail:
$ rclone cat --head 128 drive:test/emptyfile
ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable
To fix this we remove Range: headers on requests for zero length files.
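
A sketch of stripping the header, assuming options arrive as rclone
fs.OpenOption values whose Header method reports the HTTP header they
map to; the dropRangeForEmpty helper is hypothetical:

package main

import (
    "fmt"

    "github.com/rclone/rclone/fs"
)

// dropRangeForEmpty is a hypothetical helper: a zero length object has no
// bytes a Range: header could address, so strip any range options before
// making the request.
func dropRangeForEmpty(size int64, options []fs.OpenOption) []fs.OpenOption {
    if size != 0 {
        return options
    }
    out := make([]fs.OpenOption, 0, len(options))
    for _, o := range options {
        if key, _ := o.Header(); key == "Range" {
            continue // skip Range: headers for empty files
        }
        out = append(out, o)
    }
    return out
}

func main() {
    opts := []fs.OpenOption{&fs.RangeOption{Start: 0, End: 127}}
    fmt.Println(len(dropRangeForEmpty(0, opts))) // 0 - the range was dropped
}
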
This introduces a new config variable bucket_policy_only (see the config
example below). If this is set then rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
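
For example, a Google Cloud Storage remote using it might look like this
in rclone.conf (the remote name is arbitrary):

[gcs]
type = google cloud storage
bucket_policy_only = true
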
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.
After this change we recursively fetch the listing and then apply it with
the root directory locked, which greatly shortens the time that the root
directory is locked.
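
A self-contained sketch of that reordering; Dir, fetchTree and the entries
map are simplified stand-ins for the VFS directory cache, not its real
types:

package main

import (
    "fmt"
    "sync"
)

// Dir is a simplified stand-in for a VFS directory node.
type Dir struct {
    mu      sync.Mutex
    entries map[string]struct{}
}

// fetchTree stands in for the slow, recursive listing of the remote.
// Crucially it runs without holding the directory lock.
func fetchTree() map[string]struct{} {
    return map[string]struct{}{"a.txt": {}, "dir/b.txt": {}}
}

// refresh fetches first, then locks only for the quick in-memory apply,
// so the root directory stays locked for a much shorter time.
func (d *Dir) refresh() {
    tree := fetchTree() // no lock held during the remote round trips

    d.mu.Lock()
    defer d.mu.Unlock()
    d.entries = tree
}

func main() {
    d := &Dir{}
    d.refresh()
    fmt.Println(len(d.entries), "entries")
}
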
With both the original method and the new method the subdirectories are
left unlocked, so they could potentially be changed, leading to
inconsistencies. This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for a
much more responsive system while running vfs/refresh.
See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
Fall back to default application credentials when all other credential sources fail
This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
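
A sketch of the fallback using the standard Go OAuth2 packages; the
defaultClient helper is hypothetical and the scope is written out
literally:

package main

import (
    "context"
    "log"
    "net/http"

    "golang.org/x/oauth2"
    "golang.org/x/oauth2/google"
)

// defaultClient is a hypothetical helper: when no token or service account
// file is configured, fall back to Application Default Credentials (the GCE
// metadata server, or GOOGLE_APPLICATION_CREDENTIALS locally).
func defaultClient(ctx context.Context) (*http.Client, error) {
    creds, err := google.FindDefaultCredentials(ctx,
        "https://www.googleapis.com/auth/devstorage.read_write")
    if err != nil {
        return nil, err
    }
    return oauth2.NewClient(ctx, creds.TokenSource), nil
}

func main() {
    if _, err := defaultClient(context.Background()); err != nil {
        log.Fatal(err)
    }
}
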
This enables MD5 checksum calculation and publication when uploading files above the "Cutoff" limit.
Previously it was explicitly skipped for multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
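
The shape of the change, hashing each block as it is uploaded and
publishing the digest once the upload is committed; uploadBlock and
setContentMD5 are placeholders for the Azure SDK calls:

package main

import (
    "crypto/md5"
    "encoding/base64"
    "fmt"
)

// uploadBlock and setContentMD5 are placeholders for the Azure SDK calls
// that upload one block and set Content-MD5 on the committed blob.
func uploadBlock(block []byte) {}

func setContentMD5(b64 string) {
    fmt.Println("Content-MD5:", b64)
}

func main() {
    hasher := md5.New()
    for _, block := range [][]byte{[]byte("block one "), []byte("block two")} {
        uploadBlock(block)
        hasher.Write(block) // keep a running hash across all the blocks
    }
    // Publish the whole-file MD5 once the block list is committed.
    setContentMD5(base64.StdEncoding.EncodeToString(hasher.Sum(nil)))
}
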
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.
It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.
Fixes #2993