This caches all the objects returned from the List call. This makes
opening them much quicker, which speeds up prune and restore
operations, and it also uses fewer transactions. The cache can be
disabled with `--cache-objects=false`.
This was discovered when using the B2 backend, where the budget was
being blown on list object calls which could be avoided with a bit of
caching.
For a typical 1 million file backup of a laptop or server this will
only use a small amount of extra memory.
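
As a minimal Go sketch of the idea (the names and types here are
hypothetical, not the actual implementation):

    package objectcache

    import (
        "sync"

        "github.com/rclone/rclone/fs"
    )

    // objectCache remembers the objects seen in List results so that a
    // later open can reuse them instead of issuing another list call.
    type objectCache struct {
        mu      sync.Mutex
        objects map[string]fs.Object
    }

    func newObjectCache() *objectCache {
        return &objectCache{objects: make(map[string]fs.Object)}
    }

    // put stores the objects returned from a List call.
    func (c *objectCache) put(entries fs.DirEntries) {
        c.mu.Lock()
        defer c.mu.Unlock()
        for _, entry := range entries {
            if o, ok := entry.(fs.Object); ok {
                c.objects[o.Remote()] = o
            }
        }
    }

    // get returns a previously listed object if it is cached.
    func (c *objectCache) get(remote string) (fs.Object, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        o, ok := c.objects[remote]
        return o, ok
    }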
Before this change a circular symlink would cause rclone to error out
of the listing. After this change rclone will skip a circular symlink
and carry on with the listing, producing an error at the end.
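
A minimal Go sketch of the skip-and-continue approach (a hypothetical
helper, not the actual backend code):

    package listing

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // listDir reads a directory, skipping entries that fail to stat (a
    // circular symlink makes os.Stat fail with "too many levels of
    // symbolic links") and returning the first such error at the end
    // instead of aborting the whole listing.
    func listDir(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var names []string
        var firstErr error
        for _, entry := range entries {
            if _, err := os.Stat(filepath.Join(dir, entry.Name())); err != nil {
                // Note the error, skip the entry and carry on listing.
                if firstErr == nil {
                    firstErr = fmt.Errorf("listing skipped entry: %w", err)
                }
                continue
            }
            names = append(names, entry.Name())
        }
        return names, firstErr
    }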
Fixes #4743
Before this change the tests were run against the previous stable
rclone/rclone docker image.
This unfortunately masked errors in the integration test server.
This change uses the currently installed rclone to run "rclone serve
ftp" etc. This rclone is built from the current code by the
integration test server, so it makes a better test.
Before this change, if a file was created on a remote but deleted
externally from that remote then there was potential for the delete to
never be noticed.
The sequence of events was:
- Create file on VFS - creates virtual directory entry
- File deleted externally to remote before the directory refreshed
- Now the file has a virtual add but is not in the listings, so it will never disappear
This patch fixes it by removing all virtual directory entries except
the following when the directory is re-read.
- On remotes which can't have empty directories: virtual directory
adds are not flushed. These will remain virtual as long as the
directory is empty.
- For virtual file adds: files that are in the process of being
  uploaded are not flushed
This patch also adds the distinction between virtually added files and
directories.
It also refactors the virtual directory logic to make it easier to follow.
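
Roughly, the flush rule looks like this (a sketch with hypothetical
types, not the actual vfs package code):

    package vfssketch

    // virtualState records how an entry came to exist only virtually.
    type virtualState int

    const (
        virtualAddFile virtualState = iota // file created through the VFS
        virtualAddDir                      // directory created through the VFS
    )

    // Dir is a stand-in for a VFS directory with virtual entries.
    type Dir struct {
        virtual   map[string]virtualState
        uploading func(name string) bool // true while still being uploaded
    }

    // flushVirtual runs when the directory is re-read from the remote.
    // Dropping stale virtual adds is what makes externally deleted files
    // disappear, keeping only the two exceptions described above.
    func (d *Dir) flushVirtual(canHaveEmptyDirs bool) {
        for name, state := range d.virtual {
            switch {
            case state == virtualAddDir && !canHaveEmptyDirs:
                // keep - the directory only exists virtually while empty
            case state == virtualAddFile && d.uploading(name):
                // keep - the file is in the process of being uploaded
            default:
                delete(d.virtual, name)
            }
        }
    }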
Fixes #4446
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
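
Sketched against the fs package (a simplification - check the actual
interface for the exact form), a backend constructor changes like this:

    // Before
    func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error)

    // After - the context lets the backend read the global config
    func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error)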
Fix the copy and move operations that broke in 127f0fc when copying directly
to a remote without a specific destination.
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
It seems that when doing chunked uploads to onedrive, if the chunks
take more than 3 minutes or so to upload then they may time out with
error 504 Gateway Timeout.
This change produces an error (just once) suggesting lowering
`--onedrive-chunk-size` or decreasing `--transfers`.
This is easy to replicate with:
    rclone copy -Pvv --bwlimit 0.05M 20M onedrive:20M
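
If this is the failure mode, a workaround on a slow connection would
look something like this (values are illustrative - note that onedrive
chunk sizes must be multiples of 320 KiB):

    rclone copy -P --onedrive-chunk-size 5M --transfers 2 20M onedrive:20M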
See: https://forum.rclone.org/t/default-onedrive-chunk-size-does-not-work/20010/
The topic is mostly about such limitations, so all of these are grouped
together under a section hyperlink near the top of the page. The
intention is to avoid potential duplication and to make it more
straightforward to add notes about limitations in future, harvested
from rclone forum postings (there is now an obvious place for them,
and as it is essentially just a list the wording doesn't need to be
elegant).
Minor wording alterations that are not intended to change the meaning.
It turns out that NFS calls mknod in FUSE even though we have create
defined. This was causing EIO errors when creating files.
This patch fixes it by implementing mknod. The way it is implemented
means that to write to an NFS file system you'll need
`--vfs-cache-mode writes`.
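
For example (remote and mount point are illustrative):

    rclone mount remote:path /mnt/point --vfs-cache-mode writes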
Minor wording changes to the help for the explicit and implicit FTPS
flags, making them more consistent with each other. Added an 's' to
'request' because only one 'client' is mentioned.