This patch adds support for synchronous cache space recovery, allowing
read threads to recover from ENOSPC errors when cache space can be
reclaimed from cache items that are not in use or are safe to be
reset/emptied.
The patch complements the existing cache cleaning process in two ways.
Firstly, the existing cache cleaning process is time-driven and runs
periodically. Cache space can run out while the cache cleaner thread
is still waiting for its next scheduled run, and the io threads that
hit ENOSPC then return an internal error to the applications even
though cache space could have been recovered to avoid the error. This
patch addresses the problem by having the read threads kick the cache
cleaner thread in this condition, recovering cache space and
preventing unnecessary ENOSPC errors from reaching the applications.
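A rough sketch of this first part, with hypothetical names (kicks,
kickCleaner, cleaner) rather than the actual rclone cache code: an io
thread that hits ENOSPC signals the cleaner goroutine over a channel
instead of waiting for the next timed run.

    package sketch

    import (
        "errors"
        "syscall"
        "time"
    )

    // kicks carries wake-up signals from io threads to the cleaner goroutine.
    // The buffer of one means a pending kick is never lost and extra kicks
    // are simply dropped.
    var kicks = make(chan struct{}, 1)

    // kickCleaner asks the cleaner to run now rather than at its next
    // scheduled time.
    func kickCleaner() {
        select {
        case kicks <- struct{}{}:
        default: // a kick is already pending
        }
    }

    // writeToCache tries a cache write and, on ENOSPC, kicks the cleaner
    // and retries once after giving it a moment to recover space.
    func writeToCache(write func() error) error {
        err := write()
        if errors.Is(err, syscall.ENOSPC) {
            kickCleaner()
            time.Sleep(100 * time.Millisecond) // placeholder for waiting on the cleaner
            err = write()
        }
        return err
    }

    // cleaner purges the cache on a timer or whenever it is kicked.
    func cleaner(purge func()) {
        ticker := time.NewTicker(time.Minute)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
            case <-kicks:
            }
            purge()
        }
    }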
Secondly, this patch enhances the cache cleaner to support cache
item reset. Currently the cache purge process removes cache
items that are not in use. This may not be sufficient when the
total size of the working set exceeds the cache directory's
capacity. As in the current code, this patch starts the purge
process by removing cache files that are not in use. Cache items
whose access times are older than vfs-cache-max-age are removed first.
After that, other not-in-use items are removed in LRU order until
the cache is within vfs-cache-max-size. If the cache is still over
vfs-cache-max-size (the quota) at this point, this patch adds a cache
reset step to reset/empty cache files that are still in use but not
dirty. This enables application processes to continue without
seeing an error even when the working set exceeds the cache space,
as long as there is not a large write working set hoarding the
entire cache space.
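In outline, the purge pass described above might look like the
following sketch; the cache and item types and their fields are made
up for illustration and do not match the real vfscache structures.

    package sketch

    import "time"

    type item struct {
        atime time.Time // last access time
        size  int64     // bytes used in the cache
        inUse bool      // open by a file handle
        dirty bool      // has data not yet written back
    }

    type cache struct {
        items  []*item       // assumed sorted by atime, oldest first (LRU order)
        quota  int64         // vfs-cache-max-size
        maxAge time.Duration // vfs-cache-max-age
    }

    func (c *cache) used() int64 {
        var n int64
        for _, it := range c.items {
            n += it.size
        }
        return n
    }

    func (c *cache) remove(it *item) { it.size = 0 } // stand-in for deleting the cache file
    func (c *cache) reset(it *item)  { it.size = 0 } // stand-in for emptying the sparse file

    // purge frees cache space in the order described above.
    func (c *cache) purge() {
        // Pass 1: remove not-in-use items older than vfs-cache-max-age.
        for _, it := range c.items {
            if !it.inUse && time.Since(it.atime) > c.maxAge {
                c.remove(it)
            }
        }
        // Pass 2: remove remaining not-in-use items in LRU order until under quota.
        for _, it := range c.items {
            if c.used() <= c.quota {
                return
            }
            if !it.inUse {
                c.remove(it)
            }
        }
        // Pass 3 (new): reset in-use but clean items if still over quota.
        for _, it := range c.items {
            if c.used() <= c.quota {
                return
            }
            if it.inUse && !it.dirty {
                c.reset(it)
            }
        }
    }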
By design this patch does not add ENOSPC error recovery for write
IOs. Rclone does not empty a write cache item until the file data
is written back to the backend upon close. Allowing more cache
space to be consumed by dirty cache items when the cache space is
already running low would increase the risk of exhausting the cache
space to the point that the vfs mount becomes unreadable.
Before this change we set the modtime of the cache file when all
writers had finished.
This had the unfortunate effect that the file was uploaded with the
wrong modtime, which left the modtime wrong on backends which can't
set modtimes except when uploading files.
This change sets the modtime of the cache file immediately in the
cache and in turn sets the modtime in the file info.
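A minimal illustration of the new ordering, assuming a helper along
these lines (the real code lives in the vfs cache item, so the name
here is made up):

    package sketch

    import (
        "os"
        "time"
    )

    // setCacheModTime stamps the on-disk cache file as soon as the modtime
    // is set, so a later upload of the cached data carries the right time.
    func setCacheModTime(cachePath string, modTime time.Time) error {
        return os.Chtimes(cachePath, modTime, modTime)
    }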
Before this change the background writing of the file was racing with
the test of the object on the remote.
This meant that the tests passed locally but failed on a lot of the
remotes.
Before this change we didn't check that the file exists in the cache
before renaming it, setting its modification time or deleting it. If
the file isn't in the cache we don't need to do the action, since it
has already been done on the actual object, so these errors were
producing unnecessary log messages.
This change checks to see if the file exists first before doing those
actions.
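A sketch of the existence check, using a hypothetical helper rather
than the actual cache method names:

    package sketch

    import (
        "errors"
        "io/fs"
        "os"
    )

    // renameCached renames the cached copy of a file only if it exists.
    // If the file was never cached there is nothing to do locally, since
    // the rename has already been applied to the remote object, so we
    // return without logging anything.
    func renameCached(oldPath, newPath string) error {
        if _, err := os.Stat(oldPath); errors.Is(err, fs.ErrNotExist) {
            return nil
        }
        return os.Rename(oldPath, newPath)
    }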
Before this fix, download threads would fill up the buffer and then
time out even though data was still being read from them. If the client
was streaming slower than network speed this caused the downloader to
stop and be restarted continuously. This caused more potential for
skips in the download and unnecessary network transactions.
This patch fixes that behaviour: as long as a downloader is being
read from more often than once every 5 seconds, it won't time out
(see the sketch after the list below).
This was done by:
- kicking the downloader whenever ensureDownloader is called
- making the downloader loop if it has already downloaded past the maxOffset
- making setRange() always kick the downloader
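A sketch of the timeout rule (hypothetical downloader type; the real
one lives in the vfs cache): every read "kicks" the downloader, and it
is only considered idle once nothing has read from it for 5 seconds.

    package sketch

    import (
        "sync"
        "time"
    )

    const maxIdle = 5 * time.Second

    type downloader struct {
        mu       sync.Mutex
        lastRead time.Time
    }

    // kick records that a reader is still consuming data; in the real code
    // ensureDownloader and setRange would call this on every read.
    func (d *downloader) kick() {
        d.mu.Lock()
        d.lastRead = time.Now()
        d.mu.Unlock()
    }

    // expired reports whether the downloader has gone unread for longer
    // than maxIdle and may therefore be stopped.
    func (d *downloader) expired() bool {
        d.mu.Lock()
        defer d.mu.Unlock()
        return time.Since(d.lastRead) > maxIdle
    }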
Due to Chrome's rather complicated use of file handles when saving
files from the downloads window, rclone was attempting to truncate a
closed file.
The file appeared closed due to the handling of 0 length files.
This patch removes the check for the file being closed in the
WriteFileHandle.Truncate call. This is safe because the only action
this method takes is to emit an error message if the file is the wrong
size.
See: https://forum.rclone.org/t/google-drive-cannot-save-files-directly-from-browser-to-gdrive-mounted-path/17992/
This is preparation for getting the Accounting to check the context,
but first we need to get it in place. Since this is one of those
changes that makes lots of noise, it is in a separate commit.
Before this fix we took the directory lock to read the ModTime of the
directory. This was causing locking on directories which were being
re-read from the backend.
This commit gives the modtime its own lock so it can be read even when
the directory is being updated.
See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
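A cut-down sketch of the change (field names are illustrative, not the
exact vfs.Dir layout): the modtime gets its own mutex, so ModTime never
has to wait on the main directory lock.

    package sketch

    import (
        "sync"
        "time"
    )

    type Dir struct {
        mu sync.Mutex // guards the entries; held during slow re-reads from the backend

        modTimeMu sync.Mutex // guards modTime only
        modTime   time.Time
    }

    // ModTime can be answered even while mu is held for a directory listing.
    func (d *Dir) ModTime() time.Time {
        d.modTimeMu.Lock()
        defer d.modTimeMu.Unlock()
        return d.modTime
    }

    func (d *Dir) setModTime(t time.Time) {
        d.modTimeMu.Lock()
        d.modTime = t
        d.modTimeMu.Unlock()
    }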
Before this change files that were in the cache and renamed with
--vfs-cache-mode minimal weren't renamed at all.
This fixes the problem and adds tests for all the different
combinations of cache modes and in and out of the cache.
In this commit (released in v1.52.0)
6ca7198f mount: fix disappearing cwd problem
SetSys was introduced to cache node lookups.
Unfortunately taking the vfs.(*Dir) lock in SetSys causes any FUSE
operations on a directory to pile up behind slow directory listings.
In some situations this leads to very high load.
This commit fixes it by using atomic operations to read and write the
Sys value, making it independent of the lock.
See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
See: #4104
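A sketch of the approach using sync/atomic (the real change is in
vfs.Dir; this is a stripped-down illustration):

    package sketch

    import "sync/atomic"

    type Dir struct {
        sys atomic.Value // cached node lookup, read/written without the Dir lock
    }

    // Sys returns the value set by SetSys, or nil if none has been set.
    func (d *Dir) Sys() interface{} {
        return d.sys.Load()
    }

    // SetSys caches a node lookup without taking any lock, so FUSE
    // operations no longer queue up behind slow directory listings.
    func (d *Dir) SetSys(x interface{}) {
        d.sys.Store(x)
    }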
Previous to the fix, if an item was being uploaded and it was renamed,
the upload would fail with missing checksum errors.
This change cancels any uploads in progress if the file is renamed.
This was caused by the signal to stop buffering being ignored when
there was no buffer!
This is fixed by explicitly checking for no buffering and stopping.
Before this change, if an upload was restarted after a restart of
rclone, the file would get uploaded but never added to the directory
listings.
This change makes sure we add virtual items to the directory cache
when reloading the cache so that they show up properly.
Rclone adds virtual directory entries to the directory cache when it
creates a file or directory.
Before this change these dropped out of the directory cache when the
directory cache was reloaded. This meant that when the directory cache
expired:
- On bucket based backends, empty directories would disappear
- When using VFS writeback, files in the process of uploading would disappear
This is fixed by keeping track of the virtual entries in each
directory. The virtual entries are removed when they become real - ie
the object is read back from the listing.
This also keeps tracks of deletes in the same way so if a file is
deleted, it will not re-appear when the directory cache is reloaded if
the deletion hasn't finished yet.
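A simplified sketch of the virtual-entry bookkeeping (the names and the
listing representation are invented for illustration):

    package sketch

    type vState int

    const (
        virtualAdd    vState = iota // created locally, not yet visible in listings
        virtualDelete               // deleted locally, may still be in listings
    )

    // Dir remembers entries it has changed itself so that a reload of the
    // directory cache can re-apply them until the backend catches up.
    type Dir struct {
        virtual map[string]vState
    }

    func (d *Dir) addVirtual(name string, state vState) {
        if d.virtual == nil {
            d.virtual = make(map[string]vState)
        }
        d.virtual[name] = state
    }

    // applyListing merges a fresh backend listing with the virtual entries:
    // entries that have become real are forgotten, pending deletes are
    // hidden, and pending adds stay visible.
    func (d *Dir) applyListing(listing map[string]bool) map[string]bool {
        out := make(map[string]bool, len(listing))
        for name := range listing {
            if d.virtual[name] == virtualDelete {
                continue // deletion hasn't reached the backend yet - keep hiding it
            }
            delete(d.virtual, name) // the entry is real now - stop tracking it
            out[name] = true
        }
        for name, state := range d.virtual {
            switch state {
            case virtualAdd:
                out[name] = true // still being created/uploaded - keep it visible
            case virtualDelete:
                if !listing[name] {
                    delete(d.virtual, name) // the deletion has completed
                }
            }
        }
        return out
    }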
Before this change we initialized the rc for a single VFS. However
rclone can have multiple VFSes in use now so this is no longer
adequate.
This change adds an optional fs parameter to all the VFS methods to
disambiguate VFSes when there is more than one in use.
It also adds a method vfs/list to show all the active VFSes.
This adds outline tests for the rc commands which didn't have tests
before.
- fix deadlock when cancelling upload
- fix double upload and panic after cancelled upload
- fix cancellation strategy of uploading files
- don't cancel uploads if we don't modify the file
- cancel uploads if we do modify the file
- fix deadlock between Item and writeback
- fix confusion about whether writeback item was being uploaded
- fix corner cases in cancelling uploads and removing files
Item
- Remove unused method getName
- Fix Truncate on unopened file
- Fix bug when downloading segments to fill out file on close
- Fix bug when WriteAt extends the file and we don't mark space as used
downloader
- Retry failed waiters every 5 seconds
- Download to multiple places at once in the stream
- Restart as necessary
- Timeout unused downloaders
- Close reader if too much data skipped
- Only use one file handle per item for writing
- Implement --vfs-read-chunk-size and --vfs-read-chunk-limit
- fix deadlock between asyncbuffer and vfs cache downloader
- fix display of stream abandoned error which should be ignored
On file Remove
- cancel any writebacks in progress
- ignore error message when deleting a non-existent file if the file
  was in the process of being uploaded
Writeback
- Don't transfer the file if it has disappeared in the meantime
- Take our own copy of the file name to avoid deadlocks
- Fix delayed retry logic
- Wait for upload to finish when cancelling upload
Fix race condition in item saving
Fix race condition in vfscache test
Make sure we delete the file on the error path - this makes cascading
failures much less likely
This allows reads to only read part of the file and it keeps on disk a
cache of what parts of each file have been loaded.
File data itself is kept in sparse files.
Before this fix, rclone would sometimes hang in vfs.readAt().
This was due to a race condition causing rclone to miss the timeout
signal.
This was fixed by a small amount of extra locking.
This very likely also fixes a number of "failed to wait for
in-sequence read" errors.
The tests are now run for the mount commands and for the plain VFS.
This makes the tests much easier to debug when running with a VFS than
through a mount.
In
54deb01f00 vfs: Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
The generated file open_test.go was edited directly without editing
the generator.
This commit brings the generator make_open_tests.go back into line
with that edit. It also makes it so `go generate` can be used to
regenerate the tests.
Previously we were using f which calls f.String() which calls f.Path()
which can cause a deadlock if used carelessly.
This patch explicitly calls f.Path() or f._path() to bring attention
to the fact that there is a call to a method.
As part of this we take a copy of the directory path as calling
d.Path() violates the total locking order.
See the comment at the top of file.go for details
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 5ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
When a file has its modtime set while it is open we delay setting the
modtime until the file is closed.
The file is then uploaded in Flush. In Release we check the cached
file has been uploaded by comparing modtimes and or hashes and upload
it again if it has changed.
Before this change we forgot to change the time on the cached file
when we updated the time on the remote object, which meant that Release
reset the time to the wrong time and uploaded the file again on
remotes which don't support hashes (eg crypt).
The fix was to set the modtime of the cached file at the same time we
set the modtime of the remote object. This means that the files check
as identical in Release so it doesn't try to upload the file.
This means that we avoid a double upload and the modtime is correct.
See: https://forum.rclone.org/t/modification-time-with-vfs-cache/13906/8
Before this change, when uploading files from the VFS cache which were
pending a rename, rclone would use the new path of the object when
specifying the destination remote. This didn't cause a problem with
most backends as the subsequent rename did nothing, however with the
drive backend, since it updates objects, the incorrect Remote was
embedded in the object. This caused the rename to apparently succeed
but left the object at the wrong location.
The fix for this was to make sure we upload to the path stored in the
object if available.
This problem was spotted by the new rename tests for the VFS layer.
Before this change, renaming an open file when using the VFS cache was
delayed until the file was closed. This meant that the file was not
readable after a rename even though it was in the cache.
After this change we rename the local cache file and the in memory
cache, delaying only the rename of the file in object storage.
See: https://forum.rclone.org/t/xen-orchestra-ebadf-bad-file-descriptor-write/13104
This makes ReadAt for non cached files wait a short time (up to 5ms)
if it gets an out of order read (which would normally cause a seek,
which takes a long time) to see if the gap will be filled with an in
order read.
This makes mount2 based on go-fuse work more efficiently and enables
async reading in normal mount.
A similar change was done for WriteAt in af030f74f5
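A sketch of the waiting logic using a channel (hypothetical handle
type; the real implementation is in the VFS read handle): an
out-of-order read waits up to 5ms for another read to bring the
in-sequence offset up to it before falling back to a seek.

    package sketch

    import (
        "sync"
        "time"
    )

    const readWait = 5 * time.Millisecond // cf. --vfs-read-wait

    type handle struct {
        mu      sync.Mutex
        offset  int64         // next in-sequence offset
        changed chan struct{} // closed and replaced whenever offset advances
    }

    func newHandle() *handle {
        return &handle{changed: make(chan struct{})}
    }

    // waitInSequence gives other readers a short window to bring the file
    // offset up to off before we fall back to an expensive seek.
    func (h *handle) waitInSequence(off int64) (inOrder bool) {
        deadline := time.After(readWait)
        for {
            h.mu.Lock()
            if h.offset == off {
                h.mu.Unlock()
                return true
            }
            wait := h.changed
            h.mu.Unlock()
            select {
            case <-wait: // offset moved - re-check it
            case <-deadline: // gave up - the caller will seek instead
                return false
            }
        }
    }

    // advance publishes the new in-sequence offset after a successful read
    // and wakes anything waiting in waitInSequence.
    func (h *handle) advance(to int64) {
        h.mu.Lock()
        h.offset = to
        close(h.changed)
        h.changed = make(chan struct{})
        h.mu.Unlock()
    }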
Before this change, change notify polls would clear the directory
cache recursively. So uploading a file to the root would clear the
entire directory cache.
After this change we just invalidate the directory cache of the parent
directory of the item and if the item was a directory we invalidate it
too.
Before this change when we renamed a directory this cleared the
directory cache for the parent directory too.
If the directory was remaining in the same parent this wasn't
necessary and caused the empty directory to fall out of the cache.
Fixes #3597
If a file handle is duplicated with dup() and the duplicate handle is
flushed, rclone will go ahead and close the file, making the original
file handle stale. This change removes the close() call from Flush() and
replaces it with FlushWrites() so that the file only gets closed when
Release() is called. The new FlushWrites method takes care of actually
writing the file back to the underlying storage.
Fixes #3381
These objects (eg Google Docs) appear with 0 length in the VFS.
Before this change, these only read 0 bytes.
After this change, even though the size appears to be 0, the objects
can be read to the end. If the objects are read to the end then the
size on the handle will be updated.
Before this change, with --vfs-cache-mode minimal or writes, if files
were opened they would always be read from the remote, regardless of
whether they were in the cache or not.
This change checks to see if the file is in the cache when opening a
file with --vfs-cache-mode >= minimal and if so then it uses it from
the cache.
This makes --vfs-cache-mode writes in particular much more
efficient. No longer is a file uploaded (with write mode) then
immediately downloaded (with read only mode).
Fixes #3330
This was factored from fstest as we were including the testing
environment in the main binary because of it.
This was causing opening the browser to fail because of 8243ff8bc8.
Introduce stats groups that will isolate accounting for logically
different transferring operations. That way multiple accounting
operations can be done in parallel without interfering with each
other's stats.
Using groups is optional. There are dedicated global stats that will be
used by default if no group is specified. This is the operating mode for
CLI usage, which is just a fire and forget operation.
When running rclone as an rc http server, each request will create its
own group. There is also an option to specify your own group.
This is done to make ownership of the accounting object clear and to
prepare for removing the global stats object.
Stats elapsed time calculation has been altered to account for actual
transfer time instead of stats creation time.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
This makes WriteAt for non cached files wait a short time if it gets
an out of order write (which would normally cause an error) to see if
the gap will be filled with an in order write.
This makes the SFTP backend work fine with --vfs-cache-mode off
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.
After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.
With the original method and the new method the subdirectories are
left unlocked and so potentially could be changed, leading to
inconsistencies. This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.
See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.
It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.
Fixes #2993
This fixes FreeBSD which seems to call SetAttr with a size even on
read only files.
This is probably a bug in the FreeBSD FUSE implementation as it
happens with mount and cmount.
See: https://forum.rclone.org/t/freebsd-question/8662/12
Previously to this change, backends without the optional interface
DirMove could not rename directories.
This change uses the new operations.DirMove call to implement renaming
directories which will fall back to Move/Copy as necessary.
Before this change, renaming and deleting of open files (which can
easily happen due to the asynchronous nature of file systems) would
produce an error, for example saving files with Firefox.
After this change we open files with the flags necessary for open
files to be renamed or deleted.
Fixes #2730
Before this change we took the locks file.mu and file.muRW in an
inconsistent order - after the change we always take them in the same
order to fix the deadlock.
Before this fix there were two paths where concurrent use of a
directory could take the file lock then directory lock and the other
would take the locks in the reverse order.
Fix this by narrowing the locking windows so the file lock and
directory lock don't overlap.
Reduce the number of nodes purged from the dir-cache when ForgetPath is
called. This is done by only forgetting the cache of the received path
and invalidating the parent folder cache by resetting *Dir.read.
The parent will read the listing on the next access and reuse the
dir-cache of entries in *Dir.items.
Before this change remotes without server side Move (eg swift, s3,
gcs) would not be able to rename files.
After it means nearly all remotes will be able to rename files on
rclone mount with the notable exceptions of b2 and yandex.
This change checks to see if the remote can do Move or Copy, then
calls `operations.Move` to do the actual move. This will do a server
side Move or Copy but won't download and re-upload the file.
It also checks to see if the destination exists first which avoids
conflicts or duplicates.
Fixes #1965
Fixes #2569
vfs/refresh will walk the directory tree for the given paths and
freshen the directory cache. It will use the fast-list capability
of the remote when enabled.
This stops the cache cleaner running unnecessarily and saves
resources.
This also helps with issue #2227 which was caused by a second mount
deleting objects in the first mount's cache.
Without this fix the cached file can be removed as the file is being
uploaded or downloaded. This can cause the directory listings to
become inconsistent (this issue) or data loss (if a retry was needed
in the Copy).
Remove file needs to be excluded from running at the same time as both
openPending and close so it makes sense to unify the locking between
all 3.
This was caused by this sequence of calls
1> file.Release
1> file.close -> takes the file lock
2> vfs.waitforWriters
2> dir.walk -> takes the dir lock
1> file.setObject
1> dir.addObject -> attempts to take the dir lock - BLOCKS
2> file.activeWriters -> tries to take file lock - BLOCKS - DEADLOCK
The fix is to make activeWriters not take the file lock and use atomic
operations to read the number of writers instead.
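A sketch of the fix (hypothetical File fields; the real fix is in
vfs.File): the writer count is kept in an atomic so activeWriters never
needs the file lock.

    package sketch

    import "sync/atomic"

    type File struct {
        nwriters int32 // number of open writers, updated atomically
    }

    func (f *File) addWriter()    { atomic.AddInt32(&f.nwriters, 1) }
    func (f *File) removeWriter() { atomic.AddInt32(&f.nwriters, -1) }

    // activeWriters reports the number of open writers without taking the
    // file lock, so it can be called from the wait-for-writers path while
    // another goroutine holds the file lock (breaking the deadlock above).
    func (f *File) activeWriters() int {
        return int(atomic.LoadInt32(&f.nwriters))
    }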
This no longer needs to deal with O_RDONLY and O_TRUNC since we
disallow this earlier. This also fixes the code to just do it for
O_APPEND, not for everything.
Before this change Open("name", os.O_RDONLY|os.O_TRUNC) would have
truncated the file. This is what Linux does, but is counterintuitive.
POSIX states this is undefined, so return an error in this case
instead. This preserves the invariant O_RDONLY => file is not
changed.
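A sketch of the flag check; the mask constant is defined locally for
illustration and simply extracts the access-mode bits from the open
flags.

    package sketch

    import (
        "os"
        "syscall"
    )

    // accessModeMask extracts the access mode bits (O_RDONLY/O_WRONLY/O_RDWR)
    // from the open flags.
    const accessModeMask = os.O_RDONLY | os.O_WRONLY | os.O_RDWR

    // checkFlags rejects the undefined O_RDONLY|O_TRUNC combination so that
    // opening a file read-only can never change it.
    func checkFlags(flags int) error {
        if flags&accessModeMask == os.O_RDONLY && flags&os.O_TRUNC != 0 {
            return syscall.EINVAL
        }
        return nil
    }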
Background: cmd/mount/file.go Open() function does a Seek(0, 1) to see
if the file handle is seekable to set a FUSE hint. Before this change
the file was downloaded before it needed to be which was inefficient
(and broke beta.rclone.org because HEAD requests caused downloads!).
This means that Google docs will no longer appear as huge files in
`rclone mount`. They will not be downloadable, though sometimes
trying twice will work.
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.
This was causing errors if the cache cleaner was called between the
Open and the openPending of a RW file.
The fix was to move the cache open from openPending into Open.