Before this fix, if an item was renamed while it was being uploaded,
the upload would fail with missing checksum errors.
This change cancels any uploads in progress if the file is renamed.
This is a small patch that removes a defer statement from inside a for
loop. Instead, the file is closed explicitly once its bytes have been
copied from the tar file reader.
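A minimal sketch of the pattern, with a hypothetical extract helper and
destination directory (not the actual code being patched): each file is
closed as soon as its copy finishes rather than deferring the close
inside the loop.

    package untar

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
    )

    // extract writes each entry of the tar archive into dstDir. Each
    // file is closed as soon as its bytes are copied, rather than with
    // a defer inside the loop, which would only run when the whole
    // function returns.
    func extract(tr *tar.Reader, dstDir string) error {
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil // end of archive
            }
            if err != nil {
                return err
            }
            f, err := os.Create(filepath.Join(dstDir, hdr.Name))
            if err != nil {
                return err
            }
            _, copyErr := io.Copy(f, tr)
            closeErr := f.Close() // close now, not deferred
            if copyErr != nil {
                return copyErr
            }
            if closeErr != nil {
                return closeErr
            }
        }
    }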
pCloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, for example:
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
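A minimal sketch of picking the parameter out of the redirect using the
standard net/http package; the handler and saveHostname callback are
hypothetical, not rclone's actual oauth plumbing.

    package oauthutil

    import "net/http"

    // handleRedirect reads the oauth callback query string. If pCloud
    // sent a "hostname" parameter (e.g. eapi.pcloud.com) it is saved so
    // that all subsequent API calls go to that region.
    func handleRedirect(w http.ResponseWriter, r *http.Request, saveHostname func(string)) {
        q := r.URL.Query()
        code := q.Get("code") // exchanged for a token elsewhere
        if hostname := q.Get("hostname"); hostname != "" {
            saveHostname(hostname) // persist to the config file
        }
        _ = code
        w.WriteHeader(http.StatusOK)
    }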
Before this change, a dangling shortcut would cause an error in the
directory listing.
This patch makes dangling shortcuts appear as 0-sized objects in the
directory listing so that they can be deleted. These objects can't be
read, though.
Before this change, if the cache was given a source `remote:file` it
stored `remote:` with the error `fs.ErrorIsFile` attached. This meant
that if `remote:` was subsequently looked up, it would return the
`fs.ErrorIsFile` error.
This broke `moveto remote:file remote:file2` as moveto would look up
`remote:` from the second argument and erroneously get the
`fs.ErrorIsFile` error.
This likely broke other commands too.
This was broken in
4c9836035 fs/cache: Add Pin and Unpin and canonicalised lookup
which was released in v1.52.0.
The fix is to make a new cache entry for `remote:` with no error
attached in the case that the original call returned `fs.ErrorIsFile`.
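A minimal sketch of the idea using a hypothetical cache keyed by the
remote string (this is not the real fs/cache API): when creation
returns fs.ErrorIsFile, the returned Fs really describes the parent
`remote:`, so a clean entry for the parent is added with no error
attached.

    package cache

    import "errors"

    // errorIsFile stands in for fs.ErrorIsFile in this sketch.
    var errorIsFile = errors.New("is a file not a directory")

    type entry struct {
        f   interface{} // the Fs that was created
        err error       // error returned with it, e.g. errorIsFile
    }

    var entries = map[string]entry{}

    // put caches the result of creating an Fs for remote. If the create
    // returned errorIsFile, also store a clean entry for the parent
    // "remote:" so later lookups of the parent don't inherit the error.
    func put(remote, parent string, f interface{}, err error) {
        entries[remote] = entry{f: f, err: err}
        if errors.Is(err, errorIsFile) {
            entries[parent] = entry{f: f, err: nil}
        }
    }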
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing drive IDs before comparing them.
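A minimal sketch of the comparison, assuming (per the description
above) that an empty drive ID means the user's home drive; the helper
names are illustrative, not the backend's real ones.

    package onedrive

    import "strings"

    // canonicalDriveID fills in the home drive for an empty ID and
    // lower-cases it so equivalent IDs compare equal.
    func canonicalDriveID(id, homeDriveID string) string {
        if id == "" {
            id = homeDriveID
        }
        return strings.ToLower(id)
    }

    // sameDrive reports whether src and dst are on the same drive, in
    // which case a server side move can be used instead of a copy plus
    // delete.
    func sameDrive(srcDriveID, dstDriveID, homeDriveID string) bool {
        return canonicalDriveID(srcDriveID, homeDriveID) ==
            canonicalDriveID(dstDriveID, homeDriveID)
    }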
Currently, credentials are required to download a file from a public
bucket, which is not really necessary and makes automated usage more
complex.
Add a new option "anonymous" which, when enabled, configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access, and trying to write will lead to errors like:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. The anonymous access option is disabled by default, so
the GCS Application Default Credentials are still used as before and
an error is given if they can't be found.
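A minimal sketch of reading a public object anonymously with the
cloud.google.com/go/storage client; the bucket and object names are
placeholders.

    package main

    import (
        "context"
        "io"
        "log"
        "os"

        "cloud.google.com/go/storage"
        "google.golang.org/api/option"
    )

    func main() {
        ctx := context.Background()
        // No credentials are loaded, so only publicly readable objects work.
        client, err := storage.NewClient(ctx, option.WithoutAuthentication())
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        r, err := client.Bucket("some-public-bucket").Object("path/to/file").NewReader(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer r.Close()

        if _, err := io.Copy(os.Stdout, r); err != nil {
            log.Fatal(err)
        }
    }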
Before this change, rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this, and the OpenSSH sftp tool
also uses absolute paths, which is a good reason for switching.
This change reads the current working directory at startup and bases
all file requests on it.
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
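A minimal sketch of the approach using the github.com/pkg/sftp client;
the wrapper type is illustrative. The working directory is read once
when the connection is made, and relative paths are joined onto it.

    package sftpfs

    import (
        "path"

        "github.com/pkg/sftp"
    )

    type conn struct {
        client *sftp.Client
        cwd    string // absolute working directory read at startup
    }

    // newConn wraps an sftp.Client and remembers the server's current
    // working directory.
    func newConn(client *sftp.Client) (*conn, error) {
        cwd, err := client.Getwd()
        if err != nil {
            return nil, err
        }
        return &conn{client: client, cwd: cwd}, nil
    }

    // absPath converts a path relative to the remote root into an
    // absolute path so servers like WS FTP accept it.
    func (c *conn) absPath(p string) string {
        if path.IsAbs(p) {
            return p
        }
        return path.Join(c.cwd, p)
    }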
This was caused by the signal to stop buffering being ignored when
there was no buffer!
This is fixed by explicitly checking for no buffering and stopping.
Before this change, if an upload was restarted after rclone was
restarted, the file would get uploaded but never added to the
directory listings.
This change makes sure we add virtual items to the directory cache
when reloading the cache so that they show up properly.
Rclone adds virtual directory entries to the directory cache when it
creates a file or directory.
Before this change these dropped out of the directory cache when the
directory cache was reloaded. This meant that when the directory cache
expired:
- On bucket based backends, empty directories would disappear
- When using VFS writeback, files in the process of uploading would disappear
This is fixed by keeping track of the virtual entries in each
directory. The virtual entries are removed when they become real,
i.e. when the object is read back from the listing.
This also keeps track of deletes in the same way, so if a file is
deleted it will not re-appear when the directory cache is reloaded if
the deletion hasn't finished yet.
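A minimal sketch of the bookkeeping with hypothetical types (not the
actual VFS internals): each directory remembers which entries are
virtual and why, and drops a virtual entry once the real object appears
in, or a pending delete disappears from, a fresh listing.

    package vfs

    // vState records why a virtual entry exists.
    type vState int

    const (
        vAddFile vState = iota + 1 // file created locally, upload pending
        vAddDir                    // directory created locally
        vDel                       // delete requested, not yet confirmed
    )

    // dir holds a directory's cached listing plus its virtual entries.
    type dir struct {
        items   map[string]struct{} // names visible in the directory
        virtual map[string]vState   // names added or removed locally
    }

    // applyListing merges a freshly read listing with the virtual entries.
    func (d *dir) applyListing(names []string) {
        listed := make(map[string]struct{}, len(names))
        for _, name := range names {
            listed[name] = struct{}{}
        }
        d.items = map[string]struct{}{}
        for name := range listed {
            if d.virtual[name] == vDel {
                continue // delete still pending remotely, keep it hidden
            }
            d.items[name] = struct{}{}
            delete(d.virtual, name) // entry became real, drop the marker
        }
        for name, state := range d.virtual {
            _, inListing := listed[name]
            switch {
            case state == vDel && !inListing:
                delete(d.virtual, name) // the delete has completed remotely
            case state != vDel:
                d.items[name] = struct{}{} // keep locally created entries visible
            }
        }
    }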
Before this change we initialized the rc for a single VFS. However,
rclone can have multiple VFSes in use now, so this is no longer
adequate.
This change adds an optional fs parameter to all the VFS methods to
disambiguate VFSes when there is more than one in use.
It also adds a method vfs/list to show all the active VFSes.
This adds outline tests for the rc commands which didn't have tests
before.
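A minimal sketch of driving this over the rc HTTP API, assuming the
default rc address of localhost:5572 and a placeholder remote name; rc
commands take a JSON body POSTed to the method path.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    // call posts a JSON body to an rc method and prints the raw reply.
    func call(method, body string) {
        resp, err := http.Post("http://localhost:5572/"+method,
            "application/json", bytes.NewBufferString(body))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        out, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> %s\n", method, out)
    }

    func main() {
        // Show the active VFSes, then target one of them explicitly via
        // the optional "fs" parameter added by this change.
        call("vfs/list", `{}`)
        call("vfs/refresh", `{"fs": "remote:"}`)
    }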
- fix deadlock when cancelling upload
- fix double upload and panic after cancelled upload
- fix cancellation strategy for uploading files
- don't cancel uploads if we don't modify the file
- cancel uploads if we do modify the file
- fix deadlock between Item and writeback
- fix confusion about whether writeback item was being uploaded
- fix corner cases in cancelling uploads and removing files
Item
- Remove unused method getName
- Fix Truncate on unopened file
- Fix bug when downloading segments to fill out file on close
- Fix bug when WriteAt extends the file and we don't mark space as used
downloader
- Retry failed waiters every 5 seconds
- Download to multiple places at once in the stream
- Restart as necessary
- Timeout unused downloaders
- Close reader if too much data skipped
- Only use one file handle per item for writing
- Implement --vfs-read-chunk-size and --vfs-read-chunk-size-limit
- fix deadlock between asyncbuffer and vfs cache downloader
- fix display of stream abandoned error which should be ignored
On file Remove
- cancel any writebacks in progress
- ignore the error deleting a non-existent file if the file was in the
process of being uploaded
Writeback
- Don't transfer the file if it has disappeared in the meantime
- Take our own copy of the file name to avoid deadlocks
- Fix delayed retry logic
- Wait for upload to finish when cancelling upload
Fix race condition in item saving
Fix race condition in vfscache test
Make sure we delete the file on the error path - this makes cascading
failures much less likely