Unfortunately bcrypt only hashes the first 72 bytes of its input,
which meant that using it on ssh keys longer than 72 bytes was
incorrect.
This swaps over to using sha256, which should be adequate for
protecting in-memory passwords, given that the unencrypted password is
likely in memory too.
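As a minimal sketch of the idea (not rclone's actual obscuring code),
sha256 digests the whole input rather than truncating it at 72 bytes:
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deriveKey is illustrative only: unlike bcrypt, sha256 hashes the whole
// input, so passphrases longer than 72 bytes are not silently truncated.
func deriveKey(passphrase []byte) [32]byte {
	return sha256.Sum256(passphrase)
}

func main() {
	long := make([]byte, 100) // longer than bcrypt's 72 byte limit
	fmt.Printf("%x\n", deriveKey(long))
}
```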
Before this change, when uploading files from the VFS cache which were
pending a rename, rclone would use the new path of the object when
specifying the destination remote. This didn't cause a problem with
most backends, as the subsequent rename did nothing, however with the
drive backend, since it updates objects in place, the incorrect Remote
was embedded in the object. This caused the rename to apparently
succeed but leave the object at the wrong location.
The fix for this was to make sure we upload to the path stored in the
object if available.
This problem was spotted by the new rename tests for the VFS layer.
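A minimal sketch of that rule, with illustrative names rather than the
actual VFS code:
```go
package sketch

// uploadRemote picks the destination path for the upload: prefer the remote
// already stored in the object (its current location) over the new, pending
// rename path. Illustrative only.
func uploadRemote(storedRemote, pendingRemote string) string {
	if storedRemote != "" {
		return storedRemote
	}
	return pendingRemote
}
```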
This error started happening after updating golang/x/crypto which was
done as a side effect of:
3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD
This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.
See: https://go-review.googlesource.com/c/crypto/+/207599
The fix is to call ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise.
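A sketch of that conditional (the wrapper name is illustrative; the two
parse functions are from golang.org/x/crypto/ssh):
```go
package sketch

import "golang.org/x/crypto/ssh"

// parsePrivateKey only uses the passphrase variant when a passphrase was
// actually supplied, since newer golang.org/x/crypto versions reject an
// empty passphrase. Illustrative only.
func parsePrivateKey(pemBytes, passphrase []byte) (ssh.Signer, error) {
	if len(passphrase) == 0 {
		return ssh.ParsePrivateKey(pemBytes)
	}
	return ssh.ParsePrivateKeyWithPassphrase(pemBytes, passphrase)
}
```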
This has been reported to Wasabi and they've confirmed it as a known
issue: multipart uploads can't be 0 sized, even though that is
incompatible with AWS S3.
If the --drive-stop-on-upload-limit flag is in effect this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750GB a day limit.
If so then it turns this error into a Fatal error which should stop
the sync.
Fixes #3857
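A rough sketch of that check, assuming rclone's fserrors.FatalError
wrapper; the flag plumbing and the quota message text are placeholders,
not the backend's actual code:
```go
package sketch

import (
	"strings"

	"github.com/rclone/rclone/fs/fserrors"
)

// maybeFatal promotes the daily upload quota error to a fatal error when
// --drive-stop-on-upload-limit is in effect, so the sync stops instead of
// retrying. The message below is a placeholder, not the exact string.
func maybeFatal(err error, stopOnUploadLimit bool) error {
	if err == nil || !stopOnUploadLimit {
		return err
	}
	if strings.Contains(err.Error(), "rate limit exceeded") {
		return fserrors.FatalError(err)
	}
	return err
}
```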
The failure is this, which is not reproducible locally, only on the CI
servers:
--- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
fs.go:351:
Error Trace: fs.go:351
write.go:65
Error: Received unexpected error:
open E:testwrite: The request could not be performed because of an I/O device error.
Test: TestMount/CacheMode=minimal/TestWriteFileOverwrite
The corresponding ERROR from the log is this:
ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.
Instead of using ioutil.WriteFile, this fix uses an equivalent based on
rclone's lib/file which doesn't set the exclusive flag on
Windows. This allows files that are open to be deleted. It also
deletes the existing file and retries if an error is received.
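A sketch of such a replacement, assuming rclone's lib/file.OpenFile
(which avoids the exclusive flag on Windows); the delete-and-retry
logic described above is omitted:
```go
package sketch

import (
	"os"

	"github.com/rclone/rclone/lib/file"
)

// writeFile is an ioutil.WriteFile lookalike built on lib/file, so the file
// can still be deleted on Windows while it is open. Illustrative only.
func writeFile(name string, data []byte, perm os.FileMode) error {
	f, err := file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return err
	}
	_, err = f.Write(data)
	if closeErr := f.Close(); err == nil {
		err = closeErr
	}
	return err
}
```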
Statistics of transfers which were interrupted were not cleared before
the retry iteration, so these transfers showed as over 100% complete.
This change clears transfer accounting before the next retry iteration
is done in order to keep the numbers accurate.
Fixes #3861
In listings, if the ID `appDataFolder` is used to list a directory, the
parents of the items returned have the actual ID instead of the alias
`appDataFolder`. This confused the ListR routine into ignoring all
these items.
This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query. This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for, which is probably OK.
Fixes #3851
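A sketch of the acceptance rule (illustrative names, not the actual
ListR code):
```go
package sketch

// keepItem decides whether a listed item belongs to the directories we asked
// for. With a single queried ID we accept everything returned, since Drive may
// report the canonical parent ID rather than an alias such as "appDataFolder".
func keepItem(queriedIDs []string, parentID string) bool {
	if len(queriedIDs) == 1 {
		return true
	}
	for _, id := range queriedIDs {
		if id == parentID {
			return true
		}
	}
	return false
}
```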
Without the fix we can have a race, example:
```
Write at 0x00c000432039 by goroutine 187:
github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error()
fs/accounting/stats.go:495 +0x3f1
github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error-fm()
fs/accounting/stats.go:477 +0x55
github.com/rclone/rclone/fs/walk.listRwalk.func1()
fs/walk/walk.go:162 +0xd2
github.com/rclone/rclone/fs/walk.walk.func2()
fs/walk/walk.go:402 +0x30f
Previous read at 0x00c000432039 by goroutine 184:
github.com/rclone/rclone/fs/accounting.(*statsGroups).sum()
fs/accounting/stats_groups.go:351 +0xcae
github.com/rclone/rclone/fs/accounting.rcTransferredStats()
fs/accounting/stats_groups.go:132 +0x1f4
```
Fixes #3844
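The shape of the fix is the usual one, sketched below with illustrative
names: guard the shared error state with the stats mutex so the Error
writer and the sum reader can't race.
```go
package sketch

import "sync"

// statsInfo shows the locking pattern only; rclone's real StatsInfo has many
// more fields guarded by the same mutex.
type statsInfo struct {
	mu  sync.RWMutex
	err error
}

func (s *statsInfo) Error(err error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.err = err
}

func (s *statsInfo) sumError() error {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.err
}
```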
The S3 ListObjects API returns paginated bucket listings, with
"MaxKeys" items for each GET call.
The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.
This commit does not add safeguards around this value - if a user
decides to request too large a list, it might result in connection
timeouts (on the server or client).
In AWS S3 there is a fixed limit of 1000, and some other services might
have one too. In Ceph, this can be configured in RadosGW.
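Illustratively, the new option just feeds through to the MaxKeys field
of the ListObjects request (using aws-sdk-go types; the function here
is a sketch, not the backend's actual code):
```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newListRequest builds a bucket listing request with a configurable page size.
func newListRequest(bucket, prefix string, listChunk int64) *s3.ListObjectsInput {
	return &s3.ListObjectsInput{
		Bucket:  aws.String(bucket),
		Prefix:  aws.String(prefix),
		MaxKeys: aws.Int64(listChunk), // default 1000; larger values need server support
	}
}
```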
Before this patch we were failing to URL decode the NextMarker when
URL encoding was used for the listing.
The result of this was duplicated listing entries for directories
with >1000 entries where the NextMarker was a file containing a space.
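A sketch of the decode step (illustrative; the real code lives in the
bucket listing loop): when URL encoding was requested, the NextMarker
must be unescaped before it is reused for the next page.
```go
package sketch

import "net/url"

// decodeNextMarker undoes the URL encoding the server applied to the
// NextMarker when EncodingType=url was requested.
func decodeNextMarker(marker string, urlEncoded bool) (string, error) {
	if !urlEncoded {
		return marker, nil
	}
	return url.QueryUnescape(marker)
}
```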
Before this change, renaming an open file when using the VFS cache was
delayed until the file was closed. This meant that the file was not
readable after a rename even though it was in the cache.
After this change we rename the local cache file and the in memory
cache, delaying only the rename of the file in object storage.
See: https://forum.rclone.org/t/xen-orchestra-ebadf-bad-file-descriptor-write/13104