This error was introduced in the following commit when refactoring the list
routine:
b8591b230d onedrive: implement ListR method which gives --fast-list support
The error was caused by OneNote files not being skipped properly.
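As a sketch of the kind of filtering the recursive lister needs (the Item type and its PackageType field below are illustrative stand-ins, not rclone's actual onedrive API types):
```
// Item is an illustrative stand-in for a onedrive listing entry.
type Item struct {
	Name        string
	PackageType string
}

// skipOneNote filters OneNote packages out of a listing page, mirroring
// what the non-recursive List routine already does.
func skipOneNote(items []Item) []Item {
	out := items[:0]
	for _, item := range items {
		if item.PackageType == "oneNote" {
			continue // OneNote files can't be downloaded, so hide them
		}
		out = append(out, item)
	}
	return out
}
```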
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library can now pass the hostname instead.
The update requires a small change in the dial method call.
Fixes rclone#6672
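A sketch of the shape of the change; the DialConn method below is an assumption about the cloudsoda/go-smb2 fork's API, and the real name and arguments may differ:
```
import (
	"context"
	"net"

	smb2 "github.com/cloudsoda/go-smb2"
)

// connect dials the server over TCP but hands the hostname, rather than
// the resolved IP, to the SMB layer so it appears in the connect request.
func connect(ctx context.Context, d *smb2.Dialer, host, port string) (*smb2.Session, error) {
	conn, err := net.Dial("tcp", net.JoinHostPort(host, port))
	if err != nil {
		return nil, err
	}
	// Assumed API: the fork takes the server name as an extra argument
	// compared to the old Dial(conn) call.
	return d.DialConn(ctx, conn, host)
}
```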
Before this change ListR was unconditionally enabled on onedrive.
This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.
Fixes #7362
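For example, a recursive listing can opt back in with something like:
```
rclone lsf -R --fast-list --onedrive-delta onedrive:
```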
Before this change PartialUploads was not set. This was clearly wrong
since incoming files are visible on the SMB server.
Setting PartialUploads fixes the multithread upload modtime problem, as
rclone uses the PartialUploads flag as an indication that it needs to
set the modtime explicitly.
This problem was detected by the new TestMultithreadCopy integration
tests.
Fixes #7411
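A sketch of the one-flag fix in the backend's NewFs, using rclone's usual fs.Features pattern (other feature flags elided; f is the backend's Fs, which implements fs.Fs):
```
// Inside NewFs, after the Fs has been constructed:
f.features = (&fs.Features{
	PartialUploads: true, // partially uploaded files are visible on the server
}).Fill(ctx, f)
```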
Before this change SMB shares sometimes showed a fraction of their
correct size in `rclone about`.
This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.
The new library passes the integration tests.
Fixes #6733
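At the source level the swap is mostly the import path change sketched below, with the go.mod requirement updated to match:
```
import (
	// Before: smb2 "github.com/hirochachacha/go-smb2"
	smb2 "github.com/cloudsoda/go-smb2"
)
```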
Before this change overwriting an existing file with a 0-length file
didn't update the file size.
This change corrects the issue and makes sure the file is truncated
properly.
This was discovered by the full integration tests.
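A minimal standard-library sketch of why truncation matters here: a 0-length upload writes no bytes, so only O_TRUNC (or an explicit Truncate) actually shrinks the existing file:
```
import "os"

// overwrite truncates on open so that writing zero bytes still results
// in a zero-length file rather than leaving the old contents in place.
func overwrite(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o666)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.Write(data)
	return err
}
```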
Before this change serve s3 would return NoSuchKey errors when a
non-existent prefix was listed.
This change fixes it to return an empty list like AWS does.
This was discovered by the full integration tests.
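A sketch of the corrected behaviour; the result type and lookup helper below are illustrative, not serve s3's actual code:
```
import "errors"

var errNotFound = errors.New("not found")

// ListResult is an illustrative stand-in for an S3 ListObjects response.
type ListResult struct {
	Objects []string
}

// listObjects returns an empty list for a prefix that matches nothing,
// as AWS does, instead of a NoSuchKey error.
func listObjects(bucket, prefix string, lookup func(bucket, prefix string) ([]string, error)) (*ListResult, error) {
	entries, err := lookup(bucket, prefix)
	if errors.Is(err, errNotFound) {
		return &ListResult{}, nil // empty list, not NoSuchKey
	}
	if err != nil {
		return nil, err
	}
	return &ListResult{Objects: entries}, nil
}
```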
The following command will block for 60s (the default) when the network is slow or unavailable:
```
rclone --contimeout 10s --low-level-retries 0 lsd dropbox:
```
This change makes it time out after the expected 10s.
Signed-off-by: rkonfj <rkonfj@gmail.com>
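A minimal sketch of the intended behaviour using the standard library; the real fix wires --contimeout into the Dropbox SDK's HTTP client rather than dialling directly like this:
```
import (
	"context"
	"net"
	"time"
)

// dialWithConnTimeout fails fast if the TCP connection can't be made
// within contimeout, instead of hanging for the OS default.
func dialWithConnTimeout(ctx context.Context, addr string, contimeout time.Duration) (net.Conn, error) {
	d := net.Dialer{Timeout: contimeout} // e.g. 10s from --contimeout 10s
	return d.DialContext(ctx, "tcp", addr)
}
```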
Before this change, streaming files whose size was an exact multiple of
the chunk size would cause rclone to attempt to stream a 0-sized chunk,
which was rejected by the b2 servers.
This bug was noticed by the new integration tests for chunked streaming.
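A minimal sketch of the boundary condition: when the stream length is an exact multiple of the chunk size, the final read returns 0 bytes and io.EOF, and that empty chunk must not be uploaded:
```
import "io"

// streamChunks uploads fixed-size chunks, never emitting a trailing
// 0-sized chunk when the input is an exact multiple of chunkSize.
func streamChunks(in io.Reader, chunkSize int, upload func([]byte) error) error {
	buf := make([]byte, chunkSize)
	for {
		n, err := io.ReadFull(in, buf)
		if n > 0 {
			if uerr := upload(buf[:n]); uerr != nil {
				return uerr
			}
		}
		switch err {
		case nil:
			continue // full chunk read, keep going
		case io.EOF, io.ErrUnexpectedEOF:
			return nil // end of stream; no empty chunk sent
		default:
			return err
		}
	}
}
```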
Before this change the b2 servers would complain, as this was only a
single-part transfer.
This was noticed by the new integration tests for server side chunked copy.
Before this change, if a multithread upload failed (say, because the
source became unavailable) rclone would finalise the file before
aborting the transfer.
This caused the partial file to be written, which would overwrite any
existing file.
This was fixed by making sure we Abort the transfer before Closing it.
This updates the docs to encourage calling Abort before Close and
updates writerAtChunkWriter to make sure that works properly.
This also reworks the tests to detect this and to make sure we upload
to and download from each multi-thread capable backend (we were only
downloading before, which isn't a full test).
Fixes #7071
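A sketch of the pattern the docs now encourage, using the method names of rclone's chunk writer interface (WriteChunk, Abort, Close); error handling is trimmed for brevity:
```
import (
	"context"
	"io"

	"github.com/rclone/rclone/fs"
)

// uploadChunks aborts on any failure so the partial upload is discarded,
// and only finalises with Close once every chunk has been written.
func uploadChunks(ctx context.Context, w fs.ChunkWriter, chunks []io.ReadSeeker) error {
	for i, chunk := range chunks {
		if _, err := w.WriteChunk(ctx, i, chunk); err != nil {
			_ = w.Abort(ctx) // discard the partial file...
			return err       // ...never finalise it over existing data
		}
	}
	return w.Close(ctx)
}
```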
The following commit fixed the problem but made the integration tests fail:
33376bf399 dropbox: fix missing encoding for rclone purge
This fixes the problem properly by making sure we send the encoded or
non-encoded root to the right places.
The free account has a very ungenerous limit of 1000 API calls per day
and the full integration test suite breaches that, so this limits the
integration tests to just the backend.
For uploads which are coming from disk, going to disk, or going to a
backend which doesn't need to seek except for retries, this no longer
buffers the input.
This dramatically reduces rclone's memory usage.
Fixes #7350
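A minimal sketch of the test this optimisation hinges on: a seekable source (such as a file on disk) can simply be rewound on retry, so it no longer needs to be buffered:
```
import "io"

// needsBuffering reports whether the input must be buffered so a retry
// can replay it; seekable inputs can Seek back to the start instead.
func needsBuffering(in io.Reader) bool {
	_, seekable := in.(io.Seeker)
	return !seekable
}
```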