In this commit:
75dfdbf211 ci: revert revive settings back to fix lint
We accidentally disabled all the revive linters. Unfortunately,
setting any rules clears the default set of rules, so it is necessary
to list every rule that we need.
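For reference, the shape of the fix in golangci-lint's configuration: once any rule appears under revive, the defaults are cleared, so each wanted rule must be named explicitly. A minimal sketch (the rule names below are examples, not the full set rclone enables):

```yaml
# Excerpt of a hypothetical .golangci.yml: listing any rule under
# revive replaces the default rule set, so every rule must be named.
linters-settings:
  revive:
    rules:
      - name: exported
      - name: package-comments
      - name: var-naming
```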
This implements the OpenChunkWriter interface for b2 which
enables multi-thread uploads.
This makes the memory controls of the b2 backend inoperative; they are
replaced with the global ones.
--b2-memory-pool-flush-time
--b2-memory-pool-use-mmap
Using the buffered reader fixes excessive memory use when uploading
large files, as it shares memory pages between all readers.
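For context, the chunk-writer pattern looks roughly like this in Go. This is a simplified sketch of rclone's fs.ChunkWriter interface, not the exact definition; the multi-thread uploader calls WriteChunk concurrently for numbered chunks:

```go
// Package sketch shows the shape of the chunk writer used for
// multi-thread uploads. This is a simplification of rclone's
// fs.ChunkWriter; the real interface and its options carry more detail.
package sketch

import (
	"context"
	"io"
)

// ChunkWriter accepts numbered chunks, which may be written
// concurrently from several goroutines.
type ChunkWriter interface {
	// WriteChunk uploads chunk chunkNumber from reader and
	// returns the number of bytes written.
	WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (int64, error)
	// Close finalises the upload once every chunk has been written.
	Close(ctx context.Context) error
	// Abort cancels the upload and removes any partial state.
	Abort(ctx context.Context) error
}
```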
This implements the OpenChunkWriter interface for azureblob which
enables multi-thread uploads.
This makes the memory controls of the azureblob backend inoperative;
they are replaced with the global ones.
--azureblob-memory-pool-flush-time
--azureblob-memory-pool-use-mmap
Using the buffered reader fixes excessive memory use when uploading
large files, as it shares memory pages between all readers.
This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones.
--s3-memory-pool-flush-time
--s3-memory-pool-use-mmap
Using the buffered reader fixes excessive memory use when uploading
large files, as it shares memory pages between all readers.
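A rough illustration of that buffered-reader idea, with a plain byte slice standing in for the shared memory pages (rclone's actual pool implementation differs): each chunk reader is a view over the same backing data rather than a private copy.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	// One shared backing buffer, standing in for shared memory pages.
	data := bytes.Repeat([]byte("x"), 1024)

	// Each chunk reader is a view over the same backing slice, so
	// concurrent uploaders do not each hold their own copy.
	const chunkSize = 256
	for i := 0; i*chunkSize < len(data); i++ {
		r := io.NewSectionReader(bytes.NewReader(data), int64(i*chunkSize), chunkSize)
		n, err := io.Copy(io.Discard, r) // stand-in for uploading the chunk
		if err != nil {
			fmt.Println("chunk", i, "error:", err)
			return
		}
		fmt.Println("chunk", i, "read", n, "bytes")
	}
}
```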
Fixes #7141
- fix docs and error messages for multithread
- use sync/errgroup's built-in concurrency limiting (see the sketch after this list)
- re-arrange multithread code
- don't continue multi-thread uploads if one part fails
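The errgroup change relies on the SetLimit method from golang.org/x/sync/errgroup. A minimal sketch of capped, fail-fast part uploads (uploadPart is a hypothetical stand-in); cancelling the group's context is also what stops the remaining parts when one part fails:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// uploadPart is a hypothetical stand-in for uploading one part.
func uploadPart(ctx context.Context, n int) error {
	fmt.Println("uploading part", n)
	return nil
}

func main() {
	g, gCtx := errgroup.WithContext(context.Background())
	g.SetLimit(4) // at most 4 parts in flight at once

	for part := 0; part < 16; part++ {
		part := part // capture loop variable (pre Go 1.22 semantics)
		g.Go(func() error {
			// gCtx is cancelled as soon as any part fails, so
			// the remaining parts stop rather than continuing.
			return uploadPart(gCtx, part)
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("upload failed:", err)
	}
}
```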
In this commit we introduced a race condition when using the auth
proxy.
94a320f23c serve ftp: update to goftp.io/server v2.0.1
This was due to the re-organisation of the upstream library which made
the driver be a singleton rather than per session.
This means that when using the auth proxy we need to keep track of
which VFS to use based on which FTP user is connected.
This also adjusts the locking so that the methods will run
concurrently.
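A minimal sketch of the per-user bookkeeping this implies, assuming a hypothetical VFS type and getVFS helper rather than rclone's actual ones:

```go
package sketch

import "sync"

// VFS is a hypothetical stand-in for rclone's per-user virtual filesystem.
type VFS struct{ user string }

var (
	mu     sync.Mutex
	vfsMap = map[string]*VFS{} // FTP user -> their VFS
)

// getVFS returns the VFS for an FTP user, creating it on first use.
// With a singleton driver, this lookup replaces per-session state.
func getVFS(user string) *VFS {
	mu.Lock()
	defer mu.Unlock()
	v, ok := vfsMap[user]
	if !ok {
		v = &VFS{user: user}
		vfsMap[user] = v
	}
	return v
}
```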
Before this change, uploading files with rclone to:
rclone serve sftp --vfs-cache-mode full
Would return the error:
command "md5sum XXX" failed with error: unexpected non file
This patch detects that the file is still in the VFS cache and reads
the MD5SUM from there rather than from the remote.
Fixes #7241
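The shape of the fix, sketched with a hypothetical md5FromCache helper: hash the cached copy on disk instead of asking the remote for a checksum it does not have yet.

```go
package sketch

import (
	"crypto/md5"
	"encoding/hex"
	"io"
	"os"
)

// md5FromCache hashes a file straight from the local VFS cache.
// cachePath is hypothetical: the on-disk location of the cached
// copy that has not been uploaded to the remote yet.
func md5FromCache(cachePath string) (string, error) {
	f, err := os.Open(cachePath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```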
Before this change, when using --cutoff-mode=soft and --max-duration
rclone deadlocked when the cutoff limit was reached.
This was because the sync object's Pipe became full and nothing was
emptying it because the cutoff was reached.
This changes the context for putting items into the pipe to be the one
that gets cancelled when the cutoff is reached.
See: https://forum.rclone.org/t/sync-command-hanging-using-cutoff-mode-soft-with-max-duration-time-flags/40866
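The essence of the fix, sketched with a channel standing in for the sync Pipe: the put must select on the context that the cutoff cancels, so a full pipe can no longer block forever.

```go
package sketch

import "context"

// put sends item into pipe, but gives up as soon as ctx is cancelled.
// Before the fix, the send used a context that outlived the cutoff, so
// once consumers stopped at the cutoff a full pipe blocked forever.
func put(ctx context.Context, pipe chan<- string, item string) error {
	select {
	case pipe <- item: // normal path: a consumer makes room
		return nil
	case <-ctx.Done(): // the cutoff cancelled ctx: stop waiting
		return ctx.Err()
	}
}
```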
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-use the same TLS session as the control
connection. This is a good thing for security.
The message "TLS session of data connection not resumed" means that it
was not done.
The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections so the resumed TLS data
connection could be from any of the control connections.
This patch makes each TLS connection have its own session cache which
should fix the problem.
This also reverts the ftp library to the upstream version which now
contains all of our patches.
Fixes #7234
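In crypto/tls terms the fix looks roughly like this: give each control connection its own ClientSessionCache instead of sharing one. A sketch, assuming a hypothetical newControlTLSConfig helper:

```go
package sketch

import "crypto/tls"

// newControlTLSConfig returns a TLS config for one FTP control
// connection. Giving every connection its own session cache means a
// resumed data connection can only pick up the session of its own
// control connection, never that of a concurrent one.
func newControlTLSConfig(serverName string) *tls.Config {
	return &tls.Config{
		ServerName: serverName,
		// Per-connection cache instead of one shared cache.
		// A capacity of 0 selects the default size.
		ClientSessionCache: tls.NewLRUClientSessionCache(0),
	}
}
```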
I (@boukendesho) have volunteered to maintain the snap package so
this adds it back into the installation instructions.
It will set a `snap` tag visible in `rclone version` so we know where
it came from for support queries.