This gives you more control over how long rclone will run for, making
it easier to script backups, e.g. via cron. Once the `--max-duration`
time limit is reached, no new transfers will be initiated, but those
already in-flight will be allowed to complete.
Fixes #985
Before this change the expect/continue timeout was set to
--contimeout, which defaults to 60s and is far too long to wait here.
This was noticed when using s3 with a proxy which apparently didn't
support expect/continue properly.
Set --expect-continue-timeout 0 to disable expect/continue.
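For reference, this maps onto the Go standard library's transport setting; the snippet below is only an illustrative sketch of how such a timeout is wired into an HTTP transport, not rclone's actual fshttp code:

```go
package example

import (
	"net/http"
	"time"
)

// newClient returns an HTTP client whose transport waits at most the
// given time for a "100 Continue" response before sending the request
// body. A zero timeout sends the body immediately, effectively
// disabling expect/continue.
func newClient(expectContinueTimeout time.Duration) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			ExpectContinueTimeout: expectContinueTimeout,
		},
	}
}
```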
From the rsync manual:
--compare-dest=DIR
This option instructs rsync to use DIR on the destination machine as an
additional hierarchy to compare destination files against doing transfers
(if the files are missing in the destination directory). If a file is found
in DIR that is identical to the sender's file, the file will NOT be
transferred to the destination directory. This is useful for creating
a sparse backup of just files that have changed from an earlier backup.
--copy-dest=DIR
This option behaves like --compare-dest, but rsync will also copy unchanged
files found in DIR to the destination directory using a local copy.
This is useful for doing transfers to a new destination while leaving
existing files intact, and then doing a flash-cutover when all files
have been successfully transferred.
On the Google backends (drive, google photos and google cloud
storage), if headless is selected, do not open the browser.
This also adds a new "auth-no-open-browser" option to authorize for
users who do not want the browser opened.
This should fix #3323.
Before this fix, attempting to set a non-top-level environment
variable would fail with "Couldn't find flag".
This fixes it by passing in the flags that the env var is being set
from.
Fixes #3615
Before this change it was possible to make a remote with an invalid
name in the config file, either manually or with `rclone config
create` (but not with `rclone config`).
When this remote was used, because it was invalid, rclone would
presume the remote name was a local directory, which made for a very
surprising user experience!
This change checks remote names more carefully and returns errors:
- when the user tries to use an invalid remote name on the command line
- when an invalid remote name is used in `rclone config create/update/password`
- when the user tries to enter an invalid remote name in `rclone config`
This does not prevent the user entering a remote name with invalid
characters in the config manually, but such a remote will fail
immediately when it is used on the command line.
See: https://forum.rclone.org/t/rc-rc-job-expire-interval-bug/11188
rclone was ignoring the --rc-job-expire-duration and
--rc-job-expire-interval flags. This turned out to be an
initialization order problem and was fixed by moving those flags out
of global config into rc config.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
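A minimal sketch of what this change means in practice, using an illustrative cut-down interface rather than rclone's real fs interfaces:

```go
package example

import (
	"context"
	"fmt"
)

// Lister is an illustrative stand-in for an rclone backend interface:
// every method now accepts a context.Context so cancellation and other
// request-scoped values can flow down from the top level.
type Lister interface {
	List(ctx context.Context, dir string) ([]string, error)
}

// walk shows the top level propagating its context down to lower
// level calls instead of each layer creating its own.
func walk(ctx context.Context, f Lister, dir string) error {
	entries, err := f.List(ctx, dir)
	if err != nil {
		return err
	}
	for _, entry := range entries {
		// Stop promptly if the caller has cancelled the transfer.
		if err := ctx.Err(); err != nil {
			return err
		}
		fmt.Println(entry)
	}
	return nil
}
```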
Before this change, when using "rclone config create" it wasn't
possible to add passwords in one go; it was necessary to call "rclone
config password" to add the passwords afterwards, as "rclone config
create" didn't obscure passwords.
After this change "rclone config create" and "rclone config update"
will obscure passwords as necessary as will the corresponding API
calls config/create and config/update.
This makes "rclone config password" and its API config/password
obsolete, however they will be left for backwards compatibility.
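The obscuring is the same reversible encoding rclone stores in the config file. A minimal sketch using the lib/obscure package (the import path shown is the current one and is an assumption for older versions):

```go
package main

import (
	"fmt"
	"log"

	"github.com/rclone/rclone/lib/obscure"
)

func main() {
	// Obscure a plaintext password the way "rclone config create"
	// now does before writing it to the config file.
	obscured, err := obscure.Obscure("my-secret-password")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(obscured)

	// Reveal reverses the encoding, showing this is obfuscation,
	// not strong encryption.
	plain, err := obscure.Reveal(obscured)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(plain)
}
```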
This replaces the `sync.Pool` allocator with lib/pool. This
implements a pool of buffers of up to 64MB which can be re-used but is
flushed every 5 seconds.
If `--use-mmap` is set then rclone will use mmap for memory
allocations which is much better at returning memory to the OS.
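The exact lib/pool API is not reproduced here; the sketch below is only a conceptual illustration of a buffer pool that re-uses allocations but drops them on a timer (per the description above, the real lib/pool also caps the cached buffers at 64MB and can allocate with mmap when `--use-mmap` is set):

```go
package example

import (
	"sync"
	"time"
)

// bufferPool is a conceptual stand-in for lib/pool: it hands out
// fixed-size buffers for re-use and drops all cached buffers on a
// timer so idle memory is eventually returned to the OS.
type bufferPool struct {
	mu      sync.Mutex
	size    int
	buffers [][]byte
}

func newBufferPool(size int, flushEvery time.Duration) *bufferPool {
	p := &bufferPool{size: size}
	go func() {
		for range time.Tick(flushEvery) {
			p.mu.Lock()
			p.buffers = nil // flush all cached buffers
			p.mu.Unlock()
		}
	}()
	return p
}

// Get returns a cached buffer if one is available, or allocates a new one.
func (p *bufferPool) Get() []byte {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.buffers); n > 0 {
		buf := p.buffers[n-1]
		p.buffers = p.buffers[:n-1]
		return buf
	}
	return make([]byte, p.size)
}

// Put returns a buffer to the pool for re-use.
func (p *bufferPool) Put(buf []byte) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.buffers = append(p.buffers, buf)
}
```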
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config
Fixes #1010
Cookies are handled in memory by a cookiejar in the fshttp module for
the entire session.
One useful scenario is the HTTP storage system, where an index server
adds an authentication cookie while redirecting to a CDN for the
actual files. It can also be helpful to reuse fshttp in other storage
systems that require cookies.
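Conceptually this is the standard library's in-memory cookie jar attached to a shared HTTP client; a minimal illustrative sketch (not the actual fshttp wiring, and example.com stands in for a real index server):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/cookiejar"
)

func main() {
	// An in-memory cookie jar: cookies set by one response (for example
	// an index server's auth cookie) are sent on later requests in the
	// same session, including after redirects to a CDN.
	jar, err := cookiejar.New(nil)
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Jar: jar}

	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```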
This means that rclone will pick up tokens from concurrently running
rclones. This helps with Box, which only allows each refresh token to
be used once.
Without this fix, rclone caches the refresh token at the start of the
run; by the time the token expires, the refresh token may already have
been used by a concurrently running rclone.
This will also retry the oauth up to 5 times at 1 second intervals.
See: https://forum.rclone.org/t/box-token-refresh-timing/8175
The --no-traverse flag was not carried over when the new sync routines
(using the march package) were implemented.
This re-implements --no-traverse in march by trying to find a match
for each object with NewObject rather than from a directory listing.
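A simplified sketch of the idea, using a cut-down stand-in interface and error value rather than rclone's real fs.Fs and fs.ErrorObjectNotFound:

```go
package example

import (
	"context"
	"errors"
)

// errNotFound stands in for rclone's "object not found" error.
var errNotFound = errors.New("object not found")

// objectFinder is a cut-down stand-in for the destination Fs: with
// --no-traverse we never list directories, we just probe for each
// source object by name.
type objectFinder interface {
	NewObject(ctx context.Context, remote string) (interface{}, error)
}

// needsTransfer reports whether the source object named remote is
// missing on the destination, using a direct lookup rather than a
// directory listing.
func needsTransfer(ctx context.Context, dst objectFinder, remote string) (bool, error) {
	_, err := dst.NewObject(ctx, remote)
	if errors.Is(err, errNotFound) {
		return true, nil // not on the destination: transfer it
	}
	if err != nil {
		return false, err
	}
	return false, nil // already there: compare and possibly skip as usual
}
```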
--max-backlog controls the queue length.
Add statistics for the check/upload/rename queues.
This means that checking can complete before the uploads, which gives
rclone the ability to show exactly what is outstanding.
This unifies the 3 methods of reading config
* command line
* environment variable
* config file
And allows them all to be configured in all places. This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.
The backend changes are:
* Use the new configmap.Mapper parameter
* Use configstruct to parse it into an Options struct
* Add all config to []fs.Option including defaults and help
* Remove all uses of pflag
* Remove all uses of config.FileGet
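A rough sketch of that registration pattern, with a hypothetical backend name; field names and import paths follow recent rclone (where NewFs has since gained a context.Context parameter) and may differ in detail from the version this change landed in:

```go
package mybackend

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/configstruct"
)

// The []fs.Option in the registration is the master source of the
// backend's options, shared by command line flags, environment
// variables and the config file.
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "mybackend", // hypothetical example backend
		Description: "Example backend",
		NewFs:       NewFs,
		Options: []fs.Option{{
			Name:    "endpoint",
			Help:    "Endpoint for the service.",
			Default: "",
		}},
	})
}

// Options is filled in from the configmap by configstruct.
type Options struct {
	Endpoint string `config:"endpoint"`
}

// NewFs receives the merged configuration as a configmap.Mapper
// instead of reading pflag or config.FileGet directly.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	return nil, errors.New("example only, not implemented: " + opt.Endpoint)
}
```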
Before this change, if rclone was running in an environment where the
HOME directory couldn't be found, it would print a warning about
supplying a --config flag even if the user had already done so.
Before this change copyto would parse Windows paths incorrectly.
This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse, which does the parsing correctly for
Windows paths.
This also renames fspath.RemoteParse to fspath.Parse for consistency.
This introduces a method of making provider specific configuration
within a remote. This is particularly useful in s3.
This commit does the basic configuration in S3 for IBM COS.
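As a sketch of the mechanism, an option definition can be restricted to a provider via a Provider field, assumed here to work as in current rclone; the values are illustrative, not the real s3 backend configuration:

```go
package example

import "github.com/rclone/rclone/fs"

// endpointOption only applies when the remote's provider option is
// set to IBMCOS, so remotes using other S3 providers never see it.
// The values are illustrative, not the real s3 backend definitions.
var endpointOption = fs.Option{
	Name:     "endpoint",
	Help:     "Endpoint for IBM COS S3 API.",
	Provider: "IBMCOS",
	Examples: []fs.OptionExample{{
		Value: "s3.us.cloud-object-storage.appdomain.cloud",
		Help:  "US Cross Region Endpoint",
	}},
}
```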
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.