Make the pacer package more flexible by extracting the pace calculation
functions into a separate Calculator interface. This also allows features
that require the fs package, such as logging and custom errors, to be
moved into the fs package.
Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
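A minimal sketch of the shape this takes (the names and fields here are illustrative, not necessarily the exact rclone API):

```go
package pacer

import "time"

// State is an illustrative subset of the information a calculator sees.
type State struct {
	SleepTime          time.Duration // current delay between calls
	ConsecutiveRetries int           // retries since the last successful call
	LastError          error         // error returned by the last call, if any
}

// Calculator computes how long to sleep before the next call.
type Calculator interface {
	Calculate(state State) time.Duration
}

// RetryAfterError carries a server-requested retry time which a
// Calculator implementation can honour directly.
type RetryAfterError struct {
	Err        error
	RetryAfter time.Duration
}

func (e *RetryAfterError) Error() string {
	if e.Err == nil {
		return "retry after " + e.RetryAfter.String()
	}
	return e.Err.Error()
}
```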
This change allows remotes to be created on the fly without a config
file by using the remote type prefixed with a `:` as the remote name,
e.g. `:s3:` to make an s3 remote.
This assumes the user is supplying the backend config via command line
flags or environment variables.
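For example (assuming the s3 backend flags shown here are available), something along these lines works without any config file entry:

```
rclone lsd :s3: --s3-access-key-id XXX --s3-secret-access-key YYY
```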
--max-backlog controls the queue length.
Add statistics for the check/upload/rename queues.
This means that checking can complete before the uploads do, which gives
rclone the ability to show exactly what is outstanding.
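For example, to cap the in-memory queue during a large sync (the value here is illustrative):

```
rclone sync /src remote:dst --max-backlog 200000 --progress
```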
This unifies the 3 methods of reading config
* command line
* environment variable
* config file
And allows them all to be configured in all places. This is done by
making the []fs.Option in the backend registration the master source
of what the backend options are, as sketched in the example below.
The backend changes are:
* Use the new configmap.Mapper parameter
* Use configstruct to parse it into an Options struct
* Add all config to []fs.Option including defaults and help
* Remove all uses of pflag
* Remove all uses of config.FileGet
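A rough sketch of what a backend registration along these lines looks like (the package paths, option names and NewFs signature here are illustrative and may not match the real backends exactly):

```go
package mybackend

import (
	"errors"

	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config/configmap"
	"github.com/ncw/rclone/fs/config/configstruct"
)

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "mybackend",
		Description: "Example backend",
		NewFs:       NewFs,
		// The []fs.Option is the master source of the backend options,
		// including defaults and help, from which command line flags,
		// environment variables and config file entries are all derived.
		Options: []fs.Option{{
			Name:    "endpoint",
			Help:    "Endpoint for the service.",
			Default: "https://example.com",
		}},
	})
}

// Options holds the parsed configuration for this backend.
type Options struct {
	Endpoint string `config:"endpoint"`
}

// NewFs constructs the backend from the configmap.Mapper parameter.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	// ... build and return the fs.Fs using opt ...
	return nil, errors.New("not implemented in this sketch")
}
```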
Before this, copyto would parse Windows paths incorrectly.
This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse, which does the parsing correctly for
Windows paths.
This also renames fspath.RemoteParse to fspath.Parse for consistency.
* Implement about for:
* local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
* Implement `--json` and `--full` flags for `rclone about`
* Change the About interface to return a Usage structure
* Remove operations.About as it is too thin an interface
* Implement Integration test
Relates to #1138 and #1564
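The Usage structure returned by About looks roughly like this (a sketch based on the description above; the pointer fields are an assumption, allowing backends to omit values they cannot report):

```go
// Usage is an illustrative sketch of the structure About now returns.
type Usage struct {
	Total   *int64 `json:"total,omitempty"`   // total bytes available
	Used    *int64 `json:"used,omitempty"`    // bytes in use
	Trashed *int64 `json:"trashed,omitempty"` // bytes in trash
	Other   *int64 `json:"other,omitempty"`   // other usage
	Free    *int64 `json:"free,omitempty"`    // bytes free
	Objects *int64 `json:"objects,omitempty"` // number of objects
}
```

`rclone about remote:` prints these values human readably, while `rclone about remote: --json` emits them as JSON.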
This introduces a method of making provider specific configuration
within a remote. This is particularly useful in s3.
This commit does the basic configuration in S3 for IBM COS.
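A sketch of how a provider specific option might be expressed in the s3 backend's option list (the field names and endpoint value here are illustrative):

```go
package s3

import "github.com/ncw/rclone/fs"

// Illustrative only: an option whose help and examples apply to a single
// provider within the s3 backend. The endpoint value is a placeholder.
var ibmCOSEndpointOption = fs.Option{
	Name:     "endpoint",
	Help:     "Endpoint for IBM COS S3 API.",
	Provider: "IBMCOS",
	Examples: []fs.OptionExample{{
		Value: "s3.example-region.objectstorage.example.net",
		Help:  "Example IBM COS endpoint (placeholder).",
	}},
}
```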
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.
Previously config sub commands were manually parsed rather than using
cobra.
Make config command have the following sub commands:
* create Create a new remote with name, type and options.
* delete Delete an existing remote <name>.
* dump Dump the config file as JSON.
* edit Enter an interactive configuration session.
* file Show path of configuration file in use.
* providers List in JSON format all the providers and options.
* show Print (decrypted) config file, or the config for a single remote.
* update Update options in an existing remote.
The following changes were made to existing commands:
* listproviders was renamed to providers
* listoptions was removed in favour of providing the output in providers
* jsonconfig was renamed to create
* an optional parameter was added to the show command
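For example, a remote can now be managed entirely non-interactively (the remote name, type and option names below are illustrative):

```
rclone config create myremote s3 env_auth true
rclone config show myremote
rclone config update myremote region us-east-1
rclone config delete myremote
```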
During the sync we collect a list of directories which should be empty
and attempt to rmdir them at the end of the sync. If the directories
are not empty then the rmdir will fail, logging a message but not
erroring the sync.
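An illustrative sketch of the idea, written against the local filesystem rather than rclone's actual sync internals:

```go
package sync // illustrative sketch, not rclone's fs/sync package

import (
	"log"
	"os"
	"sort"
)

// removeEmptyDirs tries to remove each directory collected during the
// sync as "should now be empty". If a directory turns out not to be
// empty the rmdir fails; that is logged but does not error the sync.
func removeEmptyDirs(candidates []string) {
	// Remove deepest paths first so that parents can become empty too.
	sort.Sort(sort.Reverse(sort.StringSlice(candidates)))
	for _, dir := range candidates {
		if err := os.Remove(dir); err != nil {
			log.Printf("failed to rmdir %q: %v", dir, err)
		}
	}
}
```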
* Fixup bitrot (rclone and Azure library)
* Implement Copy
* Add modtime to metadata under mtime key as RFC3339Nano
* Make multipart upload work
* Make it pass the integration tests
* Fix uploading of zero length blobs
* Rename to azureblob as it seems likely we will do azurefile
* Add docs
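To illustrate the mtime handling mentioned above (a sketch, not the backend's exact code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Blob metadata is a map of string keys to string values.
	metadata := map[string]string{}

	// On upload, store the modification time under the "mtime" key
	// formatted as RFC3339Nano.
	modTime := time.Now()
	metadata["mtime"] = modTime.Format(time.RFC3339Nano)

	// On listing or download, parse it back to recover the modtime.
	if s, ok := metadata["mtime"]; ok {
		if t, err := time.Parse(time.RFC3339Nano, s); err == nil {
			fmt.Println("modtime:", t)
		}
	}
}
```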
This simplifies the implementation of remotes. The only required
interface is now `List`, which does a simple single-level directory
listing. Remotes may optionally implement `ListR` if they have an
efficient way of doing a recursive listing.
The ListR interface will be implemented by remotes that can do a
recursive directory listing more efficiently than just recursing
through the directories. These include the bucket based remotes.
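A simplified sketch of the two listing interfaces (the real rclone definitions live in the fs package and may differ in detail, e.g. in the exact entry types and callback signature):

```go
package fs // illustrative sketch, not rclone's actual fs package

// DirEntry is a single object or directory returned by a listing.
type DirEntry interface {
	Remote() string
}

// DirEntries is a batch of directory entries.
type DirEntries []DirEntry

// Lister is the only listing method a remote is required to implement:
// a single-level listing of one directory.
type Lister interface {
	List(dir string) (entries DirEntries, err error)
}

// ListRCallback receives batches of entries found during a recursive list.
type ListRCallback func(entries DirEntries) error

// ListRer is optional; it is implemented by remotes (for example the
// bucket based ones) that can list recursively more efficiently than
// walking directory by directory.
type ListRer interface {
	ListR(dir string, callback ListRCallback) error
}
```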