The purpose of this change is to make the code easier to maintain and,
eventually, to allow the rclone backends to be re-used in other
projects without having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.
Previously the config sub commands were parsed manually rather than
using cobra.
Make the config command have the following sub commands:
* create Create a new remote with name, type and options.
* delete Delete an existing remote <name>.
* dump Dump the config file as JSON.
* edit Enter an interactive configuration session.
* file Show path of configuration file in use.
* providers List in JSON format all the providers and options.
* show Print (decrypted) config file, or the config for a single remote.
* update Update options in an existing remote.
The following changes were made to existing commands:
* listproviders was renamed to providers
* listoptions was removed in favour of providing the output in providers
* jsonconfig was renamed to create
* an optional parameter was added to the show command
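As an illustration of the cobra wiring, a minimal sketch (the command
names are from the list above; everything else is illustrative, not
the actual rclone source):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// The parent command: running it with no sub command enters
	// the interactive session.
	configCmd := &cobra.Command{
		Use:   "config",
		Short: "Enter an interactive configuration session.",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("interactive configuration session...")
		},
	}
	// Each sub command is its own cobra.Command registered on the
	// parent, so cobra provides parsing, help and usage for free.
	configCmd.AddCommand(&cobra.Command{
		Use:   "file",
		Short: "Show path of configuration file in use.",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("/path/to/rclone.conf") // placeholder path
		},
	})
	root := &cobra.Command{Use: "rclone"}
	root.AddCommand(configCmd)
	if err := root.Execute(); err != nil {
		fmt.Println(err)
	}
}
```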
During the sync we collect a list of directories which should be empty
and attempt to rmdir them at the end of the sync. If a directory is
not empty then the rmdir will fail, logging a message but not causing
the sync to error.
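The pattern, sketched for a local filesystem (the real code goes
through the remote's rmdir; this is just the shape of it):

```go
package main

import (
	"log"
	"os"
)

// removeEmptyDirs is called at the end of the sync with the
// directories we believe should now be empty.
func removeEmptyDirs(candidates []string) {
	for _, dir := range candidates {
		// os.Remove fails on a non-empty directory, which is the
		// behaviour we rely on: log the failure and carry on
		// rather than failing the whole sync.
		if err := os.Remove(dir); err != nil {
			log.Printf("failed to rmdir %q: %v", dir, err)
		}
	}
}

func main() {
	removeEmptyDirs([]string{"/tmp/sync/empty-a", "/tmp/sync/empty-b"})
}
```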
* Fixup bitrot (rclone and Azure library)
* Implement Copy
* Add modtime to metadata under mtime key as RFC3339Nano
* Make multipart upload work
* Make it pass the integration tests
* Fix uploading of zero length blobs
* Rename to azureblob as it seems likely we will do azurefile
* Add docs
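For the mtime metadata, the round trip looks something like this (the
key name and format are from the commit above; the surrounding code is
only a sketch):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	modTime := time.Now()
	// Store the modification time in the blob's metadata under the
	// "mtime" key, formatted as RFC3339 with nanosecond precision.
	metadata := map[string]string{
		"mtime": modTime.Format(time.RFC3339Nano),
	}
	// Reading it back when listing the blob:
	parsed, err := time.Parse(time.RFC3339Nano, metadata["mtime"])
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Equal(modTime)) // true
}
```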
This simplifies the implementation of remotes. The only required
interface is now `List`, which is a simple one level directory list.
Remotes may optionally implement `ListR` if they have an efficient way
of doing a recursive list.
The ListR interface will be implemented by remotes that can do a
recursive directory listing more efficiently than just recursing
through the directories. These include the bucket based remotes.
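In interface terms the split is roughly the following (simplified
signatures, not the exact rclone definitions):

```go
package fs

// Fs is the required part: a simple one level directory list.
type Fs interface {
	// List returns the entries in dir, one level deep.
	List(dir string) ([]Entry, error)
}

// ListRer is the optional interface for remotes (the bucket based
// ones, for example) that can list recursively in far fewer round
// trips than recursing directory by directory.
type ListRer interface {
	// ListR calls fn with batches of entries found recursively
	// under dir.
	ListR(dir string, fn func(entries []Entry) error) error
}

// Entry is a file or directory returned by a listing.
type Entry interface {
	Remote() string
}
```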
This is a fix left over from the v2 conversion. Dropbox ignores the
client modification time on an incoming file if its content is
identical to the existing file. This change deletes the existing file
first before re-uploading the new one.
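A sketch of the workaround, with deleteFile and upload standing in for
the real Dropbox API calls:

```go
package main

import (
	"errors"
	"io"
	"strings"
	"time"
)

var errNotFound = errors.New("not found")

// Stand-ins for the real Dropbox client calls.
func deleteFile(path string) error                              { return nil }
func upload(path string, in io.Reader, modTime time.Time) error { return nil }

// reuploadWithModTime deletes the existing file before uploading, so
// Dropbox cannot ignore the client modification time on the grounds
// that the content is identical to what is already there.
func reuploadWithModTime(path string, in io.Reader, modTime time.Time) error {
	if err := deleteFile(path); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return upload(path, in, modTime)
}

func main() {
	_ = reuploadWithModTime("/file.txt", strings.NewReader("contents"), time.Now())
}
```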
* Add options to Put, PutUnchecked and Update for all Fses
* Use these to create HashOption
* Implement this in local
* Pass the option in fs.Copy
This has the effect that we only calculate the hashes we need in the
local Fs, which speeds up transfers significantly.
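The shape of the option plumbing, sketched (rclone's real types differ
in detail):

```go
package fs

import "io"

// Option is passed variadically to Put, PutUnchecked and Update so
// callers can tune a transfer without changing the interface.
type Option interface {
	String() string
}

// HashOption tells the receiving Fs which hash the caller actually
// needs, so for example the local Fs can skip computing the others.
type HashOption struct {
	Hash string // e.g. "md5" or "sha1"
}

func (o HashOption) String() string { return "hash: " + o.Hash }

// A Put with options threaded through would look like this
// (remaining types elided):
type Putter interface {
	Put(in io.Reader, remote string, options ...Option) error
}
```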
Optional interfaces are becoming more important in rclone;
--track-renames and --backup-dir both rely on them.
Up to this point rclone has used interface upgrades to define optional
behaviour on Fs objects. However, when one Fs object wraps another it
is very difficult for this scheme to work accurately. rclone has
relied on specific error messages being returned when the interface
isn't supported - this is unsatisfactory because it means you have to
call the interface to see whether it is supported.
This change enables accurate detection of optional interfaces by use
of a Features struct as returned by an obligatory Fs.Features()
method. The Features struct contains flags and function pointers
which can be tested against nil to see whether they can be used.
As a result crypt and hubic can accurately reflect the capabilities of
the underlying Fs they are wrapping.
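Schematically (a cut-down sketch, not the full rclone Features
struct):

```go
package fs

// Features describes the optional behaviour an Fs supports. A nil
// function pointer means "not supported", so callers test for nil
// instead of calling the method and inspecting the error.
type Features struct {
	CaseInsensitive bool
	Copy            func(src Object, remote string) (Object, error)
	Move            func(src Object, remote string) (Object, error)
	Purge           func() error
}

// Object is a file on the remote (details elided).
type Object interface {
	Remote() string
}

// Fs must implement Features, so every remote reports capabilities
// the same way; a wrapper like crypt or hubic copies the struct from
// the Fs it wraps and nils out anything it can't pass through.
type Fs interface {
	Features() *Features
}
```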
These are set in the form RCLONE_CONFIG_remote_option where remote is
the uppercased remote name and option is the uppercased config file
option name. For example, `RCLONE_CONFIG_MYS3_TYPE=s3` sets the `type`
option of a remote called `mys3`. Note that RCLONE_CONFIG_remote_TYPE
must be set when defining a new remote.
Fixes #616
This works by making sure directory listings that use a filter only
iterate the files provided in the filter (if any).
Single file copies now don't iterate the source or destination
buckets.
Note that this could potentially slow down very long `--files-from`
lists - this is easy to fix (with another flag probably) if it causes
anyone a problem.
Fixes #610 Fixes #769
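The idea reduces to a check like this (a sketch; the real filter
machinery is more involved):

```go
package main

import "fmt"

// entriesToList returns just the files named in an explicit filter
// (e.g. from --files-from) when one is present, instead of iterating
// the whole source or destination.
func entriesToList(filesFrom []string, listAll func() []string) []string {
	if len(filesFrom) > 0 {
		return filesFrom // no directory or bucket traversal
	}
	return listAll()
}

func main() {
	all := func() []string { return []string{"a", "b", "c"} }
	fmt.Println(entriesToList([]string{"a"}, all)) // [a]
	fmt.Println(entriesToList(nil, all))           // [a b c]
}
```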
* Make move command check for overlapping remotes and refuse to run
* Do copy/delete rather than all the copies then all the deletes
* Doesn't purge the source - this was unexpected behaviour, see #512 and #416
* Add -list-retries flag to test suite to control retries
This changes the semantics of `move` slightly. However it now errs on
the side of not deleting stuff.
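The interleaving, sketched with hypothetical copyFile and deleteFile
helpers:

```go
package main

import "fmt"

func copyFile(name string) error   { fmt.Println("copy", name); return nil }
func deleteFile(name string) error { fmt.Println("delete", name); return nil }

// moveFiles deletes each source file immediately after its copy
// succeeds, rather than doing all the copies and then all the
// deletes. A failed copy therefore leaves that source file in place,
// erring on the side of not deleting stuff.
func moveFiles(files []string) error {
	for _, name := range files {
		if err := copyFile(name); err != nil {
			return err
		}
		if err := deleteFile(name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	moveFiles([]string{"a.txt", "b.txt"})
}
```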
Refactor sync/copy/move
* Don't load the src listing unless doing a sync and --delete-before
* Don't load the dst listing if doing copy/move and --no-traverse is set
`rclone --no-traverse copy src dst` now won't load either of the
listings into memory so will use the minimum amount of memory.
This change also dramatically reduces the amount of memory rclone uses
in normal operations (copy without --no-traverse, or sync), as it no
longer loads the source file listing into memory at all.
Fixes #8 Fixes #544 Fixes #546
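The listing decisions reduce to roughly this (a sketch of the control
flow, not the real function):

```go
package main

import "fmt"

type syncOpts struct {
	mode         string // "sync", "copy" or "move"
	deleteBefore bool
	noTraverse   bool
}

// whichListings reports which full listings must be held in memory.
func whichListings(o syncOpts) (loadSrc, loadDst bool) {
	// The src listing is only needed up front for sync with
	// --delete-before.
	loadSrc = o.mode == "sync" && o.deleteBefore
	// The dst listing can be skipped for copy/move with
	// --no-traverse, where each file is looked up individually.
	loadDst = !(o.mode != "sync" && o.noTraverse)
	return loadSrc, loadDst
}

func main() {
	fmt.Println(whichListings(syncOpts{mode: "copy", noTraverse: true})) // false false
	fmt.Println(whichListings(syncOpts{mode: "sync"}))                   // false true
}
```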
If remote:path points to a file make NewFs return a sentinel error
fs.ErrorIsFile and an Fs which points to the parent.
Use this to remove the LimitedFs and just add this file to the
--files-from list.
This means that server side operations can be used also.
Fixes #518 Fixes #545
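A sketch of the sentinel pattern (the error name is from the commit;
the function shape and isFile helper are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"path"
)

// ErrorIsFile is the sentinel returned when remote:path points at a
// file rather than a directory.
var ErrorIsFile = errors.New("is a file not a directory")

// NewFs returns a root pointing at the parent directory plus the
// sentinel when the path is a file, so the caller can add just that
// file to the filter instead of wrapping everything in a LimitedFs.
func NewFs(p string, isFile func(string) bool) (root string, err error) {
	if isFile(p) {
		return path.Dir(p), ErrorIsFile
	}
	return p, nil
}

func main() {
	isFile := func(s string) bool { return s == "bucket/file.txt" }
	root, err := NewFs("bucket/file.txt", isFile)
	if errors.Is(err, ErrorIsFile) {
		fmt.Println("single file copy rooted at", root)
	}
}
```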
This should fix duplicate files on drive and 409 errors on
amazonclouddrive. However it will slow down uploads slightly, as
another round trip will be needed.
None of the other Fses needed adjusting.
Fixes #483