diff --git a/MANUAL.html b/MANUAL.html
index 0cbdfbda5..76b71a9b4 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -17,24 +17,26 @@
-Jun 15, 2019
+Aug 26, 2019
rclone(1) User Manual
-
Rclone is a command line program to sync files and directories to and from:
Links
rclone config
See the following for detailed instructions for
rclone config [flags]
-h, --help help for config
+See the global flags page for global options not listed here.
Copy files from source to dest, skipping already copied
--create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copy
+See the global flags page for global options not listed here.
Make source and dest identical, modifying destination only.
--create-empty-src-dirs Create empty source dirs on destination after sync
-h, --help help for sync
+See the global flags page for global options not listed here.
Move files from source to dest.
--create-empty-src-dirs Create empty source dirs on destination after move
--delete-empty-src-dirs Delete empty source dirs after move
-h, --help help for move
+See the global flags page for global options not listed here.
Remove the contents of path.
rclone delete remote:path [flags]
-h, --help help for delete
+See the global flags page for global options not listed here.
Remove the path and all of its contents.
rclone purge remote:path [flags]
-h, --help help for purge
+See the global flags page for global options not listed here.
Make the path if it doesn’t already exist.
rclone mkdir remote:path [flags]
-h, --help help for mkdir
+See the global flags page for global options not listed here.
Remove the path if empty.
rclone rmdir remote:path [flags]
-h, --help help for rmdir
+See the global flags page for global options not listed here.
Checks the files in the source and destination match.
--download Check by downloading rather than with hash.
-h, --help help for check
--one-way Check one way only, source files must exist on remote
+See the global flags page for global options not listed here.
List the objects in the path with size and path.
rclone ls remote:path [flags]
-h, --help help for ls
+See the global flags page for global options not listed here.
List all directories/containers/buckets in the path.
-h, --help help for lsd
-R, --recursive Recurse into the listing.
+See the global flags page for global options not listed here.
List the objects in path with modification time, size and path.
rclone lsl remote:path [flags]
-h, --help help for lsl
+See the global flags page for global options not listed here.
Produces an md5sum file for all the objects in the path.
rclone md5sum remote:path [flags]
-h, --help help for md5sum
+See the global flags page for global options not listed here.
Produces an sha1sum file for all the objects in the path.
rclone sha1sum remote:path [flags]
-h, --help help for sha1sum
+See the global flags page for global options not listed here.
Prints the total size and number of objects in remote:path.
-h, --help help for size
--json format output as JSON
+See the global flags page for global options not listed here.
Show the version number.
--check Check for new version.
-h, --help help for version
+See the global flags page for global options not listed here.
Clean up the remote if possible
rclone cleanup remote:path [flags]
-h, --help help for cleanup
+See the global flags page for global options not listed here.
Interactively find duplicate files and delete/rename them.
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
-h, --help help for dedupe
+See the global flags page for global options not listed here.
Get quota information from the remote.
--full Full numbers instead of SI units
-h, --help help for about
--json Format output as JSON
+See the global flags page for global options not listed here.
Remote authorization.
rclone authorize [flags]
-h, --help help for authorize
+See the global flags page for global options not listed here.
Print cache stats for a remote
rclone cachestats source: [flags]
-h, --help help for cachestats
+See the global flags page for global options not listed here.
Concatenates any files and sends them to stdout.
See the global flags page for global options not listed here.
Create a new remote with name, type and options.
rclone config create <name> <type> [<key> <value>]* [flags]
-h, --help help for create
+See the global flags page for global options not listed here.
Delete an existing remote
rclone config delete <name> [flags]
-h, --help help for delete
+See the global flags page for global options not listed here.
Dump the config file as JSON.
+Disconnects user from remote
Dump the config file as JSON.
-rclone config dump [flags]
+This disconnects the remote: passed in to the cloud storage system.
+This normally means revoking the oauth token.
+To reconnect use “rclone config reconnect”.
+rclone config disconnect remote: [flags]
-h, --help help for dump
+ -h, --help help for disconnect
+See the global flags page for global options not listed here.
Enter an interactive configuration session.
+Dump the config file as JSON.
Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
-rclone config edit [flags]
+Dump the config file as JSON.
+rclone config dump [flags]
-h, --help help for edit
+ -h, --help help for dump
+See the global flags page for global options not listed here.
Show path of configuration file in use.
+Enter an interactive configuration session.
Show path of configuration file in use.
-rclone config file [flags]
+Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
+rclone config edit [flags]
-h, --help help for file
+ -h, --help help for edit
+See the global flags page for global options not listed here.
Show path of configuration file in use.
+Show path of configuration file in use.
+rclone config file [flags]
+ -h, --help help for file
+See the global flags page for global options not listed here.
+Update password in an existing remote.
-Update an existing remote’s password. The password should be passed in in pairs of
For example, to set the password of a remote named myremote you would do:
rclone config password myremote fieldname mypassword
This command is obsolete now that “config update” and “config create” both support obscuring passwords directly.
rclone config password <name> [<key> <value>]+ [flags]
- -h, --help help for password
-List in JSON format all the providers and options.
-List in JSON format all the providers and options.
-rclone config providers [flags]
-h, --help help for providers
+ -h, --help help for password
+See the global flags page for global options not listed here.
Print (decrypted) config file, or the config for a single remote.
+List in JSON format all the providers and options.
Print (decrypted) config file, or the config for a single remote.
-rclone config show [<remote>] [flags]
+List in JSON format all the providers and options.
+rclone config providers [flags]
-h, --help help for show
+ -h, --help help for providers
+See the global flags page for global options not listed here.
Re-authenticates user with remote.
+This reconnects remote: passed in to the cloud storage system.
+To disconnect the remote use “rclone config disconnect”.
+This normally means going through the interactive oauth flow again.
+rclone config reconnect remote: [flags]
+ -h, --help help for reconnect
+See the global flags page for global options not listed here.
+Print (decrypted) config file, or the config for a single remote.
+Print (decrypted) config file, or the config for a single remote.
+rclone config show [<remote>] [flags]
+ -h, --help help for show
+See the global flags page for global options not listed here.
+Update options in an existing remote.
-Update an existing remote’s options. The options should be passed in in pairs of
For example, to update the env_auth field of a remote named myremote you would do:
rclone config update myremote swift env_auth true
@@ -813,16 +852,29 @@ Other: 8849156022
If the remote uses oauth the token will be updated; if you don’t require this, add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false
rclone config update <name> [<key> <value>]+ [flags]
- -h, --help help for update
-See the global flags page for global options not listed here.
+Prints info about logged in user of remote.
+This prints the details of the person logged in to the cloud storage system.
+rclone config userinfo remote: [flags]
+ -h, --help help for userinfo
+ --json Format output as JSON
+See the global flags page for global options not listed here.
+Copy files from source to dest, skipping already copied
-If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
So
@@ -837,28 +889,28 @@ if src is directory
This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. It doesn’t delete files from the destination.
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
- -h, --help help for copyto
-See the global flags page for global options not listed here.
+Copy url content to dest.
-Download a URL’s content and copy it to the destination without saving it in temporary storage.
rclone copyurl https://example.com dest:path [flags]
- -h, --help help for copyurl
-See the global flags page for global options not listed here.
+Cryptcheck checks the integrity of a crypted remote.
-rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -869,17 +921,17 @@ if src is directory
After it has run it will log the status of the encryptedremote:.
If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around, meaning extra files in destination that are not in the source will not trigger an error.
rclone cryptcheck remote:path cryptedremote:path [flags]
- -h, --help help for cryptcheck
--one-way Check one way only, source files must exist on destination
-See the global flags page for global options not listed here.
+Cryptdecode returns unencrypted file names.
-rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
If you supply the --reverse flag, it will return encrypted file names.
use it like this
@@ -887,54 +939,54 @@ if src is directory
 rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone cryptdecode encryptedremote: encryptedfilename [flags]
- -h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenames
-See the global flags page for global options not listed here.
+Produces a Dropbox hash file for all the objects in the path.
-Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.
rclone dbhashsum remote:path [flags]
- -h, --help help for dbhashsum
-See the global flags page for global options not listed here.
+Remove a single file from remote.
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn’t obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
- -h, --help help for deletefile
-See the global flags page for global options not listed here.
+Output completion script for a given shell.
-Generates a shell completion script for rclone. Run with --help to list the supported shells.
- -h, --help help for genautocomplete
-See the global flags page for global options not listed here.
+Output bash completion script for rclone.
-Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete bash
@@ -942,16 +994,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
. /etc/bash_completion
If you supply a command line argument the script will be written there.
rclone genautocomplete bash [output_file] [flags]
- -h, --help help for bash
-See the global flags page for global options not listed here.
+Output zsh completion script for rclone.
-Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete zsh
@@ -959,28 +1011,28 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
autoload -U compinit && compinit
If you supply a command line argument the script will be written there.
rclone genautocomplete zsh [output_file] [flags]
- -h, --help help for zsh
-See the global flags page for global options not listed here.
+Output markdown docs for rclone to the directory supplied.
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
- -h, --help help for gendocs
-See the global flags page for global options not listed here.
+Produces a hashsum file for all the objects in the path.
-Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
Run without a hash to see the list of supported hashes, eg
$ rclone hashsum
@@ -992,45 +1044,45 @@ Supported hashes are:
Then
$ rclone hashsum MD5 remote:path
rclone hashsum <hash> remote:path [flags]
-Options
+Options
-h, --help help for hashsum
-SEE ALSO
+See the global flags page for global options not listed here.
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
-Auto generated by spf13/cobra on 15-Jun-2019
rclone link
Generate public link to file/folder.
-Synopsis
+Synopsis
rclone link will create or retrieve a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
-Options
+Options
-h, --help help for link
-SEE ALSO
+See the global flags page for global options not listed here.
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
-Auto generated by spf13/cobra on 15-Jun-2019
rclone listremotes
List all the remotes in the config file.
-Synopsis
+Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes [flags]
-Options
+Options
-h, --help help for listremotes
--long Show the type as well as names.
-SEE ALSO
+See the global flags page for global options not listed here.
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
-Auto generated by spf13/cobra on 15-Jun-2019
rclone lsf
List directories and objects in remote:path formatted for parsing
-Synopsis
+Synopsis
List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -1100,7 +1152,7 @@ rclone copy --files-from new_files /path/to/local remote:path
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non-existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsf remote:path [flags]
-Options
+Options
--absolute Put a leading / in front of path names.
--csv Output in CSV format.
-d, --dir-slash Append a slash to directory names. (default true)
@@ -1111,14 +1163,14 @@ rclone copy --files-from new_files /path/to/local remote:path
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
-See the global flags page for global options not listed here.
+List directories and objects in the path in JSON format.
-List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{
  "Hashes" : {
    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsBucket" : false,
  "IsDir" : false,
  "MimeType" : "application/octet-stream",
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6,
  "Tier" : "hot"
}
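Such a listing is convenient to post-process with a JSON tool. For instance, a sketch (assuming the third party jq utility is installed) that prints just the paths of regular files:
$ rclone lsjson remote:path | jq -r '.[] | select(.IsDir == false) | .Path'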
@@ -1145,7 +1197,7 @@ rclone copy --files-from new_files /path/to/local remote:path
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non-existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsjson remote:path [flags]
- --dirs-only Show only directories in the listing.
-M, --encrypted Show the encrypted names.
--files-only Show only files in the listing.
@@ -1154,14 +1206,14 @@ rclone copy --files-from new_files /path/to/local remote:path
--no-modtime Don't read the modification time (can speed things up).
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
-See the global flags page for global options not listed here.
+Mount the remote as file system on a mountpoint.
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
Start the mount like this
@@ -1182,7 +1234,7 @@ umount /path/to/local/mount
The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager.
Without the use of “--vfs-cache-mode” this can only write files sequentially, it can only seek when reading. This means that many applications won’t work with their files on an rclone mount without “--vfs-cache-mode writes” or “--vfs-cache-mode full”. See the File Caching section for more info.
-The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won’t work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won’t work whereas swift:bucket
will as will swift:bucket/path
. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can’t use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
@@ -1260,7 +1312,7 @@ umount /path/to/local/mount
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone mount remote:path /path/to/mountpoint [flags]
- --allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
@@ -1292,14 +1344,14 @@ umount /path/to/local/mount
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
-See the global flags page for global options not listed here.
+Move file or directory from source to dest.
-If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -1315,16 +1367,16 @@ if src is directory
Important: Since this can cause data loss, test first with the --dry-run flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
rclone moveto source:path dest:path [flags]
- -h, --help help for moveto
-See the global flags page for global options not listed here.
+Explore a remote with a text based user interface.
-This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - “What is using all my disk space?”.
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
@@ -1336,34 +1388,35 @@ if src is directory
 g toggle graph
 n,s,C sort by name,size,count
 d delete file/directory
+ Y display current path
 ^L refresh screen
 ? to toggle help on and off
 q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
Note that it might take some time to delete big files/folders. The UI won’t respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
- -h, --help help for ncdu
-See the global flags page for global options not listed here.
+Obscure password for use in the rclone.conf
-Obscure password for use in the rclone.conf
rclone obscure password [flags]
- -h, --help help for obscure
-See the global flags page for global options not listed here.
+Run a command against a running rclone.
-This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a “:port” which is taken to mean “http://localhost:port” or a “host:port” which is taken to mean “http://host:port”.
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user, --rc-pass will also be read for --url, --user, --pass.
@@ -1374,7 +1427,7 @@ if src is directory
rclone rc --loopback operations/about fs=/
Use “rclone rc” to see a list of all possible commands.
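For example, to set the bandwidth limit of a running rclone, the key=value and --json forms below are equivalent (core/bwlimit is one of the standard rc commands):
rclone rc core/bwlimit rate=1M
rclone rc --json '{"rate": "1M"}' core/bwlimit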
rclone rc commands parameter [flags]
- -h, --help help for rc
--json string Input JSON - use instead of key=value args.
--loopback If set connect to this rclone instance not via HTTP.
@@ -1382,14 +1435,14 @@ if src is directory
--pass string Password to use to connect to rclone remote control.
--url string URL to connect to rclone remote control. (default "http://localhost:5572/")
--user string Username to use to rclone remote control.
-See the global flags page for global options not listed here.
+Copies standard input to file on remote.
-rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
@@ -1397,53 +1450,54 @@ ffmpeg - | rclone rcat remote:path/to/file
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see the remote’s documentation. Generally speaking, setting this cutoff too high will decrease your performance.
Note also that the upload cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you’re better off caching locally and then rclone move
it to the destination.
rclone rcat remote:path [flags]
- -h, --help help for rcat
-See the global flags page for global options not listed here.
+Run rclone listening to remote control commands only.
-This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
See the rc documentation for more info on the rc flags.
rclone rcd <path to files to serve>* [flags]
- -h, --help help for rcd
-See the global flags page for global options not listed here.
+Remove empty directories under the path.
-This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path itself if it has nothing in it.
If you supply the --leave-root flag, it will not remove the root directory.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path [flags]
- -h, --help help for rmdirs
--leave-root Do not remove root directory if empty
-See the global flags page for global options not listed here.
+Serve a remote over a protocol.
-rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
- -h, --help help for serve
-See the global flags page for global options not listed here.
+Serve remote:path over DLNA
-rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve dlna remote:path [flags]
- --addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -1542,14 +1595,14 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
-See the global flags page for global options not listed here.
+Serve remote:path over FTP.
-rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
@@ -1613,9 +1666,34 @@ ffmpeg - | rclone rcat remote:path/to/file
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
+If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
There is an example program bin/test_proxy.py in the rclone source code.
+The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
The config generated must have this extra parameter:
_root - root to use for the backend
And it may have this parameter:
_obscure - comma separated strings for parameters to obscure
For example the program might take this on STDIN
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an SFTP backend would be created on the fly for the user
and pass
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
The program can manipulate the supplied user
in any way. For example, to proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
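As an illustration, a minimal auth proxy could be a shell script like the following sketch. It assumes the jq utility is available, accepts any password, and always points at the hypothetical host sftp.example.com - a real proxy would verify the credentials and restrict the host:
#!/usr/bin/env bash
# Read the JSON request rclone sends on STDIN.
request=$(cat)
# Extract the supplied user and pass (jq assumed to be installed).
user=$(jq -r .user <<<"$request")
pass=$(jq -r .pass <<<"$request")
# Emit the backend config on STDOUT. _obscure tells rclone to obscure
# the pass parameter before creating the sftp backend.
cat <<EOF
{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "$user",
  "pass": "$pass",
  "host": "sftp.example.com"
}
EOF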
rclone serve ftp remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
+ --auth-proxy string A program to use to create the backend from the auth.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
@@ -1638,14 +1716,14 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
-See the global flags page for global options not listed here.
+Serve the remote over HTTP.
-rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -1655,6 +1733,7 @@ ffmpeg - | rclone rcat remote:path/to/file
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on --baseurl, so --baseurl “rclone”, --baseurl “/rclone” and --baseurl “/rclone/” are all treated identically.
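For example, assuming the default address, the sketch
rclone serve http --baseurl /rclone remote:path
would serve the listing at http://localhost:8080/rclone/.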
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
@@ -1725,8 +1804,9 @@ htpasswd -B htpasswd anotherUser
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve http remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -1755,14 +1835,14 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
-See the global flags page for global options not listed here.
+Serve the remote for restic’s REST API.
-rclone serve restic implements restic’s REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.
The server will log errors. Use -v to see access logs.
@@ -1807,6 +1887,7 @@ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on --baseurl, so --baseurl “rclone”, --baseurl “/rclone” and --baseurl “/rclone/” are all treated identically.
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
@@ -1821,9 +1902,10 @@ htpasswd -B htpasswd anotherUser
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
rclone serve restic remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--append-only disallow deletion of repository data
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
@@ -1837,14 +1919,14 @@ htpasswd -B htpasswd anotherUser
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--stdio run an HTTP2 server on stdin/stdout
--user string User name for authentication.
-See the global flags page for global options not listed here.
+Serve the remote over SFTP.
-rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -1910,9 +1992,34 @@ htpasswd -B htpasswd anotherUser
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
+If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
There is an example program bin/test_proxy.py in the rclone source code.
+The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
The config generated must have this extra parameter:
_root - root to use for the backend
And it may have this parameter:
_obscure - comma separated strings for parameters to obscure
For example the program might take this on STDIN
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an SFTP backend would be created on the fly for the user
and pass
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
The program can manipulate the supplied user
in any way. For example, to proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve sftp remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
+ --auth-proxy string A program to use to create the backend from the auth.
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -1936,14 +2043,14 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
-See the global flags page for global options not listed here.
+Serve remote:path over webdav.
-rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on --baseurl, so --baseurl “rclone”, --baseurl “/rclone” and --baseurl “/rclone/” are all treated identically.
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
@@ -2024,9 +2132,35 @@ htpasswd -B htpasswd anotherUser
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
+If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
There is an example program bin/test_proxy.py in the rclone source code.
+The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
The config generated must have this extra parameter:
_root - root to use for the backend
And it may have this parameter:
_obscure - comma separated strings for parameters to obscure
For example the program might take this on STDIN
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an SFTP backend would be created on the fly for the user
and pass
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
The program can manipulate the supplied user
in any way. For example, to proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve webdav remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --auth-proxy string A program to use to create the backend from the auth.
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -2057,14 +2191,14 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
-See the global flags page for global options not listed here.
+Changes storage class/tier of objects in remote.
-rclone settier changes the storage tier or class of objects on the remote if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier; Azure Blob storage’s Hot, Cool and Archive; Google Cloud Storage’s Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects into a frozen state; the user can restore them by setting the tier to Hot or Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier single object
@@ -2074,30 +2208,30 @@ htpasswd -B htpasswd anotherUser
Or just provide a remote directory and all files in the directory will be tiered
rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]
- -h, --help help for settier
-See the global flags page for global options not listed here.
+Create new file or change file modification time.
-Create new file or change file modification time.
rclone touch remote:path [flags]
- -h, --help help for touch
-C, --no-create Do not create the file if it does not exist.
-t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
-See the global flags page for global options not listed here.
+List the contents of the remote in a tree like fashion.
-rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -2113,7 +2247,7 @@ htpasswd -B htpasswd anotherUser
You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone’s short options.
rclone tree remote:path [flags]
- -a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
@@ -2135,11 +2269,11 @@ htpasswd -B htpasswd anotherUser
-r, --sort-reverse Reverse the order of the sort.
-U, --unsorted Leave files unsorted.
--version Sort files alphanumerically by version.
-See the global flags page for global options not listed here.
+rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn’t.
For example, suppose you have a remote with a file in called test.jpg
, then you could copy just that file like this
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
-Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
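For example, --transfers takes a value while --checksum is a boolean:
rclone copy --transfers 8 /path/to/files remote:dir
rclone copy --checksum /path/to/files remote:dir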
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.
@@ -2216,6 +2350,7 @@ rclone sync /path/to/files remote:current-backup
rclone sync /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local
to remote:current
, but any files which would have been updated or deleted will be stored in remote:old
.
If running rclone from a script you might want to use today’s date as the directory name passed to --backup-dir
to store the old files, or you might want to pass --suffix
with today’s date.
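For example, a sketch of a scripted sync keeping dated backups (assuming a POSIX shell with the date command):
rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)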
See --compare-dest
and --copy-dest
.
Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn’t resolve or resolves to more than one IP address it will give an error.
This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.
Eg rclone --checksum sync s3:/bucket swift:/bucket
would run much quicker than without the --checksum
flag.
When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally.
+When using sync
, copy
or move
DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.
You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
+See --copy-dest
and --backup-dir
.
Specify the location of the rclone config file.
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf
(or .rclone.conf
if created with an older version). If $XDG_CONFIG_HOME
is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf
.
Set the connection timeout. This should be in go time format which looks like 5s
for 5 seconds, 10m
for 10 minutes, or 3h30m
.
The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m
by default.
When using sync
, copy
or move
DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.
The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
+See --compare-dest
and --backup-dir
.
Mode to run dedupe command in. One of interactive
, skip
, first
, newest
, oldest
, rename
. The default is interactive
. See the dedupe command for more information as to what these options mean.
INFO
is equivalent to -v
. It outputs information about each transfer and prints stats once a minute by default.
NOTICE
is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR
is equivalent to -q
. It only outputs error messages.
This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time.
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v
flag.
Use -vv
if you wish to see info about the threads.
This will work with the sync
/copy
/move
commands and friends copyto
/moveto
. Multi thread downloads will be used with rclone mount
and rclone serve
if --vfs-cache-mode
is set to writes
or above.
NB that this only works for a local destination but will work with any source.
+NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams
is set explicitly.
When using multi thread downloads (see above --multi-thread-cutoff
) this sets the maximum number of streams to use. Set to 0
to disable multi thread downloads. (Default 4)
Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff
and rounds up, up to the maximum set with --multi-thread-streams
.
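For example, assuming the default --multi-thread-cutoff of 250M and the default --multi-thread-streams of 4, a 1 GiB file gives ceil(1024M / 250M) = 5 streams, which is then capped at the maximum of 4.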
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
The default is bytes
.
This is for use with --backup-dir
only. If this isn’t set then --backup-dir
will move files with their original name. If it is set then the files will have SUFFIX added on to them.
See --backup-dir
for more info.
When using sync
, copy
or move
any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.
The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.
+This is for use with files to add the suffix in the current directory or with --backup-dir
. See --backup-dir
for more info.
For example
+rclone sync /path/to/local/file remote:current --suffix .bak
+will sync /path/to/local
to remote:current
, but any files which would have been updated or deleted will have .bak added.
When using --suffix
, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let’s say we had --suffix -2019-01-01
, without the flag file.txt
would be backed up to file.txt-2019-01-01
and with the flag it would be backed up to file-2019-01-01.txt
. This can be helpful to make sure the suffixed files can still be opened.
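For example, assuming the flag described here is --suffix-keep-extension:
rclone sync /path/to/local remote:current --suffix -2019-01-01 --suffix-keep-extension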
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
If an existing destination file has a modification time equal (within the computed modify window precision) to the source file’s, it will be updated if the sizes are different.
-On remotes which don’t support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
-This can be useful when transferring to a remote which doesn’t support mod times directly as it is more accurate than a --size-only
check and faster than using --checksum
.
On remotes which don’t support mod time directly (or when using --use-server-mod-time
) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
This can be useful when transferring to a remote which doesn’t support mod times directly (or when using --use-server-mod-time
to avoid extra API calls) as it is more accurate than a --size-only
check and faster than using --checksum
.
If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size
). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.
If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.
It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.
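As an illustrative sketch, enabling the mmap allocator together with a larger buffer per transfer (values are illustrative):
rclone copy --use-mmap --buffer-size 64M /path/to/source remote:backup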
Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.
-Use this flag to disable the extra API call and rely instead on the server’s modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
+Use this flag to disable the extra API call and rely instead on the server’s modified time. In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.
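A sketch of the intended combination (paths are illustrative):
rclone sync --update --use-server-modtime /path/to/source remote:backup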
With -v rclone will tell you about each file that is transferred and a small number of significant events.
With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.
For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
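As another illustration, the dashes in --max-transfer become underscores, so on a Unix-style shell
RCLONE_MAX_TRANSFER=10G rclone copy /path/to/source remote:backup
is equivalent to passing --max-transfer 10G on the command line.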
rclone copy --files-from files-from.txt /home/me/pics remote:pics
This will transfer these files only (if they exist)
/home/me/pics/file1.jpg → remote:pics/file1.jpg
-/home/me/pics/subdir/file2.jpg → remote:pics/subdirfile1.jpg
+/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
To take a more complicated example, let’s say you had a few files you want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
@@ -2859,7 +3010,7 @@ user2/stuff
The 3 files will arrive in remote:backup
with the paths as in the files-from.txt
like this:
/home/user1/important → remote:backup/user1/important
/home/user1/dir/file → remote:backup/user1/dir/file
-/home/user2/stuff → remote:backup/stuff
+/home/user2/stuff → remote:backup/user2/stuff
You could of course choose / as the root too in which case your files-from.txt might look like this.
/home/user1/important
/home/user1/dir/file
@@ -2867,9 +3018,9 @@ user2/stuff
And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
In this case there will be an extra home directory on the remote:
/home/user1/important → remote:home/backup/user1/important
-/home/user1/dir/file → remote:home/backup/user1/dir/file
-/home/user2/stuff → remote:home/backup/stuff
+/home/user1/important → remote:backup/home/user1/important
+/home/user1/dir/file → remote:backup/home/user1/dir/file
+/home/user2/stuff → remote:backup/home/user2/stuff
--min-size - Don’t transfer any file smaller than this
This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.
For example --min-size 50k means no files smaller than 50kByte will be transferred.
You can exclude dir3 from sync by running the following command:
rclone sync --exclude-if-present .ignore dir1 remote:backup
Currently only one filename is supported, i.e. --exclude-if-present should not be used multiple times.
Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change.
+Run this command in a terminal and rclone will download and then display the GUI in a web browser.
+rclone rcd --rc-web-gui
+This will produce logs like this and rclone needs to continue to run to serve the GUI:
+2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
+2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip]
+2019/08/25 11:40:16 NOTICE: Unzipping
+2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/
+This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details.
+If you wish to update to the latest API version then you can add --rc-web-gui-update to the command line.
Once the GUI opens, you will be looking at the dashboard which gives an overall overview.
+On the left hand side you will see a series of view buttons you can click on:
+(More docs and walkthrough video to come!)
+When you run the rclone rcd --rc-web-gui this is what happens: rclone starts the remote control API bound to localhost, downloads the web GUI bundle if it is missing, serves it over the same port as the API, and opens your browser with a login_token so it can log straight in.
The rclone rcd may use any of the flags documented on the rc page.
The flag --rc-web-gui is shorthand for
--rc-user gui
--rc-pass <random password>
--rc-serve
These flags can be overridden as desired.
+See also the rclone rcd documentation.
+For example the GUI could be served on a public port over SSL using an htpasswd file using the following flags:
+--rc-web-gui
--rc-addr :443
--rc-htpasswd /path/to/htpasswd
--rc-cert /path/to/ssl.crt
--rc-key /path/to/ssl.key
If you want to run the GUI behind a proxy at /rclone you could use these flags:
--rc-web-gui
--rc-baseurl rclone
--rc-htpasswd /path/to/htpasswd
Or instead of an htpasswd file if you just want a single user and password:
+--rc-user me
--rc-pass mypassword
The GUI is being developed in the rclone/rclone-webui-react repository.
+Bug reports and contributions are very welcome :-)
+If you have questions then please ask them on the rclone forum.
If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone.
If you just want to run a remote control then see the rcd command.
@@ -2966,6 +3185,19 @@ dir1/dir2/dir3/.ignore
If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.
If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.
Default Off.
+Set this flag to serve the default web gui on the same port as rclone.
+Default Off.
+Set the allowed Access-Control-Allow-Origin for rc requests.
+Can be used with --rc-web-gui if rclone is running on a different IP than the web GUI.
+Default is the IP address on which rc is running.
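For example, a sketch serving the rc API for a web GUI hosted on another machine (the origin is illustrative):
rclone rcd --rc-addr :5572 --rc-allow-origin http://gui.example.com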
+Set the URL to fetch the rclone-web-gui files from.
+Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.
+Set this flag to download / force update rclone-webui-react from the --rc-web-fetch-url.
+Default Off.
Expire finished async jobs older than DURATION (default 60s).
The rc interface supports some special parameters which apply to all commands. These start with _ to show they are different.
Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created, i.e. synchronously.
If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished.
It is recommended that potentially long running jobs, eg sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.
Starting a job with the _async flag:
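A minimal sketch using the rc/noop test command (the returned jobid will vary):
rclone rc --json '{ "_async": true }' rc/noop
{
	"jobid": 1
}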
Each rc call has its own stats group for tracking its metrics. By default grouping is done by the composite group name from prefix job/ and id of the job like so job/1.
If _group has a value then stats for that request will be grouped under that value. This allows the caller to group stats under their own name.
Stats for a specific group can be accessed by passing group to core/stats:
$ rclone rc --json '{ "group": "job/1" }' core/stats
+{
+ "speed": 12345
+ ...
+}
Purge a remote from the cache backend. Supports either a directory or a file. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional)
Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
-Ensure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value “-5:” represents the last 5 chunks of a file.
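For instance, to make sure the last 5 chunks of a (hypothetical) file are cached:
rclone rc cache/fetch chunks=-5: file=media/movie.mkv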
@@ -3053,9 +3295,9 @@ rclone rc cache/expire remote=/ withData=true
Any parameter with a key that starts with “file” can be used to specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote is used on top of the cache.
-Show statistics for the cache remote.
-This takes the following parameters
See the config create command for more information on the above.
Authentication is required for this call.
-Parameters: - name - name of remote to delete
See the config delete command for more information on the above.
Authentication is required for this call.
-Returns a JSON object: - key: value
Where keys are remote names and values are the config parameters.
See the config dump command for more information on the above.
Authentication is required for this call.
-Parameters: - name - name of remote to get
See the config dump command for more information on the above.
Authentication is required for this call.
-Returns - remotes - array of remote names
See the listremotes command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the config password command for more information on the above.
Authentication is required for this call.
-Returns a JSON object: - providers - array of objects
See the config providers command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the config update command for more information on the above.
Authentication is required for this call.
-This sets the bandwidth limit to that passed in.
Eg
-rclone rc core/bwlimit rate=1M
-rclone rc core/bwlimit rate=off
+rclone rc core/bwlimit rate=off
+{
+ "bytesPerSecond": -1,
+ "rate": "off"
+}
+rclone rc core/bwlimit rate=1M
+{
+ "bytesPerSecond": 1048576,
+ "rate": "1M"
+}
+If the rate parameter is not supplied then the bandwidth is queried
+rclone rc core/bwlimit
+{
+ "bytesPerSecond": 1048576,
+ "rate": "1M"
+}
The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
-In either case “rate” is returned as a human readable string, and “bytesPerSecond” is returned as a number.
+This tells the go runtime to do a garbage collection run. It isn’t necessary to call this normally, but it can be useful for debugging memory problems.
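For example, to trigger a collection while investigating memory use:
rclone rc core/gc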
-This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
-The most interesting values for most people are:
-Pass a clear string and rclone will obscure it for the config file: - clear - string
-Returns - obscured - string
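A sketch of a call (the obscured value shown is only a placeholder; real output varies):
rclone rc core/obscure clear=mysecret
{
	"obscured": "..."
}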
-This returns the PID of the current process. Useful for stopping the rclone process.
-This returns all available stats
-rclone rc core/stats
+This returns the list of stats groups currently in memory.
Returns the following values:
{
- "speed": average speed in bytes/sec since start of the process,
- "bytes": total transferred bytes since the start of the process,
- "errors": number of errors,
- "fatalError": whether there has been at least one FatalError,
- "retryError": whether there has been at least one non-NoRetryError,
- "checks": number of checked files,
- "transfers": number of transferred files,
- "deletes" : number of deleted files,
- "elapsedTime": time in seconds since the start of the process,
- "lastError": last occurred error,
- "transferring": an array of currently active file transfers:
+ "groups": an array of group names:
[
- {
- "bytes": total transferred bytes for this file,
- "eta": estimated time in seconds until file transfer completion
- "name": name of the file,
- "percentage": progress of the file transfer in percent,
- "speed": speed in bytes/sec,
- "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
- "size": size of the file in bytes
- }
- ],
- "checking": an array of names of currently active file checks
- []
-}
-Values for “transferring”, “checking” and “lastError” are only assigned if data is available. The value for “eta” is null if an eta cannot be determined.
+Values for "transferring", "checking" and "lastError" are only assigned if data is available.
+The value for "eta" is null if an eta cannot be determined.
+core/stats-reset: Reset stats.
+This clears counters and errors for all stats or specific stats group if group is provided.
+Parameters
+- group - name of the stats group (string)
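For instance, to reset the counters for the stats group of job 1 (group name illustrative):
rclone rc core/stats-reset group=job/1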
+core/transferred: Returns stats about completed transfers.
+This returns stats about completed transfers:
+rclone rc core/transferred
+If group is not provided then completed transfers for all groups will be returned.
+Parameters
+- group - name of the stats group (string)
+Returns the following values:
+{
+	"transferred": an array of completed transfers (including failed ones):
+		[
+			{
+				"name": name of the file,
+				"size": size of the file in bytes,
+				"bytes": total transferred bytes for this file,
+				"checked": if the transfer is only checked (skipped, deleted),
+				"timestamp": integer representing millisecond unix epoch,
+				"error": string description of the error (empty if successful),
+				"jobid": id of the job that this transfer belongs to
+			}
+		]
+}
+This shows the current version of go and the go runtime:
+- version - rclone version, eg “v1.44”
+- decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version
+- isGit - boolean - true if this was compiled from the git version
+- os - OS in use as according to Go
+- arch - cpu architecture in use according to Go
+- goVersion - version of Go runtime in use
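A sketch of a call (the values shown are illustrative):
rclone rc core/version
{
	"arch": "amd64",
	"decomposed": [1, 49, 0],
	"goVersion": "go1.12.9",
	"isGit": false,
	"os": "linux",
	"version": "v1.49.0"
}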
-Parameters - None
Results - jobids - array of integer job ids
-Parameters - jobid - id of the job (integer)
-Results
-- finished - boolean
-- duration - time in seconds that the job ran for
-- endTime - time the job finished (eg “2018-10-26T18:50:20.528746884+01:00”)
-- error - error from the job or empty string for no error
-- finished - boolean whether the job has finished or not
-- id - as passed in above
-- startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”)
-- success - boolean - true for success false otherwise
-- output - output of the job as would have been returned if called synchronously
-Results
-- finished - boolean
-- duration - time in seconds that the job ran for
-- endTime - time the job finished (eg “2018-10-26T18:50:20.528746884+01:00”)
-- error - error from the job or empty string for no error
-- finished - boolean whether the job has finished or not
-- id - as passed in above
-- startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”)
-- success - boolean - true for success false otherwise
-- output - output of the job as would have been returned if called synchronously
-- progress - output of the progress related to the underlying job
+Parameters - jobid - id of the job (integer)
+This takes the following parameters
The result is as returned from rclone about --json
See the about command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the cleanup command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
Authentication is required for this call.
-This takes the following parameters
See the copyurl command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the delete command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the deletefile command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
This command does not have a command line equivalent so use this instead:
rclone rc --loopback operations/fsinfo fs=remote:
-This takes the following parameters
See the lsjson command for more information on the above and examples.
Authentication is required for this call.
-This takes the following parameters
See the mkdir command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
Authentication is required for this call.
-This takes the following parameters
See the link command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the purge command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the rmdir command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the rmdirs command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the size command for more information on the above.
Authentication is required for this call.
-Returns - options - a list of the options block names
-Returns an object where keys are option block names and values are an object with the current option values in.
This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.
-Parameters
rclone rc options/set --json '{"main": {"LogLevel": 7}}'
And this sets NOTICE level logs (normal without -v)
rclone rc options/set --json '{"main": {"LogLevel": 6}}'
-This returns an error with the input as part of its error string. Useful for testing error handling.
-This lists all the registered remote control commands as a JSON map in the commands response.
-This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
-This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
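A quick liveness check, sketched with illustrative parameters (the call echoes them back):
rclone rc rc/noop param1=one param2=two
{
	"param1": "one",
	"param2": "two"
}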
-This takes the following parameters
See the copy command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the move command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
See the sync command for more information on the above.
Authentication is required for this call.
-This forgets the paths in the directory cache causing them to be re-read from the remote when needed.
If no paths are passed in then it will forget all the paths in the directory cache.
rclone rc vfs/forget
Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
-Without any parameter given this returns the current status of the poll-interval setting.
When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval.
rclone rc vfs/poll-interval interval=5m
The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less than or equal to 0, which is the default, rclone waits indefinitely.
The new poll-interval value will only be active when the timeout is not reached.
If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote.
-This reads the directories for the specified paths and freshens the directory cache.
If no paths are passed in then it will refresh the root directory.
rclone rc vfs/refresh
@@ -3552,6 +3841,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.
This is also used to return the space used, available for rclone mount.
If the server can’t do About then rclone about will return an error.
The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.
+This describes the global flags available to every rclone command, split into two groups: non-backend and backend flags.
+These flags are available for every command.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --ca-cert string CA certificate used to verify servers
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --client-cert string Client SSL certificate (PEM) for mutual TLS auth
+ --client-key string Client SSL private key (PEM) for mutual TLS auth
+ --compare-dest string Use DIR to server side copy files from.
+ --config string Config file. (default "$HOME/.config/rclone/rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --copy-dest string Compare dest to DIR also.
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
+ --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -P, --progress Show progress during transfer.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-allow-origin string Set the allowed origin for CORS.
+ --rc-baseurl string Prefix for URLs - leave blank for root.
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s)
+ --rc-job-expire-interval duration interval to check for expired async jobs (default 10s)
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
+ --rc-web-gui Launch WebGUI on localhost
+ --rc-web-gui-update Update / Force update to latest version of web gui
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-one-line-date Enables --stats-one-line and add current date/time prefix.
+ --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix to add to changed files.
+ --suffix-keep-extension Preserve the extension when using --suffix.
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-json-log Use json log format.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
+ -v, --verbose count Print lots more stuff (repeat for more)
+These flags are available for every command. They control the backends and may be set in the config file.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
+ --b2-download-url string Custom endpoint for downloads.
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-size-as-quota Show storage quota usage for file size.
+ --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
+ --fichier-shared-folder string If you want to download a shared folder, add this parameter
+ --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
+ --ftp-host string FTP host to connect to
+ --ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-tls Use FTP over TLS (Implicit)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --gphotos-client-id string Google Application Client Id
+ --gphotos-client-secret string Google Application Client Secret
+ --gphotos-read-only Set to make the Google Photos backend read only.
+ --gphotos-read-size Set to read the size of media items.
+ --http-headers CommaSepList Set HTTP headers for all transactions
+ --http-no-slash Set this if the site doesn't end directories with /
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+ --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
+ --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
+ --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
+ --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
+ --koofr-user string Your Koofr user name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-case-insensitive Force the filesystem to report itself as case insensitive
+ --local-case-sensitive Force the filesystem to report itself as case sensitive.
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect.
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --skip-links Don't warn about skipped symlinks.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --union-remotes string List of space separated remotes.
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-bearer-token-command string Command to run to get a bearer token
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+This is a backend for the 1Fichier cloud storage service. Note that a Premium subscription is required to use the API.
+Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory.
The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.
+Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / 1Fichier
+ \ "fichier"
+[snip]
+Storage> fichier
+** See help for fichier backend at: https://rclone.org/fichier/ **
+
+Your API Key, get it from https://1fichier.com/console/params.pl
+Enter a string value. Press Enter for the default ("").
+api_key> example_key
+
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n>
+Remote config
+--------------------
+[remote]
+type = fichier
+api_key = example_key
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Once configured you can then use rclone like this,
List directories in top level of your 1Fichier account
+rclone lsd remote:
+List all the files in your 1Fichier account
+rclone ls remote:
+To copy a local directory to a 1Fichier directory called backup
+rclone copy /home/source remote:backup
+1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
+1Fichier can have two files with exactly the same name and path (unlike a normal file system).
+Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
+1Fichier does not support the characters \ < > " ' ` $ and spaces at the beginning of folder names. rclone automatically escapes these to a unicode equivalent. The exception is /, which cannot be escaped and will therefore lead to errors.
Here are the standard options specific to fichier (1Fichier).
+Your API Key, get it from https://1fichier.com/console/params.pl
+Here are the advanced options specific to fichier (1Fichier).
+If you want to download a shared folder, add this parameter
+The alias remote provides a new name for another remote.
Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.
Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
-Here are the standard options specific to alias (Alias for an existing remote).
Remote or path to alias. Can be “myremote:path/to/dir”, “myremote:bucket”, “myremote:” or “/local/path”.
@@ -4206,35 +4988,11 @@ n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
+[snip]
+XX / Amazon Drive
   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
-   \ "b2"
- 4 / Dropbox
-   \ "dropbox"
- 5 / Encrypt/Decrypt a remote
-   \ "crypt"
- 6 / FTP Connection
-   \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 8 / Google Drive
-   \ "drive"
- 9 / Hubic
-   \ "hubic"
-10 / Local Disk
-   \ "local"
-11 / Microsoft OneDrive
-   \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-13 / SSH/SFTP Connection
-   \ "sftp"
-14 / Yandex Disk
-   \ "yandex"
-Storage> 1
+[snip]
+Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.
@@ -4284,7 +5042,7 @@ y/e/d> y
Using with non .com Amazon accounts
Let’s say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.
Here are the standard options specific to amazon cloud drive (Amazon Drive).
Amazon Application Client ID.
@@ -4302,7 +5060,7 @@ y/e/d> y
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
Auth server URL. Leave blank to use Amazon’s.
@@ -4392,17 +5150,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Alias for an existing remote
-   \ "alias"
- 2 / Amazon Drive
-   \ "amazon cloud drive"
- 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
-   \ "s3"
- 4 / Backblaze B2
-   \ "b2"
[snip]
-23 / http Connection
-   \ "http"
+XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
+   \ "s3"
+[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
@@ -4556,6 +5307,8 @@ Choose a number from below, or type in your own value
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
+ 8 / Intelligent-Tiering storage class
+   \ "INTELLIGENT_TIERING"
storage_class> 1
Remote config
--------------------
@@ -4635,6 +5388,7 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
PutObject
PutObjectACL
When using the lsd subcommand, the ListAllMyBuckets permission is required.
Example policy:
{
"Version": "2012-10-17",
@@ -4655,7 +5409,12 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
- }
+ },
+ {
+ "Effect": "Allow",
+ "Action": "s3:ListAllMyBuckets",
+ "Resource": "arn:aws:s3:::*"
+ }
]
}
Notes on above:
@@ -4673,7 +5432,7 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
In this case you need to restore the object(s) in question before using rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
-Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
Choose your S3 provider.
@@ -5502,6 +6261,10 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
Canned ACL used when creating buckets.
@@ -5959,9 +6722,8 @@ n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+[snip]
+XX / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
[snip]
Storage> s3
@@ -6166,33 +6928,11 @@ n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
+[snip]
+XX / Backblaze B2
   \ "b2"
- 4 / Dropbox
-   \ "dropbox"
- 5 / Encrypt/Decrypt a remote
-   \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 7 / Google Drive
-   \ "drive"
- 8 / Hubic
-   \ "hubic"
- 9 / Local Disk
-   \ "local"
-10 / Microsoft OneDrive
-   \ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-12 / SSH/SFTP Connection
-   \ "sftp"
-13 / Yandex Disk
-   \ "yandex"
-Storage> 3
+[snip]
+Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
@@ -6298,8 +7038,21 @@ $ rclone -q --b2-versions ls b2:cleanup-test
     15 one-v2016-07-02-155621-000.txt
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
Note that when using --b2-versions
no file write operations are permitted, so you can’t upload files or delete them.
Rclone supports generating file share links for private B2 buckets. They can either be for a file for example:
+./rclone link B2:bucket/path/to/file.txt
+https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
+
+or if run on a directory you will get:
+./rclone link B2:bucket/path
+https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
+You can then use the authorization token (the part of the url from the ?Authorization=
on) on any file path under that directory. For example:
https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
+
-Here are the standard options specific to b2 (Backblaze B2).
Account ID or Application Key ID
@@ -6325,7 +7078,7 @@ $ rclone -q --b2-versions ls b2:cleanup-testHere are the advanced options specific to b2 (Backblaze B2).
Endpoint for the service. Leave blank normally.
@@ -6387,13 +7140,22 @@ $ rclone -q --b2-versions ls b2:cleanup-testCustom endpoint for downloads.
-This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Leave blank if you want to use the endpoint provided by Backblaze.
+This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.
Time before the authorization token will expire in s or suffix ms|s|m|h|d.
+The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.
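For example, to create a link whose token expires after a day instead of the default week, something like this should work (a sketch using --b2-download-auth-duration, the flag form of this option; the bucket and path are placeholders):
rclone link --b2-download-auth-duration 1d B2:bucket/path/to/file.txt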
+Paths are specified as remote:path
Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Box supports SHA1 type hashes, so you can use the --checksum
flag.
Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.
-Here are the standard options specific to box (Box).
Box App Client Id. Leave blank normally.
@@ -6578,7 +7312,7 @@ y/e/d> yHere are the advanced options specific to box (Box).
Cutoff for switching to multipart upload (>= 50MB).
@@ -6617,11 +7351,11 @@ n/r/c/s/q> n name> test-cache Type of storage to configure. Choose a number from below, or type in your own value -... - 5 / Cache a remote +[snip] +XX / Cache a remote \ "cache" -... -Storage> 5 +[snip] +Storage> cache Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -6757,7 +7491,7 @@ chunk_total_size = 10GPurge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
Params:
- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional, false by default)
-Here are the standard options specific to cache (Cache a remote).
Remote to cache. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
@@ -6862,7 +7596,7 @@ chunk_total_size = 10G -Here are the advanced options specific to cache (Cache a remote).
The plex token for authentication - auto set normally
@@ -7010,33 +7744,11 @@ n/s/q> n name> secret Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote +[snip] +XX / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 5 +[snip] +Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -7177,12 +7889,12 @@ $ rclone -q ls secret:Encrypts the whole file path including directory names Example: 1/12/123.txt
is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
False
Only encrypts file names, skips directory names Example: 1/12/123.txt
is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can’t check the checksums properly.
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
@@ -7250,7 +7962,7 @@ $ rclone -q ls secret:Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
For all files listed show how the names encrypt.
@@ -7341,33 +8053,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox +[snip] +XX / Dropbox \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 4 +[snip] +Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. @@ -7399,12 +8089,12 @@ y/e/d> yIf you wish to see Team Folders you must use a leading /
in the path, so rclone lsd remote:/
will refer to the root and show you all Team Folders and your User Folder.
You can then use team folders like this remote:/TeamFolder
and remote:/TeamFolder/path/to/file
.
A leading /
for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use --size-only
or --checksum
flag to stop it.
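For example, a sync that compares Dropbox hashes rather than modification times might look like this (a sketch; the paths are placeholders):
rclone sync --checksum /home/source remote:backup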
Dropbox supports its own hash type which is checked for all transfers.
-Here are the standard options specific to dropbox (Dropbox).
Dropbox App Client Id Leave blank normally.
@@ -7422,7 +8112,7 @@ y/e/d> yHere are the advanced options specific to dropbox (Dropbox).
Upload chunk size. (< 150M).
@@ -7464,7 +8154,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -10 / FTP Connection +XX / FTP Connection \ "ftp" [snip] Storage> ftp @@ -7520,7 +8210,7 @@ y/e/d> yFTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990
so the port will likely have to be explicitly set in the config for the remote.
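As a sketch, a config for an implicit FTPS server might end up looking something like this (ftp.example.com and ftpuser are placeholders; tls is the config option that enables implicit FTP over TLS):
[remote]
type = ftp
host = ftp.example.com
user = ftpuser
port = 990
tls = true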
Here are the standard options specific to ftp (FTP Connection).
FTP host to connect to
@@ -7569,7 +8259,7 @@ y/e/d> yHere are the advanced options specific to ftp (FTP Connection).
Maximum number of FTP simultaneous connections, 0 for unlimited
@@ -7608,33 +8298,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) +[snip] +XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 6 +[snip] +Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. @@ -7764,7 +8432,7 @@ y/e/d> yGoogle google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns.
-Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
Google Application Client Id Leave blank normally.
@@ -8033,7 +8701,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -10 / Google Drive +XX / Google Drive \ "drive" [snip] Storage> drive @@ -8457,7 +9125,7 @@ trashed=false and 'c' in parents -Here are the standard options specific to drive (Google Drive).
Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
@@ -8526,7 +9194,7 @@ trashed=false and 'c' in parentsHere are the advanced options specific to drive (Google Drive).
Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.
@@ -8754,7 +9422,7 @@ trashed=false and 'c' in parentsThis is because rclone can’t find out the size of the Google docs without downloading them.
Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
However an unfortunate consequence of this is that you can’t download Google docs using rclone mount
- you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable.
Sometimes, for no reason I’ve been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
Log into the Google API Console with your Google account. It doesn’t matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
Select a project or create a new project.
Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the “Google Drive API”.
Click “Credentials” in the left-side panel (not “Create credentials”, which opens the wizard), then “Create credentials”, then “OAuth client ID”. It will prompt you to set the OAuth consent screen product name, if you haven’t set one already.
Choose an application type of “other”, and click “Create”. (the default name is fine)
It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
(Thanks to @balazer on github for these instructions.)
+The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
+NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
+The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Photos
+ \ "google photos"
+[snip]
+Storage> google photos
+** See help for google photos backend at: https://rclone.org/googlephotos/ **
+
+Google Application Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_id>
+Google Application Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_secret>
+Set to make the Google Photos backend read only.
+
+If you choose read only then rclone will only request read only access
+to your photos, otherwise rclone will request full access.
+Enter a boolean value (true or false). Press Enter for the default ("false").
+read_only>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+
+*** IMPORTANT: All media items uploaded to Google Photos with rclone
+*** are stored in full resolution at original quality. These uploads
+*** will count towards storage in your Google Account.
+
+--------------------
+[remote]
+type = google photos
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
This remote is called remote
and can now be used like this
See all the albums in your photos
+rclone lsd remote:album
+Make a new album
+rclone mkdir remote:album/newAlbum
+List the contents of an album
+rclone ls remote:album/newAlbum
+Sync /home/local/images
to the Google Photos, removing any excess files in the album.
rclone sync /home/local/image remote:album/newAlbum
+As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.
+The directories under media
show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month
. (NB remote:media/by-day
is rather slow at the moment so avoid for syncing.)
Note that all your photos and videos will appear somewhere under media
, but they may not appear under album
unless you’ve put them into albums.
/
+- upload
+ - file1.jpg
+ - file2.jpg
+ - ...
+- media
+ - all
+ - file1.jpg
+ - file2.jpg
+ - ...
+ - by-year
+ - 2000
+ - file1.jpg
+ - ...
+ - 2001
+ - file2.jpg
+ - ...
+ - ...
+ - by-month
+ - 2000
+ - 2000-01
+ - file1.jpg
+ - ...
+ - 2000-02
+ - file2.jpg
+ - ...
+ - ...
+ - by-day
+ - 2000
+ - 2000-01-01
+ - file1.jpg
+ - ...
+ - 2000-01-02
+ - file2.jpg
+ - ...
+ - ...
+- album
+ - album name
+ - album name/sub
+- shared-album
+ - album name
+ - album name/sub
+There are two writable parts of the tree, the upload
directory and sub directories of the album
directory.
The upload
directory is for uploading files you don’t want to put into albums. This will be empty to start with and will contain the files you’ve uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to dump into Google Photos in one go. For repeated syncing, uploading to album
will work better.
Directories within the album
directory are also writeable and you may create new directories (albums) under album
. If you copy files with a directory hierarchy in there then rclone will create albums with the /
character in them. For example if you do
rclone copy /path/to/images remote:album/images
+and the images directory contains
+images
+ - file1.jpg
+ dir
+ file2.jpg
+ dir2
+ dir3
+ file3.jpg
+Then rclone will create the following albums with the following files in:
+- images
+ - file1.jpg
+- images/dir
+ - file2.jpg
+- images/dir2/dir3
+ - file3.jpg
+This means that you can use the album
path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album
directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
Only images and videos can be uploaded. If you attempt to upload files that are not images or videos, or formats that Google Photos doesn’t understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
+Note that all media items uploaded to Google Photos through the API are stored in full resolution at “original quality” and will count towards your storage quota in your Google Account. The API does not offer a way to upload in “high quality” mode.
+When images are downloaded the EXIF location data is stripped (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
+When videos are downloaded they are heavily compressed compared to downloading them via the Google Photos web interface. This is covered by bug #113672044.
+If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg
would then appear as file {123456}.jpg
and file {ABCDEF}.jpg
(the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practice this shouldn’t cause too many problems.
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
+This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
+The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
+It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size
option or the read_size = true
config parameter.
If you want to use the backend with rclone mount
you will need to enable this flag otherwise you will not be able to read media off the mount.
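For example, a mount that can read media might be started like this (a sketch; the mount point is a placeholder):
rclone mount --gphotos-read-size remote: /path/to/mountpoint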
Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
+Rclone can only remove files from albums it created. Note that the Google Photos API does not allow media to be deleted permanently, so removed media will still remain in Google Photos. See bug #109759781.
+Rclone cannot delete files anywhere except under album
.
The Google Photos API does not support deleting albums - see bug #135714733.
+Here are the standard options specific to google photos (Google Photos).
+Google Application Client Id Leave blank normally.
+Google Application Client Secret Leave blank normally.
+Set to make the Google Photos backend read only.
+If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
+Here are the advanced options specific to google photos (Google Photos).
+Set to read the size of media items.
+Normally rclone does not read the size of media items since this takes another transaction. This isn’t necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
+The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn’t then please file an issue, or send a pull request!)
Paths are specified as remote:
or remote:path/to/dir
.
rclone sync remote:directory /home/local/directory
This remote is read only - you can’t upload files to an HTTP server.
-Most HTTP servers store time accurate to 1 second.
No checksums are stored.
@@ -8866,7 +9725,7 @@ e/n/d/r/c/s/q> qSince the http remote only has one config parameter it is easy to use without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
-Here are the standard options specific to http (http Connection).
URL of http host to connect to
@@ -8887,8 +9746,20 @@ e/n/d/r/c/s/q> q -Here are the advanced options specific to http (http Connection).
+Set HTTP headers for all transactions
+Use this to set additional HTTP headers for all transactions
+The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.
+For example to set a Cookie use ‘Cookie,name=value’, or ‘“Cookie”,“name=value”’.
+You can set multiple headers, eg ‘“Cookie”,“name=value”,“Authorization”,“xxx”’.
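For example, combining this with the --http-url form shown earlier, a listing that sends a cookie might look like this (a sketch; the URL and cookie value are placeholders):
rclone lsd --http-url https://example.com --http-headers "Cookie,name=value" :http: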
+Set this if the site doesn’t end directories with /
Use this if your target website does not use / on the end of directories.
@@ -8914,33 +9785,11 @@ n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic +[snip] +XX / Hubic \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 8 +[snip] +Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. @@ -8979,12 +9828,12 @@ y/e/d> yrclone copy /home/source remote:default/backup
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties are the same.
-Here are the standard options specific to hubic (Hubic).
Hubic Client Id Leave blank normally.
@@ -9002,7 +9851,7 @@ y/e/d> yHere are the advanced options specific to hubic (Hubic).
Above this size files will be chunked into a _segments container.
@@ -9025,7 +9874,7 @@ y/e/d> yThis uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.
-Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit
flag.
By default rclone will send all files to the trash when deleting files. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website. If deleting permanently is required then use the --jottacloud-hard-delete
flag, or set the equivalent environment variable.
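For example, to delete a file permanently rather than sending it to the trash, something like this should work (a sketch; the path is a placeholder):
rclone delete --jottacloud-hard-delete remote:path/to/file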
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
@@ -9126,17 +9973,7 @@ y/e/d> yJottacloud requires each ‘device’ to be registered. Rclone brings such a registration to easily access your account, but if you want to use Jottacloud together with rclone on multiple machines you NEED to create a separate deviceID/deviceSecret on each machine. You will be asked during setup of the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine.
-Here are the standard options specific to jottacloud (JottaCloud).
-User Name:
-Here are the advanced options specific to jottacloud (JottaCloud).
Files bigger than this will be cached on disk to calculate the MD5 if required.
@@ -9171,7 +10008,7 @@ y/e/d> yNote that Jottacloud is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
Jottacloud only supports filenames up to 255 characters in length.
@@ -9193,60 +10030,10 @@ name> koofr Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 1 / A stackable unification remote, which can appear to merge the contents of several remotes - \ "union" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) - \ "s3" - 5 / Backblaze B2 - \ "b2" - 6 / Box - \ "box" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" -10 / FTP Connection - \ "ftp" -11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -12 / Google Drive - \ "drive" -13 / Hubic - \ "hubic" -14 / JottaCloud - \ "jottacloud" -15 / Koofr +[snip] +XX / Koofr \ "koofr" -16 / Local Disk - \ "local" -17 / Mega - \ "mega" -18 / Microsoft Azure Blob Storage - \ "azureblob" -19 / Microsoft OneDrive - \ "onedrive" -20 / OpenDrive - \ "opendrive" -21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -22 / Pcloud - \ "pcloud" -23 / QingCloud Object Storage - \ "qingstor" -24 / SSH/SFTP Connection - \ "sftp" -25 / Webdav - \ "webdav" -26 / Yandex Disk - \ "yandex" -27 / http Connection - \ "http" +[snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** @@ -9286,7 +10073,7 @@ y/e/d> yTo copy a local directory to an Koofr directory called backup
rclone copy /home/source remote:backup
-Here are the standard options specific to koofr (Koofr).
Your Koofr user name
@@ -9304,7 +10091,7 @@ y/e/d> yHere are the advanced options specific to koofr (Koofr).
The Koofr API endpoint to use
@@ -9322,8 +10109,16 @@ y/e/d> yDoes the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
+Note that Koofr is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.
@@ -9341,14 +10136,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" [snip] -14 / Mega +XX / Mega \ "mega" [snip] -23 / http Connection - \ "http" Storage> mega User name user> you@example.com @@ -9380,9 +10171,9 @@ y/e/d> yrclone ls remote:
To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
-Mega does not support modification times or hashes yet.
-Mega can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log-in and you are sure the user and the password are correct, likely you have got the remote blocked for a while.
-Here are the standard options specific to mega (Mega).
User name
@@ -9416,7 +10207,7 @@ y/e/d> yHere are the advanced options specific to mega (Mega).
Output more debug from Mega.
@@ -9437,7 +10228,7 @@ y/e/d> yThis backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn’t appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
rclone sync /home/local/directory remote:container
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.
@@ -9540,10 +10301,10 @@ rclone ls azureblob:othercontainerFiles can’t be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M
.
Note that rclone doesn’t commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won’t allow more than that amount of uncommitted blocks.
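For example, uploading a very large file with the increased chunk size might look like this (a sketch; the file and container names are placeholders):
rclone copy --azureblob-chunk-size 100M /path/to/hugefile azureblob:container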
-Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
Storage Account Name (leave blank to use connection string or SAS URL)
+Storage Account Name (leave blank to use SAS URL or Emulator)
Storage Account Key (leave blank to use connection string or SAS URL)
+Storage Account Key (leave blank to use SAS URL or Emulator)
SAS URL for container level access only (leave blank if using account/key or connection string)
+SAS URL for container level access only (leave blank if using account/key or Emulator)
Uses local storage emulator if provided as ‘true’ (leave blank if using real azure storage endpoint)
+Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
Endpoint for the service Leave blank normally.
@@ -9613,8 +10382,10 @@ rclone ls azureblob:othercontainerMD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
+You can test rclone with the storage emulator locally. To do this make sure the Azure storage emulator is installed locally, then set up a new remote with rclone config
following the instructions described in the introduction. Set the use_emulator
config option to true
; you do not need to provide a default account name or key if using the emulator.
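As a sketch, the resulting config section might end up looking something like this (the remote name emulator is a placeholder):
[emulator]
type = azureblob
use_emulator = true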
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
Save. Now the application is complete. Run rclone config
to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.
For all types of OneDrive you can use the --checksum
flag.
Any files you delete with rclone will end up in the trash. Microsoft doesn’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft’s apps or via the OneDrive website.
-Here are the standard options specific to onedrive (Microsoft OneDrive).
Microsoft App Client Id Leave blank normally.
@@ -9738,7 +10509,7 @@ y/e/d> yHere are the advanced options specific to onedrive (Microsoft OneDrive).
Chunk size to upload files with - must be multiple of 320k.
@@ -9775,7 +10546,7 @@ y/e/d> yNote that OneDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in OneDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).
@@ -9829,35 +10600,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / OpenDrive +[snip] +XX / OpenDrive \ "opendrive" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> 10 +[snip] +Storage> opendrive Username username> Password @@ -9886,7 +10633,7 @@ y/e/d> yOpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
-Here are the standard options specific to opendrive (OpenDrive).
Username
@@ -9905,7 +10652,7 @@ y/e/d> yNote that OpenDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in OpenDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
Here are the standard options specific to qingstor (QingCloud Object Storage).
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
@@ -10098,7 +10819,7 @@ y/e/d> y -Here are the advanced options specific to qingstor (QingCloud Object Storage).
Number of connection retries.
@@ -10161,48 +10882,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Cache a remote - \ "cache" - 6 / Dropbox - \ "dropbox" - 7 / Encrypt/Decrypt a remote - \ "crypt" - 8 / FTP Connection - \ "ftp" - 9 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -10 / Google Drive - \ "drive" -11 / Hubic - \ "hubic" -12 / Local Disk - \ "local" -13 / Microsoft Azure Blob Storage - \ "azureblob" -14 / Microsoft OneDrive - \ "onedrive" -15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +[snip] +XX / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -16 / Pcloud - \ "pcloud" -17 / QingCloud Object Storage - \ "qingstor" -18 / SSH/SFTP Connection - \ "sftp" -19 / Webdav - \ "webdav" -20 / Yandex Disk - \ "yandex" -21 / http Connection - \ "http" +[snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value @@ -10325,7 +11008,7 @@ rclone lsd myremote:As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
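For example, a sync using upload times instead of the metadata modtimes might look like this (a sketch; the paths are placeholders):
rclone sync --update --use-server-modtime /home/local/directory remote:container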
Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
@@ -10540,7 +11223,7 @@ rclone lsd myremote: -Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Above this size files will be chunked into a _segments container.
@@ -10563,10 +11246,10 @@ rclone lsd myremote:The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
-The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
rclone ls remote:
To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
flag.
Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup
can be used to empty the trash.
Here are the standard options specific to pcloud (Pcloud).
Pcloud App Client Id Leave blank normally.
@@ -10688,8 +11337,151 @@ y/e/d> yPaths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+ \ "premiumizeme"
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
List directories in top level of your premiumize.me
+rclone lsd remote:
+List all the files in your premiumize.me
+rclone ls remote:
+To copy a local directory to a premiumize.me directory called backup
+rclone copy /home/source remote:backup
+premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only
checking. Note that using --update
will work.
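For example, a sync that also uploads files whose local modification time is newer than the upload time might look like this (a sketch; the paths are placeholders):
rclone sync --update /home/source remote:backup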
Here are the standard options specific to premiumizeme (premiumize.me).
+API Key.
+This is not normally used - use oauth instead.
+Note that premiumize.me is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+premiumize.me file names can’t have the \
or "
characters in them. rclone maps these to and from identical looking unicode equivalents \
and "
premiumize.me only supports filenames up to 255 characters in length.
+Paths are specified as remote:path
put.io paths may be as deep as required, eg remote:directory/subdirectory
.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+ \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+putio putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
You can then use it like this,
+List directories in top level of your put.io
+rclone lsd remote:
+List all the files in your put.io
+rclone ls remote:
+To copy a local directory to a put.io directory called backup
+rclone copy /home/source remote:backup
+
+
SFTP is the Secure (or SSH) File Transfer Protocol.
+The SFTP backend can be used with a number of different providers, including C14 and rsync.net.
+SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user’s home directory.
"Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.
@@ -10704,36 +11496,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection +[snip] +XX / SSH/SFTP Connection \ "sftp" -14 / Yandex Disk - \ "yandex" -15 / http Connection - \ "http" +[snip] Storage> sftp SSH host to connect to Choose a number from below, or type in your own value @@ -10743,22 +11509,22 @@ host> example.com SSH username, leave blank for current username, ncw user> sftpuser SSH port, leave blank to use default (22) -port> +port> SSH password, leave blank to use ssh-agent. y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> n Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. -key_file> +key_file> Remote config -------------------- [remote] host = example.com user = sftpuser -port = -pass = -key_file = +port = +pass = +key_file = -------------------- y) Yes this is OK e) Edit this remote @@ -10791,12 +11557,12 @@ y/e/d> yAnd then at the end of the session
eval `ssh-agent -k`
These commands can be used in scripts of course.
-Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your rclone backend configuration to disable this behaviour.
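As a sketch, a config section with this option set might end up looking something like this (the host and user are placeholders):
[remote]
type = sftp
host = example.com
user = sftpuser
set_modtime = false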
Here are the standard options specific to sftp (SSH/SFTP Connection).
SSH host to connect to
@@ -10864,7 +11630,7 @@ y/e/d> yEnable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Here are the advanced options specific to sftp (SSH/SFTP Connection).
Allow asking for SFTP password when needed.
@@ -10921,8 +11687,24 @@ y/e/d> yThe command used to read md5 hashes. Leave blank for autodetect.
+The command used to read sha1 hashes. Leave blank for autodetect.
+SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote’s PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck
to true
to disable checksumming.
SFTP also supports about
if the same login has shell access and df
are in the remote’s PATH. about
will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about
will fail if it does not have shell access or if df
is not in the remote’s PATH.
Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP, so the hashes can’t be calculated properly. For them using disable_hashcheck
is a good idea.
SFTP isn’t supported under plan9 until this issue is fixed.
Note that since SFTP isn’t HTTP based the following flags don’t work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn’t supported (but --contimeout
is).
C14 is supported through the SFTP backend.
+rsync.net is supported through the SFTP backend.
+See rsync.net’s documentation of rclone examples.
The union
remote provides a unification similar to UnionFS using other remotes.
Paths may be as deep as required or a local path, eg remote:directory/subdirectory
or /directory/subdirectory
.
Copy another local directory to the union directory called source, which will be placed into C:\dir3
rclone copy C:\source remote:source
-Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes).
+Here are the standard options specific to union (Union merges the contents of several remotes).
List of space separated remotes. Can be ‘remotea:test/dir remoteb:’, ‘“remotea:test/space dir” remoteb:’, etc. The last remote is used to write to.
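As a sketch, the config section behind the examples above might end up looking something like this (C:\dir1 and C:\dir2 are placeholders; the last remote, C:\dir3 here, is the one written to):
[remote]
type = union
remotes = C:\dir1 C:\dir2 C:\dir3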
rclone ls remote:
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
-Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Here are the standard options specific to webdav (Webdav).
URL of http host to connect to
@@ -11193,6 +11933,16 @@ y/e/d> yHere are the advanced options specific to webdav (Webdav).
+Command to run to get a bearer token
+See below for notes on specific providers.
@@ -11201,18 +11951,6 @@ y/e/d> yOwncloud supports modified times using the X-OC-Mtime
header.
This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat
) whereas Owncloud does. This may be fixed in the future.
put.io can be accessed in a read only way using webdav.
-Configure the url
as https://webdav.put.io
and use your normal account username and password for user
and pass
. Set the vendor
to other
.
Your config file should end up looking like this:
-[putio]
-type = webdav
-url = https://webdav.put.io
-vendor = other
-user = YourUserName
-pass = encryptedpassword
-If you are using put.io
with rclone mount
then use the --read-only
flag to signal to the OS that it can’t write to the mount.
For more help see the put.io webdav docs.
Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975
This means that these accounts can’t be added using the official API (other Accounts should work with the “onedrive” option). However, it is possible to access them using webdav.
@@ -11231,7 +11969,7 @@ vendor = other user = YourEmailAddress pass = encryptedpassworddCache is a storage system with WebDAV doors that support, beside basic and x509, authentication with Macaroons (bearer tokens).
+dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.
Configure as normal using the other
type. Don’t enter a username or password, instead enter your Macaroon as the bearer_token
.
The config will end up looking something like this.
[dcache]
@@ -11242,6 +11980,22 @@ user =
pass =
bearer_token = your-macaroon
There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file.
+Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.
+dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service.
+Support for OpenID-Connect in rclone is currently achieved using another software package called oidc-agent. This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the oidc-token
command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider.
paul@celebrimbor:~$ oidc-token XDC
+eyJraWQ[...]QFXDt0
+paul@celebrimbor:~$
+Note Before the oidc-token
command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add
command (e.g., oidc-add XDC
). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.
The rclone bearer_token_command
configuration option is used to fetch the access token from oidc-agent.
Configure as a normal WebDAV endpoint, using the ‘other’ vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-token XDC
).
The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.
+[dcache]
+type = webdav
+url = https://dcache.example.org/
+vendor = other
+bearer_token_command = oidc-token XDC
Yandex Disk is a cloud storage solution created by Yandex.
Yandex paths may be as deep as required, eg remote:directory/subdirectory
.
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync /home/local/directory remote:directory
-Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
MD5 checksums are natively supported by Yandex Disk.
@@ -11326,10 +12058,10 @@ y/e/d> yIf you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you’ll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
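+For instance, uploading a hypothetical 30GB disk image might look like this:
+rclone copy --timeout 60m /backups/disk-image.img remote:backups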
Here are the standard options specific to yandex (Yandex Disk).
Yandex Client Id. Leave blank normally.
@@ -11347,7 +12079,7 @@ y/e/d> y
Here are the advanced options specific to yandex (Yandex Disk).
Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.
@@ -11363,7 +12095,7 @@ y/e/d> y
rclone sync /home/source /tmp/destination
Will sync /home/source to /tmp/destination
These can be configured into the config file for consistency’s sake, but it is probably easier not to.
-Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -11466,7 +12198,7 @@ $ tree /tmp/b
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn’t supported (eg Windows) it will be ignored.
-Here are the standard options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows
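+For example, you might disable the conversion for a dedicated remote in the config file (a sketch; the remote name nas and the paths are illustrative):
+[nas]
+type = local
+nounc = true
+Then rclone copy c:\src nas:z:\dst would apply long path (UNC) conversion on c:\src but not on z:\dst.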
@@ -11483,7 +12215,7 @@ $ tree /tmp/b
-Here are the advanced options specific to local (Local Disk).
Follow symlinks and copy the pointed to item.
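+For instance, to copy a tree while resolving symlinks (a sketch; -L is the short form of --copy-links):
+rclone copy -L /home/user/links remote:backup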
@@ -11536,8 +12268,204 @@ $ tree /tmp/b
Force the filesystem to report itself as case sensitive.
+Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
+Force the filesystem to report itself as case insensitive.
+Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
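+For example, to force the local side to be treated as case sensitive when syncing to a Windows disk (a sketch; remote name and paths are illustrative):
+rclone sync --local-case-sensitive remote:data C:\data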
+- --compare-dest & --copy-dest (yparitcher)
+- --suffix without --backup-dir for backup to current dir (yparitcher)
+- --use-json-log for JSON logging (justinalin)
+- config reconnect, config userinfo and config disconnect subcommands (Nick Craig-Wood)
+- --ignore-checksum (Nick Craig-Wood)
+- --size-only mode (Nick Craig-Wood)
+- --baseurl for rcd and web-gui (Chaitanya Bankanhal)
+- --auth-proxy (Nick Craig-Wood)
+- --baseurl (Nick Craig-Wood)
+- --baseurl (Nick Craig-Wood)
+- --baseurl (Nick Craig-Wood)
+- --auth-proxy (Nick Craig-Wood)
+- --no-traverse (buengese)
+- --loopback with rc/list and others (Nick Craig-Wood)
+- --daemon-timout to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
+- --vfs-cache-mode minimal and writes ignoring cached files (Nick Craig-Wood)
+- --local-case-sensitive and --local-case-insensitive (Nick Craig-Wood)
+- --drive-trashed-only (ginvine)
+- --http-headers flag for setting arbitrary headers (Nick Craig-Wood)
+- --webdav-bearer-token-command (Nick Craig-Wood)
+- --webdav-bearer-token-command (Nick Craig-Wood)
+- --progress update the stats correctly at the end (Nick Craig-Wood)
+- --dry-run (Nick Craig-Wood)
+- --log-format flag for more control over log output (dcpu)
+- --config (albertony)
+- --progress on windows (Nick Craig-Wood)
+- --azureblob-list-chunk parameter (Santiago Rodríguez)
+- --drive-import-formats - google docs can now be imported (Fabian Möller)
+- --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
+- --fast-list support (albertony)
+- --jottacloud-hard-delete (albertony)
+- --s3-v2-auth flag (Nick Craig-Wood)
+- --backup-dir on union backend (Nick Craig-Wood)
+- --progress and --stats 0 (Nick Craig-Wood)
With remotes that have a concept of directory, eg Local and Drive, empty directories may be left behind, or not created when one was expected.
-This is because rclone doesn’t have a concept of a directory - it only works on objects. Most of the object storage systems can’t actually store a directory so there is nowhere for rclone to store anything about directories.
-You can work round this to some extent with the purge command which will delete everything under the path, including empty directories.
This may be fixed at some point in Issue #100
+For the same reason as the above, rclone doesn’t have a concept of a directory - it only works on objects; therefore it can’t preserve the timestamps of directories.
+Rclone doesn’t currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
+Currently rclone loads each directory entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use an extremely large amount of memory - a directory of 10 million files may need roughly 5-10 GB of RAM just for the listing.
+Millions of files in a directory tend to be caused by software writing to cloud storage (eg S3 buckets).
+Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear.
+Some software creates empty keys ending in / as directory markers. Rclone doesn’t do this as it potentially creates more objects and costs more. It may do so in future (probably with a flag).
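+As a quick illustration (a sketch, assuming a bucket based remote configured as s3: and an existing bucket called mybucket):
+rclone mkdir s3:mybucket/empty-dir   # nothing is actually created on a bucket based remote
+rclone lsd s3:mybucket               # empty-dir will typically not appear in the listing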
Bugs are stored in rclone’s Github project:
+Yes they do. All the rclone commands (eg sync, copy etc) will work on all the remote storage systems.
This is free software under the terms of the MIT license (check the COPYING file included with the source code).
-Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
+Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -14649,6 +15585,26 @@ THE SOFTWARE.
- forgems forgems@gmail.com
- Florian Apolloner florian@apolloner.eu
- Aleksandar Jankovic office@ajankovic.com
+- Maran maran@protonmail.com
+- nguyenhuuluan434 nguyenhuuluan434@gmail.com
+- Laura Hausmann zotan@zotan.pw laura@hausmann.dev
+- yparitcher y@paritcher.com
+- AbelThar abela.tharen@gmail.com
+- Matti Niemenmaa matti.niemenmaa+git@iki.fi
+- Russell Davis russelldavis@users.noreply.github.com
+- Yi FU yi.fu@tink.se
+- Paul Millar paul.millar@desy.de
+- justinalin justinalin@qnap.com
+- EliEron subanimehd@gmail.com
+- justina777 chiahuei.lin@gmail.com
+- Chaitanya Bankanhal bchaitanya15@gmail.com
+- Michał Matczuk michal@scylladb.com
+- Macavirus macavirus@zoho.com
+- Abhinav Sharma abhi18av@users.noreply.github.com
+- ginvine 34869051+ginvine@users.noreply.github.com
+- Patrick Wang mail6543210@yahoo.com.tw
+- Cenk Alti cenkalti@gmail.com
+- Andreas Chlupka andy@chlupka.com
-Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood
+Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don’t email me requests for help - those are better directed to the forum - thanks!