diff --git a/MANUAL.html b/MANUAL.html
index 751a82c2d..3dbf30c9f 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
Move files from source to dest.
-Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap.
+Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move source:path
into dest:path
. After this source:path
will no longer exist.
Otherwise for each file in source:path
selected by the filters (if any) this will move it into dest:path
. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path
then delete the original (if no errors on copy) in source:path
.
Important: Since this can cause data loss, test first with the --dry-run flag.
@@ -325,9 +325,25 @@ two-3.txt: renamed from: two.txt
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
rclone cat remote:path
+Copy files from source to dest, skipping already copied
+If source:path is a file or directory then it copies it to a file or directory named dest:path.
+This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
+So
+rclone copyto src dst
where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows.
+This will:
+if src is file
+ copy it to dst, overwriting an existing file if it exists
+if src is directory
+ copy it to dst, overwriting existing files if they exist
+ see copy command for full details
+This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
+rclone copyto source:path dest:path
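The branching described above can be sketched with a toy model (illustrative only; a Python dict stands in for a remote, and this is not rclone's actual Go implementation):

```python
def copyto(fs, src, dst):
    """Toy model of the copyto decision: in the dict `fs`, a string
    value is a file's contents and a dict value is a directory."""
    node = fs[src]
    if isinstance(node, dict):
        # src is a directory: behaves exactly like `rclone copy src dst`
        fs[dst] = dict(node)
    else:
        # src is a file: copied to the exact name dst,
        # overwriting any existing file at that name
        fs[dst] = node

fs = {"a.txt": "data", "photos": {"b.jpg": "..."}}
copyto(fs, "a.txt", "renamed.txt")   # single file gets the new name
copyto(fs, "photos", "backup")       # directory copies like `copy`
```

Note the single-file case is what lets copyto upload a file under a different name, which plain copy cannot do.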
Output bash completion script for rclone.
-Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete
@@ -337,12 +353,12 @@ two-3.txt: renamed from: two.txt
rclone genautocomplete [output_file]
Output markdown docs for rclone to the directory supplied.
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory
List all the remotes in the config file.
-rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes
@@ -350,7 +366,7 @@ two-3.txt: renamed from: two.txt
-l, --long Show the type as well as names.
Mount the remote as a mountpoint. EXPERIMENTAL
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
Move file or directory from source to dest.
+If source:path is a file or directory then it moves it to a file or directory named dest:path.
+This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
+So
+rclone moveto src dst
+where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows.
+This will:
+if src is file
+ move it to dst, overwriting an existing file if it exists
+if src is directory
+ move it to dst, overwriting existing files if they exist
+ see move command for full details
+This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
+Important: Since this can cause data loss, test first with the --dry-run flag.
+rclone moveto source:path dest:path
+Remove any empty directories under the path.
+This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.
+This is useful for tidying up remotes that rclone has left a lot of empty directories in.
+rclone rmdirs remote:path
rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn't.
For example, suppose you have a remote with a file in called test.jpg
, then you could copy just that file like this
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
This can be useful when transferring files from dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.
Rclone will print stats at regular intervals to show its progress.
+Commands which transfer data (sync
, copy
, copyto
, move
, moveto
) will print data transfer stats at regular intervals to show their progress.
This sets the interval.
The default is 1m
. Use 0 to disable.
If you set the stats interval then all commands can show stats. This can be useful when running other commands, check
or mount
for example.
By default data transfer rates will be printed in bytes/second.
+This option allows the data rate to be printed in bits/second.
+Data transfer volume will still be reported in bytes.
+The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
+The default is bytes
.
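The binary scaling can be sanity-checked with a couple of lines (an illustrative sketch, not rclone's own formatting code):

```python
def rate_in_mbits(bytes_per_second):
    """--stats-unit bits reports rates in binary Mbit/s:
    1 Mbit/s = 1,048,576 bits/s, not 1,000,000 bits/s."""
    return bytes_per_second * 8 / (1024 * 1024)

# 131,072 bytes/s is 1,048,576 bits/s, i.e. exactly 1 Mbit/s
```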
This option allows you to specify when files on your destination are deleted when you sync folders.
Specifying the value --delete-before
will delete all files present on the destination, but not on the source, before starting the transfer of any new or updated files. This uses extra memory as it has to store the source listing before proceeding.
rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.
While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, unless you use a very strong password.
If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
environment variable to contain your password, in which case it will be used for decrypting the configuration.
You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password
:
#!/bin/echo Source this file don't run it
+
+read -s RCLONE_CONFIG_PASS
+export RCLONE_CONFIG_PASS
+Then source the file when you want to use it. From the shell you would do source set-rclone-password
. It will then ask you for the password and set it in the environment variable.
If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false
to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS
doesn't contain a valid password.
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg --drive-test-option
- see the docs for the remote in question.
Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.
The filters are applied for the copy
, sync
, move
, ls
, lsl
, md5sum
, sha1sum
, size
, delete
and check
operations. Note that purge
does not obey the filters.
Each path as it passes through rclone is matched against the include and exclude rules like --include
, --exclude
, --include-from
, --exclude-from
, --filter
, or --filter-from
. The simplest way to try them out is using the ls
command, or --dry-run
together with -v
.
-Important Due to limitations of the command line parser you can only use any of these options once - if you duplicate them then rclone will use the last one only.
The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.
-If the pattern starts with a /
then it only matches at the top level of the directory tree, relative to the root of the remote. If it doesn't start with /
then it is matched starting at the end of the path, but it will only match a complete path element:
If the pattern starts with a /
then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with /
then it is matched starting at the end of the path, but it will only match a complete path element:
file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
- doesn't match "afile.jpg"
@@ -722,9 +773,9 @@ y/e/d>
Rclone implements bash style {a,b,c}
glob matching which rsync doesn't.
Rclone always does a wildcard match so \
must always escape a \
.
-Rclone maintains a list of include rules and exclude rules.
-Each file is matched in order against the list until it finds a match. The file is then included or excluded according to the rule type.
-If the matcher falls off the bottom of the list then the path is included.
+Rclone maintains a combined list of include rules and exclude rules.
+Each file is matched in order, starting from the top, against the rules in the list until it finds a match. The file is then included or excluded according to the rule type.
+If the matcher fails to find a match after testing against all the entries in the list then the path is included.
For example given the following rules, +
being include, -
being exclude,
- secret*.jpg
+ *.jpg
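That first-match-wins behaviour can be modelled in a few lines (a simplified sketch using Python's shell-style fnmatch globs, which are less expressive than rclone's full pattern syntax; the trailing `- *` rule is added here for illustration):

```python
from fnmatch import fnmatch

def included(path, rules):
    """rules is an ordered list of ('+' or '-', pattern) pairs.
    The first matching rule decides; no match means included."""
    for action, pattern in rules:
        if fnmatch(path, pattern):
            return action == "+"
    return True  # fell off the bottom of the list

rules = [("-", "secret*.jpg"), ("+", "*.jpg"), ("-", "*")]
```

So secret17.jpg is excluded by the first rule even though the second rule would otherwise include it.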
@@ -745,11 +796,26 @@ y/e/d>
A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google compute storage, b2).
Filtering rules are added with the following command line flags.
+You can repeat the following options to add more than one rule of that type.
+--include
--include-from
--exclude
--exclude-from
--filter
--filter-from
Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line.
+So all --include
options are processed first in the order they appeared on the command line, then all --include-from
options etc.
To mix up the order includes and excludes, the --filter
flag can be used.
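The grouping described above can be sketched as follows (an illustrative model of the documented ordering, not rclone's parser; flags are (name, value) pairs in command line order):

```python
def group_filter_flags(flags):
    """All flags of one type are processed together, in the fixed
    order below, each group keeping its command line order."""
    order = ["--include", "--include-from", "--exclude",
             "--exclude-from", "--filter", "--filter-from"]
    return [(name, value)
            for name in order
            for flag, value in flags
            if flag == name]

cli = [("--exclude", "*.bak"), ("--include", "*.jpg"),
       ("--include", "*.png")]
# all --include rules come before the --exclude rule
```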
--exclude
- Exclude files matching patternAdd a single exclude rule with --exclude
.
This flag can be repeated. See above for the order the flags are processed in.
Eg --exclude *.bak
to exclude all bak files from the sync.
--exclude-from
- Read exclude patterns from fileAdd exclude rules from a file.
+This flag can be repeated. See above for the order the flags are processed in.
Prepare a file like this exclude-file.txt
# a sample exclude rule file
*.bak
@@ -758,10 +824,12 @@ file2.jpg
This is useful if you have a lot of rules.
--include
- Include files matching patternAdd a single include rule with --include
.
This flag can be repeated. See above for the order the flags are processed in.
Eg --include *.{png,jpg}
to include all png
and jpg
files in the backup and no others.
This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from
.
--include-from
- Read include patterns from fileAdd include rules from a file.
+This flag can be repeated. See above for the order the flags are processed in.
Prepare a file like this include-file.txt
# a sample include rule file
*.jpg
@@ -772,9 +840,11 @@ file2.avi
This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from
.
--filter
- Add a file-filtering ruleThis can be used to add a single include or exclude rule. Include rules start with +
and exclude rules start with -
. A special rule called !
can be used to clear the existing rules.
This flag can be repeated. See above for the order the flags are processed in.
Eg --filter "- *.bak"
to exclude all bak files from the sync.
--filter-from
- Read filtering patterns from a fileAdd include/exclude rules from a file.
+This flag can be repeated. See above for the order the flags are processed in.
Prepare a file like this filter-file.txt
# a sample exclude rule file
- secret*.jpg
@@ -787,6 +857,7 @@ file2.avi
This example will include all jpg
and png
files, exclude any files matching secret*.jpg
and include file2.avi
. Everything else will be excluded from the sync.
--files-from
- Read list of source-file namesThis reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.
+This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.
Prepare a file like this files-from.txt
# comment
file1.jpg
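Reading such a file can be sketched like this (illustrative; a hypothetical helper, not rclone's actual reader):

```python
def read_files_from(text):
    """Return the listed file names in order, skipping blank
    lines and # comments, as a files-from list is read."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.append(line)
    return names
```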
@@ -1045,8 +1116,8 @@ The hashes are used when transferring data as an integrity check and can be spec
Amazon Drive
Yes
No
-No #721
-No #721
+Yes
+Yes
No #575
@@ -1204,7 +1275,7 @@ y/e/d> y
Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.
By default the formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
-If you prefer an archive copy then you might use --drive-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt
.
+If you prefer an archive copy then you might use --drive-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt,odp
.
Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet
on google docs, it will be exported as My Spreadsheet.xlsx
or My Spreadsheet.pdf
etc.
Here are the possible extensions with their corresponding mime types.
@@ -2042,7 +2113,7 @@ y/e/d> y
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.
-Unfortunatly there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50G
option to limit the maximum size of uploaded files.
+Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation as it would any other failure. To avoid this problem, use the --max-size 50000M
option to limit the maximum size of uploaded files. Note that --max-size
does not split files into segments, it only ignores files over this size.
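The skip-don't-split behaviour can be modelled as a simple pre-filter (a sketch; rclone itself parses the 50000M suffix, and its size suffixes are binary, so 1M here means 1,048,576 bytes):

```python
def transferred(files, max_size_m):
    """--max-size never splits a file into segments; it simply
    ignores any file larger than the limit (here in binary MB)."""
    limit = max_size_m * 1024 * 1024
    return [name for name, size in files if size <= limit]

files = [("small.bin", 10 * 1024 * 1024),
         ("huge.bin", 60000 * 1024 * 1024)]
# with --max-size 50000M only small.bin would be transferred
```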
Microsoft One Drive
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -2129,6 +2200,7 @@ y/e/d> y
Note that One Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!
There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it, it will be mapped to ？
instead.
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
Hubic
Paths are specified as remote:path
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:container/path/to/dir
.
@@ -2670,6 +2742,52 @@ nounc = true
NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as a valid flag.
Changelog
+- v1.35 - 2017-01-02
+
+- New Features
+- moveto and copyto commands for choosing a destination name on copy/move
+- rmdirs command to recursively delete empty directories
+- Allow repeated --include/--exclude/--filter options
+- Only show transfer stats on commands which transfer stuff
+
+- show stats on any command using the
--stats
flag
+
+- Allow overlapping directories in move when server side dir move is supported
+- Add --stats-unit option - thanks Scott McGillivray
+- Bug Fixes
+- Fix the config file being overwritten when two rclones are running
+- Make rclone lsd obey the filters properly
+- Fix compilation on mips
+- Fix not transferring files that don't differ in size
+- Fix panic on nil retry/fatal error
+- Mount
+- Retry reads on error - should help with reliability a lot
+- Report the modification times for directories from the remote
+- Add bandwidth accounting and limiting (fixes --bwlimit)
+- If --stats provided will show stats and which files are transferring
+- Support R/W files if truncate is set.
+- Implement statfs interface so df works
+- Note that write is now supported on Amazon Drive
+- Report number of blocks in a file - thanks Stefan Breunig
+- Crypt
+- Prevent the user pointing crypt at itself
+- Fix failed to authenticate decrypted block errors
+
+- these will now return the underlying unexpected EOF instead
+
+- Amazon Drive
+- Add support for server side move and directory move - thanks Stefan Breunig
+- Fix nil pointer deref on size attribute
+- B2
+- Use new prefix and delimiter parameters in directory listings
+
+- This makes --max-depth 1 dir listings as used in mount much faster
+
+- Reauth the account while doing uploads too - should help with token expiry
+- Drive
+- Make DirMove more efficient and complain about moving the root
+- Create destination directory on Move()
+
- v1.34 - 2016-11-06
- New Features
@@ -3567,6 +3685,36 @@ h='hotmail.com';a='@';
document.write(''+e+'<\/'+'a'+'>');
// -->
+- Stefan Breunig
+- Alishan Ladhani
+- 0xJAKE
+- Thibault Molleman
+- Scott McGillivray
Contact the rclone project
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 7dfc5ac92..00eee2841 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Nov 06, 2016
+% Jan 02, 2017
Rclone
======
@@ -37,7 +37,8 @@ Links
* [Home page](http://rclone.org/)
* [Github project page for source and bug tracker](http://github.com/ncw/rclone)
- * Google+ page
+ * [Rclone Forum](https://forum.rclone.org)
+ * Google+ page
* [Downloads](http://rclone.org/downloads/)
# Install #
@@ -288,7 +289,8 @@ Move files from source to dest.
Moves the contents of the source directory to the destination
-directory. Rclone will error if the source and destination overlap.
+directory. Rclone will error if the source and destination overlap and
+the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move
`source:path` into `dest:path`. After this `source:path` will no
@@ -652,6 +654,45 @@ Or like this to output any .txt files in dir or subdirectories.
rclone cat remote:path
```
+## rclone copyto
+
+Copy files from source to dest, skipping already copied
+
+### Synopsis
+
+
+
+If source:path is a file or directory then it copies it to a file or
+directory named dest:path.
+
+This can be used to upload single files to other than their current
+name. If the source is a directory then it acts exactly like the copy
+command.
+
+So
+
+ rclone copyto src dst
+
+where src and dst are rclone paths, either remote:path or
+/path/to/local or C:\windows\path\if\on\windows.
+
+This will:
+
+ if src is file
+ copy it to dst, overwriting an existing file if it exists
+ if src is directory
+ copy it to dst, overwriting existing files if they exist
+ see copy command for full details
+
+This doesn't transfer unchanged files, testing by size and
+modification time or MD5SUM. It doesn't delete files from the
+destination.
+
+
+```
+rclone copyto source:path dest:path
+```
+
## rclone genautocomplete
Output bash completion script for rclone.
@@ -774,7 +815,7 @@ mount won't do that, so will be less reliable than the rclone command.
### Bugs ###
* All the remotes should work for read, but some may not for write
- * those which need to know the size in advance won't - eg B2, Amazon Drive
+ * those which need to know the size in advance won't - eg B2
* maybe should pass in size as -1 to mean work it out
 * Or put in an upload cache to cache the files on disk first
@@ -808,6 +849,68 @@ rclone mount remote:path /path/to/mountpoint
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
+## rclone moveto
+
+Move file or directory from source to dest.
+
+### Synopsis
+
+
+
+If source:path is a file or directory then it moves it to a file or
+directory named dest:path.
+
+This can be used to rename files or upload single files to other than
+their existing name. If the source is a directory then it acts exactly
+like the move command.
+
+So
+
+ rclone moveto src dst
+
+where src and dst are rclone paths, either remote:path or
+/path/to/local or C:\windows\path\if\on\windows.
+
+This will:
+
+ if src is file
+ move it to dst, overwriting an existing file if it exists
+ if src is directory
+ move it to dst, overwriting existing files if they exist
+ see move command for full details
+
+This doesn't transfer unchanged files, testing by size and
+modification time or MD5SUM. src will be deleted on successful
+transfer.
+
+**Important**: Since this can cause data loss, test first with the
+--dry-run flag.
+
+
+```
+rclone moveto source:path dest:path
+```
+
+## rclone rmdirs
+
+Remove any empty directories under the path.
+
+### Synopsis
+
+
+This removes any empty directories (or directories that only contain
+empty directories) under the path that it finds, including the path if
+it has nothing in it.
+
+This is useful for tidying up remotes that rclone has left a lot of
+empty directories in.
+
+
+
+```
+rclone rmdirs remote:path
+```
+
Copying single files
--------------------
@@ -1120,12 +1223,31 @@ modification times in the same way as rclone.
### --stats=TIME ###
-Rclone will print stats at regular intervals to show its progress.
+Commands which transfer data (`sync`, `copy`, `copyto`, `move`,
+`moveto`) will print data transfer stats at regular intervals to show
+their progress.
This sets the interval.
The default is `1m`. Use 0 to disable.
+If you set the stats interval then all commands can show stats. This
+can be useful when running other commands, `check` or `mount` for
+example.
+
+### --stats-unit=bits|bytes ###
+
+By default data transfer rates will be printed in bytes/second.
+
+This option allows the data rate to be printed in bits/second.
+
+Data transfer volume will still be reported in bytes.
+
+The rate is reported as a binary unit, not SI unit. So 1 Mbit/s
+equals 1,048,576 bits/s and not 1,000,000 bits/s.
+
+The default is `bytes`.
+
### --delete-(before,during,after) ###
This option allows you to specify when files on your destination are
@@ -1254,6 +1376,20 @@ If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS`
environment variable to contain your password, in which case it will be
used for decrypting the configuration.
+You can set this for a session from a script. For unix like systems
+save this to a file called `set-rclone-password`:
+
+```
+#!/bin/echo Source this file don't run it
+
+read -s RCLONE_CONFIG_PASS
+export RCLONE_CONFIG_PASS
+```
+
+Then source the file when you want to use it. From the shell you
+would do `source set-rclone-password`. It will then ask you for the
+password and set it in the environment variable.
+
If you are running rclone inside a script, you might want to disable
password prompts. To do that, pass the parameter
`--ask-password=false` to rclone. This will make rclone fail instead
@@ -1489,19 +1625,16 @@ and exclude rules like `--include`, `--exclude`, `--include-from`,
try them out is using the `ls` command, or `--dry-run` together with
`-v`.
-**Important** Due to limitations of the command line parser you can
-only use any of these options once - if you duplicate them then rclone
-will use the last one only.
-
## Patterns ##
The patterns used to match files for inclusion or exclusion are based
on "file globs" as used by the unix shell.
If the pattern starts with a `/` then it only matches at the top level
-of the directory tree, relative to the root of the remote.
-If it doesn't start with `/` then it is matched starting at the
-**end of the path**, but it will only match a complete path element:
+of the directory tree, **relative to the root of the remote** (not
+necessarily the root of the local drive). If it doesn't start with `/`
+then it is matched starting at the **end of the path**, but it will
+only match a complete path element:
file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
@@ -1590,13 +1723,14 @@ Rclone always does a wildcard match so `\` must always escape a `\`.
## How the rules are used ##
-Rclone maintains a list of include rules and exclude rules.
+Rclone maintains a combined list of include rules and exclude rules.
-Each file is matched in order against the list until it finds a match.
-The file is then included or excluded according to the rule type.
+Each file is matched in order, starting from the top, against the rules
+in the list until it finds a match. The file is then included or
+excluded according to the rule type.
-If the matcher falls off the bottom of the list then the path is
-included.
+If the matcher fails to find a match after testing against all the
+entries in the list then the path is included.
For example given the following rules, `+` being include, `-` being
exclude,
@@ -1627,16 +1761,44 @@ based remotes (eg s3, swift, google compute storage, b2).
Filtering rules are added with the following command line flags.
+### Repeating options ##
+
+You can repeat the following options to add more than one rule of that
+type.
+
+ * `--include`
+ * `--include-from`
+ * `--exclude`
+ * `--exclude-from`
+ * `--filter`
+ * `--filter-from`
+
+Note that all the options of the same type are processed together in
+the order above, regardless of what order they were placed on the
+command line.
+
+So all `--include` options are processed first in the order they
+appeared on the command line, then all `--include-from` options etc.
+
+To mix up the order includes and excludes, the `--filter` flag can be
+used.
+
### `--exclude` - Exclude files matching pattern ###
Add a single exclude rule with `--exclude`.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg `--exclude *.bak` to exclude all bak files from the sync.
### `--exclude-from` - Read exclude patterns from file ###
Add exclude rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this `exclude-file.txt`
# a sample exclude rule file
@@ -1652,6 +1814,9 @@ This is useful if you have a lot of rules.
Add a single include rule with `--include`.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg `--include *.{png,jpg}` to include all `png` and `jpg` files in the
backup and no others.
@@ -1665,6 +1830,9 @@ flexibility then you must use `--filter-from`.
Add include rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this `include-file.txt`
# a sample include rule file
@@ -1689,12 +1857,18 @@ This can be used to add a single include or exclude rule. Include
rules start with `+ ` and exclude rules start with `- `. A special
rule called `!` can be used to clear the existing rules.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg `--filter "- *.bak"` to exclude all bak files from the sync.
### `--filter-from` - Read filtering patterns from a file ###
Add include/exclude rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this `filter-file.txt`
# a sample exclude rule file
@@ -1718,6 +1892,9 @@ This reads a list of file names from the file passed in and **only**
these files are transferred. The filtering rules are ignored
completely if you use this option.
+This option can be repeated to read from more than one file. These
+are read in the order that they are placed on the command line.
+
Prepare a file like this `files-from.txt`
# comment
@@ -1950,7 +2127,7 @@ operations more efficient.
| Openstack Swift | Yes † | Yes | No | No | No |
| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
| Google Cloud Storage | Yes | Yes | No | No | No |
-| Amazon Drive | Yes | No | No [#721](https://github.com/ncw/rclone/issues/721) | No [#721](https://github.com/ncw/rclone/issues/721) | No [#575](https://github.com/ncw/rclone/issues/575) |
+| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
| Microsoft One Drive | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) |
| Hubic | Yes † | Yes | No | No | No |
| Backblaze B2 | No | No | No | No | Yes |
@@ -2166,7 +2343,7 @@ then rclone will choose a format from the default list.
If you prefer an archive copy then you might use `--drive-formats
pdf`, or if you prefer openoffice/libreoffice formats you might use
-`--drive-formats ods,odt`.
+`--drive-formats ods,odt,odp`.
Note that rclone adds the extension to the google doc, so if it is
called `My Spreadsheet` on google docs, it will be exported as `My
@@ -3280,8 +3457,9 @@ This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
-failure. To avoid this problem, use `--max-size 50G` option to limit
-the maximum size of uploaded files.
+failure. To avoid this problem, use the `--max-size 50000M` option to limit
+the maximum size of uploaded files. Note that `--max-size` does not split
+files into segments, it only ignores files over this size.
Microsoft One Drive
-----------------------------------------
@@ -3427,6 +3605,8 @@ platforms they are common. Rclone will map these names to and from an
identical looking unicode equivalent. For example if a file has a `?`
in it, it will be mapped to `？` instead.
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
+
Hubic
-----------------------------------------
@@ -4400,6 +4580,44 @@ flag.
Changelog
---------
+ * v1.35 - 2017-01-02
+ * New Features
+ * moveto and copyto commands for choosing a destination name on copy/move
+ * rmdirs command to recursively delete empty directories
+ * Allow repeated --include/--exclude/--filter options
+ * Only show transfer stats on commands which transfer stuff
+ * show stats on any command using the `--stats` flag
+ * Allow overlapping directories in move when server side dir move is supported
+ * Add --stats-unit option - thanks Scott McGillivray
+ * Bug Fixes
+ * Fix the config file being overwritten when two rclones are running
+ * Make rclone lsd obey the filters properly
+ * Fix compilation on mips
+ * Fix not transferring files that don't differ in size
+ * Fix panic on nil retry/fatal error
+ * Mount
+ * Retry reads on error - should help with reliability a lot
+ * Report the modification times for directories from the remote
+ * Add bandwidth accounting and limiting (fixes --bwlimit)
+ * If --stats provided will show stats and which files are transferring
+ * Support R/W files if truncate is set.
+ * Implement statfs interface so df works
+ * Note that write is now supported on Amazon Drive
+ * Report number of blocks in a file - thanks Stefan Breunig
+ * Crypt
+ * Prevent the user pointing crypt at itself
+ * Fix failed to authenticate decrypted block errors
+ * these will now return the underlying unexpected EOF instead
+ * Amazon Drive
+ * Add support for server side move and directory move - thanks Stefan Breunig
+ * Fix nil pointer deref on size attribute
+ * B2
+ * Use new prefix and delimiter parameters in directory listings
+ * This makes --max-depth 1 dir listings as used in mount much faster
+ * Reauth the account while doing uploads too - should help with token expiry
+ * Drive
+ * Make DirMove more efficient and complain about moving the root
+ * Create destination directory on Move()
* v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
@@ -5118,6 +5336,11 @@ Contributors
* Felix Bünemann
* Durval Menezes
* Luiz Carlos Rumbelsperger Viana
+ * Stefan Breunig
+ * Alishan Ladhani
+ * 0xJAKE <0xJAKE@users.noreply.github.com>
+ * Thibault Molleman
+ * Scott McGillivray
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index dcc7340b7..f5a823384 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Nov 06, 2016
+Jan 02, 2017
@@ -40,6 +40,7 @@ Links
- Home page
- Github project page for source and bug tracker
+- Rclone Forum
- Google+ page
- Downloads
@@ -284,7 +285,8 @@ Move files from source to dest.
Synopsis
Moves the contents of the source directory to the destination directory.
-Rclone will error if the source and destination overlap.
+Rclone will error if the source and destination overlap and the remote
+does not support a server side directory move operation.
If no filters are in use and if possible this will server side move
source:path into dest:path. After this source:path will no longer
@@ -599,6 +601,40 @@ Or like this to output any .txt files in dir or subdirectories.
rclone cat remote:path
+rclone copyto
+
+Copy files from source to dest, skipping already copied
+
+Synopsis
+
+If source:path is a file or directory then it copies it to a file or
+directory named dest:path.
+
+This can be used to upload single files to other than their current
+name. If the source is a directory then it acts exactly like the copy
+command.
+
+So
+
+ rclone copyto src dst
+
+where src and dst are rclone paths, either remote:path or /path/to/local
+or C:.
+
+This will:
+
+ if src is file
+ copy it to dst, overwriting an existing file if it exists
+ if src is directory
+ copy it to dst, overwriting existing files if they exist
+ see copy command for full details
+
+This doesn't transfer unchanged files, testing by size and modification
+time or MD5SUM. It doesn't delete files from the destination.
+
+ rclone copyto source:path dest:path
+
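A hedged example of uploading one file under a new name (the remote name and paths are placeholders):

```shell
# Copies a single local file to the remote, renaming it at the destination
rclone copyto /home/user/draft.txt remote:docs/final.txt
```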
+
rclone genautocomplete
Output bash completion script for rclone.
@@ -705,8 +741,7 @@ that, so will be less reliable than the rclone command.
Bugs
- All the remotes should work for read, but some may not for write
- - those which need to know the size in advance won't - eg B2,
- Amazon Drive
+ - those which need to know the size in advance won't - eg B2
- maybe should pass in size as -1 to mean work it out
- Or put in an upload cache to cache the files on disk first
@@ -736,6 +771,59 @@ Options
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
+rclone moveto
+
+Move file or directory from source to dest.
+
+Synopsis
+
+If source:path is a file or directory then it moves it to a file or
+directory named dest:path.
+
+This can be used to rename files or upload single files to other than
+their existing name. If the source is a directory then it acts exactly
+like the move command.
+
+So
+
+ rclone moveto src dst
+
+where src and dst are rclone paths, either remote:path or /path/to/local
+or C:.
+
+This will:
+
+ if src is file
+ move it to dst, overwriting an existing file if it exists
+ if src is directory
+ move it to dst, overwriting existing files if they exist
+ see move command for full details
+
+This doesn't transfer unchanged files, testing by size and modification
+time or MD5SUM. src will be deleted on successful transfer.
+
+IMPORTANT: Since this can cause data loss, test first with the --dry-run
+flag.
+
+ rclone moveto source:path dest:path
+
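For instance, moveto can rename a file on the same remote (paths are placeholders; since data loss is possible, --dry-run previews what would happen first):

```shell
# Preview a server side rename (where supported), then rerun without --dry-run
rclone moveto --dry-run remote:docs/old-name.txt remote:docs/new-name.txt
```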
+
+rclone rmdirs
+
+Remove any empty directories under the path.
+
+Synopsis
+
+This removes any empty directories (or directories that only contain
+empty directories) under the path that it finds, including the path if
+it has nothing in it.
+
+This is useful for tidying up remotes that rclone has left a lot of
+empty directories in.
+
+ rclone rmdirs remote:path
+
+
Copying single files
rclone normally syncs or copies directories. However if the source
@@ -1043,12 +1131,29 @@ modification times in the same way as rclone.
--stats=TIME
-Rclone will print stats at regular intervals to show its progress.
+Commands which transfer data (sync, copy, copyto, move, moveto) will
+print data transfer stats at regular intervals to show their progress.
This sets the interval.
The default is 1m. Use 0 to disable.
+If you set the stats interval then all commands can show stats. This can
+be useful when running other commands, check or mount for example.
+
+--stats-unit=bits|bytes
+
+By default data transfer rates will be printed in bytes/second.
+
+This option allows the data rate to be printed in bits/second.
+
+Data transfer volume will still be reported in bytes.
+
+The rate is reported as a binary unit, not an SI unit. So 1 Mbit/s equals
+1,048,576 bits/s and not 1,000,000 bits/s.
+
+The default is bytes.
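A hedged sketch combining --stats and --stats-unit (source and dest are placeholders):

```shell
# Print transfer stats every 10 seconds, with rates shown in bits/s
rclone copy --stats 10s --stats-unit bits source:path dest:path
```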
+
--delete-(before,during,after)
This option allows you to specify when files on your destination are
@@ -1173,6 +1278,18 @@ If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
environment variable to contain your password, in which case it will be
used for decrypting the configuration.
+You can set this for a session from a script. For Unix-like systems save
+this to a file called set-rclone-password:
+
+ #!/bin/echo Source this file don't run it
+
+ read -s RCLONE_CONFIG_PASS
+ export RCLONE_CONFIG_PASS
+
+Then source the file when you want to use it. From the shell you would
+do source set-rclone-password. It will then ask you for the password and
+set it in the environment variable.
+
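The same read-and-export pattern can be exercised non-interactively; a minimal sketch (-s only suppresses terminal echo, so a plain read suffices for piped input):

```shell
# Feed the password on stdin and confirm it reached the variable
printf 'my-secret\n' | {
  read RCLONE_CONFIG_PASS
  export RCLONE_CONFIG_PASS
  echo "password set: ${#RCLONE_CONFIG_PASS} characters"
}
# prints: password set: 9 characters
```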
If you are running rclone inside a script, you might want to disable
password prompts. To do that, pass the parameter --ask-password=false to
rclone. This will make rclone fail instead of asking for a password if
@@ -1404,10 +1521,6 @@ exclude rules like --include, --exclude, --include-from, --exclude-from,
--filter, or --filter-from. The simplest way to try them out is using
the ls command, or --dry-run together with -v.
-IMPORTANT Due to limitations of the command line parser you can only use
-any of these options once - if you duplicate them then rclone will use
-the last one only.
-
Patterns
@@ -1415,9 +1528,10 @@ The patterns used to match files for inclusion or exclusion are based on
"file globs" as used by the unix shell.
If the pattern starts with a / then it only matches at the top level of
-the directory tree, relative to the root of the remote. If it doesn't
-start with / then it is matched starting at the END OF THE PATH, but it
-will only match a complete path element:
+the directory tree, RELATIVE TO THE ROOT OF THE REMOTE (not necessarily
+the root of the local drive). If it doesn't start with / then it is
+matched starting at the END OF THE PATH, but it will only match a
+complete path element:
file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
@@ -1504,13 +1618,14 @@ Rclone always does a wildcard match so \ must always escape a \.
How the rules are used
-Rclone maintains a list of include rules and exclude rules.
+Rclone maintains a combined list of include rules and exclude rules.
-Each file is matched in order against the list until it finds a match.
-The file is then included or excluded according to the rule type.
+Each file is matched in order, starting from the top, against the rule
+in the list until it finds a match. The file is then included or
+excluded according to the rule type.
-If the matcher falls off the bottom of the list then the path is
-included.
+If the matcher fails to find a match after testing against all the
+entries in the list then the path is included.
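The first-match-wins behaviour just described can be sketched in shell (illustrative only; this is not rclone's matcher, and rclone's glob syntax is richer than the shell's):

```shell
# Evaluate a path against ordered rules of the form "+ pattern" / "- pattern"
match_path() {
  path=$1; shift
  for rule in "$@"; do
    type=${rule%% *}       # rule type: "+" (include) or "-" (exclude)
    pattern=${rule#* }     # the glob pattern
    case $path in
      $pattern)
        # the first matching rule decides the outcome
        [ "$type" = "+" ] && echo included || echo excluded
        return
        ;;
    esac
  done
  echo included            # no rule matched: the path is included
}

match_path "secret17.jpg" "- secret*.jpg" "+ *.jpg"   # prints: excluded
match_path "holiday.jpg"  "- secret*.jpg" "+ *.jpg"   # prints: included
```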
For example given the following rules, + being include, - being exclude,
@@ -1541,16 +1656,44 @@ Adding filtering rules
Filtering rules are added with the following command line flags.
+Repeating options
+
+You can repeat the following options to add more than one rule of that
+type.
+
+- --include
+- --include-from
+- --exclude
+- --exclude-from
+- --filter
+- --filter-from
+
+Note that all the options of the same type are processed together in the
+order above, regardless of what order they were placed on the command
+line.
+
+So all --include options are processed first in the order they appeared
+on the command line, then all --include-from options etc.
+
+To mix up the order of includes and excludes, the --filter flag can be
+used.
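A hedged command line using repeated flags (paths are placeholders); following the ordering above, both --include rules are considered before the --exclude rule even though --exclude sits between them:

```shell
# Both --include rules are processed before the --exclude rule,
# regardless of their relative positions on the command line
rclone copy --include "*.jpg" --exclude "*.bak" --include "*.png" source:path dest:path
```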
+
--exclude - Exclude files matching pattern
Add a single exclude rule with --exclude.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg --exclude *.bak to exclude all bak files from the sync.
--exclude-from - Read exclude patterns from file
Add exclude rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this exclude-file.txt
# a sample exclude rule file
@@ -1566,6 +1709,9 @@ This is useful if you have a lot of rules.
Add a single include rule with --include.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg --include *.{png,jpg} to include all png and jpg files in the backup
and no others.
@@ -1579,6 +1725,9 @@ you must use --filter-from.
Add include rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this include-file.txt
# a sample include rule file
@@ -1603,12 +1752,18 @@ This can be used to add a single include or exclude rule. Include rules
start with + and exclude rules start with -. A special rule called ! can
be used to clear the existing rules.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Eg --filter "- *.bak" to exclude all bak files from the sync.
--filter-from - Read filtering patterns from a file
Add include/exclude rules from a file.
+This flag can be repeated. See above for the order the flags are
+processed in.
+
Prepare a file like this filter-file.txt
# a sample exclude rule file
@@ -1632,6 +1787,9 @@ This reads a list of file names from the file passed in and ONLY these
files are transferred. The filtering rules are ignored completely if you
use this option.
+This option can be repeated to read from more than one file. These are
+read in the order that they are placed on the command line.
+
Prepare a file like this files-from.txt
# comment
@@ -1867,7 +2025,7 @@ more efficient.
Openstack Swift Yes † Yes No No No
Dropbox Yes Yes Yes Yes No #575
Google Cloud Storage Yes Yes No No No
- Amazon Drive Yes No No #721 No #721 No #575
+ Amazon Drive Yes No Yes Yes No #575
Microsoft One Drive Yes Yes No #197 No #197 No #575
Hubic Yes † Yes No No No
Backblaze B2 No No No No Yes
@@ -2078,7 +2236,7 @@ rclone will choose a format from the default list.
If you prefer an archive copy then you might use --drive-formats pdf, or
if you prefer openoffice/libreoffice formats you might use
---drive-formats ods,odt.
+--drive-formats ods,odt,odp.
Note that rclone adds the extension to the google doc, so if it is
called My Spreadsheet on google docs, it will be exported as
@@ -3229,8 +3387,9 @@ means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
-failure. To avoid this problem, use --max-size 50G option to limit the
-maximum size of uploaded files.
+failure. To avoid this problem, use the --max-size 50000M option to limit
+the maximum size of uploaded files. Note that --max-size does not split
+files into segments, it only ignores files over this size.
Microsoft One Drive
@@ -3372,6 +3531,8 @@ they are common. Rclone will map these names to and from an identical
looking unicode equivalent. For example if a file has a ? in it, it will be
mapped to ? instead.
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
+
Hubic
@@ -4292,6 +4453,51 @@ it isn't supported (eg Windows) it will not appear as an valid flag.
Changelog
+- v1.35 - 2017-01-02
+ - New Features
+ - moveto and copyto commands for choosing a destination name on
+ copy/move
+ - rmdirs command to recursively delete empty directories
+ - Allow repeated --include/--exclude/--filter options
+ - Only show transfer stats on commands which transfer stuff
+ - show stats on any command using the --stats flag
+ - Allow overlapping directories in move when server side dir move
+ is supported
+ - Add --stats-unit option - thanks Scott McGillivray
+ - Bug Fixes
+ - Fix the config file being overwritten when two rclones are
+ running
+ - Make rclone lsd obey the filters properly
+ - Fix compilation on mips
+ - Fix not transferring files that don't differ in size
+ - Fix panic on nil retry/fatal error
+ - Mount
+ - Retry reads on error - should help with reliability a lot
+ - Report the modification times for directories from the remote
+ - Add bandwidth accounting and limiting (fixes --bwlimit)
+ - If --stats provided will show stats and which files are
+ transferring
+ - Support R/W files if truncate is set.
+ - Implement statfs interface so df works
+ - Note that write is now supported on Amazon Drive
+ - Report number of blocks in a file - thanks Stefan Breunig
+ - Crypt
+ - Prevent the user pointing crypt at itself
+ - Fix failed to authenticate decrypted block errors
+ - these will now return the underlying unexpected EOF instead
+ - Amazon Drive
+ - Add support for server side move and directory move - thanks
+ Stefan Breunig
+ - Fix nil pointer deref on size attribute
+ - B2
+ - Use new prefix and delimiter parameters in directory listings
+ - This makes --max-depth 1 dir listings as used in mount much
+ faster
+ - Reauth the account while doing uploads too - should help with
+ token expiry
+ - Drive
+ - Make DirMove more efficient and complain about moving the root
+ - Create destination directory on Move()
- v1.34 - 2016-11-06
- New Features
- Stop single file and --files-from operations iterating through
@@ -5070,6 +5276,11 @@ Contributors
- Felix Bünemann buenemann@louis.info
- Durval Menezes jmrclone@durval.com
- Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com
+- Stefan Breunig stefan-github@yrden.de
+- Alishan Ladhani ali-l@users.noreply.github.com
+- 0xJAKE 0xJAKE@users.noreply.github.com
+- Thibault Molleman thibaultmol@users.noreply.github.com
+- Scott McGillivray scott.mcgillivray@gmail.com
diff --git a/RELEASE.md b/RELEASE.md
index 9cd1cfbc5..703e045e3 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -15,7 +15,8 @@ Making a release
* make tag
* edit docs/content/changelog.md
* make doc
- * git commit -a -v
+ * git status - to check for new man pages - git add them
+ * git commit -a -v -m "Version v1.XX"
* make retag
* # Set the GOPATH for a current stable go compiler
* make cross
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 6886fd681..49fd4412e 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -7,6 +7,44 @@ date: "2016-11-06"
Changelog
---------
+ * v1.35 - 2017-01-02
+ * New Features
+ * moveto and copyto commands for choosing a destination name on copy/move
+ * rmdirs command to recursively delete empty directories
+ * Allow repeated --include/--exclude/--filter options
+ * Only show transfer stats on commands which transfer stuff
+ * show stats on any command using the `--stats` flag
+ * Allow overlapping directories in move when server side dir move is supported
+ * Add --stats-unit option - thanks Scott McGillivray
+ * Bug Fixes
+ * Fix the config file being overwritten when two rclones are running
+ * Make rclone lsd obey the filters properly
+ * Fix compilation on mips
+ * Fix not transferring files that don't differ in size
+ * Fix panic on nil retry/fatal error
+ * Mount
+ * Retry reads on error - should help with reliability a lot
+ * Report the modification times for directories from the remote
+ * Add bandwidth accounting and limiting (fixes --bwlimit)
+ * If --stats provided will show stats and which files are transferring
+ * Support R/W files if truncate is set.
+ * Implement statfs interface so df works
+ * Note that write is now supported on Amazon Drive
+ * Report number of blocks in a file - thanks Stefan Breunig
+ * Crypt
+ * Prevent the user pointing crypt at itself
+ * Fix failed to authenticate decrypted block errors
+ * these will now return the underlying unexpected EOF instead
+ * Amazon Drive
+ * Add support for server side move and directory move - thanks Stefan Breunig
+ * Fix nil pointer deref on size attribute
+ * B2
+ * Use new prefix and delimiter parameters in directory listings
+ * This makes --max-depth 1 dir listings as used in mount much faster
+ * Reauth the account while doing uploads too - should help with token expiry
+ * Drive
+ * Make DirMove more efficient and complain about moving the root
+ * Create destination directory on Move()
* v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 619c52a07..f35f42359 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,12 +1,12 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.34-DEV
+Sync files and directories to and from local and remote object stores - v1.35-DEV
### Synopsis
@@ -79,16 +79,16 @@ rclone
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -110,7 +110,8 @@ rclone
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -126,6 +127,7 @@ rclone
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
+* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
@@ -138,11 +140,13 @@ rclone
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
+* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
+* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove any empty directories under the path.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index 5eab2c057..d14a023fd 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -52,16 +52,16 @@ rclone authorize
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -83,7 +83,8 @@ rclone authorize
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -92,6 +93,6 @@ rclone authorize
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 925a6c113..12dd2a61f 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -63,16 +63,16 @@ rclone cat remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -94,7 +94,8 @@ rclone cat remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -103,6 +104,6 @@ rclone cat remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 54e2de935..5fcc203c0 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -55,16 +55,16 @@ rclone check source:path dest:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -86,7 +86,8 @@ rclone check source:path dest:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -95,6 +96,6 @@ rclone check source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index 5cd97178d..fdbef818c 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -52,16 +52,16 @@ rclone cleanup remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -83,7 +83,8 @@ rclone cleanup remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -92,6 +93,6 @@ rclone cleanup remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index 948f48569..1531997be 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -49,16 +49,16 @@ rclone config
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone config
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone config
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index f1f1dee12..87d482e13 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -88,16 +88,16 @@ rclone copy source:path dest:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -119,7 +119,8 @@ rclone copy source:path dest:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -128,6 +129,6 @@ rclone copy source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
new file mode 100644
index 000000000..1f37ee13d
--- /dev/null
+++ b/docs/content/commands/rclone_copyto.md
@@ -0,0 +1,121 @@
+---
+date: 2017-01-02T15:29:14Z
+title: "rclone copyto"
+slug: rclone_copyto
+url: /commands/rclone_copyto/
+---
+## rclone copyto
+
+Copy files from source to dest, skipping already copied
+
+### Synopsis
+
+
+
+If source:path is a file or directory then it copies it to a file or
+directory named dest:path.
+
+This can be used to upload single files to other than their current
+name. If the source is a directory then it acts exactly like the copy
+command.
+
+So
+
+ rclone copyto src dst
+
+where src and dst are rclone paths, either remote:path or
+/path/to/local or C:\windows\path\if\on\windows.
+
+This will:
+
+ if src is file
+ copy it to dst, overwriting an existing file if it exists
+ if src is directory
+ copy it to dst, overwriting existing files if they exist
+ see copy command for full details
+
+This doesn't transfer unchanged files, testing by size and
+modification time or MD5SUM. It doesn't delete files from the
+destination.
+
+
+```
+rclone copyto source:path dest:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-auth Dump HTTP headers with auth info
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index 01306ee1c..fe7a5656c 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -130,16 +130,16 @@ rclone dedupe [mode] remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -161,7 +161,8 @@ rclone dedupe [mode] remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -170,6 +171,6 @@ rclone dedupe [mode] remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index 58cfb0b79..2545c2eee 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -66,16 +66,16 @@ rclone delete remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -97,7 +97,8 @@ rclone delete remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -106,6 +107,6 @@ rclone delete remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index 3c02c27ed..a3942ad62 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -64,16 +64,16 @@ rclone genautocomplete [output_file]
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -95,7 +95,8 @@ rclone genautocomplete [output_file]
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -104,6 +105,6 @@ rclone genautocomplete [output_file]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index 8e55d104d..fb5238489 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -52,16 +52,16 @@ rclone gendocs output_directory
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -83,7 +83,8 @@ rclone gendocs output_directory
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -92,6 +93,6 @@ rclone gendocs output_directory
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 27302f02f..3744910b7 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -59,16 +59,16 @@ rclone listremotes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -90,7 +90,8 @@ rclone listremotes
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -99,6 +100,6 @@ rclone listremotes
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 843b4a0c4..a52fa8b41 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -49,16 +49,16 @@ rclone ls remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone ls remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone ls remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index b6f7fee14..383a01881 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -49,16 +49,16 @@ rclone lsd remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone lsd remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone lsd remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 67426df0a..b324609a5 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -49,16 +49,16 @@ rclone lsl remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone lsl remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone lsl remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 17d4baebf..98b55ce99 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -52,16 +52,16 @@ rclone md5sum remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -83,7 +83,8 @@ rclone md5sum remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -92,6 +93,6 @@ rclone md5sum remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index e7410a46f..1d8a8f91b 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -49,16 +49,16 @@ rclone mkdir remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone mkdir remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone mkdir remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index c8ea7a74a..efabdf926 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -59,7 +59,7 @@ mount won't do that, so will be less reliable than the rclone command.
### Bugs ###
* All the remotes should work for read, but some may not for write
- * those which need to know the size in advance won't - eg B2, Amazon Drive
+ * those which need to know the size in advance won't - eg B2
* maybe should pass in size as -1 to mean work it out
* Or put in an an upload cache to cache the files on disk first
@@ -125,16 +125,16 @@ rclone mount remote:path /path/to/mountpoint
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -156,7 +156,8 @@ rclone mount remote:path /path/to/mountpoint
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -165,6 +166,6 @@ rclone mount remote:path /path/to/mountpoint
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index b010be4bb..db440f829 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -13,7 +13,8 @@ Move files from source to dest.
Moves the contents of the source directory to the destination
-directory. Rclone will error if the source and destination overlap.
+directory. Rclone will error if the source and destination overlap and
+the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move
`source:path` into `dest:path`. After this `source:path` will no
@@ -65,16 +66,16 @@ rclone move source:path dest:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -96,7 +97,8 @@ rclone move source:path dest:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -105,6 +107,6 @@ rclone move source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
new file mode 100644
index 000000000..f8456bd6f
--- /dev/null
+++ b/docs/content/commands/rclone_moveto.md
@@ -0,0 +1,124 @@
+---
+date: 2017-01-02T15:29:14Z
+title: "rclone moveto"
+slug: rclone_moveto
+url: /commands/rclone_moveto/
+---
+## rclone moveto
+
+Move file or directory from source to dest.
+
+### Synopsis
+
+
+
+If source:path is a file or directory then it moves it to a file or
+directory named dest:path.
+
+This can be used to rename files or upload single files to other than
+their existing name. If the source is a directory then it acts exactly
+like the move command.
+
+So
+
+ rclone moveto src dst
+
+where src and dst are rclone paths, either remote:path or
+/path/to/local or C:\windows\path\if\on\windows.
+
+This will:
+
+ if src is file
+ move it to dst, overwriting an existing file if it exists
+ if src is directory
+ move it to dst, overwriting existing files if they exist
+ see move command for full details
+
+This doesn't transfer unchanged files, testing by size and
+modification time or MD5SUM. src will be deleted on successful
+transfer.
+
+**Important**: Since this can cause data loss, test first with the
+--dry-run flag.
+
+
+```
+rclone moveto source:path dest:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-auth Dump HTTP headers with auth info
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index 68eb81afc..08c44708f 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -53,16 +53,16 @@ rclone purge remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -84,7 +84,8 @@ rclone purge remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -93,6 +94,6 @@ rclone purge remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index 4a3f91455..76aa7ad6f 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -51,16 +51,16 @@ rclone rmdir remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -82,7 +82,8 @@ rclone rmdir remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -91,6 +92,6 @@ rclone rmdir remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
new file mode 100644
index 000000000..2cceb364c
--- /dev/null
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -0,0 +1,102 @@
+---
+date: 2017-01-02T15:29:14Z
+title: "rclone rmdirs"
+slug: rclone_rmdirs
+url: /commands/rclone_rmdirs/
+---
+## rclone rmdirs
+
+Remove any empty directories under the path.
+
+### Synopsis
+
+
+This removes any empty directories (or directories that only contain
+empty directories) under the path that it finds, including the path if
+it has nothing in it.
+
+This is useful for tidying up remotes that rclone has left a lot of
+empty directories in.
+
+
+
+```
+rclone rmdirs remote:path
+```
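The same bottom-up idea can be sketched on a local directory tree in Python (an illustration only, not rclone's implementation):

```python
# Illustrative sketch, not rclone's code: remove empty directories
# bottom-up, including the root if it ends up empty.
import os

def rmdirs(root):
    # topdown=False visits children before parents, so a directory
    # whose only contents were empty directories is itself removed.
    for dirpath, _, _ in os.walk(root, topdown=False):
        if not os.listdir(dirpath):
            os.rmdir(dirpath)
```

Because children are visited first, a chain of nested empty directories collapses in a single pass.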
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-auth Dump HTTP headers with auth info
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index b910cd4f1..7e2bfc9ea 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -52,16 +52,16 @@ rclone sha1sum remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -83,7 +83,8 @@ rclone sha1sum remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -92,6 +93,6 @@ rclone sha1sum remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index e74c52b0d..fa766a540 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -49,16 +49,16 @@ rclone size remote:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone size remote:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone size remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index ec2213808..b46d6111d 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -68,16 +68,16 @@ rclone sync source:path dest:path
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -99,7 +99,8 @@ rclone sync source:path dest:path
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -108,6 +109,6 @@ rclone sync source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index e2cb227c4..fc80f19ee 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2016-11-06T10:19:14Z
+date: 2017-01-02T15:29:14Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -49,16 +49,16 @@ rclone version
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
- --exclude string Exclude files matching pattern
- --exclude-from string Read exclude patterns from file
- --files-from string Read list of source-file names from file
- -f, --filter string Add a file-filtering rule
- --filter-from string Read filtering patterns from a file
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --include string Include files matching pattern
- --include-from string Read include patterns from file
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
@@ -80,7 +80,8 @@ rclone version
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
@@ -89,6 +90,6 @@ rclone version
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
-###### Auto generated by spf13/cobra on 6-Nov-2016
+###### Auto generated by spf13/cobra on 2-Jan-2017
diff --git a/docs/content/downloads.md b/docs/content/downloads.md
index f4d28979d..1547d4ed0 100644
--- a/docs/content/downloads.md
+++ b/docs/content/downloads.md
@@ -2,41 +2,41 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
-date: "2016-11-06"
+date: "2017-01-02"
---
-Rclone Download v1.34
+Rclone Download v1.35
=====================
* Windows
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-windows-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-windows-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-windows-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-windows-amd64.zip)
* OSX
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-osx-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-osx-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-osx-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-osx-amd64.zip)
* Linux
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-linux-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-linux-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-linux-arm.zip)
- * [ARM - 64 Bit](http://downloads.rclone.org/rclone-v1.34-linux-arm64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-linux-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-linux-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-linux-arm.zip)
+ * [ARM - 64 Bit](http://downloads.rclone.org/rclone-v1.35-linux-arm64.zip)
* FreeBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-arm.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-arm.zip)
* NetBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-arm.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-arm.zip)
* OpenBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-openbsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-openbsd-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-openbsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-openbsd-amd64.zip)
* Plan 9
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-plan9-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-plan9-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-plan9-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-plan9-amd64.zip)
* Solaris
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-solaris-amd64.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-solaris-amd64.zip)
-You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.34).
+You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.35).
You can also download [the releases using SSL](https://downloads-rclone-org-7d7d567e.cdn.memsites.com/).
diff --git a/fs/version.go b/fs/version.go
index 54c53c451..87503aaf3 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.34-DEV"
+var Version = "v1.35-DEV"
diff --git a/rclone.1 b/rclone.1
index 8aa607410..ce166e099 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.16.0.2
.\"
-.TH "rclone" "1" "Nov 06, 2016" "User Manual" ""
+.TH "rclone" "1" "Jan 02, 2017" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -63,9 +63,9 @@ Home page (http://rclone.org/)
Github project page for source and bug
tracker (http://github.com/ncw/rclone)
.IP \[bu] 2
+Rclone Forum (https://forum.rclone.org)
+.IP \[bu] 2
Google+ page
-.RS 2
-.RE
.IP \[bu] 2
Downloads (http://rclone.org/downloads/)
.SH Install
@@ -394,7 +394,8 @@ Move files from source to dest.
.SS Synopsis
.PP
Moves the contents of the source directory to the destination directory.
-Rclone will error if the source and destination overlap.
+Rclone will error if the source and destination overlap and the remote
+does not support a server side directory move operation.
.PP
If no filters are in use and if possible this will server side move
\f[C]source:path\f[] into \f[C]dest:path\f[].
@@ -785,6 +786,50 @@ rclone\ \-\-include\ "*.txt"\ cat\ remote:path/to/dir
rclone\ cat\ remote:path
\f[]
.fi
+.SS rclone copyto
+.PP
+Copy files from source to dest, skipping already copied
+.SS Synopsis
+.PP
+If source:path is a file or directory then it copies it to a file or
+directory named dest:path.
+.PP
+This can be used to upload single files to other than their current
+name.
+If the source is a directory then it acts exactly like the copy command.
+.PP
+So
+.IP
+.nf
+\f[C]
+rclone\ copyto\ src\ dst
+\f[]
+.fi
+.PP
+where src and dst are rclone paths, either remote:path or /path/to/local
+or C:.
+.PP
+This will:
+.IP
+.nf
+\f[C]
+if\ src\ is\ file
+\ \ \ \ copy\ it\ to\ dst,\ overwriting\ an\ existing\ file\ if\ it\ exists
+if\ src\ is\ directory
+\ \ \ \ copy\ it\ to\ dst,\ overwriting\ existing\ files\ if\ they\ exist
+\ \ \ \ see\ copy\ command\ for\ full\ details
+\f[]
+.fi
+.PP
+This doesn\[aq]t transfer unchanged files, testing by size and
+modification time or MD5SUM.
+It doesn\[aq]t delete files from the destination.
+.IP
+.nf
+\f[C]
+rclone\ copyto\ source:path\ dest:path
+\f[]
+.fi
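The file/directory decision above can be sketched against the local filesystem in Python (an illustration only, not rclone's implementation; `copyto` here is a hypothetical helper):

```python
# Local-filesystem analogue of the copyto behaviour described above,
# not rclone's code: a file is copied to the exact dst name, a
# directory is merged like the copy command.
import os
import shutil

def copyto(src, dst):
    if os.path.isdir(src):
        # directory: behave like copy, overwriting existing files
        shutil.copytree(src, dst, dirs_exist_ok=True)
    else:
        # single file: dst is the new name, overwritten if present
        shutil.copy2(src, dst)
```

`dirs_exist_ok` needs Python 3.8+; rclone itself additionally skips unchanged files, which this sketch does not.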
.SS rclone genautocomplete
.PP
Output bash completion script for rclone.
@@ -920,8 +965,7 @@ won\[aq]t do that, so will be less reliable than the rclone command.
All the remotes should work for read, but some may not for write
.RS 2
.IP \[bu] 2
-those which need to know the size in advance won\[aq]t \- eg B2, Amazon
-Drive
+those which need to know the size in advance won\[aq]t \- eg B2
.IP \[bu] 2
maybe should pass in size as \-1 to mean work it out
.IP \[bu] 2
@@ -960,6 +1004,70 @@ rclone\ mount\ remote:path\ /path/to/mountpoint
\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used.
\f[]
.fi
+.SS rclone moveto
+.PP
+Move file or directory from source to dest.
+.SS Synopsis
+.PP
+If source:path is a file or directory then it moves it to a file or
+directory named dest:path.
+.PP
+This can be used to rename files or upload single files to other than
+their existing name.
+If the source is a directory then it acts exactly like the move command.
+.PP
+So
+.IP
+.nf
+\f[C]
+rclone\ moveto\ src\ dst
+\f[]
+.fi
+.PP
+where src and dst are rclone paths, either remote:path or /path/to/local
+or C:.
+.PP
+This will:
+.IP
+.nf
+\f[C]
+if\ src\ is\ file
+\ \ \ \ move\ it\ to\ dst,\ overwriting\ an\ existing\ file\ if\ it\ exists
+if\ src\ is\ directory
+\ \ \ \ move\ it\ to\ dst,\ overwriting\ existing\ files\ if\ they\ exist
+\ \ \ \ see\ move\ command\ for\ full\ details
+\f[]
+.fi
+.PP
+This doesn\[aq]t transfer unchanged files, testing by size and
+modification time or MD5SUM.
+src will be deleted on successful transfer.
+.PP
+\f[B]Important\f[]: Since this can cause data loss, test first with the
+\-\-dry\-run flag.
+.IP
+.nf
+\f[C]
+rclone\ moveto\ source:path\ dest:path
+\f[]
+.fi
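The single-file case above can be sketched against the local filesystem in Python (an illustration only, not rclone's implementation; `moveto_file` is a hypothetical helper):

```python
# Local-filesystem sketch of the moveto file case, not rclone's code:
# dst is the exact target name, an existing file there is overwritten,
# and src is gone after a successful move.
import os
import shutil

def moveto_file(src, dst):
    if os.path.isfile(dst):
        os.remove(dst)  # overwrite an existing destination file
    shutil.move(src, dst)
```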
+.SS rclone rmdirs
+.PP
+Remove any empty directories under the path.
+.SS Synopsis
+.PP
+This removes any empty directories (or directories that only contain
+empty directories) under the path that it finds, including the path if
+it has nothing in it.
+.PP
+This is useful for tidying up remotes that rclone has left a lot of
+empty directories in.
+.IP
+.nf
+\f[C]
+rclone\ rmdirs\ remote:path
+\f[]
+.fi
.SS Copying single files
.PP
rclone normally syncs or copies directories.
modified by the desktop sync client which doesn\[aq]t set checksums or
modification times in the same way as rclone.
.SS \-\-stats=TIME
.PP
-Rclone will print stats at regular intervals to show its progress.
+Commands which transfer data (\f[C]sync\f[], \f[C]copy\f[],
+\f[C]copyto\f[], \f[C]move\f[], \f[C]moveto\f[]) will print data
+transfer stats at regular intervals to show their progress.
.PP
This sets the interval.
.PP
The default is \f[C]1m\f[].
Use 0 to disable.
+.PP
+If you set the stats interval then all commands can show stats.
+This can be useful when running other commands, \f[C]check\f[] or
+\f[C]mount\f[] for example.
+.SS \-\-stats\-unit=bits|bytes
+.PP
+By default data transfer rates will be printed in bytes/second.
+.PP
+This option allows the data rate to be printed in bits/second.
+.PP
+Data transfer volume will still be reported in bytes.
+.PP
+The rate is reported as a binary unit, not an SI unit.
+So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
+.PP
+The default is \f[C]bytes\f[].
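The binary-unit arithmetic above works out as follows (illustrative Python, not rclone's code):

```python
# Binary (not SI) rate conversion as described above:
# 1 Mbit/s here means 1024 * 1024 bits/s, not 1,000,000 bits/s.
def rate_in_mbit(rate_bytes_per_s):
    bits_per_s = rate_bytes_per_s * 8
    return bits_per_s / (1024 * 1024)

print(rate_in_mbit(131072))  # 131072 bytes/s -> 1.0 Mbit/s
```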
.SS \-\-delete\-(before,during,after)
.PP
This option allows you to specify when files on your destination are
@@ -1448,6 +1574,24 @@ If it is safe in your environment, you can set the
password, in which case it will be used for decrypting the
configuration.
.PP
+You can set this for a session from a script.
+For unix like systems save this to a file called
+\f[C]set\-rclone\-password\f[]:
+.IP
+.nf
+\f[C]
+#!/bin/echo\ Source\ this\ file\ don\[aq]t\ run\ it
+
+read\ \-s\ RCLONE_CONFIG_PASS
+export\ RCLONE_CONFIG_PASS
+\f[]
+.fi
+.PP
+Then source the file when you want to use it.
+From the shell you would do \f[C]source\ set\-rclone\-password\f[].
+It will then ask you for the password and set it in the environment
+variable.
+.PP
If you are running rclone inside a script, you might want to disable
password prompts.
To do that, pass the parameter \f[C]\-\-ask\-password=false\f[] to
@@ -1710,17 +1854,14 @@ exclude rules like \f[C]\-\-include\f[], \f[C]\-\-exclude\f[],
\f[C]\-\-filter\f[], or \f[C]\-\-filter\-from\f[].
The simplest way to try them out is using the \f[C]ls\f[] command, or
\f[C]\-\-dry\-run\f[] together with \f[C]\-v\f[].
-.PP
-\f[B]Important\f[] Due to limitations of the command line parser you can
-only use any of these options once \- if you duplicate them then rclone
-will use the last one only.
.SS Patterns
.PP
The patterns used to match files for inclusion or exclusion are based on
"file globs" as used by the unix shell.
.PP
If the pattern starts with a \f[C]/\f[] then it only matches at the top
-level of the directory tree, relative to the root of the remote.
+level of the directory tree, \f[B]relative to the root of the remote\f[]
+(not necessarily the root of the local drive).
If it doesn\[aq]t start with \f[C]/\f[] then it is matched starting at
the \f[B]end of the path\f[], but it will only match a complete path
element:
@@ -1850,13 +1991,14 @@ Rclone always does a wildcard match so \f[C]\\\f[] must always escape a
\f[C]\\\f[].
.SS How the rules are used
.PP
-Rclone maintains a list of include rules and exclude rules.
+Rclone maintains a combined list of include rules and exclude rules.
.PP
-Each file is matched in order against the list until it finds a match.
+Each file is matched in order, starting from the top, against the rules
+in the list until it finds a match.
The file is then included or excluded according to the rule type.
.PP
-If the matcher falls off the bottom of the list then the path is
-included.
+If the matcher fails to find a match after testing against all the
+entries in the list then the path is included.
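The first-match-wins behaviour can be sketched in Python; `fnmatch` is only a stand-in here, as rclone's glob syntax differs in detail:

```python
# Sketch of first-match-wins rule matching as described above; this
# is an illustration, not rclone's matcher.
import fnmatch

def included(path, rules):
    """rules: list of ('+' or '-', pattern), checked top to bottom."""
    for kind, pattern in rules:
        if fnmatch.fnmatch(path, pattern):
            return kind == '+'
    return True  # fell off the bottom of the list: included

rules = [('+', '*.jpg'), ('-', '*')]
print(included('photo.jpg', rules))  # True
print(included('notes.txt', rules))  # False
```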
.PP
For example given the following rules, \f[C]+\f[] being include,
\f[C]\-\f[] being exclude,
@@ -1893,15 +2035,48 @@ google drive, onedrive, amazon drive) and not on bucket based remotes
.SS Adding filtering rules
.PP
Filtering rules are added with the following command line flags.
+.SS Repeating options
+.PP
+You can repeat the following options to add more than one rule of that
+type.
+.IP \[bu] 2
+\f[C]\-\-include\f[]
+.IP \[bu] 2
+\f[C]\-\-include\-from\f[]
+.IP \[bu] 2
+\f[C]\-\-exclude\f[]
+.IP \[bu] 2
+\f[C]\-\-exclude\-from\f[]
+.IP \[bu] 2
+\f[C]\-\-filter\f[]
+.IP \[bu] 2
+\f[C]\-\-filter\-from\f[]
+.PP
+Note that all the options of the same type are processed together in the
+order above, regardless of what order they were placed on the command
+line.
+.PP
+So all \f[C]\-\-include\f[] options are processed first in the order
+they appeared on the command line, then all \f[C]\-\-include\-from\f[]
+options etc.
+.PP
+To mix up the order of includes and excludes, the \f[C]\-\-filter\f[]
+flag can be used.
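The grouping described above can be sketched as follows (hypothetical Python; `parsed` stands in for flags in command-line order and is not rclone's parser):

```python
# All rules of one flag type are grouped together, in the fixed type order
# below, while command-line order is preserved within each type.
FLAG_ORDER = ["include", "include-from", "exclude", "exclude-from",
              "filter", "filter-from"]

# Hypothetical parse of: --exclude *.bak --include *.jpg --include *.png
parsed = [("exclude", "*.bak"), ("include", "*.jpg"), ("include", "*.png")]

rules = [rule for flag in FLAG_ORDER
         for kind, rule in parsed if kind == flag]
print(rules)  # includes first (in command-line order), then excludes
```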
.SS \f[C]\-\-exclude\f[] \- Exclude files matching pattern
.PP
Add a single exclude rule with \f[C]\-\-exclude\f[].
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Eg \f[C]\-\-exclude\ *.bak\f[] to exclude all bak files from the sync.
.SS \f[C]\-\-exclude\-from\f[] \- Read exclude patterns from file
.PP
Add exclude rules from a file.
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Prepare a file like this \f[C]exclude\-file.txt\f[]
.IP
.nf
@@ -1921,6 +2096,9 @@ This is useful if you have a lot of rules.
.PP
Add a single include rule with \f[C]\-\-include\f[].
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Eg \f[C]\-\-include\ *.{png,jpg}\f[] to include all \f[C]png\f[] and
\f[C]jpg\f[] files in the backup and no others.
.PP
@@ -1936,6 +2114,9 @@ If this doesn\[aq]t provide enough flexibility then you must use
.PP
Add include rules from a file.
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Prepare a file like this \f[C]include\-file.txt\f[]
.IP
.nf
@@ -1969,12 +2150,18 @@ Include rules start with \f[C]+\f[] and exclude rules start with
A special rule called \f[C]!\f[] can be used to clear the existing
rules.
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Eg \f[C]\-\-filter\ "\-\ *.bak"\f[] to exclude all bak files from the
sync.
.SS \f[C]\-\-filter\-from\f[] \- Read filtering patterns from a file
.PP
Add include/exclude rules from a file.
.PP
+This flag can be repeated.
+See above for the order the flags are processed in.
+.PP
Prepare a file like this \f[C]filter\-file.txt\f[]
.IP
.nf
@@ -2002,6 +2189,9 @@ This reads a list of file names from the file passed in and
\f[B]only\f[] these files are transferred.
The filtering rules are ignored completely if you use this option.
.PP
+This option can be repeated to read from more than one file.
+These are read in the order that they are placed on the command line.
+.PP
Prepare a file like this \f[C]files\-from.txt\f[]
.IP
.nf
@@ -2511,9 +2701,9 @@ Yes
T}@T{
No
T}@T{
-No #721 (https://github.com/ncw/rclone/issues/721)
+Yes
T}@T{
-No #721 (https://github.com/ncw/rclone/issues/721)
+Yes
T}@T{
No #575 (https://github.com/ncw/rclone/issues/575)
T}
@@ -2805,7 +2995,7 @@ rclone will choose a format from the default list.
If you prefer an archive copy then you might use
\f[C]\-\-drive\-formats\ pdf\f[], or if you prefer
openoffice/libreoffice formats you might use
-\f[C]\-\-drive\-formats\ ods,odt\f[].
+\f[C]\-\-drive\-formats\ ods,odt,odp\f[].
.PP
Note that rclone adds the extension to the google doc, so if it is
called \f[C]My\ Spreadsheet\f[] on google docs, it will be exported as
@@ -4188,8 +4378,10 @@ This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
failure.
-To avoid this problem, use \f[C]\-\-max\-size\ 50G\f[] option to limit
-the maximum size of uploaded files.
+To avoid this problem, use \f[C]\-\-max\-size\ 50000M\f[] option to
+limit the maximum size of uploaded files.
+Note that \f[C]\-\-max\-size\f[] does not split files into segments, it
+only ignores files over this size.
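To illustrate (a sketch, not rclone's code; the file names and sizes are hypothetical, and the 1024-based suffix follows rclone's size-option convention), `\-\-max\-size` acts as a filter, skipping rather than splitting oversize files:

```python
# --max-size 50000M: files over the limit are skipped entirely,
# never split into segments.
MAX_SIZE = 50_000 * 1024 * 1024  # 50000 MiByte

files = {"notes.txt": 4_096, "backup.img": 60_000 * 1024 * 1024}
to_transfer = [name for name, size in files.items() if size <= MAX_SIZE]
print(to_transfer)  # backup.img exceeds the limit and is ignored
```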
.SS Microsoft One Drive
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -4350,6 +4542,8 @@ Rclone will map these names to and from an identical looking unicode
equivalent.
For example if a file has a \f[C]?\f[] in it, it will be mapped to
\f[C]？\f[] instead.
+.PP
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
.SS Hubic
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -5439,6 +5633,93 @@ On systems where it isn\[aq]t supported (eg Windows) it will not appear
as a valid flag.
.SS Changelog
.IP \[bu] 2
+v1.35 \- 2017\-01\-02
+.RS 2
+.IP \[bu] 2
+New Features
+.IP \[bu] 2
+moveto and copyto commands for choosing a destination name on copy/move
+.IP \[bu] 2
+rmdirs command to recursively delete empty directories
+.IP \[bu] 2
+Allow repeated \-\-include/\-\-exclude/\-\-filter options
+.IP \[bu] 2
+Only show transfer stats on commands which transfer stuff
+.RS 2
+.IP \[bu] 2
+show stats on any command using the \f[C]\-\-stats\f[] flag
+.RE
+.IP \[bu] 2
+Allow overlapping directories in move when server side dir move is
+supported
+.IP \[bu] 2
+Add \-\-stats\-unit option \- thanks Scott McGillivray
+.IP \[bu] 2
+Bug Fixes
+.IP \[bu] 2
+Fix the config file being overwritten when two rclones are running
+.IP \[bu] 2
+Make rclone lsd obey the filters properly
+.IP \[bu] 2
+Fix compilation on mips
+.IP \[bu] 2
+Fix not transferring files that don\[aq]t differ in size
+.IP \[bu] 2
+Fix panic on nil retry/fatal error
+.IP \[bu] 2
+Mount
+.IP \[bu] 2
+Retry reads on error \- should help with reliability a lot
+.IP \[bu] 2
+Report the modification times for directories from the remote
+.IP \[bu] 2
+Add bandwidth accounting and limiting (fixes \-\-bwlimit)
+.IP \[bu] 2
+If \-\-stats provided will show stats and which files are transferring
+.IP \[bu] 2
+Support R/W files if truncate is set.
+.IP \[bu] 2
+Implement statfs interface so df works
+.IP \[bu] 2
+Note that write is now supported on Amazon Drive
+.IP \[bu] 2
+Report number of blocks in a file \- thanks Stefan Breunig
+.IP \[bu] 2
+Crypt
+.IP \[bu] 2
+Prevent the user pointing crypt at itself
+.IP \[bu] 2
+Fix failed to authenticate decrypted block errors
+.RS 2
+.IP \[bu] 2
+these will now return the underlying unexpected EOF instead
+.RE
+.IP \[bu] 2
+Amazon Drive
+.IP \[bu] 2
+Add support for server side move and directory move \- thanks Stefan
+Breunig
+.IP \[bu] 2
+Fix nil pointer deref on size attribute
+.IP \[bu] 2
+B2
+.IP \[bu] 2
+Use new prefix and delimiter parameters in directory listings
+.RS 2
+.IP \[bu] 2
+This makes \-\-max\-depth 1 dir listings as used in mount much faster
+.RE
+.IP \[bu] 2
+Reauth the account while doing uploads too \- should help with token
+expiry
+.IP \[bu] 2
+Drive
+.IP \[bu] 2
+Make DirMove more efficient and complain about moving the root
+.IP \[bu] 2
+Create destination directory on Move()
+.RE
+.IP \[bu] 2
v1.34 \- 2016\-11\-06
.RS 2
.IP \[bu] 2
@@ -6875,6 +7156,16 @@ Felix Bünemann
Durval Menezes
.IP \[bu] 2
Luiz Carlos Rumbelsperger Viana
+.IP \[bu] 2
+Stefan Breunig
+.IP \[bu] 2
+Alishan Ladhani
+.IP \[bu] 2
+0xJAKE <0xJAKE@users.noreply.github.com>
+.IP \[bu] 2
+Thibault Molleman
+.IP \[bu] 2
+Scott McGillivray
.SH Contact the rclone project
.SS Forum
.PP